CN114596378A - Sparse angle CT artifact removing method

Sparse angle CT artifact removing method

Info

Publication number
CN114596378A
CN114596378A (application CN202210242057.7A)
Authority
CN
China
Prior art keywords
image
sparse
rsdb
network
loss
Prior art date
Legal status
Pending
Application number
CN202210242057.7A
Other languages
Chinese (zh)
Inventor
谢世朋 (Xie Shipeng)
喻丹 (Yu Dan)
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202210242057.7A priority Critical patent/CN114596378A/en
Publication of CN114596378A publication Critical patent/CN114596378A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00 Image generation
    • G06T2211/40 Computed tomography
    • G06T2211/424 Iterative

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a sparse-angle CT artifact removal method based on perceptual loss and a feature fusion attention residual network. The method comprises: preprocessing the input artifact-laden sparse CT image and the original CT image; extracting features from the preprocessed sparse CT image and original CT image to obtain feature maps; training the improved feature fusion attention residual network on the obtained feature maps, with the perceptual loss as the loss function and the original CT image as the label, so that artifacts are removed from the sparse CT image while its structure is preserved; and finally, feeding the sparse CT image into the trained network model to remove the artifacts, and computing the peak signal-to-noise ratio and structural similarity. Using the perceptual loss as the loss function enhances detail information, and adding an attention mechanism to the feature fusion residual network improves network performance, so that artifacts are removed quickly and effectively.

Description

Sparse angle CT artifact removing method
Technical Field
The invention relates to the technical field of image processing, in particular to a sparse angle CT artifact removing method.
Background
Computed Tomography (CT) is a key medical imaging technique. However, the X-ray radiation involved in CT imaging is a concern, since the radiation dose increases the risk of cancer. Much research has therefore focused on reducing the radiation dose, and many effective methods have been developed. Early approaches reduced X-ray exposure by reducing or adjusting the X-ray tube output. Another strategy reduces the number of projections acquired along a given scan trajectory; reconstructing an image from such sparsely sampled data must then cope with the insufficient data caused by the limited sampling angles, and reconstruction of sparse-view CT projection data with analytical methods such as filtered back projection (FBP) produces severe streak artifacts in the CT image.
In recent years, Deep Learning (DL) has become a powerful feature learning method. Natural-image tasks such as object detection, image segmentation, and image super-resolution have made great progress thanks to end-to-end supervised learning with convolutional neural networks. Many researchers have applied deep learning methods to sparse CT image reconstruction and artifact removal.
The Chinese patent application "CT sparse reconstruction artifact correction method and system based on residual learning" (application No. CN201711097204.1, publication No. CN107871332A) learns artifact characteristics by building a residual neural network to obtain an artifact map, and then subtracts the artifact map from the sparsely reconstructed image to restore a clean CT image. However, this method suffers from long training time and a large number of model parameters, which increase the computational burden of the network.
Disclosure of Invention
In view of the above problems in the prior art, the present invention provides a fast, simple and robust method for removing sparse-angle CT artifacts.
To achieve this purpose, the technical solution adopted by the invention is as follows:
The invention discloses a sparse-angle CT artifact removal method, which comprises the following steps:
Step 1, preprocessing the sparse-angle CT data with streak artifacts and the original CT data into dcm-format images of a specified size;
Step 2, extracting image features from the preprocessed sparse-angle CT data through a VGG19 network for use in subsequent training;
Step 3, constructing a feature fusion attention residual network, in which residual skip dense blocks (RSDBs) form the shallow layers of the network: a local feature fusion layer follows two convolutional layers, skip connections bridge pairs of local feature fusion layers, the feature fusion result of one block serves as the input of the next block, the locally fused features are stacked, and residual information is used to integrate the feature information into the basic network architecture;
Step 4, training with the sparse CT images as input, the original CT images as labels, and the perceptual loss as the loss function, to obtain a trained feature fusion attention residual network model;
Step 5, testing with sparse CT images of the same body part as input to obtain artifact-removed images, and computing the peak signal-to-noise ratio and structural similarity between the output CT images and the label CT images.
In a further improvement of the invention, step 1 specifically comprises the following steps:
Step 1.1, resizing the head sparse CT images and the original CT images in the data set so that each image has a resolution of m × n, where m and n are the length and height of the image respectively, and both the sparse CT images and the original CT images are in dcm format;
Step 1.2, converting the resized images of step 1.1 into the tensor data type used by the PyTorch framework.
In a further improvement of the invention, in step 2, three copies of the same image are stacked to form a three-channel input compatible with the VGG19 network, and features are extracted from the preprocessed sparse CT image and the original CT image with a pretrained VGG19 model; the sparse CT image is used for artifact-removal training, and the original CT image is used for streak-artifact prediction training.
In a further improvement of the invention, the specific operation of constructing the feature fusion attention residual network in step 3 is as follows: a plurality of residual skip dense blocks (RSDBs) are connected in sequence, local features are fused within each RSDB and passed to the subsequent residual block, and the f-th RSDB directly introduces the local features of the (f-1)-th RSDB, or of the model-adaptation residual block RB when f = 1, and performs feature fusion; the local feature fusion (LFF) is expressed as:

F_{f,LF} = [F_{f-1,LF}, F_{f,2}]

where F_{f,LF} denotes the local feature fusion result of the f-th RSDB, [·, ·] denotes the Concat feature-map stacking operation, F_{f-1,LF} denotes the local feature fusion result of the (f-1)-th RSDB, and F_{f,2} denotes the result of shallow feature extraction performed on the f-th RSDB input through two Conv layers:

F_{f,2} = δ(W_{f,2} · δ(W_{f,1} · F_{f-1} + b_1) + b_2)

where δ denotes the ReLU activation function, W_{f,2} and W_{f,1} denote the weights of the 2nd and 1st convolutional layers respectively, b_2 and b_1 denote the biases of the 2nd and 1st convolutional layers respectively, and F_{f-1} denotes the output of the (f-1)-th RSDB.

After local feature fusion within each RSDB, the feature map produced by the attention module is passed to the next RSDB for the same operation, and the features of designated layers in different basic blocks are stacked together to realize interleaved skip connections and effective feature extraction; the feature-map stack is expressed as [F_0, F_{1,LF}, …, F_{f,LF}], where F_0 is the feature map of the Conv layer in the model-adaptation residual block RB placed before the 1st RSDB (in the FFRN network structure described below, an RB block is introduced first so that it can feed the RSDB structure). Finally, local residual learning is introduced to further improve the information flow; F_{f,3} denotes the transition-layer function, implemented with a convolutional layer of kernel size a × a.
The attention mechanism module obtains features in two ways: a trunk branch, which can replace any preceding convolution module, and a mask branch, which processes the features to generate a soft mask that acts as a control gate over the new features, making the features more focused.
Suppose the input of the trunk branch is x, its output is T(x), and the mask output is M(x). The output of the attention module is then

H_{i,c}(x) = M_{i,c}(x) · T_{i,c}(x)

where i denotes the spatial position of the feature and c denotes the feature channel.
In the attention module, the mask serves not only as a feature selector in the forward inference process, but also as a gradient update filter in the backward propagation process:

∂(M(x, θ) T(x, φ)) / ∂φ = M(x, θ) ∂T(x, φ) / ∂φ

where θ is the parameter of the mask branch and φ is the parameter of the trunk branch. During backpropagation the mask parameters θ are not updated; only the trunk parameters φ are updated, since the derivative and the gradient are taken with respect to the trunk branch T(x, φ). The attention module is therefore highly robust to noise, and the influence of noise on gradient updates is effectively reduced; this property makes the network robust to noisy features. Each trunk branch is paired with its own mask branch that learns the feature mask for that stage, which makes the mask more targeted.
Since the mask values lie in [0, 1], repeatedly multiplying by the mask would weaken the features; therefore, drawing on the residual learning structure of ResNet, the output of the module is improved into a residual structure:

H_{i,c}(x) = (1 + M_{i,c}(x)) · T_{i,c}(x)

In a further improvement of the invention, the perceptual loss in step 4 is expressed as:

L_feat = Σ_j (W_j / (C_j H_j)) ‖ φ_j(I_s) − φ_j(I_o) ‖²

where φ_j(I_s) and φ_j(I_o) denote the features of the sparse CT image and the original CT image, respectively, extracted from the j-th layer of the VGG19 network, W_j is the weight coefficient of the j-th layer, and C_j and H_j denote the length and width of the j-th layer respectively.

The absolute-value loss mainly measures the difference between the pixel values of the neural network output image and the reference image; its formula is:

L_mae = (1 / (M N)) Σ_{k=1}^{M} Σ_{l=1}^{N} | I(k, l) − R(k, l) |

where M and N are the numbers of rows and columns of the image; the loss is the expectation of the absolute pixel difference between the convolutional neural network output image I and the reference image R, and I(k, l) and R(k, l) denote the pixels in row k, column l of the output image and the reference image, respectively.

The loss function of the network combines the absolute loss and the perceptual loss, expressed as:

L_loss = L_mae + λ L_feat

where λ is the weighting factor of the perceptual loss.
The invention has the following beneficial effects:
1. The method is based on perceptual loss; by using the perceptual loss together with the image feature loss as the loss function of the trained network model, the network loss is reduced and the quality of the output CT image is improved;
2. The invention combines a feature fusion attention residual network and adopts a residual learning approach: by subtracting the output CT image from the input sparse CT image with streak artifacts and using the streak-artifact image as the label, prediction of the streak artifacts is realized;
3. Compared with a conventional residual network, the network of the invention makes full use of local features, introduces an attention module, and learns more detailed information;
4. The invention does not change the image size during training, so the details and edge information of the image are fully preserved;
5. Compared with a conventional residual network, the network of the invention has a simple structure, consumes less memory, and requires less training time;
6. The images used for training may also be of other sizes, such as 256 × 256 or 512 × 512, which greatly extends the applicability of the network.
Drawings
FIG. 1 is a general flow diagram of a method of an embodiment of the invention.
Fig. 2 is an RB module of the present invention.
Fig. 3 is an RSDB block of the present invention.
FIG. 4 is an attention mechanism module of the present invention.
Fig. 5 is an overall structural view of the present invention.
Detailed Description
In order to make the objects, technical solutions and novel points of the present invention more clear, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
The invention discloses a method for removing sparse-angle CT artifacts, which is carried out through the following steps.
Step 1, preprocessing each image in a sparse CT and original CT data set to enable the size of each image to be the same and convert the image into a data type suitable for network training; the method comprises the following specific steps:
step 1.1, intercepting original CT data (dcm format) obtained from a hospital into the same size, wherein the pixel size is m × n, m and n are the length and height of an image respectively, and m is 512;
and 1.2, converting the adjusted dcm data into a sensor data type suitable for the Pythrch frame.
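A minimal preprocessing sketch of step 1, assuming a 512 × 512 target size and caller-supplied file paths; it is an illustration, not the patented implementation.

```python
import numpy as np
import pydicom
import torch
import torch.nn.functional as F

def load_dcm_as_tensor(path, size=(512, 512)):
    """Read a .dcm slice, resize it to `size`, and return a float32 torch tensor."""
    ds = pydicom.dcmread(path)                           # path is a hypothetical example argument
    img = ds.pixel_array.astype(np.float32)              # raw pixel matrix, shape (H, W)
    t = torch.from_numpy(img).unsqueeze(0).unsqueeze(0)  # shape (1, 1, H, W) for interpolation
    t = F.interpolate(t, size=size, mode="bilinear", align_corners=False)
    t = (t - t.min()) / (t.max() - t.min() + 1e-8)       # normalize to [0, 1]
    return t                                             # shape (1, 1, 512, 512)

# Example usage (file names are placeholders):
# sparse = load_dcm_as_tensor("sparse_case001.dcm")
# full   = load_dcm_as_tensor("full_case001.dcm")
```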
Step 2, extracting image characteristics of the preprocessed sparse angle CT data through a VGG19 network so as to facilitate subsequent training; the specific operation is as follows:
The preprocessed data are all single-channel medical images, while the input of the chosen VGG19 network is a three-channel color image, so a channel dimension must be added to the input image to fit the network. This is done as follows: three copies of the same image are stacked to form a three-channel (512, 512, 3) input, and feature extraction is carried out with the open-source pretrained VGG19 model (imagenet-vgg-verydeep-19).
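A sketch of step 2 under stated assumptions: the single-channel slice is replicated to three channels and passed through a pretrained torchvision VGG19 feature extractor. The cut-off layer index (35 here) and the omission of ImageNet normalization are assumptions for illustration only.

```python
import torch
import torchvision

vgg = torchvision.models.vgg19(pretrained=True).features.eval()  # ImageNet-pretrained convolutional part
for p in vgg.parameters():
    p.requires_grad = False                                       # VGG19 is used only as a fixed feature extractor

def vgg_features(x_1ch, upto=35):
    """x_1ch: tensor of shape (N, 1, H, W); returns VGG19 features of the replicated 3-channel input."""
    feats = x_1ch.repeat(1, 3, 1, 1)      # stack the same slice into three channels
    for i, layer in enumerate(vgg):
        feats = layer(feats)
        if i == upto:                     # stop at the assumed feature layer
            break
    return feats
```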
Step 3, constructing a feature fusion attention residual error network, wherein a residual error jumping dense block is used as a foundation of a network shallow layer in the network, a local feature fusion layer is positioned behind two convolution layers, the residual error jumping dense block jumps over the connection of the two local feature fusion layers, a feature fusion result of a former module is used as the input of a latter/next module, the features of local feature fusion are stacked, and residual error information is used for integrating feature information to form a basic network architecture;
the Residual Dense Block (RDB) can extract multi-level features in the original image, further improving network performance. RDB is a residual error dense block proposed in RDN, all layers of RDB are fully utilized by local dense connection, accumulated features are adaptively saved by Local Feature Fusion (LFF), and shallow features and deep features are combined together by global residual error learning to perform Global Feature Fusion (GFF) to obtain global dense features.
The RSDBs are connected to one another: local features are fused within each RSDB and finally passed to the subsequent residual block. The f-th RSDB directly introduces the local features of the (f-1)-th RSDB, or of the model-adaptation residual block RB when f = 1, and performs feature fusion; the local feature fusion is expressed as:

F_{f,LF} = [F_{f-1,LF}, F_{f,2}]

where F_{f,LF} denotes the local feature fusion result of the f-th RSDB, [·, ·] denotes the Concat feature-map stacking operation, F_{f-1,LF} denotes the local feature fusion result of the (f-1)-th RSDB, and F_{f,2} denotes the result of shallow feature extraction performed on the f-th RSDB input through two Conv layers, which can be expressed as:

F_{f,2} = δ(W_{f,2} · δ(W_{f,1} · F_{f-1} + b_1) + b_2)

where δ denotes the ReLU activation function, W_{f,2} and W_{f,1} denote the weights of the 2nd and 1st convolutional layers respectively, b_2 and b_1 denote the biases of the 2nd and 1st convolutional layers respectively, and F_{f-1} denotes the output of the (f-1)-th RSDB.

After local feature fusion within each RSDB, the feature map produced by the attention module is passed to the next RSDB for the same operation, and the features of designated layers in different basic blocks are stacked together to realize interleaved skip connections and effective feature extraction; the feature-map stack is expressed as [F_0, F_{1,LF}, …, F_{f,LF}], where F_0 is the feature map of the Conv layer in the model-adaptation residual block RB placed before the 1st RSDB (in the FFRN network structure described below, an RB block is introduced first so that it can feed the RSDB structure). Finally, local residual learning is introduced to further improve the information flow; F_{f,3} denotes the transition-layer function, which may use a convolutional layer with a 1 × 1 or 3 × 3 kernel, different kernel sizes serving different purposes.
The attention mechanism module obtains features in two ways: a trunk branch, which can replace any preceding convolution module, and a mask branch, which processes the features to generate a soft mask that acts as a control gate over the new features, making the features more focused.
Suppose the input of the trunk branch is x, its output is the trunk feature map T(x), and the output of the mask branch is M(x). The output of the attention module is then

H_{i,c}(x) = M_{i,c}(x) · T_{i,c}(x)

where i denotes the spatial position of the feature and c denotes the feature channel.
In the attention module, the mask serves not only as a feature selector in the forward inference process, but also as a gradient update filter in the backward propagation process:

∂(M(x, θ) T(x, φ)) / ∂φ = M(x, θ) ∂T(x, φ) / ∂φ

where θ is the parameter of the mask branch and φ is the parameter of the trunk branch. During backpropagation the mask parameters θ are not updated; only the trunk parameters φ are updated, since the derivative and the gradient are taken with respect to the trunk branch T(x, φ). The attention module is therefore highly robust to noise, and the influence of noise on gradient updates is effectively reduced; this property makes the network robust to noisy features. Each trunk branch is paired with its own mask branch that learns the feature mask for that stage, which makes the mask more targeted.
Since the mask values lie in [0, 1], repeatedly multiplying by the mask would weaken the features; therefore, drawing on the residual learning structure of ResNet, the output of the module is improved into a residual structure:

H_{i,c}(x) = (1 + M_{i,c}(x)) · T_{i,c}(x)
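A hedged sketch of the attention module just described: a trunk branch T(x), a mask branch M(x) squashed to [0, 1] by a sigmoid, and the residual output (1 + M(x)) · T(x). The layer counts and widths of the two branches are assumptions, not the patented design.

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.trunk = nn.Sequential(                      # trunk branch: stands in for a plain conv stage
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.mask = nn.Sequential(                       # mask branch: produces a soft mask in [0, 1]
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)
        return (1.0 + m) * t                             # residual attention: H = (1 + M) * T
```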
the convolutional neural network in this example is shown in fig. 2, where the Residual Block (RB) structure is: the first layer is a 3 × 3 convolutional layer, followed by a ReLU layer, then a 3 × 3 convolutional layer, a 3 × 1 convolutional layer, a 1 × 3 convolutional layer. As shown in fig. 3, the dense Residual Skip (RSDB) block structure is: the first layer is a 3 × 3 convolutional layer, followed by a ReLU layer, then a 3 × 3 convolutional layer, a ReLU layer, a tie layer, then a 3 × 1 convolutional layer, 1 × 3 convolutional layer. The receptive field is increased, and meanwhile, the calculation speed is accelerated. As shown in fig. 4, the attention module mechanism.
Step 4: with the sparse CT images as input, the original CT images as labels, and the perceptual loss as the loss function, train the feature fusion residual network. The context loss is computed from the perceptual difference between the sparse CT and original CT images on the features of the corresponding convolutional layers, and the pixel values of the reconstructed CT image are updated as the loss changes. The invention corrects the sparse CT image with an iterative optimization method: according to the artifact-removal result, the context loss function, the network parameters, and the ratio of feature sample sizes are adjusted as necessary, and network training is repeated until the artifacts are effectively removed.
The image restoration problem can be regarded as an image transformation task that accepts an input image and outputs the transformed image; examples in image processing include denoising, super-resolution reconstruction, and image colorization, where the input is a degraded low-quality image (noisy, low-resolution, or grayscale) and the output is a colored, high-resolution, high-quality image.
Traditional convolutional-neural-network loss functions are defined with per-pixel losses that compare the two images pixel by pixel; if two images are perceptually identical but their pixels differ, the pixel loss produces a large deviation. Perceptual loss instead compares images perceptually, based on high-level convolutional neural network features, rather than on a per-pixel basis.
The perceptual loss function is defined as follows:

L_feat = Σ_j (W_j / (C_j H_j)) ‖ φ_j(I_s) − φ_j(I_o) ‖²

where φ_j(I_s) and φ_j(I_o) denote the features of the sparse CT image and the original CT image, respectively, extracted from the j-th layer of the VGG19 network, W_j is the weight coefficient of the j-th layer, and C_j and H_j denote the length and width of the j-th layer respectively.
The absolute-value loss mainly measures the difference between the pixel values of the neural network output image and the reference image; its formula is:

L_mae = (1 / (M N)) Σ_{k=1}^{M} Σ_{l=1}^{N} | I(k, l) − R(k, l) |

where M and N are the numbers of rows and columns of the image; the loss is the expectation of the absolute pixel difference between the convolutional neural network output image I and the reference image R, and I(k, l) and R(k, l) denote the pixels in row k, column l of the output image and the reference image, respectively. Training the neural network with the MAE loss preserves image detail better.
The loss function of the network combines the absolute loss and the perceptual loss:

L_loss = L_mae + λ L_feat

where λ is the weighting factor of the perceptual loss.
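A sketch of the combined loss L_loss = L_mae + λ·L_feat, reusing the vgg_features sketch from step 2; the choice of feature layer and the λ value are assumptions, not values stated in the patent.

```python
import torch

def combined_loss(output, reference, feature_fn, lam=0.1):
    """output, reference: (N, 1, H, W) tensors; feature_fn: a feature extractor such as vgg_features above."""
    l_mae = torch.mean(torch.abs(output - reference))     # absolute-value (MAE) loss
    f_out = feature_fn(output)
    f_ref = feature_fn(reference)
    l_feat = torch.mean((f_out - f_ref) ** 2)             # perceptual loss on VGG19 features
    return l_mae + lam * l_feat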
This embodiment uses the PyTorch framework in a Python environment on a GeForce GTX 1080 Ti GPU. For network optimization, the invention uses the Adam optimizer and the nonlinear ReLU activation function. During network training, the number of training iterations can be set to 100 and the learning rate is set adaptively; to achieve accurate convergence, the step size is set to 2.
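A minimal training-loop sketch for step 4 under the settings above (Adam optimizer, 100 iterations); it assumes the combined_loss and vgg_features sketches defined earlier, and the model and data loader are placeholders supplied by the caller rather than the patented configuration.

```python
import torch

def train(model, loader, epochs=100, lr=1e-4):
    """model: the feature fusion attention residual network; loader yields (sparse_ct, original_ct) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for sparse_ct, original_ct in loader:
            optimizer.zero_grad()
            restored = model(sparse_ct)                               # artifact-removed prediction
            loss = combined_loss(restored, original_ct, vgg_features) # MAE + perceptual loss
            loss.backward()                                           # backpropagation through trunk branches
            optimizer.step()
```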
Step 5: test with sparse CT images of the same body part as input to obtain artifact-removed images, and compute the peak signal-to-noise ratio and structural similarity between the output CT images and the label CT images. The Peak Signal-to-Noise Ratio (PSNR) is defined as follows:
MSE = (1 / (M N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (im(i, j) − ref(i, j))²

PSNR = 10 · log10(L² / MSE)

where M and N denote the length and width of the image, L is the maximum gray level of the image, im is the image under test, and ref is the reference image. PSNR measures the difference between the test image im and the reference image ref: the larger the PSNR value, the smaller the difference between the two images, i.e. the less noise in the test image im.
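A sketch of the PSNR computation defined above; L is the maximum gray level, so for images normalized to [0, 1] it would be 1.0.

```python
import numpy as np

def psnr(im, ref, L=1.0):
    """im, ref: numpy arrays of identical shape."""
    mse = np.mean((im.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                               # identical images
    return 10.0 * np.log10(L ** 2 / mse)
```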
The Structural Similarity Index Measure (SSIM) is defined as follows:

SSIM(im, ref) = L(im, ref) × C(im, ref) × S(im, ref)

L(im, ref) = (2 u_im u_ref + C1) / (u_im² + u_ref² + C1)

C(im, ref) = (2 σ_im σ_ref + C2) / (σ_im² + σ_ref² + C2)

S(im, ref) = (σ_{im,ref} + C3) / (σ_im σ_ref + C3)

where u_im and u_ref denote the means of the test image and the reference image, σ_im and σ_ref denote their standard deviations, σ_{im,ref} denotes the covariance of the test image and the reference image, and C1, C2 and C3 are constants. The structural similarity compares the similarity between the test image im and the reference image ref from three aspects: image luminance, contrast and structure. Its value lies between 0 and 1, and the higher the structural similarity, the more similar the test image is to the reference image.
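A simplified, global-statistics sketch of the SSIM formula above; practical SSIM implementations compute these statistics over local windows, and the C1, C2, C3 values below follow the common convention and are assumptions here.

```python
import numpy as np

def ssim_global(im, ref, L=1.0):
    im = im.astype(np.float64)
    ref = ref.astype(np.float64)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    c3 = c2 / 2.0
    mu_im, mu_ref = im.mean(), ref.mean()
    sd_im, sd_ref = im.std(), ref.std()
    cov = ((im - mu_im) * (ref - mu_ref)).mean()
    lum = (2 * mu_im * mu_ref + c1) / (mu_im ** 2 + mu_ref ** 2 + c1)   # luminance term
    con = (2 * sd_im * sd_ref + c2) / (sd_im ** 2 + sd_ref ** 2 + c2)   # contrast term
    struct = (cov + c3) / (sd_im * sd_ref + c3)                         # structure term
    return lum * con * struct
```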
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A sparse-angle CT artifact removal method, characterized in that it comprises the following steps:
Step 1, preprocessing the sparse-angle CT data with streak artifacts and the original CT data into dcm-format images of a specified size;
Step 2, extracting image features from the preprocessed sparse-angle CT data through a VGG19 network for use in subsequent training;
Step 3, constructing a feature fusion attention residual network;
Step 4, training with the sparse CT images as input, the original CT images as labels, and the perceptual loss as the loss function, to obtain a trained feature fusion attention residual network model;
Step 5, testing with sparse CT images of the same body part as input to obtain artifact-removed images, and computing the peak signal-to-noise ratio and structural similarity between the output CT images and the label CT images.
2. The sparse-angle CT artifact removal method according to claim 1, characterized in that step 1 specifically comprises the following steps:
Step 1.1, resizing the head sparse CT images and the original CT images in the data set so that each image has a resolution of m × n, where m and n are the length and height of the image respectively, and both the sparse CT images and the original CT images are in dcm format;
Step 1.2, converting the resized images of step 1.1 into the tensor data type used by the PyTorch framework.
3. The sparse-angle CT artifact removal method according to claim 1, characterized in that in step 2, three copies of the same image are stacked to form a three-channel input compatible with the VGG19 network, and features are extracted from the preprocessed sparse CT image and the original CT image with a pretrained VGG19 model, wherein the sparse CT image is used for artifact-removal training and the original CT image is used for streak-artifact prediction training.
4. The sparse-angle CT artifact removal method according to claim 1, characterized in that the specific operation of constructing the feature fusion attention residual network in step 3 is as follows: a plurality of residual skip dense blocks (RSDBs) are connected in sequence, local features are fused within each RSDB and passed to the subsequent residual block, and the f-th RSDB directly introduces the local features of the (f-1)-th RSDB, or of the model-adaptation residual block RB when f = 1, and fuses them with F_{f,2}; the local feature fusion is expressed as:

F_{f,LF} = [F_{f-1,LF}, F_{f,2}]

where F_{f,LF} denotes the local feature fusion result of the f-th RSDB, [·, ·] denotes the Concat feature-map stacking operation, F_{f-1,LF} denotes the local feature fusion result of the (f-1)-th RSDB, and F_{f,2} denotes the result of shallow feature extraction performed on the f-th RSDB input through two Conv layers:

F_{f,2} = δ(W_{f,2} · δ(W_{f,1} · F_{f-1} + b_1) + b_2)

where δ denotes the ReLU activation function, W_{f,2} and W_{f,1} denote the weights of the 2nd and 1st convolutional layers respectively, b_2 and b_1 denote the biases of the 2nd and 1st convolutional layers respectively, and F_{f-1} denotes the output of the (f-1)-th RSDB;

after local feature fusion within each RSDB, the feature map produced by the attention module is passed to the next RSDB for the same operation, and the features of designated layers in different basic blocks are stacked together to realize interleaved skip connections and effective feature extraction, the feature-map stack being expressed as [F_0, F_{1,LF}, …, F_{f,LF}], where F_0 is the feature map of the Conv layer in the model-adaptation residual block RB placed before the 1st RSDB (in the FFRN network structure, an RB block is introduced first so that it can feed the RSDB structure); finally, local residual learning is introduced to further improve the information flow, and F_{f,3} denotes the transition-layer function, implemented with a convolutional layer of kernel size a × a.
5. The sparse-angle CT artifact removal method according to claim 4, characterized in that: supposing the input of the trunk branch is x, its output is T(x), and the mask output is M(x), the output of the attention module is:

H_{i,c}(x) = M_{i,c}(x) · T_{i,c}(x)

where i denotes the spatial position of the feature and c denotes the feature channel;

when the mask is used as a gradient update filter in the backpropagation process:

∂(M(x, θ) T(x, φ)) / ∂φ = M(x, θ) ∂T(x, φ) / ∂φ

during backpropagation the parameter θ of the mask branch is not updated and only the trunk parameter φ is updated, where θ is the parameter of the mask branch and φ is the parameter of the trunk branch;

since the mask values lie in [0, 1], following the residual learning structure of ResNet, the output of the module is the residual structure:

H_{i,c}(x) = (1 + M_{i,c}(x)) · T_{i,c}(x).
6. The sparse-angle CT artifact removal method according to claim 1, characterized in that the perceptual loss in step 4 is expressed as:

L_feat = Σ_j (W_j / (C_j H_j)) ‖ φ_j(I_s) − φ_j(I_o) ‖²

where φ_j(I_s) and φ_j(I_o) denote the features of the sparse CT image and the original CT image, respectively, extracted from the j-th layer of the VGG19 network, W_j is the weight coefficient of the j-th layer, and C_j and H_j denote the length and width of the j-th layer respectively;

the absolute-value loss mainly measures the difference between the pixel values of the neural network output image and the reference image, and its formula is:

L_mae = (1 / (M N)) Σ_{k=1}^{M} Σ_{l=1}^{N} | I(k, l) − R(k, l) |

where M and N are the numbers of rows and columns of the image; the loss is the expectation of the absolute pixel difference between the convolutional neural network output image I and the reference image R, and I(k, l) and R(k, l) denote the pixels in row k, column l of the output image and the reference image respectively;

the loss function of the network combines the absolute loss and the perceptual loss, expressed as:

L_loss = L_mae + λ L_feat

where λ is the weighting factor of the perceptual loss.
CN202210242057.7A 2022-03-11 2022-03-11 Sparse angle CT artifact removing method Pending CN114596378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210242057.7A CN114596378A (en) 2022-03-11 2022-03-11 Sparse angle CT artifact removing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210242057.7A CN114596378A (en) 2022-03-11 2022-03-11 Sparse angle CT artifact removing method

Publications (1)

Publication Number Publication Date
CN114596378A true CN114596378A (en) 2022-06-07

Family

ID=81809137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210242057.7A Pending CN114596378A (en) 2022-03-11 2022-03-11 Sparse angle CT artifact removing method

Country Status (1)

Country Link
CN (1) CN114596378A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871332A (en) * 2017-11-09 2018-04-03 南京邮电大学 A kind of CT based on residual error study is sparse to rebuild artifact correction method and system
WO2021042270A1 (en) * 2019-09-03 2021-03-11 中山大学 Compression artifacts reduction method based on dual-stream multi-path recursive residual network
CN114004912A (en) * 2021-11-08 2022-02-01 南京邮电大学 CBCT image artifact removing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHIPENG XIE: "Artifact Removal in Sparse-Angle CT Based on Feature Fusion Residual Network", IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 5, no. 2, 9 June 2020 (2020-06-09), pages 261-271 *
WU Congzhong; CHEN Xi; JI Dong; ZHAN Shu: "Image denoising combining deep residual learning and perceptual loss" (in Chinese), Journal of Image and Graphics, no. 10, 16 October 2018 (2018-10-16)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115103118A (en) * 2022-06-20 2022-09-23 北京航空航天大学 High dynamic range image generation method, device, equipment and readable storage medium
CN115103118B (en) * 2022-06-20 2023-04-07 北京航空航天大学 High dynamic range image generation method, device, equipment and readable storage medium
CN118351210A (en) * 2024-06-17 2024-07-16 南昌睿度医疗科技有限公司 CT image artifact removal method, system, storage medium and electronic equipment
CN118351210B (en) * 2024-06-17 2024-08-27 南昌睿度医疗科技有限公司 CT image artifact removal method, system, storage medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination