CN111870245B - Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method - Google Patents


Info

Publication number
CN111870245B
CN111870245B (application CN202010627841.0A)
Authority
CN
China
Prior art keywords
magnetic resonance
nuclear magnetic
contrast
image
attention network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010627841.0A
Other languages
Chinese (zh)
Other versions
CN111870245A (en)
Inventor
杨燕 (Yang Yan)
孙剑 (Sun Jian)
王娜 (Wang Na)
杨鹤然 (Yang Heran)
徐宗本 (Xu Zongben)
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202010627841.0A
Publication of CN111870245A
Application granted
Publication of CN111870245B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 33/00 Arrangements or instruments for measuring magnetic variables
    • G01R 33/20 Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R 33/44 Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R 33/48 NMR imaging systems
    • G01R 33/4818 MR characterised by data acquisition along a specific k-space trajectory or by the temporal order of k-space coverage, e.g. centric or segmented coverage of k-space
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 33/00 Arrangements or instruments for measuring magnetic variables
    • G01R 33/20 Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R 33/44 Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R 33/48 NMR imaging systems
    • G01R 33/54 Signal processing systems, e.g. using pulse sequences; Generation or control of pulse sequences; Operator console
    • G01R 33/56 Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R 33/5608 Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Signal Processing (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Pathology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method, which reconstructs a high-quality nuclear magnetic resonance image from highly undersampled k-space data acquired by nuclear magnetic resonance imaging equipment under the guidance of a cross-contrast image. The method comprises four steps: construction of a cross-contrast-guided image reconstruction model, construction of a model-driven deep attention network, training of the deep attention network, and application to ultra-fast nuclear magnetic resonance imaging. The network parameters are trained on multiple groups of k-space undersampled data, the corresponding reconstructions from fully sampled data, and guide nuclear magnetic resonance images, so that the output of the network approximates the fully sampled reconstruction as closely as possible. In application, k-space undersampled data and a guide nuclear magnetic resonance image are input, and the network outputs the reconstructed high-quality nuclear magnetic resonance image.

Description

Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method
Technical field:
The invention belongs to the field of medical nuclear magnetic resonance imaging, and particularly relates to a cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method for reconstructing a high-quality nuclear magnetic resonance image from highly undersampled k-space data acquired by nuclear magnetic resonance equipment under the guidance of a cross-contrast image.
Background art:
Nuclear magnetic resonance imaging is a leading, noninvasive biomedical imaging technology. A typical magnetic resonance imaging protocol acquires multiple contrast sequences of the same anatomy, which provide complementary information that enhances clinical diagnosis. However, an important limitation of the technology is its long data acquisition time and slow imaging speed.
Three types of existing fast nuclear magnetic resonance imaging techniques are briefly described below.
Compressive sensing nuclear magnetic resonance is the mainstream approach to accelerating nuclear magnetic resonance imaging: sample data are undersampled in k-space, and a clear nuclear magnetic resonance image is then reconstructed from the small amount of sampled data. Traditional model-based compressive sensing methods rely on regularization derived from image priors, such as total variation regularization [1,2], wavelet regularization [2,3], non-local regularization [4,5], and dictionary learning [6,7]. However, manually designing an optimal regularization is challenging. In recent years, deep learning has been widely applied to compressive sensing nuclear magnetic resonance: based on training data, a deep neural network learns a mapping from a low-quality image reconstructed from undersampled data to a high-quality reconstructed image [8,9,10,11,12,13]. However, these compressive sensing methods reconstruct nuclear magnetic resonance images using only a single contrast sequence (e.g., T1WI or T2WI), and their acceleration factor is limited.
Another way to accelerate nuclear magnetic resonance imaging is to generate the missing contrast from fully sampled data of other contrasts. Such methods either learn dictionaries or sparse representations mapping source-contrast image blocks to the target contrast [14,15], or learn a mapping from source contrast to target contrast directly with a deep neural network [16,17,18,19]. However, these methods have low reconstruction accuracy.
In clinical applications, some contrast sequences, such as T1WI, have acquisition times short enough to allow full sampling, while others, such as T2WI and FLAIR, have longer acquisition times and are accelerated by undersampling. Recently, Xiang et al. [20] proposed a cross-contrast-guided nuclear magnetic resonance image reconstruction method to accelerate imaging: a Dense-Unet fuses the undersampled T2WI reconstruction with the fully sampled T1WI guide image and outputs a reconstructed T2WI image. However, that network does not incorporate the nuclear magnetic resonance imaging mechanism or domain knowledge into its architecture, lacks interpretability, has no module specifically designed for the fusion problem, and its reconstruction quality is limited.
Reference documents:
[1] Block K T, Uecker M, Frahm J. Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint[J]. Magnetic Resonance in Medicine, 57(6):1086-1098, 2007.
[2] Huang J, Chen C, Axel L. Fast multi-contrast MRI reconstruction[J]. Magnetic Resonance Imaging, 32(10):1344-1352, 2014.
[3] Lustig M, Donoho D, Pauly J M. Sparse MRI: The application of compressed sensing for rapid MR imaging[J]. Magnetic Resonance in Medicine, 58(6):1182-1195, 2007.
[4] Eksioglu E M. Decoupled algorithm for MRI reconstruction using nonlocal block matching model: BM3D-MRI[J]. Journal of Mathematical Imaging and Vision, 56(3):430-440, 2016.
[5] Qu X, Hou Y, Lam F, et al. Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator[J]. Medical Image Analysis, 18(6):843-856, 2014.
[6] Ravishankar S, Bresler Y. MR image reconstruction from highly undersampled k-space data by dictionary learning[J]. IEEE Transactions on Medical Imaging, 30(5):1028-1041, 2010.
[7] Zhan Z, Cai J F, Guo D, et al. Fast multiclass dictionary learning with geometrical directions in MRI reconstruction[J]. IEEE Transactions on Biomedical Engineering, 63(9):1850-1861, 2015.
[8] Wang S, Su Z, Ying L, et al. Accelerating magnetic resonance imaging via deep learning[C]. In International Symposium on Biomedical Imaging (ISBI). IEEE: 514-517, 2016.
[9] Lee D, Yoo J, Ye J C. Deep residual learning for compressed sensing MRI[C]. In International Symposium on Biomedical Imaging (ISBI). IEEE: 15-18, 2017.
[10] Hammernik K, Klatzer T, Kobler E, et al. Learning a variational network for reconstruction of accelerated MRI data[J]. Magnetic Resonance in Medicine, 79(6):3055-3071, 2018.
[11] Meng N, Yang Y, Xu Z, et al. A prior learning network for joint image and sensitivity estimation in parallel MR imaging[C]. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, Cham: 732-740, 2019.
[12] Schlemper J, Caballero J, Hajnal J V, et al. A deep cascade of convolutional neural networks for dynamic MR image reconstruction[J]. IEEE Transactions on Medical Imaging, 37(2):491-503, 2017.
[13] Yang Y, Sun J, Li H, Xu Z. Deep ADMM-Net for compressive sensing MRI[C]. Advances in Neural Information Processing Systems (NIPS), 10-18, 2016.
[14] Huang Y, Shao L, Frangi A F. Cross-modality image synthesis via weakly coupled and geometry co-regularized joint dictionary learning[J]. IEEE Transactions on Medical Imaging, 37(3):815-827, 2017.
[15] Roy S, Carass A, Prince J L. Magnetic resonance image example-based contrast synthesis[J]. IEEE Transactions on Medical Imaging, 32(12):2348-2363, 2013.
[16] Chartsias A, Joyce T, Giuffrida M V, et al. Multimodal MR synthesis via modality-invariant latent representation[J]. IEEE Transactions on Medical Imaging, 37(3):803-814, 2017.
[17] Dar S U H, Yurt M, Karacan L, et al. Image synthesis in multi-contrast MRI with conditional generative adversarial networks[J]. IEEE Transactions on Medical Imaging, 38(10):2375-2388, 2019.
[18] Joyce T, Chartsias A, Tsaftaris S A. Robust multi-modal MR image synthesis[C]. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, Cham: 347-355, 2017.
[19] Li H, Paetzold J C, Sekuboyina A, et al. DiamondGAN: Unified multi-modal generative adversarial networks for MRI sequences synthesis[C]. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, Cham: 795-803, 2019.
[20] Xiang L, Chen Y, Chang W, et al. Ultra-fast T2-weighted MR reconstruction using complementary T1-weighted information[C]. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, Cham: 215-223, 2018.
Summary of the invention:
the invention aims to provide a cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method aiming at the defects and shortcomings of the existing fast nuclear magnetic resonance imaging technology. The invention reconstructs high quality nuclear magnetic resonance images from highly k-space undersampled data acquired by nuclear magnetic resonance imaging equipment under guidance of cross-contrast images. Because the acquired k-space data is highly undersampled and the data volume is far less than the full-sampling data volume, the nuclear magnetic resonance imaging equipment can realize ultra-fast imaging speed and simultaneously needs a reconstruction algorithm to achieve high nuclear magnetic resonance imaging precision.
In order to achieve the purpose, the invention adopts the following technical scheme to realize the purpose:
a cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method comprises the following steps:
1) constructing a cross-contrast-guided ultra-fast nuclear magnetic resonance image reconstruction model: constructing a reconstruction model based on a nuclear magnetic resonance imaging mechanism and a compressive sensing theory;
2) constructing a model-driven deep attention network, wherein the construction method of the deep attention network comprises the following steps: developing and abstracting the calculation process of the half-quadratic splitting iterative algorithm for optimizing the ultra-fast nuclear magnetic resonance image reconstruction model into a depth attention network;
3) model-driven deep attention network training process: based on a training data set, learning the optimal parameters of the deep attention network by using an Adam optimization algorithm, so that the network output of the deep attention network when the highly undersampled data are input approaches a nuclear magnetic resonance image reconstructed by corresponding fully-sampled data;
4) performing a nuclear magnetic resonance imaging process by using the trained model-driven deep attention network: and inputting the k-space height undersampled data and the cross-contrast guided nuclear magnetic resonance image, wherein the output of the depth attention network is the reconstructed nuclear magnetic resonance image.
The invention is further improved in that, in step 1), the cross-contrast-guided ultra-fast nuclear magnetic resonance image reconstruction model is a compressed sensing reconstruction model comprising a k-space undersampled-data consistency term that models the nuclear magnetic resonance imaging mechanism and a cross-contrast prior term that models the correlation between the two contrast images.
The invention is further improved in that, in step 2), the proximal operator for image fusion and the image-reconstruction operation of the half-quadratic splitting iterative algorithm determine a model-driven deep attention network composed of cross-contrast fusion blocks and image reconstruction blocks.
The invention is further improved in that, in step 3), the training data set \Gamma consists of multiple data groups, each comprising k-space undersampled data y_s, the image x_s^{gt} reconstructed from the corresponding fully sampled data, and the cross-contrast guide nuclear magnetic resonance image x_g. The objective function E(\Theta) for training the deep attention network is defined as

E(\Theta) = \frac{1}{|\Gamma|} \sum_{(y_s, x_s^{gt}, x_g) \in \Gamma} \| \hat{x}_s(y_s, x_g; \Theta) - x_s^{gt} \|_2^2,

where |\Gamma| is the number of elements in the training data set, \hat{x}_s(y_s, x_g; \Theta) is the output image of the deep attention network, and \Theta are the network parameters. The gradient of the objective function with respect to the network parameters is computed by back-propagation, and the parameters are then optimized with the Adam algorithm over the training data set to obtain the optimal parameters \Theta^*.
The invention is further improved in that the deep attention network in step 4) is applied as follows: during nuclear magnetic resonance imaging, k-space undersampled data and fully sampled data for the cross-contrast guide image are collected by the nuclear magnetic resonance equipment; the k-space undersampled data and the guide image are then fed into the trained model-driven deep attention network, whose output is the reconstructed nuclear magnetic resonance image.
The invention has at least the following beneficial technical effects:
the invention provides a cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method, which can reconstruct a high-quality nuclear magnetic resonance image from k-space undersampled data acquired by nuclear magnetic resonance imaging equipment under the guidance of cross-contrast images. Compared with the existing compressed sensing nuclear magnetic resonance method only considering a single-contrast image (such as a model-based compressed sensing nuclear magnetic resonance method and a deep learning-based compressed sensing nuclear magnetic resonance method), the method can achieve higher undersampling times, and has higher reconstruction precision and similar reconstruction speed. Compared with the image generation method, the quality of the nuclear magnetic resonance image reconstructed by the method is remarkably improved. Compared with the existing cross-contrast image guided nuclear magnetic resonance reconstruction depth attention network, the method has better interpretability and higher reconstruction precision.
In conclusion, the invention can be mainly used for realizing the rapid imaging function in the nuclear magnetic resonance imaging equipment and has important application value for the research, development and production of the nuclear magnetic resonance imaging equipment.
Description of the drawings:
FIG. 1 is a flow chart of an embodiment of the present invention.
FIG. 2 is a diagram of a model-driven deep attention network architecture.
Fig. 3 is a diagram of a cross-contrast fusion block network architecture.
Fig. 4 is a diagram of an example of nuclear magnetic resonance image reconstruction (1-dimensional Cartesian sampling, 1/32 sampling rate).
Detailed description of embodiments:
In order to make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and specific examples. These examples are merely illustrative and do not limit the invention.
As shown in Fig. 1, the cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method provided by the invention includes the following steps.
the cross-contrast guided ultra-fast nuclear magnetic resonance image reconstruction model construction comprises the following steps:
given a nuclear magnetic resonance image to be reconstructed
Figure BDA0002567206370000071
(e.g., T2WI) of k-space sampled data
Figure BDA0002567206370000072
Cross-contrast guided nuclear magnetic resonance image reconstructed by full sampling
Figure BDA0002567206370000073
(e.g., T1WI) where N and U represent cardinality of the MRI image and k-space sampled data and U < N, based on MRI mechanisms and compressive sensing theory, the present invention designs the following MRI reconstruction models:
Figure BDA0002567206370000074
wherein
Figure BDA0002567206370000075
Is a Fourier transform, and the Fourier transform is,
Figure BDA0002567206370000076
is a sampling matrix of k-space. The first term is the data item which constrains the reconstructed nuclear magnetic resonance image x in k-spacesWith its undersampled data ysData consistency between. The second term is a cross-contrast prior term that models the magnetic resonance image xsAnd a guide image xgThe correlation between them.
Model-driven deep attention network construction:
the reconstruction model (1) can be efficiently solved by a semi-quadratic splitting algorithm. Introducing auxiliary nuclear magnetic resonance image zsLet z bes=xsThe reconstruction model (1) is then equivalent to optimizing the energy model as follows:
Figure BDA0002567206370000077
with a penalty factor ρ → ∞ during optimization. The energy model (2) can be estimated by alternating iterations to be unknownVariable zsAnd xsTo minimize. And solving the following two subproblems aiming at the k-th iteration.
Estimating z_s by a proximal operator (subproblem 1): given the nuclear magnetic resonance image x_s^{(k-1)} generated at iteration k-1, the auxiliary image z_s is updated under the guidance of the guide image x_g by

z_s^{(k)} = \arg\min_{z_s} f(z_s, x_g) + \tfrac{\rho}{2} \| x_s^{(k-1)} - z_s \|_2^2.   (3)

Defining the proximal operator of a regularizer g(\cdot) as \mathrm{prox}_g(v) = \arg\min_z g(z) + \tfrac{1}{2} \| v - z \|_2^2, the auxiliary image can be updated through this operator:

z_s^{(k)} = \mathrm{prox}_{f(\cdot,\, x_g)/\rho}\big( x_s^{(k-1)} \big),   (4)

where the proximal operator is a nonlinear mapping determined by the cross-contrast prior f(x_s, x_g): under the guidance of the image x_g it maps the input image x_s^{(k-1)} to z_s^{(k)}. The image x_s is initialized by zero-filling: x_s^{(0)} = F^H M^\top y_s.
estimating x by image reconstructions(subproblem 2): given the k stepIteratively updated auxiliary nuclear magnetic resonance image
Figure BDA0002567206370000081
Reconstructed nuclear magnetic resonance image xsThe update may be by:
Figure BDA0002567206370000082
this subproblem has a closed form solution:
Figure BDA0002567206370000083
wherein Λ ═ MHM + diag (p) is a diagonal matrix.
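Because Λ is diagonal under the binary-mask representation of M, the x-update in (6) reduces to an element-wise division in k-space. The sketch below runs the half-quadratic splitting loop with the identity map standing in for the learned fusion block of equation (4) — an illustrative assumption, not the patent's code, since the real z-update is the trained network.

```python
import numpy as np

def fft2c(x):
    return np.fft.fft2(x, norm="ortho")

def ifft2c(k):
    return np.fft.ifft2(k, norm="ortho")

def x_update(z, y_s, mask, rho):
    # Closed form (6): x = F^H Lambda^{-1} (M^H y_s + rho F z), where
    # Lambda = M^H M + rho I is diagonal in k-space for a binary mask.
    return ifft2c((y_s + rho * fft2c(z)) / (mask + rho))

def hqs(y_s, mask, prox, rho=0.5, iters=10, growth=2.0):
    # Alternate the z-update (eq. (4); here `prox` stands in for the
    # learned fusion block) and the x-update (eq. (6)), growing rho -> inf.
    x = ifft2c(y_s)                      # zero-filling initialization
    for _ in range(iters):
        z = prox(x)
        x = x_update(z, y_s, mask, rho)
        rho *= growth
    return x
```

With full sampling (a mask of ones) and the identity prox, the loop returns the exact image; the method's gains at high undersampling come from replacing the prox with the cross-contrast fusion block.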
The invention unrolls and abstracts the above iterative computations (equations (4) and (6)) into a deep attention network, i.e., the model-driven deep attention network. As shown in Fig. 2, the whole network comprises K stages; each stage contains two modules, a cross-contrast fusion block (F-B) and an image reconstruction block (R-B), which implement the updates of z_s in equation (4) and of x_s in equation (6), respectively.
Rather than hand-crafting the cross-contrast prior f(x_s, x_g), the invention replaces the proximal operator in equation (4) with a learnable cross-contrast fusion block network:

z_s^{(k)} = \text{F-B}\big( x_s^{(k-1)}, x_g \big),

thereby learning the cross-contrast prior indirectly. The fusion block takes the two contrast images to be fused as input and outputs the updated auxiliary nuclear magnetic resonance image. As shown in Fig. 3, it is designed as a dual-path deep convolutional neural network containing an encoding block, a feature fusion block, and a decoding block. The encoding block extracts features from the guide image x_g and from the reconstructed image x_s^{(k-1)} of iteration k-1 with two sub-encoders, each consisting of one convolution and four cascaded sub-block networks. Features from corresponding layers of the two contrast-specific encoders are then concatenated and fused by a decoding network comprising one convolution, five cascaded sub-blocks, a multi-scale feature fusion (feature concatenation), two convolutions, and a Sigmoid operator; the decoder outputs the updated auxiliary image z_s^{(k)}.
Each sub-block network is designed with a skip connection, a channel attention (CA) module, and a spatial attention (SA) module. The two attention modules are described in detail below.
Channel attention (CA): the channel attention module makes the network focus on the important channel features from the different contrasts. First, the feature F is converted into a channel descriptor G by channel-wise average pooling:

G_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} F_c(i, j),

where F_c is the c-th channel of the feature and H and W are its height and width. The descriptor G then passes through the cascaded layers convolution → ReLU → convolution → Sigmoid, yielding per-channel weights W_{CA}. The weights are applied to the input features by element-wise multiplication:

\hat{F}_c = (W_{CA})_c \cdot F_c.
Spatial attention (SA): the spatial attention module makes the network focus on important spatial regions of the image, such as high-frequency and heavily artifacted regions. The feature F passes through the cascaded layers convolution → ReLU → convolution → Sigmoid, yielding per-pixel weights W_{SA}, which are applied to the input features by element-wise multiplication:

\hat{F} = W_{SA} \odot F.
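The two attention modules can be sketched in NumPy. This is an illustrative reduction, not the patent's exact design: the convolutions acting on the channel descriptor and on the spatial map are shown as 1×1 convolutions (plain matrix maps), and the weight matrices W1, W2, w1, w2 are hypothetical parameters.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def channel_attention(F, W1, W2):
    # CA: channel-wise average pooling -> conv -> ReLU -> conv -> sigmoid,
    # then rescale each channel. F: (C, H, W); W1: (Cr, C); W2: (C, Cr).
    G = F.mean(axis=(1, 2))                       # G_c = mean_{i,j} F_c(i,j)
    w_ca = sigmoid(W2 @ np.maximum(W1 @ G, 0.0))  # per-channel weights in (0,1)
    return w_ca[:, None, None] * F

def spatial_attention(F, w1, w2):
    # SA: conv -> ReLU -> conv -> sigmoid over the spatial map, then
    # rescale every pixel. w1: (C2, C) mixes channels; w2: (1, C2).
    h = np.maximum(np.tensordot(w1, F, axes=(1, 0)), 0.0)  # (C2, H, W)
    w_sa = sigmoid(np.tensordot(w2, h, axes=(1, 0)))[0]    # (H, W)
    return w_sa[None, :, :] * F
```

Because the Sigmoid keeps every weight in (0, 1), both modules can only attenuate features: attention selects channels and regions rather than amplifying them.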
model-driven deep attention network training process:
to determine the optimal parameters of the model-driven depth attention network, the present invention constructs a training data set for a cross-contrast image-guided magnetic resonance imaging problem. The training dataset consists of a plurality of data sets, each data set consisting of k-space undersampled data of T2WI, a corresponding fully sampled data reconstructed image, and a guided nuclear magnetic resonance image of T1 WI. In practical construction, the invention firstly uses T2WI data of the nuclear magnetic resonance imaging equipment under the full sampling setting to reconstruct a nuclear magnetic resonance image corresponding to the full sampling data
Figure BDA0002567206370000094
The fully sampled data is then undersampled to obtain corresponding k-space sampled data ys. Reconstructing an image from k-space full-sampled data
Figure BDA0002567206370000095
As a standard reconstructed image, k-space sampling data y thereofsAnd a guide image xgAs the input of the depth attention network, the k-space sampling data, the standard reconstruction image and the guide image form a set of training data
Figure BDA0002567206370000096
Many sets of such training data form the deep attention network training set Γ.
The invention trains the network with a mean square error (MSE) loss:

E(\Theta) = \frac{1}{|\Gamma|} \sum_{(y_s, x_s^{gt}, x_g) \in \Gamma} \big\| \hat{x}_s(y_s, x_g; \Theta) - x_s^{gt} \big\|_2^2,

where \Gamma is the training data set, |\Gamma| the number of its elements, y_s the k-space undersampled data acquired by the nuclear magnetic resonance imaging equipment, x_s^{gt} the nuclear magnetic resonance image reconstructed from the corresponding fully sampled data, x_g the guide nuclear magnetic resonance image, \hat{x}_s(y_s, x_g; \Theta) the output of the model-driven deep attention network, and \Theta the network parameters. The gradient of the objective with respect to the network parameters is computed by back-propagation, and the parameters are then optimized with the Adam algorithm over the training set to obtain the optimal parameters \Theta^*.
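The Adam step named in the text can be written down generically. Below is a minimal NumPy Adam applied to a toy least-squares MSE problem standing in for E(\Theta); in the actual method the gradient would come from back-propagation through the deep attention network, and the learning-rate settings here are illustrative assumptions.

```python
import numpy as np

def adam(grad, theta0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    # Standard Adam: exponential moving averages of the gradient (m) and its
    # square (v), with bias correction, driving a normalized update.
    theta = np.array(theta0, dtype=float)
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy stand-in for E(Theta): mean-squared error of a linear model A @ theta ~ b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
theta_true = np.array([1.0, -2.0, 0.5])
b = A @ theta_true
mse = lambda th: np.mean((A @ th - b) ** 2)
mse_grad = lambda th: 2.0 * A.T @ (A @ th - b) / len(b)
theta_star = adam(mse_grad, np.zeros(3))
```

The same update rule applies unchanged to the network parameters \Theta once `grad` is supplied by back-propagation.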
Performing ultra-fast nuclear magnetic resonance imaging by using the trained model-driven deep attention network:
through the training process of the third step, the optimal parameters of the model-driven depth attention network can be determined, k-space undersampled data and a guided nuclear magnetic resonance image are input based on the trained depth attention network, and the output of the depth attention network is the reconstructed high-precision nuclear magnetic resonance image. Because the deep attention network parameter training process in the step three enables the output image of the deep attention network to be as close to the standard nuclear magnetic resonance image as possible, and the input of the deep attention network comprises the guide image of the full-sampling reconstruction, the trained deep attention network can still obtain a high-quality reconstructed image under the condition of extremely low data sampling rate.
In a numerical experiment, 379 fully sampled multi-contrast brain nuclear magnetic resonance reconstructions were used; the T2WI modality images were undersampled in k-space at different sampling rates, yielding the 379 brain data sets required for the experiments. 190 sets were selected as training data and 189 for testing. The k-space sampling pattern is 1-dimensional Cartesian sampling at rates of 1/8, 1/16, and 1/32. In training, the invention uses 2-dimensional slices to train the deep attention network, while in testing it measures reconstruction accuracy on 3-dimensional volumes of size 240 × 240 × 115. For objective evaluation of the different methods, the mean reconstruction accuracy over the test set is measured by normalized root mean square error (nRMSE) and peak signal-to-noise ratio (PSNR).
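The two evaluation metrics can be stated directly. A common convention is assumed here (nRMSE normalized by the reference norm, PSNR using the reference dynamic range); the text does not spell out its exact normalization, so treat these as illustrative definitions:

```python
import numpy as np

def nrmse(x_hat, x_ref):
    # Normalized root mean square error: ||x_hat - x_ref|| / ||x_ref||.
    return np.linalg.norm(x_hat - x_ref) / np.linalg.norm(x_ref)

def psnr(x_hat, x_ref, data_range=None):
    # Peak signal-to-noise ratio in dB; peak taken as the reference range.
    if data_range is None:
        data_range = float(x_ref.max() - x_ref.min())
    mse = np.mean(np.abs(x_hat - x_ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Lower nRMSE and higher PSNR indicate better reconstruction; both are averaged over the test volumes.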
The model-driven deep attention network (MD-DAN) of the present invention was compared with other methods at different sampling rates, as shown in Table 1. The compared methods include methods that reconstruct T2WI from the undersampled T2WI modality (T2 → T2): Zero-Filling, DC-CNN, Dense-Unet-R, ResNet-R, DPA-FusionNet-R; methods that synthesize T2WI from the T1WI modality (T1 → T2): Dense-Unet-S, ResNet-S, DPA-FusionNet-S; and methods that generate T2WI from the undersampled T2WI modality together with the guiding T1WI modality (T1 + T2 → T2): FCSA-MAT, Dense-Unet, ResNet. Here Dense-Unet-R(S) and ResNet-R(S) are based on Dense-Unet (2 upsampling operations and 5 dense blocks) and ResNet (9 convolutional residual blocks), respectively. DPA-FusionNet-R(S) is a single-modality variant of the cross-contrast fusion block network (DPA-FusionNet) in Fig. 3, denoting the upper path ("-R") or lower path ("-S") without cross-contrast fusion. The deep attention network designed by the invention achieves the best reconstruction accuracy at every sampling rate, with a large improvement over the other methods. Fig. 4 shows the visualization of the reconstructed images; the method of the present invention achieves good reconstruction quality without noticeable artifacts.
Table 1: Comparison at different sampling rates on the brain data test set
(The table is reproduced as an image in the original document.)

Claims (4)

1. A cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method is characterized by comprising the following steps:
1) constructing a cross-contrast-guided ultra-fast nuclear magnetic resonance image reconstruction model: the reconstruction model is constructed on the basis of the nuclear magnetic resonance imaging mechanism and compressed sensing theory; the cross-contrast-guided ultra-fast nuclear magnetic resonance image reconstruction model is a compressed sensing reconstruction model comprising a k-space undersampled data consistency term that models the nuclear magnetic resonance imaging mechanism and a cross-contrast prior term that models the correlation between the two contrast images;
2) constructing a model-driven deep attention network, wherein the deep attention network is constructed as follows: the computation of the half-quadratic splitting iterative algorithm used to optimize the ultra-fast nuclear magnetic resonance image reconstruction model is unrolled and abstracted into a deep attention network;
3) model-driven deep attention network training process: based on a training data set, the optimal parameters of the deep attention network are learned with the Adam optimization algorithm, so that, given highly undersampled input data, the network output approaches the nuclear magnetic resonance image reconstructed from the corresponding fully-sampled data;
4) performing nuclear magnetic resonance imaging with the trained model-driven deep attention network: the k-space highly undersampled data and the cross-contrast-guided nuclear magnetic resonance image are input, and the output of the deep attention network is the reconstructed nuclear magnetic resonance image.
2. The cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method according to claim 1, wherein the proximal operator operation for image fusion and the image reconstruction operation in the half-quadratic splitting iterative algorithm of step 2) determine a model-driven deep attention network composed of cross-contrast fusion blocks and image reconstruction blocks.
3. The cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method according to claim 1, wherein in step 3) the training data set Γ consists of a plurality of data groups, each group comprising k-space undersampled data y_s, the corresponding image x̄ reconstructed from fully-sampled data, and the cross-contrast-guided nuclear magnetic resonance image x_g; the objective function E(Θ) for training the deep attention network is defined as:

E(Θ) = (1/|Γ|) Σ_{(y_s, x̄, x_g) ∈ Γ} ‖x_out(y_s, x_g; Θ) − x̄‖²

wherein |Γ| denotes the number of elements in the training data set, x_out(y_s, x_g; Θ) is the image output by the deep attention network, and Θ denotes the parameters of the deep attention network; the gradient of the objective function with respect to the deep attention network parameters is computed by a back-propagation algorithm, and the parameters are then optimized with the Adam algorithm over the training data set to obtain the optimal parameters Θ*.
4. The cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method according to claim 3, wherein the deep attention network of step 4) is applied as follows: during nuclear magnetic resonance imaging, the k-space undersampled data and the fully-sampled data of the cross-contrast-guided nuclear magnetic resonance image are acquired by the nuclear magnetic resonance equipment; the k-space undersampled data and the guiding nuclear magnetic resonance image are then fed into the trained model-driven deep attention network, and the image output by the deep attention network is the reconstructed nuclear magnetic resonance image.
CN202010627841.0A 2020-07-02 2020-07-02 Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method Active CN111870245B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010627841.0A CN111870245B (en) 2020-07-02 2020-07-02 Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method

Publications (2)

Publication Number Publication Date
CN111870245A CN111870245A (en) 2020-11-03
CN111870245B CN111870245B (en) 2022-02-11

Family

ID=73149922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010627841.0A Active CN111870245B (en) 2020-07-02 2020-07-02 Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method

Country Status (1)

Country Link
CN (1) CN111870245B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950644B (en) * 2021-03-17 2024-04-05 西北大学 Neonatal brain image segmentation method and model construction method based on deep learning
CN113192151B (en) * 2021-04-08 2022-12-27 广东工业大学 MRI image reconstruction method based on structural similarity
CN113359077A (en) * 2021-06-08 2021-09-07 苏州深透智能科技有限公司 Magnetic resonance imaging method and related equipment
CN113476029B (en) * 2021-06-25 2024-02-02 陕西尚品信息科技有限公司 Nuclear magnetic resonance imaging method based on compressed sensing
CN113920213B (en) * 2021-09-27 2022-07-05 深圳技术大学 Multi-layer magnetic resonance imaging method and device based on long-distance attention model reconstruction
CN113842134B (en) * 2021-11-09 2024-04-12 清华大学 Double-sequence acceleration nuclear magnetic imaging optimization method based on double-path artificial neural network
CN116597037B (en) * 2023-05-22 2024-06-04 厦门大学 Physical generation data-driven rapid magnetic resonance intelligent imaging method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103505207A (en) * 2012-06-18 2014-01-15 山东大学威海分校 Fast and effective dynamic MRI method based on compressive sensing technology
CN106373167A (en) * 2016-11-15 2017-02-01 西安交通大学 Compressed sensing nuclear magnetic resonance imaging method based on deep neural network
CN108090871A (en) * 2017-12-15 2018-05-29 厦门大学 A kind of more contrast MR image reconstruction methods based on convolutional neural networks
CN109493394A (en) * 2018-10-26 2019-03-19 上海东软医疗科技有限公司 Method, method for reconstructing and the device of magnetic resonance imaging acquisition deep learning training set
CN110151181A (en) * 2019-04-16 2019-08-23 杭州电子科技大学 Rapid magnetic resonance imaging method based on the U-shaped network of recurrence residual error
CN111028306A (en) * 2019-11-06 2020-04-17 杭州电子科技大学 AR2U-Net neural network-based rapid magnetic resonance imaging method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10096109B1 (en) * 2017-03-31 2018-10-09 The Board Of Trustees Of The Leland Stanford Junior University Quality of medical images using multi-contrast and deep learning
US11029381B2 (en) * 2018-01-12 2021-06-08 Korea Advanced Institute Of Science And Technology Method for varying undersampling dimension for accelerating multiple-acquisition magnetic resonance imaging and device for the same


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A deep information sharing network for multi-contrast compressed sensing MRI reconstruction; Liyan Sun et al.; IEEE Transactions on Image Processing; 20191231; Vol. 28, No. 12; pp. 6141-6153 *
ADMM-CSNet: a deep learning approach for image compressive sensing; Yan Yang et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 20181128; Vol. 42, No. 3; pp. 521-538 *
Deep learning for fast MR imaging: a review for learning reconstruction from incomplete k-space data; Shanshan Wang et al.; Invited Review, Biomedical Signal Processing and Control; 20191231; p. 5, right column, paragraph 3 *
Learning non-locally regularized compressed sensing network with half-quadratic splitting; Yubao Sun et al.; IEEE Transactions on Multimedia; 20200214; Vol. 22, No. 12; see p. 3237, left column, paragraph 2 *
Model-driven deep attention network for ultra-fast compressive sensing MRI guided by cross-contrast MR image; Yan Yang et al.; MICCAI 2020; 20201007 *
Research on multi-contrast magnetic resonance imaging techniques; Ma Gaochao; China Master's Theses Full-text Database, Medicine and Health Sciences; 20190815 (No. 8); E060-155 *


Similar Documents

Publication Publication Date Title
CN111870245B (en) Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
Pawar et al. Suppressing motion artefacts in MRI using an Inception‐ResNet network with motion simulation augmentation
CN104933683B (en) A kind of non-convex low-rank method for reconstructing for magnetic resonance fast imaging
Cole et al. Unsupervised MRI reconstruction with generative adversarial networks
WO2020114329A1 (en) Fast magnetic resonance parametric imaging and device
Sudarshan et al. Joint PET-MRI image reconstruction using a patch-based joint-dictionary prior
CN114450599B (en) Maxwell Wei Binghang imaging
Fan et al. A segmentation-aware deep fusion network for compressed sensing mri
Liu et al. Online deep equilibrium learning for regularization by denoising
CN114913262B (en) Nuclear magnetic resonance imaging method and system with combined optimization of sampling mode and reconstruction algorithm
Golbabaee et al. Compressive MRI quantification using convex spatiotemporal priors and deep encoder-decoder networks
CN117011673B (en) Electrical impedance tomography image reconstruction method and device based on noise diffusion learning
Cheng et al. Model-based deep medical imaging: the roadmap of generalizing iterative reconstruction model using deep learning
Lei et al. Deep unfolding convolutional dictionary model for multi-contrast MRI super-resolution and reconstruction
He et al. Deep frequency-recurrent priors for inverse imaging reconstruction
CN117333750A (en) Spatial registration and local global multi-scale multi-modal medical image fusion method
Gan et al. Block coordinate plug-and-play methods for blind inverse problems
Corda-D'Incan et al. Syn-net for synergistic deep-learned PET-MR reconstruction
Levac et al. Accelerated motion correction with deep generative diffusion models
CN113052840B (en) Processing method based on low signal-to-noise ratio PET image
CN115471580A (en) Physical intelligent high-definition magnetic resonance diffusion imaging method
CN114926559A (en) PET reconstruction method based on dictionary learning thought attenuation-free correction
Guan et al. MRI reconstruction using deep energy-based model
Alkan et al. Learning to sample MRI via variational information maximization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant