CN113192150B - Magnetic resonance interventional image reconstruction method based on cyclic neural network - Google Patents

Magnetic resonance interventional image reconstruction method based on cyclic neural network Download PDF

Info

Publication number
CN113192150B
CN113192150B (application CN202010077426.2A, publication CN113192150A)
Authority
CN
China
Prior art keywords
image
neural network
output
loss
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010077426.2A
Other languages
Chinese (zh)
Other versions
CN113192150A (en)
Inventor
冯原
赵睿洋
杜一平
梁志培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202010077426.2A priority Critical patent/CN113192150B/en
Publication of CN113192150A publication Critical patent/CN113192150A/en
Application granted granted Critical
Publication of CN113192150B publication Critical patent/CN113192150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 11/006: Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/10: Image enhancement or restoration using non-spatial domain filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20056: Discrete and fast Fourier transform [DFT, FFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

A magnetic resonance interventional image reconstruction method based on a recurrent neural network. A recurrent neural network reconstructs five consecutive frames of undersampled magnetic resonance images; the input of the trained network is a fully sampled preoperative reference image together with the five consecutive undersampled frames, and the output is the reconstructed images. The invention enables fast acquisition and real-time reconstruction and yields images whose quality meets the requirements of surgical navigation.

Description

Magnetic resonance interventional image reconstruction method based on cyclic neural network
Technical Field
The invention relates to a technology in the field of image processing, in particular to a magnetic resonance interventional image reconstruction method based on a recurrent neural network.
Background
Magnetic resonance images offer good soft-tissue contrast and a variety of imaging methods, providing an important basis for current image-guided surgery and interventional procedures. However, magnetic resonance acquisition is slow, making real-time image acquisition and reconstruction during the intraoperative procedure difficult. Existing fast magnetic resonance acquisition and reconstruction methods include: parallel imaging and short-TR acquisition methods; partial k-space acquisition, such as the keyhole method; non-Cartesian k-space acquisition, such as radial and spiral acquisition; compressed sensing methods; and machine learning methods.
These techniques have drawbacks, however: the acceleration factors of conventional parallel imaging and short-TR acquisition cannot meet the requirements of real-time imaging; partial k-space acquisition and the keyhole method yield low reconstruction resolution and poor image quality; under highly undersampled non-Cartesian k-space acquisition the reconstructed image quality is poor; compressed sensing reconstruction is too slow to meet real-time requirements; and most machine-learning algorithms target the reconstruction of structural magnetic resonance images and cannot meet the fast-reconstruction requirements of interventional imaging.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a magnetic resonance interventional image reconstruction method based on a recurrent neural network, which achieves fast acquisition and real-time reconstruction of images and obtains images whose quality meets navigation requirements.
The invention is realized by the following technical scheme:
The invention uses a recurrent neural network to reconstruct five consecutive frames of undersampled magnetic resonance images; the input of the trained recurrent neural network is a fully sampled preoperative reference image together with the five consecutive undersampled frames, and the output is the reconstructed images.
Technical effects
The invention as a whole solves the following technical problem: during real-time magnetic-resonance-guided intervention, where both high temporal resolution and high spatial resolution are required, how to reconstruct interventional images quickly from limited k-space sampling information.
Compared with the prior art, the invention acquires and reconstructs magnetic resonance interventional images at a high undersampling rate and high imaging speed; by exploiting the temporal structure of the interventional process, it makes full use of the interventional background and preceding-frame information, achieving fast whole-image acquisition and reconstruction of interventional images with good reconstruction quality.
Drawings
FIG. 1(a) is a schematic diagram of a neural network architecture; FIG. 1(b) is a schematic diagram of the information flow of the long-short term memory layer.
Fig. 2 shows the detailed parameters of each part of the neural network; each Conv-LSTM module contains the same number of input convolutional layers, long short-term memory layers, and output convolutional layers, indicated by the number in the module.
FIG. 3 is a schematic diagram illustrating a reconstruction result of five consecutive frames of interventional images according to an embodiment;
in the figure: (a) ground truth; (b) reconstruction by the NUFFT method; (c) reconstruction by the GRASP method; (d) reconstruction by the DAGAN method; (e) the method of the present invention.
Detailed Description
In magnetic resonance image reconstruction the undersampled acquired signal is y = F_u x + e, where x is the fully sampled image, F_u is the undersampled Fourier encoding matrix, and e is the acquisition noise. Recovering x from y is therefore an ill-posed problem, and reconstructing y by zero-filling yields an aliased undersampled image x_un.
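The forward model y = F_u x + e and the zero-filled reconstruction can be sketched as follows; this is a minimal NumPy illustration in which the Cartesian line mask and the noise level are illustrative assumptions (the patent's embodiment uses radial trajectories, which would require gridding/NUFFT instead):

```python
import numpy as np

def undersample_and_zero_fill(x, mask, noise_std=0.01, seed=0):
    """Simulate y = F_u x + e and the zero-filled reconstruction x_un.

    x    : complex image (H, W), the fully sampled ground truth
    mask : boolean k-space sampling mask (H, W); True = acquired sample
    """
    rng = np.random.default_rng(seed)
    k_full = np.fft.fftshift(np.fft.fft2(x))          # full k-space F x
    e = noise_std * (rng.standard_normal(x.shape) +
                     1j * rng.standard_normal(x.shape))
    y = mask * (k_full + e)                           # undersampled, noisy data
    x_un = np.fft.ifft2(np.fft.ifftshift(y))          # zero-filled (aliased) image
    return y, x_un

# Toy example: keep every 4th phase-encode line of a random "image"
x = np.random.rand(64, 64) + 1j * np.random.rand(64, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True
y, x_un = undersample_and_zero_fill(x, mask)
```

Because three quarters of k-space is zeroed, x_un is aliased and differs visibly from x, which is exactly the ill-posedness the network is trained to resolve.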
From prior knowledge, background information is correlated across different undersampled images, so the frames are temporally consistent, and a neural network based on convolutional long short-term memory modules (Conv-LSTM) can exploit this temporal consistency. The recurrent neural network of this embodiment therefore reconstructs five consecutive frames of undersampled magnetic resonance images: the input of the trained network is a fully sampled preoperative reference image x_ref and five consecutive undersampled frames {x_un^(1), ..., x_un^(5)}, and the output is the reconstructed images {x_rec^(1), ..., x_rec^(5)}.
As shown in Fig. 1, the recurrent neural network is based on U-net modules with embedded convolution-long short-term memory (Conv-LSTM) modules that exploit the consistency of background information between image frames. It specifically comprises an initial-function module (Initializer) that receives the preoperative reference image and five U-net modules that receive the five undersampled frames respectively; the Initializer outputs deep features of the reference image, and each U-net module fuses the undersampled image information at the current time with historical image information carried by the Conv-LSTM modules, outputting the reconstructed image at the current time.
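The data flow just described can be sketched with stub functions; `initializer` and `unet_step` below are hypothetical stand-ins (simple arithmetic placeholders, not the real 8-layer Initializer or the 15-convolution U-net) that only demonstrate how the Conv-LSTM states are seeded from the reference image and carried across the five frames:

```python
import numpy as np

def initializer(x_ref, n_modules=7):
    # Hypothetical stand-in for the convolutional Initializer: derive one
    # initial (state, output) pair per Conv-LSTM module from the reference
    # image (the real network uses the outputs of conv layers 2-8).
    return [(np.tanh(x_ref), np.tanh(x_ref)) for _ in range(n_modules)]

def unet_step(x_un, states):
    # Hypothetical stand-in for one U-net pass: fuse the current
    # undersampled frame with the carried Conv-LSTM states, return a
    # "reconstruction" plus the updated states.
    new_states = [(0.5 * c + 0.5 * x_un, 0.5 * h + 0.5 * x_un)
                  for c, h in states]
    x_rec = np.mean(np.stack([h for _, h in new_states]), axis=0)
    return x_rec, new_states

x_ref = np.random.rand(32, 32)                       # preoperative reference
frames = [np.random.rand(32, 32) for _ in range(5)]  # x_un at 5 time points
states = initializer(x_ref)
recons = []
for x_un in frames:          # recurrence: states flow between time points
    x_rec, states = unet_step(x_un, states)
    recons.append(x_rec)
```

The key design point is visible in the loop: the five U-net passes are not independent, since each one reads and updates the states produced by its predecessor.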
As shown in fig. 2, the initial function module (Initializer) includes 8 convolutional layers, the preoperative reference image is used as input, and the outputs of 2 nd to 8 th convolutional layers are used as initial states of 7 convolutional-long short-term memory modules:
Figure GDA0003489635930000023
The U-net module comprises 15 convolution modules and 7 convolution-long short-term memory modules (Conv-LSTM Block), where the 7 Conv-LSTM modules are constructed in the form of a residual network comprising a residual path and a direct-mapping path. Each Conv-LSTM module consists of an input convolutional layer, a long short-term memory layer (LSTM), and an output convolutional layer connected in sequence; the LSTM layer and the output convolutional layer sit on the residual path, and the output of the residual path is added to the output of the input convolutional layer via the direct-mapping path to form the output of the Conv-LSTM module.
As shown in Fig. 1(b), at time t the k-th long short-term memory layer in the U-net module integrates the output of the input convolutional layer, x̃_t^(k), together with the state C_{t-1}^(k) and output H_{t-1}^(k) of the Conv-LSTM module at the previous time, to obtain the state C_t^(k) and output H_t^(k) at the current time; specifically:

i_t = σ(W_xi * x̃_t^(k) + W_hi * H_{t-1}^(k) + b_i)
f_t = σ(W_xf * x̃_t^(k) + W_hf * H_{t-1}^(k) + b_f)
o_t = σ(W_xo * x̃_t^(k) + W_ho * H_{t-1}^(k) + b_o)
C_t^(k) = f_t ∘ C_{t-1}^(k) + i_t ∘ tanh(W_xc * x̃_t^(k) + W_hc * H_{t-1}^(k) + b_c)
H_t^(k) = o_t ∘ tanh(C_t^(k))

where σ is the sigmoid function, * denotes convolution, and ∘ denotes element-wise multiplication.
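A minimal NumPy sketch of one such update, assuming the standard Conv-LSTM gate equations (single channel, 3x3 kernels, biases omitted; the kernel shapes and the `conv2d_same` helper are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def conv2d_same(x, w):
    # 'Same'-padded 2-D cross-correlation (the deep-learning "convolution")
    # of a single-channel map x with a k x k kernel w.
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, p)
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x_t, h_prev, c_prev, W):
    # One Conv-LSTM update: (x~_t, H_{t-1}, C_{t-1}) -> (H_t, C_t).
    # W holds one kernel per gate for the input and hidden streams.
    i = sigmoid(conv2d_same(x_t, W['xi']) + conv2d_same(h_prev, W['hi']))
    f = sigmoid(conv2d_same(x_t, W['xf']) + conv2d_same(h_prev, W['hf']))
    o = sigmoid(conv2d_same(x_t, W['xo']) + conv2d_same(h_prev, W['ho']))
    g = np.tanh(conv2d_same(x_t, W['xg']) + conv2d_same(h_prev, W['hg']))
    c_t = f * c_prev + i * g            # new cell state C_t
    h_t = o * np.tanh(c_t)              # new output H_t
    return h_t, c_t

rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((3, 3))
     for k in ('xi', 'hi', 'xf', 'hf', 'xo', 'ho', 'xg', 'hg')}
x_t = rng.standard_normal((8, 8))
h, c = np.zeros((8, 8)), np.zeros((8, 8))
h, c = convlstm_step(x_t, h, c, W)
```

Because the gates are convolutions rather than dense matrices, each spatial location's memory is updated from its own neighborhood, which is what lets the module carry per-pixel background information across frames.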
Training comprises pre-training and joint training. Pre-training: the residual paths of the U-net modules in the recurrent neural network are masked, i.e. their outputs are set to 0, and the direct-mapping paths of the network are pre-trained. Joint training: the residual paths are unmasked, i.e. added back, and the combination of the residual modules and the direct-mapping paths is trained together.
As shown in Fig. 3(a), the pre-training and joint-training samples are 1500 sets of coronal brain magnetic resonance images from 23 patients; for each set, simulation generates images at 5 time points with different interventional-feature positions.
The loss function used for training is a weighted sum of the image-domain mean square error loss, the frequency-domain mean square error loss, the perceptual loss, and the adversarial loss of a generative adversarial network; the image-domain and frequency-domain mean square errors ensure consistency with the reference (fully sampled) image in the image domain and in the frequency domain. Let y_output be the image reconstructed by the neural network, y_groundtruth the corresponding fully sampled image, and F the Fourier transform. The mean square error loss in the image domain is:

L_iMSE = ||y_output - y_groundtruth||_2^2

The mean square error loss in the frequency domain is:

L_fMSE = ||F(y_output) - F(y_groundtruth)||_2^2

The perceptual loss uses a VGG model pre-trained on ImageNet to enforce consistency of deep image features:

L_VGG = ||VGG(y_output) - VGG(y_groundtruth)||_2^2

The adversarial loss uses the generator-side loss of a classifier placed after the generator, to better restore fine details of the generated image. The classifier structure is shown in Fig. 2; with D the classifier, L_GEN = -log(D(y_output)).

The total loss function is L_total = α·L_iMSE + β·L_fMSE + γ·L_VGG + L_GEN, with α = 15, β = 0.25, γ = 0.002.
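The weighted loss can be sketched as follows (NumPy); the perceptual and adversarial terms are passed in as precomputed scalars because they require a pretrained VGG network and a discriminator, which are outside this sketch:

```python
import numpy as np

def image_mse(y_out, y_gt):
    # L_iMSE: mean square error in the image domain
    return np.mean(np.abs(y_out - y_gt) ** 2)

def freq_mse(y_out, y_gt):
    # L_fMSE: mean square error between 2-D Fourier transforms
    return np.mean(np.abs(np.fft.fft2(y_out) - np.fft.fft2(y_gt)) ** 2)

def total_loss(y_out, y_gt, vgg_loss, adv_loss,
               alpha=15.0, beta=0.25, gamma=0.002):
    # L_total = alpha*L_iMSE + beta*L_fMSE + gamma*L_VGG + L_GEN
    return (alpha * image_mse(y_out, y_gt)
            + beta * freq_mse(y_out, y_gt)
            + gamma * vgg_loss
            + adv_loss)

y_gt = np.random.rand(16, 16)
y_out = y_gt + 0.05 * np.random.randn(16, 16)
loss = total_loss(y_out, y_gt, vgg_loss=0.0, adv_loss=0.0)
```

Note how small the gamma weight is relative to alpha: the pixel-wise fidelity terms dominate, with the perceptual and adversarial terms acting only as detail-sharpening regularizers.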
In testing, the method is first compared with traditional compressed-sensing approaches, namely non-uniform fast Fourier transform (NUFFT) and golden-angle radial sparse parallel reconstruction (GRASP). It is also compared with the same convolutional network with the Conv-LSTM modules masked, to verify that the Conv-LSTM modules effectively exploit background information during reconstruction. The main comparison metrics are peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and reconstruction time. PSNR and SSIM are standard image-reconstruction evaluation metrics; reconstruction time is compared because real-time surgical navigation places demands on the speed of the reconstruction algorithm. The final reconstruction comparison is shown in Fig. 3, and the corresponding numerical evaluation metrics are shown in Table 1.
TABLE 1 Comparison of evaluation metrics for GRASP, the Conv-LSTM-masked network, and the proposed method

                    GRASP    LSTM-masked    Proposed
  Computation time  15 s     7.25 ms        23.5 ms
  PSNR              4.8      18.5           20.1
  SSIM              0.04     0.62           0.74
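The two quality metrics in Table 1 can be sketched as follows (NumPy); the SSIM here is a simplified single-window version rather than the usual sliding-window implementation, but it shows the structure of both formulas:

```python
import numpy as np

def psnr(y_out, y_gt, data_range=1.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((y_out - y_gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(y_out, y_gt, data_range=1.0):
    # Single-window SSIM: compares luminance, contrast and structure
    # over the whole image, with the standard constants c1, c2.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = y_out.mean(), y_gt.mean()
    var_x, var_y = y_out.var(), y_gt.var()
    cov = np.mean((y_out - mu_x) * (y_gt - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

A uniform error of 0.1 on a unit-range image, for instance, gives a PSNR of 20 dB, which is close to the proposed method's score in Table 1.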
In practical experiments, training was performed on a Tesla P100 GPU (16 GB), and the algorithm's running speed and reconstructed image quality were measured on 188 sets of images from 6 patients not included in the training set, yielding the data in Table 1.
Compared with the prior art, the method achieves good interventional-image reconstruction quality, with better peak signal-to-noise ratio (PSNR) and structural similarity (SSIM); it performs end-to-end image reconstruction, completing interventional-image reconstruction in a single pass; and it can be applied directly to clinical interventional image reconstruction.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (8)

1. A magnetic resonance interventional image reconstruction method based on a recurrent neural network is characterized in that the recurrent neural network is adopted to reconstruct five continuous frames of undersampled magnetic resonance images, the input of the trained recurrent neural network is a fully sampled preoperative reference image and five continuous frames of undersampled images, and the output is a reconstructed image;
the preoperative reference image is a conventional magnetic resonance T1- or T2-weighted structural image, and the acquisition slice and position of the preoperative reference image are consistent with the slice and position selected during the intervention;
the undersampling refers to: continuously acquiring k-space during the interventional process using a radial acquisition method, and binning the acquired k-space information according to the reconstruction time point of each image;
the recurrent neural network is based on a U-net module, a convolution-long short-term memory Conv-LSTM module is embedded in the recurrent neural network so as to utilize consistency of background information between different image frames, and the recurrent neural network specifically comprises: an initial function module for receiving a preoperative reference image and five U-net modules for receiving five undersampled images, respectively, wherein: the initial function module outputs deep features of the reference image, and the five U-net modules respectively integrate undersampled image information at the current moment and historical image information brought by the Conv-LSTM module and output a reconstructed image at the current moment.
2. The magnetic resonance interventional image reconstruction method of claim 1, wherein the undersampled acquired signal is y = F_u x + e, where x is the fully sampled image, F_u is the undersampled Fourier encoding matrix, and e is the acquisition noise;
the input of the trained recurrent neural network is a fully sampled preoperative reference image x_ref and five consecutive undersampled frames {x_un^(1), ..., x_un^(5)}, and the output is the reconstructed images {x_rec^(1), ..., x_rec^(5)}.
3. The method of claim 1, wherein the initial function module comprises 8 convolutional layers; it takes the preoperative reference image as input and uses the outputs of convolutional layers 2 to 8 as the initial states of the 7 convolution-long short-term memory modules;
the U-net module comprises 15 convolution modules and 7 convolution-long short-term memory modules, where the 7 Conv-LSTM modules are constructed in the form of a residual network comprising a residual path and a direct-mapping path, and each Conv-LSTM module consists of an input convolutional layer, a long short-term memory layer, and an output convolutional layer connected in sequence.
4. The magnetic resonance interventional image reconstruction method of claim 3, wherein the long short-term memory layer and the output convolutional layer are disposed on the residual path, and the output of the residual path is added to the output of the input convolutional layer via the direct-mapping path to form the output of the convolution-long short-term memory module; specifically, the k-th long short-term memory layer at time t integrates the output of the input convolutional layer x̃_t^(k) with the state C_{t-1}^(k) and output H_{t-1}^(k) of the Conv-LSTM module at the previous time to obtain the current state C_t^(k) and output H_t^(k):

i_t = σ(W_xi * x̃_t^(k) + W_hi * H_{t-1}^(k) + b_i)
f_t = σ(W_xf * x̃_t^(k) + W_hf * H_{t-1}^(k) + b_f)
o_t = σ(W_xo * x̃_t^(k) + W_ho * H_{t-1}^(k) + b_o)
C_t^(k) = f_t ∘ C_{t-1}^(k) + i_t ∘ tanh(W_xc * x̃_t^(k) + W_hc * H_{t-1}^(k) + b_c)
H_t^(k) = o_t ∘ tanh(C_t^(k))

where σ is the sigmoid function, * denotes convolution, and ∘ denotes element-wise multiplication.
5. The method of claim 1, wherein the training comprises pre-training and joint training; pre-training: the residual paths of the U-net modules in the recurrent neural network are masked, i.e. their outputs are set to 0, and the direct-mapping paths of the network are pre-trained; joint training: the residual paths are unmasked, i.e. added back, and the combination of the residual modules and the direct-mapping paths is trained together.
6. The method of claim 5, wherein the pre-training and joint-training samples are 1500 sets of coronal brain magnetic resonance images from 23 patients; for each set, simulation generates images at 5 time points with different interventional-feature positions.
7. The method of claim 1 or 5, wherein the loss function used in training is a weighted sum of the image-domain mean square error loss, the frequency-domain mean square error loss, the perceptual loss, and the adversarial loss of a generative adversarial network;
the mean square error loss in the image domain is: L_iMSE = ||y_output - y_groundtruth||_2^2;
the mean square error loss in the frequency domain is: L_fMSE = ||F(y_output) - F(y_groundtruth)||_2^2;
where y_output is the image reconstructed by the neural network, y_groundtruth is the corresponding fully sampled image, and F is the Fourier transform;
the perceptual loss uses a VGG model pre-trained on ImageNet to enforce consistency of deep image features: L_VGG = ||VGG(y_output) - VGG(y_groundtruth)||_2^2;
the adversarial loss uses the generator-side loss of a classifier placed after the generator, to better restore fine details of the generated image: L_GEN = -log(D(y_output)), where D is the classifier.
8. The method of claim 7, wherein the loss function is L_total = α·L_iMSE + β·L_fMSE + γ·L_VGG + L_GEN, with α = 15, β = 0.25, γ = 0.002.
CN202010077426.2A 2020-01-29 2020-01-29 Magnetic resonance interventional image reconstruction method based on cyclic neural network Active CN113192150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010077426.2A CN113192150B (en) 2020-01-29 2020-01-29 Magnetic resonance interventional image reconstruction method based on cyclic neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010077426.2A CN113192150B (en) 2020-01-29 2020-01-29 Magnetic resonance interventional image reconstruction method based on cyclic neural network

Publications (2)

Publication Number Publication Date
CN113192150A CN113192150A (en) 2021-07-30
CN113192150B true CN113192150B (en) 2022-03-15

Family

ID=76972527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010077426.2A Active CN113192150B (en) 2020-01-29 2020-01-29 Magnetic resonance interventional image reconstruction method based on cyclic neural network

Country Status (1)

Country Link
CN (1) CN113192150B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113866695B (en) * 2021-10-12 2022-08-26 上海交通大学 Image acquisition and reconstruction method and system for magnetic resonance real-time guidance intervention

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629816A (en) * 2018-05-09 2018-10-09 复旦大学 The method for carrying out thin layer MR image reconstruction based on deep learning
CN109584244A (en) * 2018-11-30 2019-04-05 安徽海浪智能技术有限公司 A kind of hippocampus dividing method based on Sequence Learning
CN109871808A (en) * 2019-02-21 2019-06-11 天津惊帆科技有限公司 Atrial fibrillation model training and detecting method and device
CN109872377A (en) * 2019-02-28 2019-06-11 上海交通大学 Brain tissue fast imaging and image rebuilding method for magnetic resonance navigation
CN109993809A (en) * 2019-03-18 2019-07-09 杭州电子科技大学 Rapid magnetic resonance imaging method based on residual error U-net convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11030744B2 (en) * 2018-06-26 2021-06-08 Astrazeneca Computational Pathology Gmbh Deep learning method for tumor cell scoring on cancer biopsies

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629816A (en) * 2018-05-09 2018-10-09 复旦大学 The method for carrying out thin layer MR image reconstruction based on deep learning
CN109584244A (en) * 2018-11-30 2019-04-05 安徽海浪智能技术有限公司 A kind of hippocampus dividing method based on Sequence Learning
CN109871808A (en) * 2019-02-21 2019-06-11 天津惊帆科技有限公司 Atrial fibrillation model training and detecting method and device
CN109872377A (en) * 2019-02-28 2019-06-11 上海交通大学 Brain tissue fast imaging and image rebuilding method for magnetic resonance navigation
CN109993809A (en) * 2019-03-18 2019-07-09 杭州电子科技大学 Rapid magnetic resonance imaging method based on residual error U-net convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ODE-based Deep Network for MRI Reconstruction; Ali Pour Yazdanpanah et al.; arXiv:1912.12325v1; 2019-12-27; pp. 1-4 *
Reducing Navigators in Free-Breathing Abdominal MRI via Temporal Interpolation Using Convolutional Neural Networks; Neerav Karani et al.; IEEE Transactions on Medical Imaging; 2018-10-31; pp. 2333-2343 *
Research on Fast Magnetic Resonance Imaging Based on Deep Learning; Hu Yuan; China Master's Theses Full-text Database; 2020-01-15; E060-181 *

Also Published As

Publication number Publication date
CN113192150A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
Knoll et al. Deep-learning methods for parallel magnetic resonance imaging reconstruction: A survey of the current approaches, trends, and issues
CN108335339B (en) Magnetic resonance reconstruction method based on deep learning and convex set projection
JP6998218B2 (en) MR imaging with motion detection
CN113096208B (en) Reconstruction method of neural network magnetic resonance image based on double-domain alternating convolution
US10852376B2 (en) Magnetic resonance imaging method and device
CN112150568A (en) Magnetic resonance fingerprint imaging reconstruction method based on Transformer model
CN111951344B (en) Magnetic resonance image reconstruction method based on cascade parallel convolution network
CN113470139B (en) CT image reconstruction method based on MRI
CN109615675A (en) A kind of image rebuilding method of multi-channel magnetic resonance imaging
US20220308147A1 (en) Enhancements to quantitative magnetic resonance imaging techniques
CN111784793B (en) Dynamic magnetic resonance imaging reconstruction method
CN112368745A (en) Method and system for image reconstruction for magnetic resonance imaging
CN113971706A (en) Rapid magnetic resonance intelligent imaging method
Lv et al. Parallel imaging with a combination of sensitivity encoding and generative adversarial networks
CN113192150B (en) Magnetic resonance interventional image reconstruction method based on cyclic neural network
WO2023038910A1 (en) Dual-domain self-supervised learning for accelerated non-cartesian magnetic resonance imaging reconstruction
CN112051531B (en) Multi-excitation navigation-free magnetic resonance diffusion imaging method and device
WO2021228515A1 (en) Correction of magnetic resonance images using multiple magnetic resonance imaging system configurations
Akçakaya et al. Subject-specific convolutional neural networks for accelerated magnetic resonance imaging
CN113866694B (en) Rapid three-dimensional magnetic resonance T1 quantitative imaging method, system and medium
CN114972562B (en) Fast magnetic resonance imaging method combining coil sensitivity estimation and image reconstruction
CN113628298B (en) Feature vector based self-consistency and non-local low-rank parallel MRI reconstruction method
US20240219502A1 (en) Method and apparatus for motion-robust reconstruction in magnetic resonance imaging systems
US20230196556A1 (en) Systems and methods of magnetic resonance image processing using neural networks having reduced dimensionality
Tryfonopoulos Brain MRI reconstruction with cascade of CNNs and a learnable regularization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant