CN116745803A - Deep learning method for noise suppression in medical imaging - Google Patents

Deep learning method for noise suppression in medical imaging

Info

Publication number
CN116745803A
Authority
CN
China
Prior art keywords
data
noisy
image
images
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180082192.3A
Other languages
Chinese (zh)
Inventor
N. Dey
J. Schlemper
S. S. Mohseni Salehi
Michal Sofka
P. Kundu
Current Assignee
Hyperfine Operations, Inc.
Original Assignee
Hyperfine Operations, Inc.
Application filed by Hyperfine Operations, Inc.
Priority claimed from PCT/US2021/053918 external-priority patent/WO2022076654A1/en
Publication of CN116745803A publication Critical patent/CN116745803A/en

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Techniques for denoising magnetic resonance (MR) images are provided, comprising: obtaining a noisy MR image of a subject; denoising the noisy MR image of the subject using a denoising neural network model; and outputting the denoised MR image. The denoising neural network model is trained by: generating first training data for training a first neural network model to denoise MR images, at least in part by generating a plurality of first noisy MR images using clean MR data associated with a source domain and first MR noise data associated with a target domain; training the first neural network model using the first training data; generating training data for training the denoising neural network model by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and training the denoising neural network model using the training data for training the denoising neural network model.

Description

Deep learning method for noise suppression in medical imaging
Cross Reference to Related Applications
The present application claims priority from U.S. provisional application serial No. 63/088,672 entitled "DEEP LEARNING METHODS FOR NOISE SUPPRESSION IN MEDICAL IMAGING" filed on October 7, 2020 and U.S. provisional application serial No. 63/155,696 entitled "REALISTIC MRI NOISE REMOVAL WITHOUT GROUND TRUTH USING TWO-STEP SUPERVISED AND UNSUPERVISED LEARNING" filed on March 2, 2021, each of which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates generally to machine learning techniques for removing noise from medical images obtained from data collected using an imaging device (e.g., a medical imaging device).
Background
Due to physical limitations of the imaging devices used to obtain them, images often include noise artifacts. Examples of such artifacts include noise (e.g., thermal noise) generated by the imaging hardware, which may reduce the quality of the obtained image and thus its usefulness. In applications such as medical imaging, it is therefore often desirable to suppress such noise artifacts (i.e., to denoise the image).
Disclosure of Invention
Some embodiments provide a method for denoising a magnetic resonance (MR) image. The method comprises, using at least one computer hardware processor: obtaining a noisy MR image of a subject, the noisy MR image being associated with a target domain; denoising the noisy MR image of the subject using a denoising neural network model to obtain a denoised MR image; and outputting the denoised MR image. The denoising neural network model is trained by: generating first training data for training a first neural network model to denoise MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with the target domain; training the first neural network model using the first training data; generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and training the denoising neural network model using the training data for training the denoising neural network model.
Some embodiments provide a magnetic resonance imaging (MRI) system. The MRI system includes: a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and at least one processor. The at least one processor is configured to: obtain a noisy MR image of a subject, the noisy MR image being associated with a target domain; denoise the noisy MR image of the subject using a denoising neural network model to obtain a denoised MR image; and output the denoised MR image. The denoising neural network model is trained by: generating first training data for training a first neural network model to denoise MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with the target domain; training the first neural network model using the first training data; generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and training the denoising neural network model using the training data for training the denoising neural network model.
Some embodiments provide at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for denoising a magnetic resonance (MR) image. The method comprises: obtaining a noisy MR image of a subject, the noisy MR image being associated with a target domain; denoising the noisy MR image of the subject using a denoising neural network model to obtain a denoised MR image; and outputting the denoised MR image. The denoising neural network model is trained by: generating first training data for training a first neural network model to denoise MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with the target domain; training the first neural network model using the first training data; generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and training the denoising neural network model using the training data for training the denoising neural network model.
Some embodiments provide a method for training a denoising neural network model to denoise an MR image of a subject. The method comprises, using at least one computer hardware processor: generating first training data for training a first neural network model to denoise MR images, at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with a target domain; training the first neural network model using the first training data; generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and training the denoising neural network model using the training data for training the denoising neural network model.
Some embodiments provide an MRI system comprising: a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and at least one processor. The at least one processor is configured to: generate first training data for training a first neural network model to denoise MR images, at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with a target domain; train the first neural network model using the first training data; generate training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and train the denoising neural network model using the training data for training the denoising neural network model.
Some embodiments provide at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for training a denoising neural network model to denoise an MR image of a subject. The method comprises: generating first training data for training a first neural network model to denoise MR images, at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with a target domain; training the first neural network model using the first training data; generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and training the denoising neural network model using the training data for training the denoising neural network model.
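The two-step training procedure recited above can be sketched in Python. Everything below is an illustrative stand-in rather than the patent's implementation: the Gaussian noise model, the toy `train_denoiser` (which merely subtracts a mean residual), and the array shapes are all assumptions made for the sake of a runnable sketch.

```python
import numpy as np

def add_noise(clean, noise_std, rng):
    """Corrupt clean data with synthetic target-domain noise (Gaussian here)."""
    return clean + rng.normal(0.0, noise_std, size=clean.shape)

def train_denoiser(noisy, targets):
    """Toy stand-in for supervised training: the returned 'model' just
    subtracts the mean residual observed during training."""
    residual = np.mean(noisy - targets)
    return lambda x: x - residual

rng = np.random.default_rng(0)

# Step 1: train a first model on clean source-domain data corrupted with
# target-domain noise (the "first training data").
clean_source = rng.uniform(0.0, 1.0, size=(16, 32, 32))
first_noisy = add_noise(clean_source, noise_std=0.1, rng=rng)
model_1 = train_denoiser(first_noisy, clean_source)

# Step 2: apply model 1 to noisy target-domain images to obtain pseudo-clean
# targets, then train the final denoising model on those pairs.
noisy_target = rng.uniform(0.0, 1.0, size=(16, 32, 32))
pseudo_clean = model_1(noisy_target)
model_final = train_denoiser(noisy_target, pseudo_clean)
```

The key design point is that no clean target-domain ground truth is needed: the first model, trained on simulated corruptions of source-domain data, manufactures the supervision signal for the second.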
In some embodiments, the first training data includes the plurality of first noisy MR images and a corresponding plurality of clean MR images. Generating the first training data includes: generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain; generating the plurality of first noisy MR images by applying a reconstruction process to the first noisy MR data; and generating the plurality of clean MR images by applying the reconstruction process to clean MR data associated with the source domain.
In some embodiments, applying the reconstruction process to the first noisy MR data includes: generating MR images from the first noisy MR data using a machine learning model. In some embodiments, applying the reconstruction process to the first noisy MR data includes: generating MR images from the first noisy MR data using compressed sensing. In some embodiments, applying the reconstruction process to the first noisy MR data includes: generating MR images from the first noisy MR data using at least one linear transformation. The at least one linear transformation includes: a coil decorrelation transformation; a gridding transformation; and a coil combination transformation.
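For fully sampled Cartesian data, the three linear transformations can be sketched in NumPy. The realization below (Cholesky-based whitening for coil decorrelation, a plain inverse 2-D FFT standing in for gridding, and root-sum-of-squares coil combination) is one conventional choice, assumed for illustration; the patent does not mandate these specific operators.

```python
import numpy as np

def reconstruct(kspace, noise_cov):
    """Linear reconstruction of a multi-coil Cartesian acquisition.

    kspace:    complex array of shape (num_coils, ny, nx)
    noise_cov: (num_coils, num_coils) coil noise covariance matrix
    """
    # 1) Coil decorrelation: whiten the coil channels using the inverse
    #    Cholesky factor of the noise covariance.
    L = np.linalg.cholesky(noise_cov)
    white = np.linalg.solve(L, kspace.reshape(kspace.shape[0], -1))
    white = white.reshape(kspace.shape)

    # 2) Gridding: for Cartesian sampling this reduces to an inverse 2-D FFT
    #    applied per coil.
    coil_images = np.fft.ifft2(white, axes=(-2, -1))

    # 3) Coil combination: root-sum-of-squares magnitude image.
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

coils, ny, nx = 4, 8, 8
rng = np.random.default_rng(1)
kspace = rng.normal(size=(coils, ny, nx)) + 1j * rng.normal(size=(coils, ny, nx))
image = reconstruct(kspace, np.eye(coils))
```

With an identity noise covariance the whitening step is a no-op, which makes the sketch easy to sanity-check.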
In some embodiments, the method further comprises: generating second training data for training a second neural network model to denoise MR images, at least in part by generating a plurality of dual noisy MR images using both (1) second noisy MR data associated with the target domain and (2) second MR noise data associated with the target domain; and training the second neural network model using the second training data.
In some embodiments, the second training data includes the plurality of dual noisy MR images and the plurality of second noisy MR images. Generating the second training data includes: generating dual noisy MR data using second noisy MR data associated with the target domain and second MR noise data associated with the target domain; generating the plurality of dual noisy MR images by applying a reconstruction process to the dual noisy MR data; and generating the plurality of second noisy MR images by applying the reconstruction process to second noisy MR data associated with the target domain.
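A minimal sketch of constructing the dual noisy pairs described above, assuming additive noise and illustrative shapes (the pairing of a dual-noisy input with an already-noisy target resembles Noise2Noise-style supervision, which the patent text itself does not name):

```python
import numpy as np

rng = np.random.default_rng(2)

# Second noisy MR data associated with the target domain (already contains
# real scanner noise); uniform images here are a placeholder.
noisy_target = rng.uniform(0.0, 1.0, size=(8, 32, 32))

# Second MR noise data from the target domain, e.g. empirically measured;
# modeled here as zero-mean Gaussian for illustration.
extra_noise = rng.normal(0.0, 0.1, size=noisy_target.shape)

# Dual noisy images: the already-noisy images corrupted a second time.
dual_noisy = noisy_target + extra_noise

# Training pairs for the second model: input is dual-noisy, target is noisy.
pairs = list(zip(dual_noisy, noisy_target))
```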
In some embodiments, generating training data for training the denoising neural network model further comprises: applying the second neural network model to the plurality of second noisy MR images.
In some embodiments, generating training data for training the denoising neural network model further comprises: generating a plurality of augmented denoised MR images by applying one or more transforms to images of the plurality of denoised MR images to generate a plurality of transformed MR images, and combining the plurality of transformed MR images with the plurality of denoised MR images to generate the plurality of augmented denoised MR images; and generating clean MR data associated with the target domain by applying a non-uniform transformation to images of the plurality of augmented denoised MR images.
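The augmentation step might be sketched as follows. The specific transforms (a horizontal flip and a 90-degree rotation) are assumptions for illustration only, since the text does not fix a particular set:

```python
import numpy as np

def augment(images):
    """Apply simple geometric transforms to each denoised image and combine
    the transformed images with the originals."""
    transformed = []
    for img in images:
        transformed.append(np.flip(img, axis=-1))         # horizontal flip
        transformed.append(np.rot90(img, axes=(-2, -1)))  # 90-degree rotation
    # Combine transformed images with the original denoised images.
    return np.concatenate([images, np.stack(transformed)], axis=0)

images = np.random.default_rng(3).uniform(size=(4, 16, 16))
augmented = augment(images)  # 4 originals + 8 transformed images
```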
In some embodiments, the training data for training the denoising neural network model comprises a plurality of noisy MR training images and a plurality of clean MR training images. Generating the training data for training the denoising neural network model further comprises: generating clean MR training data by combining clean MR data associated with the source domain and clean MR data associated with the target domain; generating noisy MR training data using the clean MR training data and third MR noise data associated with the target domain; generating the plurality of noisy MR training images by applying a reconstruction process to the noisy MR training data; and generating the plurality of clean MR training images by applying the reconstruction process to the clean MR training data.
In some embodiments, the denoising neural network model comprises a plurality of convolutional layers. In some embodiments, the plurality of convolutional layers includes a two-dimensional convolutional layer. In some embodiments, the plurality of convolutional layers includes a three-dimensional convolutional layer.
In some embodiments, the first MR noise data is generated prior to obtaining the first noisy MR image. In some embodiments, the first MR noise data is generated at least in part by empirical measurements of noise in the target domain. In some embodiments, the first MR noise data is generated at least in part by simulating the first MR noise data using at least one noise model associated with the target domain; the simulation may be performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
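Simulating noise from the distributions mentioned above might look like the following sketch. The parameters (scale, degrees of freedom) and the choice of independent real and imaginary noise channels are illustrative assumptions, not values given in the text:

```python
import numpy as np

def simulate_mr_noise(shape, model, rng, scale=1.0):
    """Draw complex-valued noise from one of the distributions mentioned in
    the text; parameter choices here are purely illustrative."""
    if model == "gaussian":
        draw = lambda: rng.normal(0.0, scale, size=shape)
    elif model == "poisson":
        draw = lambda: rng.poisson(scale, size=shape) - scale  # roughly zero-mean
    elif model == "student-t":
        draw = lambda: scale * rng.standard_t(df=3, size=shape)
    else:
        raise ValueError(f"unknown noise model: {model}")
    # Independent noise in the real and imaginary channels of the MR signal.
    return draw() + 1j * draw()

rng = np.random.default_rng(4)
noise = simulate_mr_noise((64, 64), "student-t", rng, scale=0.05)
```

A heavy-tailed choice such as the Student's t-distribution can model occasional large noise excursions that a Gaussian would make vanishingly rare.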
In some embodiments, obtaining a noisy MR image of the subject includes accessing the noisy MR image. In some embodiments, obtaining a noisy MR image of the subject includes: collecting first noisy MR data by imaging a subject using an MRI system; and generating the noisy MR image of the subject using the collected first noisy MR data. In some embodiments, the first noisy MR data was previously collected using the MRI system, and obtaining the noisy MR image of the subject comprises: accessing the first noisy MR data; and generating the noisy MR image using the accessed first noisy MR data.
In some embodiments, the first noisy MR data is collected by the MRI system using a diffusion-weighted imaging (DWI) pulse sequence. In some embodiments, the first MR noise data is generated by empirical measurements of noise within the MRI system during operation of the MRI system using the DWI pulse sequence.
In some embodiments, the clean MR data associated with the source domain comprises MR data collected using a magnetic resonance imaging (MRI) system having a main magnetic field strength of 0.5 T or greater; the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain; and the second noisy MR data associated with the target domain comprises MR data collected using an MRI system having a main magnetic field strength of greater than or equal to 20 mT and less than or equal to 0.2 T.
In some embodiments, the clean MR data associated with the source domain comprises MR data collected by imaging a first portion of the anatomy of the subject, the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and the second noisy MR data associated with the target domain comprises MR data collected by imaging a second portion of the anatomy different from the first portion of the anatomy of the subject.
In some embodiments, the clean MR data associated with the source domain comprises MR data collected using a first pulse sequence, the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and the second noisy MR data associated with the target domain comprises MR data collected using a second pulse sequence different from the first pulse sequence.
In some embodiments, the method further comprises training the denoising neural network model by: generating first training data for training a first neural network model to denoise MR images, at least in part by generating the plurality of first noisy MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with a target domain; training the first neural network model using the first training data; generating training data for training the denoising neural network model at least in part by applying the first neural network model to the plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and training the denoising neural network model using the training data for training the denoising neural network model.
Some embodiments provide a method for denoising a medical image of a subject, the medical image being generated using data collected by a medical imaging device. The method comprises, using at least one computer hardware processor: obtaining a medical image of the subject; combining the medical image of the subject with a noise image to obtain a noise-corrupted medical image of the subject; generating a denoised medical image corresponding to the noise-corrupted medical image using the noise-corrupted medical image and a trained neural network; and outputting the denoised medical image.
Some embodiments provide at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for denoising an image of a subject, the image generated using data collected by an MRI system. The method comprises: obtaining an image of the subject; combining the image of the subject with a noise image to obtain a noise-corrupted image of the subject; generating a denoised image corresponding to the noise-corrupted image using the noise-corrupted image and a trained neural network; and outputting the denoised image.
Some embodiments provide a magnetic resonance imaging (MRI) system. The MRI system includes: a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and at least one processor configured to perform a method for denoising an image of a subject, the image being generated using data collected by the MRI system. The method comprises: obtaining an image of the subject; combining the image of the subject with a noise image to obtain a noise-corrupted image of the subject; generating a denoised image corresponding to the noise-corrupted image using the noise-corrupted image and a trained neural network; and outputting the denoised image.
In some embodiments, the trained neural network is trained using training data comprising image pairs, a first of the image pairs comprising a first image generated using data collected by the medical imaging device and a second image generated by combining the first image with a noise image.
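Constructing such an image pair can be sketched as follows; the additive combination and the array shapes are illustrative assumptions:

```python
import numpy as np

def make_training_pair(clean_image, noise_image):
    """Build one (input, target) training pair: the input is the image
    corrupted by a noise image, the target is the original image."""
    corrupted = clean_image + noise_image
    return corrupted, clean_image

rng = np.random.default_rng(5)
image = rng.uniform(0.0, 1.0, size=(32, 32))  # first image, from device data
noise = rng.normal(0.0, 0.1, size=(32, 32))   # measured noise image
noisy_input, target = make_training_pair(image, noise)
```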
In some embodiments, obtaining the noise image includes selecting the noise image from a plurality of noise images. In some embodiments, selecting the noise image from a plurality of noise images comprises: the noise image is randomly selected from a plurality of noise images. In some embodiments, the plurality of noise images are generated prior to obtaining the medical image of the subject.
In some embodiments, the method further comprises: generating the noise image at least in part by making one or more empirical measurements of noise using the medical imaging device and/or at least one medical imaging device of the same type. In some embodiments, generating the noise image includes: scaling at least a portion of the one or more empirical measurements of noise relative to a maximum intensity value of the medical image of the subject. In some embodiments, this scaling comprises: scaling the selected noise measurement to between 2% and 30% of the maximum intensity value of the medical image of the subject, for example to 5%, 10%, or 20% of that maximum intensity value.
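The scaling described above can be sketched as follows; `scale_noise` is a hypothetical helper (not from the patent) that scales a measured noise image so its peak magnitude is a chosen fraction of the subject image's maximum intensity:

```python
import numpy as np

def scale_noise(noise_image, image, fraction):
    """Scale a measured noise image so its peak magnitude equals `fraction`
    (e.g. 0.05, 0.10, or 0.20) of the image's maximum intensity."""
    target_peak = fraction * np.max(np.abs(image))
    current_peak = np.max(np.abs(noise_image))
    return noise_image * (target_peak / current_peak)

rng = np.random.default_rng(6)
image = rng.uniform(0.0, 2.0, size=(32, 32))
noise = rng.normal(size=(32, 32))
scaled = scale_noise(noise, image, fraction=0.10)  # 10% of max intensity
```

Sweeping the fraction over the 2%-30% range mentioned in the text yields training examples at multiple noise levels from a single measured noise image.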
In some embodiments, the method further comprises: generating the noise image by simulating it using at least one noise model associated with the medical imaging device. In some embodiments, simulating the noise image is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
In some embodiments, the medical imaging device is one of: an ultrasound imaging device, an elastography device, an X-ray imaging device, a functional near-infrared spectroscopy imaging device, an endoscopic imaging device, a positron emission tomography (PET) imaging device, a computed tomography (CT) imaging device, or a single-photon emission computed tomography (SPECT) imaging device.
In some embodiments, the medical imaging device is an MRI system. In some embodiments, the method further comprises: generating the noise image using the image reconstruction technique used by the MRI system to generate a magnetic resonance (MR) image from MR data acquired by the MRI system in the spatial frequency domain.
In some embodiments, obtaining a medical image of the subject includes: collecting the data by imaging a subject using the medical imaging device; and generating the medical image using the collected data. In some embodiments, the data is previously collected using the medical imaging device, and wherein obtaining the medical image of the subject comprises: accessing the data; and generating the medical image using the accessed data. In some embodiments, obtaining a medical image of the subject includes accessing the medical image.
In some embodiments, the data is collected by the MRI system using a diffusion-weighted imaging (DWI) pulse sequence. In some embodiments, the noise image is generated by empirical measurement of noise within the MRI system using the DWI pulse sequence.
In some embodiments, the trained neural network includes a plurality of convolutional layers. In some embodiments, the plurality of convolutional layers is arranged in a U-Net structure.
Some embodiments provide a method for denoising a medical image of a subject, the medical image being generated using data collected by a medical imaging device. The method comprises, using at least one computer hardware processor: obtaining a medical image of the subject; generating a denoised medical image corresponding to the medical image using the medical image and a trained neural network; and outputting the denoised medical image. The trained neural network is trained using training data comprising image pairs, a first of the image pairs comprising a first image generated using data collected by the medical imaging device and a second image generated by combining the first image with a noise image.
In some embodiments, obtaining the noise image includes selecting the noise image from a plurality of noise images. In some embodiments, the method further comprises: the noise image is generated at least in part by making one or more empirical measurements of noise using the medical imaging device and/or at least one medical imaging device of the same type as the medical imaging device. In some embodiments, the method further comprises: the noise image is generated by simulating the noise image using at least one noise model associated with the medical imaging device.
Some embodiments provide a method for denoising a medical image of a subject, the medical image being generated using data collected by a medical imaging device. The method comprises, using at least one computer hardware processor: obtaining a medical image of the subject; generating a denoised medical image corresponding to the medical image using the medical image and a generator neural network, wherein the generator neural network is trained using a discriminator neural network trained to discriminate between a first noise image obtained using an output of the generator neural network and a second noise image; and outputting the denoised medical image.
Some embodiments provide at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for denoising an image of a subject, the image generated using data collected by a medical imaging system. The method comprises: obtaining an image of the subject; generating a denoised image corresponding to the image using the image and a generator neural network, wherein the generator neural network is trained using a discriminator neural network trained to discriminate between a first noise image obtained using an output of the generator neural network and a second noise image; and outputting the denoised image.
Some embodiments provide a magnetic resonance imaging (MRI) system comprising: a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and at least one processor configured to perform a method for denoising an image of a subject, the image being generated using data collected by the MRI system. The method comprises: obtaining an image of the subject; generating a denoised image corresponding to the image using the image and a generator neural network, wherein the generator neural network is trained using a discriminator neural network trained to discriminate between a first noise image obtained using an output of the generator neural network and a second noise image; and outputting the denoised image.
In some embodiments, the first noise image is obtained from the output of the generator neural network by subtracting the denoised medical image from the corresponding medical image of the subject.
In some embodiments, generating the denoised medical image comprises: subtracting a residual image output by the generator neural network from the medical image of the subject.
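The residual formulation can be sketched as follows; `toy_generator` is an illustrative stand-in for the trained generator network (here it "estimates" the noise as each pixel's deviation from the image mean, which is not a realistic model):

```python
import numpy as np

def denoise_with_residual(noisy_image, generator):
    """The generator predicts a residual (a noise estimate); subtracting it
    from the noisy input yields the denoised image."""
    residual = generator(noisy_image)
    return noisy_image - residual

# Toy generator: treats deviation from the global mean as "noise".
toy_generator = lambda x: x - x.mean()

noisy = np.random.default_rng(7).normal(1.0, 0.2, size=(16, 16))
denoised = denoise_with_residual(noisy, toy_generator)
```

Predicting a residual rather than the clean image directly is a common design choice for denoising networks, since the residual is often simpler to learn than the full image content.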
In some embodiments, the second noise image is generated before the medical image of the subject is obtained. In some embodiments, the second noise image is generated without using the generator neural network. In some embodiments, the method further comprises: the second noise image is generated at least in part by making one or more empirical measurements of noise using the medical imaging device and/or at least one medical imaging device of the same type as the medical imaging device.
In some embodiments, the method further comprises: generating the second noise image by simulating it using at least one noise model associated with the medical imaging device. In some embodiments, simulating the second noise image is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
In some embodiments, the medical imaging device is one of: an ultrasound imaging device, an elastography device, an X-ray imaging device, a functional near-infrared spectroscopy imaging device, an endoscopic imaging device, a positron emission tomography (PET) imaging device, a computed tomography (CT) imaging device, or a single-photon emission computed tomography (SPECT) imaging device.
In some embodiments, the medical imaging device is an MRI system.
In some embodiments, generating the second noise image further comprises: using the image reconstruction technique used by the MRI system to generate a magnetic resonance (MR) image from MR data acquired in the spatial frequency domain by the MRI system.
In some embodiments, the data is collected by the MRI system using a diffusion-weighted imaging (DWI) pulse sequence.
In some embodiments, the second noise image is generated by empirical measurement of noise within the MRI system using the DWI pulse sequence.
In some embodiments, obtaining a medical image of the subject includes: collecting the data by imaging a subject using the medical imaging device; and generating the medical image using the collected data.
In some embodiments, the data is previously collected using the medical imaging device, and obtaining the medical image of the subject comprises: accessing the data; and generating the medical image using the accessed data.
In some embodiments, obtaining a medical image of the subject includes accessing the medical image.
In some embodiments, the generator neural network includes a plurality of convolutional layers. In some embodiments, the plurality of convolutional layers is arranged in a U-Net structure.
The foregoing is a non-limiting summary of the invention, which is defined by the appended claims.
Drawings
Various aspects and embodiments of the disclosed technology will be described with reference to the following figures. It should be understood that the figures are not necessarily drawn to scale.
Fig. 1A is a diagram illustrating a process performed by a trained neural network model to denoise medical images of a subject in accordance with some embodiments of the techniques described herein.
Fig. 1B is a diagram illustrating a process of training a neural network to denoise medical images of a subject in accordance with some embodiments of the techniques described herein.
Fig. 2 is a diagram illustrating a process of generating a noise image using an image reconstruction module in accordance with some embodiments of the technology described herein.
Fig. 3 is a diagram of an exemplary pipeline of an example image reconstruction module, in accordance with some embodiments of the technology described herein.
Fig. 4A and 4B illustrate examples of Magnetic Resonance (MR) images of a brain of a subject obtained using a Diffusion Weighted Imaging (DWI) pulse sequence and including different levels of generated noise in accordance with some embodiments of the techniques described herein.
Fig. 5 is a flowchart of an exemplary process 500 for generating a denoised medical image of a subject using a trained neural network, according to some embodiments of the technology described herein.
Fig. 6 illustrates example MR images of a brain of a subject before and after denoising by a trained neural network, according to some embodiments of the techniques described herein.
Fig. 7A is a diagram illustrating a process performed by a generator neural network model to denoise medical images of a subject in accordance with some embodiments of the techniques described herein.
Fig. 7B is a diagram illustrating a process of training a generator neural network, using a discriminator neural network, to denoise medical images of a subject in accordance with some embodiments of the techniques described herein.
Fig. 8 shows an example noisy image and a corresponding image denoised by a conventional neural network.
Fig. 9 illustrates a comparison of images denoised by a conventional neural network and by a generator neural network trained using a discriminator neural network, according to some embodiments described herein.
Fig. 10 shows examples of noisy in-domain and out-of-domain images before and after denoising by a conventional neural network.
Fig. 11 illustrates an example of a noisy image after denoising by a conventional neural network and after denoising by a generator neural network trained using a discriminator neural network, according to some embodiments described herein.
Fig. 12 is a flowchart of an exemplary process 1200 for generating a denoised medical image of a subject using a generator neural network, according to some embodiments of the technology described herein.
Fig. 13 illustrates an example of MR images of a brain of a subject before and after denoising by a generator neural network, according to some embodiments of the technology described herein.
Fig. 14 is a schematic diagram of a low-field MRI system according to some embodiments of the technology described herein.
Fig. 15A and 15B illustrate diagrams of portable MRI systems according to some embodiments of the techniques described herein.
Fig. 16A illustrates a portable MRI system that performs scanning of a head in accordance with some embodiments of the technology described herein.
Fig. 16B illustrates a portable MRI system that performs scanning of the knee in accordance with some embodiments of the technology described herein.
FIG. 17 is a diagram of an exemplary computer system in which embodiments described herein may be implemented.
Fig. 18A is a diagram of the following in accordance with some embodiments of the technology described herein: (1) An exemplary MR image reconstruction and denoising pipeline 1800 comprising an image reconstruction module and a denoising module; and (2) a training pipeline 1825 for training the machine learning model for use as part of the denoising module.
Fig. 18B is a diagram of an MR image reconstruction and denoising pipeline 1850 that includes an image reconstruction module and a denoising module, according to some embodiments of techniques described herein.
FIG. 19 is a diagram of an exemplary architecture of an example denoising neural network model for generating a denoising MR image from an input noisy MR image, according to some embodiments of the techniques described herein.
FIG. 20A is a diagram of an exemplary process 2010 to generate first training data to train a first neural network for denoising MR images, according to some embodiments of the techniques described herein.
FIG. 20B is a diagram of an exemplary process 2020 to generate second training data to train a second neural network for denoising MR images, according to some embodiments of the techniques described herein.
FIG. 20C is a diagram of an exemplary process to generate clean MR data associated with a target domain, according to some embodiments of the technology described herein.
FIG. 20D is a diagram of an exemplary process to generate training data for training a denoising neural network model, according to some embodiments of the techniques described herein.
Fig. 21 is a flowchart of an exemplary process 2100 for generating a denoised MR image of a subject using a denoised neural network model, in accordance with some embodiments of the techniques described herein.
FIG. 22 is a flowchart of an exemplary process 2200 for training a denoising neural network model, according to some embodiments of the techniques described herein.
FIG. 23 illustrates an example of a denoised MR image and corresponding noise map of an MR image denoised using different denoising techniques, according to some embodiments of the techniques described herein.
Figs. 24A-24D illustrate examples of denoised MR images, acquired using a Diffusion Weighted Imaging (DWI) pulse sequence and denoised using different denoising techniques, and corresponding noise maps, according to some embodiments of the techniques described herein.
FIG. 25A is a diagram of an exemplary transformation to generate first training data to train a first neural network for reconstructing and denoising MR images, according to some embodiments of the techniques described herein.
FIG. 25B is a diagram of an exemplary transformation to generate second training data to train a second neural network for reconstructing and denoising MR images, according to some embodiments of the techniques described herein.
FIG. 25C is a diagram of an exemplary transformation to generate a clean MR training image associated with a target domain in accordance with some embodiments of the techniques described herein.
FIG. 25D is a diagram of an exemplary transformation to generate training data for training a reconstruction and denoising neural network model, according to some embodiments of the techniques described herein.
FIG. 26A is a diagram of an exemplary architecture of an example neural network model for generating MR images from input MR spatial frequency data, according to some embodiments of the technology described herein.
Fig. 26B is a diagram of one type of architecture of a block of the neural network model of fig. 26A, according to some embodiments of the technology described herein.
FIG. 26C is a diagram of an exemplary architecture of a data consistency block (which may be part of the block shown in FIG. 26B) in accordance with some embodiments of the technology described herein.
Fig. 26D is a diagram of an exemplary architecture of a convolutional neural network block (which may be part of the block shown in fig. 26B) in accordance with some embodiments of the techniques described herein.
Fig. 26E is a diagram of another type of architecture of a block of the neural network model of fig. 26A, according to some embodiments of the technology described herein.
Detailed Description
The hardware of a medical imaging device may introduce unwanted noise into the data acquired when capturing a medical image. For example, heat dissipation within the hardware electronics may introduce thermal noise into the data acquired by the medical imaging device. After the medical imaging device acquires the data, software generates a medical image using the acquired data. The software may include several processes (e.g., warp recovery, bias correction, etc.) for rendering the image that, because they operate based on assumptions of idealized (e.g., noiseless) data, transform the introduced noise into correlated noise in the resulting output medical image.
Thus, medical images generated from acquired data may include unwanted correlated noise artifacts. Such introduced noise (e.g., by blurring image features, by reducing image sharpness, etc.) may reduce the usefulness of images generated by the medical imaging device. In particular, for medical imaging devices, such noise may reduce the clinical usefulness of the generated image. For example, in the case of a Magnetic Resonance Imaging (MRI) system, such introduced noise may reduce the signal-to-noise ratio (SNR) of the acquired Magnetic Resonance (MR) data, which results in MR images that are difficult to interpret by a medical practitioner (e.g., due to reduced contrast or sharpness in the MR images).
Machine learning has been adopted and developed as a tool for removing such correlated noise from noisy images, and has shown improved performance over conventional denoising methods. However, the inventors have appreciated that such machine learning techniques have several limitations for medical imaging applications. These limitations include, for example, the need to generate a large set of training data for supervised machine learning techniques, which is often impractical in medical imaging contexts. Additionally, some conventional machine learning techniques introduce image artifacts when the raw image data includes pixel-dependent noise.
To address the shortcomings of conventional supervised machine learning denoising techniques, the inventors developed a number of machine learning techniques to remove or suppress noise from medical images. The machine learning techniques developed by the inventors provide improvements to medical imaging techniques because they more effectively remove or suppress noise from medical images. As a result, these techniques produce higher quality, more clinically relevant medical images (e.g., with better tissue contrast, clearer features, and/or limited noise artifacts).
One machine learning technique developed by the inventors for denoising medical images involves introducing additive noise into medical images obtained from medical imaging devices, and then providing noisy images to a trained neural network for denoising. In some embodiments, the method includes combining a noise image (e.g., a simulated or measured image of noise generated by a medical imaging device) with a medical image of a subject (e.g., a patient) to obtain a noise corrupted medical image of the subject. The noise corrupted medical image of the subject thus obtained may be considered a "dual noisy" image, as it includes noise from the initial acquisition of image data by the medical imaging device and additive noise from the noisy image. Thereafter, the trained neural network may receive the noise corrupted medical image of the subject as an input and generate a denoised medical image of the subject corresponding to the noise corrupted medical image of the subject for output.
In some embodiments, the trained neural network may be trained using a supervised training method and a training dataset comprising image pairs. The image pair may include a first image generated using data collected by the medical imaging device and a second image generated by combining the first image with a noise image (e.g., a noise corrupted image). The noise image may be selected from a set of noise images generated (e.g., randomly or in any other suitable manner) prior to obtaining a medical image of the subject using the medical imaging device. In this way, the trained neural network may be trained to denoise medical images of the subject. In some embodiments, the trained neural network may be a convolutional network and may include a plurality of convolutional layers. In some embodiments, for example, multiple convolution layers may have a U-net structure.
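As an illustrative sketch of how such an image pair might be assembled, the noise image can be rescaled relative to the first image's maximum intensity and added to it. The function name, the normalization, and the default scale below are assumptions made here, not details from the patent:

```python
import numpy as np

def make_training_pair(acquired_image, noise_image, scale_fraction=0.10):
    """Form a (noise-corrupted input, target) training pair.

    The noise image is normalized and rescaled so its peak magnitude is
    `scale_fraction` of the acquired image's maximum intensity, then added
    to the acquired image to produce the second, noise-corrupted image.
    """
    acquired_image = np.asarray(acquired_image, dtype=float)
    noise_image = np.asarray(noise_image, dtype=float)
    peak = np.abs(noise_image).max()
    scaled_noise = noise_image / peak * scale_fraction * np.abs(acquired_image).max()
    return acquired_image + scaled_noise, acquired_image
```

The returned pair supplies the noise-corrupted image as the network input and the first image as the training target.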
Another machine learning technique developed by the inventors for denoising medical images involves the use of neural networks trained using adversarial methods. For example, a generative adversarial network (GAN) framework may be used to train a neural network for denoising medical images. In some embodiments, the generator neural network may be trained, using a discriminator neural network, to denoise a noisy medical image of the subject to obtain a denoised medical image corresponding to the medical image of the subject. In some embodiments, the discriminator neural network may be trained to distinguish noise residuals generated by the generator neural network from synthesized or empirically measured noise.
In some embodiments, the generator neural network may be a convolutional neural network comprising a plurality of convolutional layers. For example, in some embodiments, the plurality of convolutional layers may have a U-net structure. In some embodiments, the generator neural network may be trained using a discriminator neural network. The discriminator neural network may be trained to distinguish an image of noise obtained using the output of the generator neural network from a noise image generated before the medical image of the subject is obtained. In some embodiments, the image of noise obtained using the output of the generator neural network may be generated by subtracting the denoised medical image from the medical image of the subject (e.g., to create a "noise map"). Alternatively, the image of the noise may be output directly by the generator neural network.
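The residual "noise map" described above (noisy input minus denoised output) reduces to a one-line helper. This sketch assumes both images are co-registered arrays of the same shape:

```python
import numpy as np

def noise_map(noisy_image, denoised_image):
    """Residual presented to the discriminator: the noisy input minus
    the generator's denoised output."""
    return np.asarray(noisy_image, dtype=float) - np.asarray(denoised_image, dtype=float)
```

During adversarial training, such residuals would be compared by the discriminator against pre-generated (synthesized or measured) noise images.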
In some embodiments, training a neural network (e.g., a trained neural network or a generator neural network) may include modifying parameters associated with layers of the neural network. For example, in some embodiments, training the neural network may include modifying values of approximately 1,000,000 parameters associated with layers of the neural network. In some embodiments, training the neural network may include modifying at least 10,000 parameters, at least 50,000 parameters, at least 100,000 parameters, at least 250,000 parameters, at least 500,000 parameters, at least 1,000,000 parameters, at least 2,000,000 parameters, between 100,000 and 1,000,000 parameters, between 50,000 and 2,000,000 parameters, between 500,000 and 5,000,000 parameters, or any suitable range of values within these ranges.
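For intuition about where parameter counts of this magnitude come from, the trainable parameters of a single 2-D convolutional layer can be counted directly. The helper below is illustrative; its name and the example layer sizes are not from the patent:

```python
def conv2d_param_count(in_channels, out_channels, kernel_size=3, bias=True):
    """Trainable parameters in one 2-D convolution layer: one
    kernel_size x kernel_size filter per (input, output) channel pair,
    plus an optional per-output-channel bias."""
    weights = out_channels * in_channels * kernel_size * kernel_size
    return weights + (out_channels if bias else 0)

# Example: a bias-free 3 x 3 convolution mapping 48 feature maps to
# 48 feature maps has 48 * 48 * 9 = 20736 weights.
```

Summing such counts over a few dozen 3 x 3 layers with 48-96 features each readily reaches the hundreds of thousands to millions of parameters described above.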
In some embodiments, the noise image may be generated by using the medical imaging device and/or one or more empirical measurements of noise of the same type of medical imaging device used to image the subject. For example, in the case of an MRI system, a noise image may be generated by acquiring MR data using an MRI system of the type used to image a subject and in the same imaging procedure (e.g., using the same pulse sequence as that used to image the subject, such as a Diffusion Weighted Imaging (DWI) pulse sequence in some embodiments, etc.).
Alternatively or additionally, in some embodiments, the noise image may be generated by simulating the noise image. Simulating the noise image may include using at least one noise model associated with the medical imaging device. For example, in the case of an MRI system, simulating the noise image may include using image reconstruction techniques used by the MRI system to generate a Magnetic Resonance (MR) image in the image domain from MR data acquired by the MRI system in the spatial frequency domain. In some embodiments, simulating the noise image may be performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t distribution.
In some embodiments, the plurality of noise images may be generated prior to imaging the subject, and generating the noise image may include selecting one of the plurality of noise images. Additionally, in some embodiments, the selected noise image may be scaled relative to a maximum intensity value of the obtained medical image of the subject before being combined with the medical image of the subject or provided to the discriminator neural network. In some embodiments, the selected noise image may be scaled to between 2% and 30% of the maximum intensity value of the medical image of the subject (e.g., to 5%, 10%, 20%, or any value in between).
In some embodiments, obtaining a medical image of the subject includes collecting data by imaging the subject using a medical imaging device and generating a medical image using the collected data. In some embodiments, obtaining the medical image of the subject includes accessing data previously collected by using the medical imaging device and generating the medical image using the accessed data. In some embodiments, obtaining the medical image of the subject includes accessing the medical image.
In some embodiments, the medical image may be a two-dimensional image or a three-dimensional image volume. Alternatively or additionally, the medical image may be a video sequence of two-dimensional or three-dimensional images. In some embodiments, the machine learning techniques described herein may be configured to denoise a single image of a video sequence or denoise a video sequence as a whole.
It should be appreciated that the methods developed by the inventors and described herein may be implemented across a variety of medical imaging devices. For example, in some embodiments, the medical imaging device may be one of an ultrasound imaging device, an elastography device, an X-ray imaging device, a functional near infrared spectroscopy imaging device, an endoscopic imaging device, a Positron Emission Tomography (PET) imaging device, a Computed Tomography (CT) imaging device, and a Single Photon Emission Computed Tomography (SPECT) imaging device.
In some embodiments, the medical imaging device may be a Magnetic Resonance Imaging (MRI) system. For example, the MRI system may be a low-field MRI system. As used herein, "high field" refers to MRI systems currently used in clinical settings and, more particularly, to MRI systems operating with a main magnetic field (i.e., a B0 field) of 0.5 T or higher. As used herein, "mid-field" refers to an MRI system that operates with a B0 field having a strength between 0.2 T and 0.5 T. In contrast, as used herein, "low field" generally refers to an MRI system operating with a B0 field of less than or equal to 0.2 T. For example, a low-field MRI system as described herein may operate with a B0 field of less than or equal to 0.2 T and greater than or equal to 20 mT, with a B0 field of less than or equal to 0.2 T and greater than or equal to 50 mT, and/or with a B0 field of less than or equal to 0.1 T and greater than or equal to 50 mT. Within the low-field regime, a low-field MRI system operating with a B0 field of less than 10 mT is referred to herein as "ultra-low field".
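The field-strength regimes defined above can be summarized as a simple lookup. This sketch is illustrative; note that the text states the 0.2 T boundary inclusively for both low field and mid field, so the assignment at exactly 0.2 T is a choice made here:

```python
def field_regime(b0_tesla):
    """Classify an MRI system by the strength of its main (B0) magnetic
    field, per the definitions above (values in tesla)."""
    if b0_tesla >= 0.5:
        return "high-field"
    if b0_tesla > 0.2:
        return "mid-field"
    if b0_tesla >= 0.01:          # 10 mT
        return "low-field"
    return "ultra-low-field"      # below 10 mT
```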
In some embodiments, the techniques described herein for denoising MR images may be adapted for application to spatial frequency data collected using a low-field MRI system including, for example and without limitation, any of the low-field MR systems described herein and/or any of the low-field MR systems described in U.S. Patent 10,222,434, entitled "Portable Magnetic Resonance Imaging Methods and Apparatus" and filed in 2018, which is incorporated herein by reference in its entirety.
The following is a more detailed description of various concepts related to methods and apparatus for denoising medical images and embodiments of these methods and apparatus. It should be appreciated that while the techniques described herein may be described in connection with denoising medical images obtained using a medical imaging device, the techniques developed by the inventors and described herein are not limited in this respect and may be applied to other types of images obtained using non-medical imaging devices. It should be appreciated that the various aspects described herein may be implemented in any of a number of ways. Examples of specific implementations are provided herein for illustrative purposes only. In addition, the various aspects described in the following embodiments may be used alone or in any combination and are not limited to the combinations explicitly described herein.
Example denoising: dual noisy image
Fig. 1A is a diagram illustrating a process performed by a trained neural network to denoise medical images of a subject in accordance with some embodiments of the technology described herein. As shown in fig. 1A, the trained neural network 110 may be configured to accept the noise corrupted image 108 as input and to denoise the noise corrupted image 108. The trained neural network 110 may output a denoised image 112 corresponding to the medical image 102 of the subject. Alternatively, in some embodiments, the neural network 110 may output information that may be used to generate the denoised image 112 (e.g., the denoised image may be obtained by doubling the output of the neural network 110 and subtracting the noisy image 108, as described below). In some embodiments, the trained neural network 110 may be trained according to the exemplary process described in connection with fig. 1B.
In some embodiments, the noise corrupted image 108 may be generated by combining the medical image 102 of the subject with the noise image 104. The medical image 102 of the subject may be obtained from data collected by a medical imaging device, such as an MRI system or any other suitable type of medical imaging device, examples of which are described herein. For example, the medical image 102 of the subject may be obtained by collecting data using a medical imaging apparatus (e.g., by imaging a patient), and then generating the medical image 102 of the subject based on the collected data. As another example, obtaining the medical image of the subject 102 may include accessing data previously collected by the medical imaging device from a storage device and generating the medical image 102 of the subject using the accessed data. As yet another example, the medical image 102 of the subject may be generated prior to the process shown in fig. 1A and may be accessed from a storage device for denoising.
In some embodiments, the noise image 104 may be selected from a plurality of noise images. The noise image 104 may be selected from the plurality of images randomly (e.g., with respect to any suitable distribution) or using any other suitable method, as aspects of the techniques described herein are not limited in this respect. In some embodiments, the noise image may be generated before the medical image 102 of the subject is obtained using the medical imaging device.
In some embodiments, the noise image 104 may be obtained using empirical measurements of noise within the medical imaging device. For example, a medical imaging device may be used to obtain noise measurements (e.g., in the absence of a subject). Alternatively, the noise measurement result may be obtained using the same type of medical imaging apparatus as that used to acquire the image data of the subject (e.g., before acquiring the image data of the subject). In some embodiments, the noise measurements may be obtained using the same medical imaging procedure and/or setting(s) of the medical imaging device used to obtain the medical image 102 of the subject. For example, in the context of MRI, if a subject is to be imaged using a Diffusion Weighted Imaging (DWI) pulse sequence, the same DWI pulse sequence may be used to obtain noise measurements. It should be appreciated that the DWI pulse sequence may be replaced with another pulse sequence including, but not limited to, a spin echo pulse sequence, a fast spin echo pulse sequence, or a Steady State Free Precession (SSFP) pulse sequence to generate the medical image 102 and/or the noise image 104 of the subject.
In some embodiments, generating the noise image 104 may include scaling the noise image 104 relative to a maximum intensity value of the medical image 102 of the subject. Such scaling may determine the amount of noise added to the medical image 102 of the subject to form the noise corrupted image 108. For example, the noise image 104 may be scaled to a range of 2% to 30% of the maximum intensity value of the medical image 102 of the subject. In some embodiments, the noise image 104 may be scaled to 5%, 10%, 20%, or any value within the above range, of the maximum intensity value of the medical image 102 of the subject. In some embodiments, the noise image 104 may be scaled by a smaller factor when generating noise corrupted images for testing than when generating noise corrupted images for training. For example, in some embodiments, the noise image 104 may be scaled to 5% for testing and to 10% for training.
Additionally or alternatively, in some embodiments, the noise image 104 may be obtained by simulating noise rather than obtaining empirical measurements of the noise. For example, the noise image 104 may be simulated using a noise model associated with the medical imaging device. In some embodiments, the noise image 104 may be simulated using a Gaussian distribution, a Poisson distribution, and/or a Student's t distribution.
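A minimal sketch of such a noise simulator follows. The function name, the default distribution parameters (Poisson rate, Student's t degrees of freedom), and the zero-mean shift for the Poisson case are assumptions made here, since a real simulator would be calibrated to the imaging device's noise model:

```python
import numpy as np

def simulate_noise_image(shape, distribution="gaussian", seed=None):
    """Simulate a noise image from one of the distributions mentioned
    above (Gaussian, Poisson, or Student's t)."""
    rng = np.random.default_rng(seed)
    if distribution == "gaussian":
        return rng.normal(loc=0.0, scale=1.0, size=shape)
    if distribution == "poisson":
        lam = 4.0
        # subtract the rate so the simulated noise is zero-mean
        return rng.poisson(lam=lam, size=shape).astype(float) - lam
    if distribution == "student-t":
        return rng.standard_t(df=3.0, size=shape)
    raise ValueError(f"unknown distribution: {distribution!r}")
```

A simulated noise image produced this way could then be scaled relative to the medical image's maximum intensity before being combined with it.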
In some embodiments, the medical image 102 of the subject may not be combined with the noise image 104 during use of the trained neural network 110 to denoise the medical image 102 of the subject. Instead, the medical image 102 of the subject may be provided as input directly to the trained neural network 110 for denoising. In such an embodiment, the trained neural network 110 may be trained in the same manner as described in connection with the example of fig. 1B. For example, in such embodiments, a method for denoising a medical image of a subject, the medical image being generated using data collected by a medical imaging apparatus, may be provided. The method may include: obtaining a medical image of the subject using at least one computer hardware processor; generating a denoised medical image corresponding to the medical image of the subject using the medical image of the subject and the trained neural network; and outputting the denoising medical image. In such embodiments, the trained neural network may be trained using training data comprising image pairs, a first of which comprises a first image generated using data collected by the medical imaging device and a second image generated by combining the first image and the noise image.
In some embodiments, the trained neural network 110 may be implemented as a deep neural network. For example, the trained neural network 110 may include multiple layers. In some embodiments, the layers may include one or more convolutional layers, one or more pooling layers (e.g., average pooling, max pooling), one or more upsampling layers, one or more downsampling layers, one or more fully connected layers, and/or any other suitable type of layer. In some embodiments, the plurality of layers may be arranged in one of a U-net structure and a Res-net structure. For example, the trained neural network 110 may include the following layers arranged in a U-net structure:
1. Input, n features
2. Encoder convolution, kernel size = 3 x 3, 48 features, ReLU
3. Encoder convolution, kernel size = 3 x 3, 48 features, ReLU
4. Pooling 1, kernel size = 2 x 2, 48 features, ReLU
5. Encoder convolution, kernel size = 3 x 3, 48 features, ReLU
6. Pooling 2, kernel size = 2 x 2, 48 features, ReLU
7. Encoder convolution, kernel size = 3 x 3, 48 features, ReLU
8. Pooling 3, kernel size = 2 x 2, 48 features, ReLU
9. Encoder convolution, kernel size = 3 x 3, 48 features, ReLU
10. Pooling 4, kernel size = 2 x 2, 48 features, ReLU
11. Encoder convolution, kernel size = 3 x 3, 48 features, ReLU
12. Pooling 5, kernel size = 2 x 2, 48 features, ReLU
13. Encoder convolution, kernel size = 3 x 3, 48 features, ReLU
14. Upsampling, kernel size = 2 x 2, 48 features, ReLU
15. Concatenation (with output of Pooling 4), 96 features, ReLU
16. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
17. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
18. Upsampling, kernel size = 2 x 2, 96 features, ReLU
19. Concatenation (with output of Pooling 3), 96 features, ReLU
20. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
21. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
22. Upsampling, kernel size = 2 x 2, 96 features, ReLU
23. Concatenation (with output of Pooling 2), 144 features, ReLU
24. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
25. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
26. Upsampling, kernel size = 2 x 2, 96 features, ReLU
27. Concatenation (with output of Pooling 1), 144 features, ReLU
28. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
29. Decoder convolution, kernel size = 3 x 3, 96 features, ReLU
30. Upsampling, kernel size = 2 x 2, 96 features, ReLU
31. Concatenation, 96 + n features, ReLU
32. Decoder convolution, kernel size = 3 x 3, 64 features, ReLU
33. Decoder convolution, kernel size = 3 x 3, 32 features, ReLU
34. Decoder convolution, kernel size = 3 x 3, m features, linear activation
In some embodiments, the convolutional layers may use a bias value of zero (e.g., the layers may be bias-free). Alternatively, in some embodiments, the convolutional layers may include non-zero biases. Further details of the above U-net structure are described in "Robust and Interpretable Blind Image Denoising via Bias-Free Convolutional Neural Networks" by S. Mohan, Z. Kadkhodaie, E. P. Simoncelli, and C. Fernandez-Granda, published in connection with the International Conference on Learning Representations (ICLR), April 2020.
In some embodiments, an additional deconvolution step may be applied to the output of the trained neural network 110. Deconvolution may reduce blur introduced by the trained neural network 110 in the final denoised image. For example, in some embodiments, the output of the trained neural network 110 may be refined by iterations of Richardson-Lucy deconvolution. In some embodiments, the Richardson-Lucy deconvolution may use a 5 x 5 Gaussian kernel with σ = 0.5.
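A dependency-free sketch of this deconvolution step, using the 5 x 5 Gaussian kernel with σ = 0.5 mentioned above. The helper names, the zero-padding boundary handling, and the iteration count are choices made here, not details from the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=0.5):
    """Normalized 2-D Gaussian kernel (e.g., 5 x 5 with sigma = 0.5)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def conv2d_same(image, kernel):
    """'Same'-size 2-D convolution with zero padding (no external deps)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def richardson_lucy(observed, psf, n_iter=10, eps=1e-12):
    """Iterative Richardson-Lucy deconvolution of a non-negative image."""
    estimate = np.maximum(np.asarray(observed, dtype=float), eps)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = conv2d_same(estimate, psf) + eps
        estimate = estimate * conv2d_same(observed / reblurred, psf_flipped)
    return estimate
```

Applying a few such iterations to the network output concentrates energy that the narrow Gaussian blur spread across neighboring pixels.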
In some embodiments, the trained neural network 110 may be trained according to the process shown in fig. 1B. The example of fig. 1B shows a process of training the neural network 110 to perform denoising of a medical image of a subject. In some embodiments, the neural network 110 may be provided with one or more noise corrupted images 108 generated by combining the medical image 102 of the subject with the noise image 104.
The inventors recognized that, denoting the denoised image 112 as X and the medical image 102 of the subject as Y = X + N, the neural network 110 may be trained to denoise the image Y by augmenting Y with additive noise M to form Z = Y + M, where M is drawn from the same distribution as the noise N (i.e., P(M ≤ u) = P(N ≤ u) for all u). The additive noise M may be obtained from empirical measurements or from a noise simulator, as described herein.
In some embodiments, the neural network 110 may be trained according to the loss function 114 to learn a mapping h: Z → Y. The loss function 114 may be determined based on the denoised image 112 and the medical image 102 of the subject, and may then be used to train the neural network 110 (e.g., to update the weights of the neural network 110). In some embodiments, the loss function 114 may be a mean squared error (MSE) loss function computed as the mean of the squared differences between the denoised image 112 and the medical image 102 of the subject. In some embodiments, other loss functions may be implemented, including, for example, binary cross-entropy (BCE), categorical cross-entropy (CC), or sparse categorical cross-entropy (SCC) loss functions.
Under the MSE approach, the neural network 110 may be configured to minimize the expected value E[(h(Z) − Y)^2], whose minimizer is the conditional expectation h(z) = E[Y | Z = z]. Because Z = X + N + M, with M and N identically distributed, E[N | Z] = E[M | Z], and the expression may be rearranged such that:
2 E[Y | Z] − Z = 2 E[X | Z] + 2 E[N | Z] − (E[X | Z] + E[N | Z] + E[M | Z]) = E[X | Z]
That is, 2 E[Y | Z] − Z is equal to E[X | Z], the minimum mean squared error estimate of the clean image X. Thus, the denoised image 112 may be estimated by doubling the output h(z) of the neural network 110 and subtracting the noise corrupted image 108.
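The estimate just described — doubling the network output and subtracting the doubly noisy input — is a one-line computation. In this sketch, `h_of_z` stands for the network's prediction given the noise-augmented image z:

```python
import numpy as np

def denoised_estimate(h_of_z, z):
    """Estimate the clean image X from the network output h(z) ~ E[Y | Z = z]
    via the identity E[X | Z] = 2 * E[Y | Z] - Z."""
    return 2.0 * np.asarray(h_of_z, dtype=float) - np.asarray(z, dtype=float)
```

If the network output equaled Y exactly, the estimate would be X + (N − M), which is unbiased for X because M and N are drawn from the same distribution.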
In some embodiments, noise corrupted images 108 may be used both to train (e.g., as shown in fig. 1B) and to test (e.g., as shown in fig. 1A) the neural network 110. The inventors have recognized that the noise corrupted images 108 used for testing may have a lower variance (e.g., lower noise strength) than the noise corrupted images 108 used for training, to improve the denoising performance of the trained neural network 110. For any noise simulator, the correction to the output of the neural network 110 may have the following form:
p·h(z) - q·z, where z ∼ Z and p > 2q,
where p and q are weights applied to the output of the neural network 110. The above expression enables heuristic tuning of p and q. The values of p and q may be further improved if training data comprising noise-free medical images 102 is available.
Any suitable training algorithm may be used to train the neural network 110. For example, the neural network 110 may be trained using stochastic gradient descent and backpropagation. In some embodiments, an Adam optimizer may be used to train the neural network 110. For example, the neural network 110 may be trained using an Adam optimizer with a learning rate of 0.0003, β1 = 0.9, and β2 = 0.999, and with a batch size of 32. In some embodiments, the neural network 110 may be trained for about 150000 iterations. In some embodiments, training the neural network 110 may include training at least 10000 parameters, at least 50000 parameters, at least 100000 parameters, at least 250000 parameters, at least 500000 parameters, at least 1000000 parameters, at least 2000000 parameters, between 100000 and 1000000 parameters, between 50000 and 2000000 parameters, between 500000 and 5000000 parameters, or any suitable range within these ranges.
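The Adam update rule referenced above can be sketched in a few lines. The following minimal NumPy implementation uses the hyperparameters quoted in this section (learning rate 0.0003, β1 = 0.9, β2 = 0.999) on a toy quadratic objective; it is an illustration of the optimizer, not the training code of the described embodiments:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=3e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameter array w given its gradient grad at step t."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = ||w||^2 for a toy 3-parameter "network".
w = np.array([1.0, -2.0, 0.5])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 20001):
    grad = 2.0 * w                            # gradient of ||w||^2
    w, m, v = adam_step(w, grad, m, v, t)
assert np.linalg.norm(w) < 0.1                # converges toward the minimum at 0
```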
FIG. 2 is a diagram illustrating a process of generating a noise corrupted image using an image reconstruction module associated with an MRI system in accordance with some embodiments of the techniques described herein. Although the exemplary processes of fig. 1A and 1B describe generating the noise corrupted image 108 directly in the image domain, the example of fig. 2 illustrates a process of generating noise corrupted image data in a domain other than the image domain (e.g., the signal domain for the example of MRI) and then reconstructing the noise corrupted image 220. Such processing is generally applicable to medical imaging devices that capture data in domains other than the image domain. The process described by way of example in fig. 2 may be used to generate images for training or testing the neural network 110.
In some embodiments, the MR data 202 may be generated empirically (e.g., using an MRI system to collect the MR data 202). For example, the MR data 202 may be obtained by an MRI system (e.g., an MRI system for acquiring MR data of a subject or an MRI system of the same type as an MRI system for acquiring MR data of a subject). In such an embodiment, the MR data 202 may be obtained using the same pulse sequence as that used to acquire the MR data of the subject. For example, if a subject is to be imaged using a DWI pulse sequence, MR data 202 for generating a noise corrupted image 220 may also have been acquired using the same DWI pulse sequence. It is understood that the DWI pulse sequence may be replaced with another pulse sequence (which includes, but is not limited to, a spin echo pulse sequence, a fast spin echo pulse sequence, and/or an SSFP pulse sequence) to generate the noise corrupted image 220.
In some embodiments, the MR data 202 may be generated by synthesizing the MR data 202. For example, the MR data 202 may be synthesized based on one or more characteristics of the MRI system, including the number of Radio Frequency (RF) coils of the MRI system, the geometry and sensitivity of the RF coils of the MRI system, the field strength of the MRI system, and RF interference that the MRI system may be expected to experience during operation, among other factors. More description of MR data synthesis to be used for training a machine learning model is provided in U.S. patent publication 2020-0294282 filed 3/12/2020 and entitled "Deep Learning Techniques for Alignment of Magnetic Resonance Images," which is incorporated herein by reference in its entirety.
In some embodiments, the noise data 204 may be generated in a similar manner as the noise image 104 of fig. 1A and 1B. For example, the noise data 204 may be generated based on empirical measurements (e.g., by using an MRI system to measure noise within the MRI system in the absence of a patient). Alternatively or additionally, the noise data 204 may be generated by simulating noise, as described herein.
In some embodiments, the MR data 202 and the noise data 204 may be combined to form noise corrupted MR data 208. Combining the MR data 202 and the noise data 204 may include adding the MR data 202 and the noise data 204 in the signal domain. Alternatively or additionally, combining the MR data 202 and the noise data 204 may include any suitable steps (e.g., multiplication, convolution, or other forms of transformation).
In some embodiments, the noise data 204 may be scaled relative to the MR data 202 before the two are combined. For example, the intensity of the noise data 204 may be scaled relative to the maximum intensity value of the MR data 202. Such scaling may determine the amount of noise added to the MR data 202 and, ultimately, the amount of noise present in the noise corrupted image 220. For example, the noise data 204 may be scaled to be in the range of 2% to 30% of the maximum intensity value of the MR data 202. In some embodiments, the noise data 204 may be scaled to 5%, 10%, 20%, or any other value within the above range, of the maximum intensity value of the MR data 202. In some embodiments, the noise data 204 may be scaled to a smaller value when generating test noise corrupted images than when generating training noise corrupted images. For example, in some embodiments, the noise data 204 may be scaled to 5% for testing and to 10% for training.
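The percentage scaling described above can be sketched as follows (a minimal NumPy sketch; the helper name is hypothetical, and the 10% figure matches the training example above):

```python
import numpy as np

def scale_noise(noise: np.ndarray, mr_data: np.ndarray, fraction: float) -> np.ndarray:
    """Scale noise so its peak magnitude is `fraction` of the peak MR data magnitude."""
    peak_signal = np.abs(mr_data).max()
    peak_noise = np.abs(noise).max()
    return noise * (fraction * peak_signal / peak_noise)

rng = np.random.default_rng(1)
# Complex-valued stand-ins for signal-domain MR data and measured noise.
mr_data = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
noise = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))

scaled = scale_noise(noise, mr_data, fraction=0.10)   # 10% scaling for training
assert np.isclose(np.abs(scaled).max(), 0.10 * np.abs(mr_data).max())
```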
In some embodiments, the noise corrupted MR data 208 may then be provided to an image reconstruction module 210. The image reconstruction module 210, which is described in more detail below in conjunction with fig. 3, may be configured to reconstruct the noise corrupted MR data 208 into a noise corrupted image 220. That is, the image reconstruction module 210 may be configured to transform the noise corrupted MR data 208 in the signal domain into a noise corrupted image 220 in the image domain. The noise corrupted image 220 may then be provided to, for example, the neural network 110 for denoising.
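The signal-domain corruption and reconstruction steps can be sketched end to end, with a plain inverse FFT standing in for the full image reconstruction module 210 (all names and parameters here are illustrative, not the module's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "clean" image and its signal-domain (k-space) representation.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0
kspace = np.fft.fft2(image)                   # stand-in for MR data 202

# Complex Gaussian noise in the signal domain (stand-in for noise data 204),
# added to form the noise corrupted MR data 208.
noise = rng.normal(scale=5.0, size=kspace.shape) \
        + 1j * rng.normal(scale=5.0, size=kspace.shape)
noisy_kspace = kspace + noise

# Trivial reconstruction (inverse FFT + magnitude) standing in for the
# image reconstruction module 210, yielding the noise corrupted image 220.
noisy_image = np.abs(np.fft.ifft2(noisy_kspace))

assert noisy_image.shape == image.shape
assert not np.allclose(noisy_image, image)    # the noise survives reconstruction
```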
In some embodiments, the image reconstruction module 210 may be a pipeline comprising a plurality of processing steps configured to transform, correct and/or reconstruct the input MR data from the signal domain to the image domain. Fig. 3 shows a block diagram of an exemplary image reconstruction module 210 including such a pipeline, in accordance with some embodiments of the technology described herein.
In some embodiments, the image reconstruction module 210 may include a plurality of modules, one or more of which include a neural network configured to perform a particular task in the image reconstruction pipeline. For example, these modules may include, but are not limited to, a phase drift correction module 211, a pre-whitening module 212, a gridding module 213, a multi-echo multi-coil combination module 214, a B0 warp recovery module 215, and/or an intensity correction module 216. Additional description of the image reconstruction module 210 is provided in U.S. patent publication 2020-0294282, filed 3/12/2020 and entitled "Deep Learning Techniques for Alignment of Magnetic Resonance Images," which is incorporated herein by reference in its entirety.
In some embodiments, the phase drift correction module 211 and the pre-whitening module 212 may process the MR data prior to reconstructing the MR image. For example, the phase drift correction module 211 and the pre-whitening module 212 may be configured to process MR data in the signal domain. The phase drift correction module 211 may be configured to correct for phase drift caused by thermal drift over time (e.g., causing a shift of the B0 field), and the pre-whitening module 212 may be configured to correct for differences in noise levels between the individual RF coils of the MRI system. In some embodiments, the phase drift correction module 211 and the pre-whitening module 212 may each include a trained neural network, and the two networks may be co-trained.
In some embodiments, the gridding module 213 may be configured to reconstruct MR images using gridding, a linear reconstruction method. It will be appreciated that other methods of image reconstruction may be implemented instead of, or in addition to, the gridding module 213. For example, Principal Component Analysis (PCA), SENSitivity Encoding (SENSE), GeneRalized Autocalibrating Partially Parallel Acquisition (GRAPPA), or Compressed Sensing (CS) may be implemented instead of or in addition to the gridding module 213. Alternatively or additionally, a deep learning method may be implemented for image reconstruction.
In some embodiments, additional processing may be performed after image reconstruction. Such post-reconstruction processing may include the multi-echo multi-coil combination module 214, the B0 warp recovery module 215, and/or the intensity correction module 216. In some embodiments, the multi-echo multi-coil combination module 214 may be configured to combine multiple MR images generated based on data acquired from multiple RF coils of an MRI system, or to combine multiple MR images generated based on multiple acquisitions of MR data acquired by the same RF coil. Additionally, in some embodiments, the B0 warp recovery module 215 may be configured to remove warp artifacts (e.g., as generated by a DWI pulse sequence). In some embodiments, the intensity correction module 216 may be configured to perform intensity correction between MR images generated by the gridding module 213. In some embodiments, the post-reconstruction modules may include one or more neural networks configured to perform post-reconstruction processing of the MR image.
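The pipeline structure described above can be sketched as a chain of callables, one per module. The stage implementations below are trivial stand-ins (in the described embodiments one or more stages may themselves be neural networks, and the stage names are illustrative):

```python
import numpy as np
from functools import reduce

# Hypothetical stand-ins for pipeline stages; each maps its input array to
# the next representation.
def phase_drift_correction(data):   return data * np.exp(-1j * 0.01)
def pre_whitening(data):            return data / np.std(data)
def gridding_reconstruction(data):  return np.abs(np.fft.ifft2(data))
def intensity_correction(img):      return img / img.max()

pipeline = [
    phase_drift_correction,   # signal-domain corrections first
    pre_whitening,
    gridding_reconstruction,  # signal domain -> image domain
    intensity_correction,     # post-reconstruction processing
]

kspace = np.fft.fft2(np.random.default_rng(3).random((32, 32)))
image = reduce(lambda data, stage: stage(data), pipeline, kspace)
assert image.shape == (32, 32) and np.isclose(image.max(), 1.0)
```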
Fig. 4A and 4B illustrate examples of MR images of a brain of a subject obtained using DWI pulse sequences and including different levels of noise corruption, according to some embodiments of the techniques described herein. The MR image of fig. 4A was acquired using a DWI pulse sequence without diffusion weighting (e.g., b=0), and the MR image of fig. 4B was acquired using a DWI pulse sequence with a diffusion weighting value b=890. The levels of noise corruption are scaled to 5%, 10%, and 20% of the maximum intensity value of the original MR images, respectively.
Turning to fig. 5, additional aspects of denoising medical images using a trained neural network are illustrated. An exemplary process 500 for generating a denoised medical image of a subject using a trained neural network according to some embodiments of the technology described herein is described in connection with fig. 5.
Process 500 may be performed using any suitable computing device. For example, in some embodiments, the process 500 may be performed by a computing device co-located with (e.g., in the same room as) the medical imaging device. As another example, in some embodiments, process 500 may be performed by one or more processors located at a location remote from the medical imaging device (e.g., as part of a cloud computing environment).
Process 500 may optionally begin at act 502, where a noise image may be generated by empirically measuring noise using a medical imaging device and/or by simulating the noise image as described in connection with noise image 104 of fig. 1A. In some embodiments, the noise image may be obtained before the medical image of the subject is obtained using the medical imaging device (e.g., by measurement or simulation), and may be accessed (e.g., from a computer storage device) after the medical image of the subject is de-noised.
Following act 502, process 500 may proceed to act 504, where a medical image of the subject may be obtained at act 504. Medical images of a subject may be obtained from a medical imaging device (e.g., any medical imaging device as described herein). For example, a medical image of a subject may be obtained by collecting data using a medical imaging device (e.g., by imaging a patient) and then generating a medical image of the subject based on the collected data. Alternatively, obtaining the medical image of the subject may include accessing data collected by the medical imaging device from the computer storage and generating the medical image of the subject using the accessed data, or the medical image of the subject may be generated and accessed from the computer storage prior to the beginning of process 500.
Following act 504, process 500 may proceed to act 506, at which act 506, in some embodiments, the medical image of the subject may be combined with the noise image to obtain a noise corrupted medical image of the subject. For example, a noise image may be added to a medical image of a subject to obtain a noise corrupted medical image of the subject. Alternatively or additionally, the noise image may be combined with the medical image of the subject in any other suitable way (e.g., via multiplication, convolution, or other transformation) to obtain a noise corrupted medical image of the subject.
Following act 506, process 500 may proceed to act 508, at which act 508 a denoised medical image corresponding to the noise corrupted medical image of the subject may be generated using the noise corrupted medical image of the subject and the trained neural network. The trained neural network may include multiple layers (e.g., convolutional layers in some embodiments). In some embodiments, the plurality of layers may have a U-net structure. For example, the trained neural network may be trained as described herein in connection with fig. 1B.
In some embodiments, generating the denoising medical image using the trained neural network may include generating the denoising medical image directly by the trained neural network. Alternatively, in some embodiments, the trained neural network may generate denoising information that may be used to generate the denoised medical image. For example, the denoising information may indicate which noise is to be removed from the noise corrupted medical image such that generating the denoised medical image may be performed by subtracting the denoising information from the noise corrupted medical image.
Following act 508, process 500 may proceed to act 510, where a denoised medical image may be output at act 510. The denoised medical image may be output using any suitable method. For example, the denoised medical image may be output by being saved for subsequent access, transmitted over a network to a recipient, and/or displayed to a user of the medical imaging device.
Fig. 6 illustrates an example of MR images of a brain of a subject before (top) and after (bottom) denoising by a trained neural network, according to some embodiments of the technology described herein. As in Schaefer PW, Grant PE, and Gonzalez RG, "Diffusion-weighted MR imaging of the brain," Radiology 2000; 217:331-345 (which is incorporated by reference in its entirety), the MR images are acquired, from left to right, using a DWI pulse sequence without diffusion weighting (e.g., b=0), using a DWI pulse sequence with a diffusion weighting value b=890, and by generating an Apparent Diffusion Coefficient (ADC) map.
Turning to fig. 7A, another machine learning technique developed by the inventors for denoising medical images involves the use of an adversarial approach. Fig. 7A illustrates a diagram of a process performed by a generator neural network 704 to denoise a medical image 702 of a subject in accordance with some embodiments of the techniques described herein.
In some embodiments, the medical image 702 of the subject may be provided to a generator neural network 704 configured to denoise the medical image 702 of the subject. The medical image 702 of the subject may be obtained from a medical imaging device (e.g., any medical imaging device as described herein). For example, the medical image 702 of the subject may be obtained by collecting data using a medical imaging device (e.g., by imaging a patient) and then generating the medical image 702 of the subject based on the collected data. Alternatively, obtaining the medical image 702 of the subject may include accessing data collected by the medical imaging device from the storage device and generating the medical image 702 of the subject using the accessed data, or the medical image 702 of the subject may be generated prior to the processing shown in fig. 7A and accessed from the storage device for denoising.
In some embodiments, the generator neural network 704 may be implemented as a deep neural network. For example, the generator neural network 704 may include multiple layers. In some embodiments, the generator neural network 704 may be a convolutional neural network, and the plurality of layers may include convolutional layers. The plurality of layers may be arranged in one of a U-net structure and a Res-net structure. In some embodiments, the generator neural network 704 may have the same architecture as the trained neural network 110 described herein in connection with fig. 1A and 1B.
In some embodiments, the denoised medical image 706 may be generated based on an output of the generator neural network 704. The denoised medical image 706 may be generated directly by the generator neural network 704 (e.g., the generator neural network 704 outputs the denoised medical image 706), or the generator neural network 704 may be configured to output denoised information that may be used to generate the denoised medical image 706. For example, the denoising information may be subtracted from the medical image 702 of the subject to generate a denoised medical image 706.
In some embodiments, as shown in fig. 7B and in accordance with some embodiments of the techniques described herein, the generator neural network 704 may be trained using a discriminator neural network 714. The discriminator neural network 714 may be trained to distinguish between a noise image 710 obtained using the output of the generator neural network 704 and a noise image 712 generated prior to obtaining the medical image 702 of the subject. For example, the noise image 710 may be obtained by subtracting the denoised medical image 706, generated from the output of the generator neural network 704, from the medical image 702 of the subject. Alternatively, the generator neural network 704 may directly output the noise image 710.
In some embodiments, the discriminator neural network 714 may compare the noise image 710 generated from the output of the generator neural network 704 with the noise image 712. The noise image 712 may be generated in the same manner as the noise image 104 described in connection with fig. 1A and 1B. For example, the noise image 712 may be generated based on empirical measurements of noise and/or simulations of noise.
The min-max objective function of the adversarial training method, comprising the generator neural network 704, G, and the discriminator neural network 714, D, may be written as:
min_G max_D E_x[ log D(x) ] + E_z[ log(1 - D(x̂)) ], where x̂ = M ⊙ (z - G(z)) + M̄ ⊙ (z - s(z)),
where x is a true residual sample, z is a noisy input image, G(z) is the denoised image, x̂ is a generated residual sample, M is an invariance mask, M̄ is the complement of the invariance mask, ⊙ denotes element-wise multiplication, and s(z) replaces the masked values in z with a local average. Thus, the generator neural network 704 and the discriminator neural network 714 compete by updating their weight matrices via stochastic gradient descent until they reach a Stackelberg equilibrium.
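The alternating min-max updates can be illustrated with a deliberately tiny example: one-dimensional "residual" samples, a logistic discriminator, and a generator reduced to a single trainable scalar. This sketch omits the masking terms and is not the model described above; it only demonstrates the competition between G and D (all names and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# True "residual" (noise) samples the discriminator should accept.
def real_batch(n=64):
    return rng.normal(loc=2.0, scale=0.5, size=n)

g = -2.0          # degenerate generator: one trainable scalar ("generated residual")
a, b = 0.0, 0.0   # logistic discriminator D(x) = sigmoid(a * x + b)
lr_d, lr_g = 0.05, 0.02

for step in range(5000):
    r = real_batch()
    # Discriminator step: descend -[mean log D(r) + log(1 - D(g))].
    grad_a = -np.mean((1 - sigmoid(a * r + b)) * r) + sigmoid(a * g + b) * g
    grad_b = -np.mean(1 - sigmoid(a * r + b)) + sigmoid(a * g + b)
    a = float(np.clip(a - lr_d * grad_a, -10, 10))  # keep D's weight bounded
    b -= lr_d * grad_b
    # Generator step: descend -log D(g) (non-saturating generator loss).
    grad_g = -(1 - sigmoid(a * g + b)) * a
    g -= lr_g * grad_g

assert g > 0.0    # the generated sample migrated toward the real-sample region
```

The two learning rates mirror the two time-scale idea discussed later in this section: the discriminator is updated with a larger step size than the generator.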
In some embodiments, the discriminator neural network 714 may be implemented as a deep learning model. The discriminator neural network 714 may include multiple layers; in some embodiments, these layers may be convolutional layers. The layers may be arranged according to a U-net structure. In some embodiments, the discriminator neural network 714 may include the following layers:
1. RGB image input
2. Residual block down, 8 → 16
3. Residual block down, 16 → 32
4. Optional non-local block (64 × 64)
5. Residual block down, 32 → 64
6. Residual block down, 64 → 64
7. Residual block down, 64 → 128; ReLU; global sum pooling; linear → 1
8. Residual block up, 128 → 64
9. Residual block up, 128 → 64
10. Residual block up, 128 → 32
11. Residual block up, 64 → 16
12. Residual block up, 32 → 8
13. Residual block up, 16 → 8
14. Residual block up, 8 → 1
15. Sigmoid
Additional details of the above-described discriminator neural network are described in E. Schönfeld, B. Schiele, and A. Khoreva, "A U-Net Based Discriminator for Generative Adversarial Networks," 2020 Conference on Computer Vision and Pattern Recognition, which is incorporated by reference in its entirety.
In some embodiments, the discriminator neural network 714 may generate the classification output 716 based on the received noise image 710 and the noise image 712. The classification output 716 may describe a level of similarity between the noise image 710 and the noise image 712, reflecting whether the generator neural network 704 accurately denoised the medical image 702 of the subject. The classification output 716 may be used to generate a loss function L_adversarial. The loss function may be used to change parameters (e.g., weight values or other parameters) of the generator neural network 704. In some embodiments, the loss function L_adversarial may include a Jensen-Shannon divergence (JSD) loss function, an MSE loss function, or any other suitable loss function. Additionally, in some embodiments, a loss function (e.g., similar to the loss function 114 described in connection with fig. 1B) may be generated based on the denoised image 706 and the medical image 702 of the subject. In this way, the generator neural network 704 can be trained using both adversarial feedback and self-feedback.
In some embodiments, the generator neural network 704 may be trained using an Adam optimizer. Both the generator neural network 704 and the discriminator neural network 714 may use a two time-scale update rule (TTUR), where the learning rate of the generator neural network 704 is set to 0.0001 and the learning rate of the discriminator neural network 714 is set to 0.0003. In some embodiments, a momentum term may not be used, and the values of β1 and β2 may be set to 0.0 and 0.9, respectively. In some embodiments, training the generator neural network 704 and/or the discriminator neural network 714 may include training at least 10000 parameters, at least 50000 parameters, at least 100000 parameters, at least 250000 parameters, at least 500000 parameters, at least 1000000 parameters, at least 2000000 parameters, between 100000 and 1000000 parameters, between 50000 and 2000000 parameters, between 500000 and 5000000 parameters, or any suitable range within these ranges.
Fig. 8 shows an example of noisy images (top) from MNIST and the corresponding denoised images (middle) generated by a conventional neural network. The conventional neural network was trained using in-domain images (left) and exhibits poor performance on the out-of-domain, intensity-inverted images (right). In contrast, fig. 9 illustrates a comparison of denoising performance on noise corrupted images (left) from MNIST when denoised by a conventional neural network trained on out-of-domain images (middle) and by a generator neural network trained using a discriminator neural network (right; e.g., as described in connection with figs. 7A and 7B), according to some embodiments described herein. The denoised images generated by the generator neural network are markedly better than those generated by the conventional neural network.
Fig. 10 shows examples of noisy in-domain hue-saturation-value (HSV) images (top) and out-of-domain red-green-blue (RGB) images (bottom) from CIFAR-10 before and after denoising by a conventional neural network. The out-of-domain images suffer from additional blurring and contrast artifacts. In contrast, fig. 11 illustrates examples of noisy images (top) from CIFAR-10 after denoising by a conventional neural network (middle) and after denoising by a generator neural network trained using a discriminator neural network (bottom), according to some embodiments described herein. The denoised images produced using the adversarial method (bottom) exhibit sharper contrast and more natural colors than those denoised using the conventional neural network.
Fig. 12 is a flowchart of an exemplary process 1200 for generating a denoised medical image of a subject using a generator neural network, according to some embodiments of the technology described herein. Process 1200 may be performed using any suitable computing device. For example, in some embodiments, process 1200 may be performed by a computing device co-located with (e.g., in the same room as) the medical imaging device. As another example, in some embodiments, process 1200 may be performed by one or more processors located at a location remote from the medical imaging device (e.g., as part of a cloud computing environment).
In some embodiments, process 1200 may optionally begin with act 1202, at which act 1202 a discriminator neural network may be used to train a generator neural network. The medical image of the subject may be provided to a generator neural network, and a denoised image and a noisy image may be generated based on an output of the generator neural network. For example, the noise image may be generated by subtracting the denoising image from the medical image of the subject, or the generator neural network may directly output the noise image.
In some embodiments, the discriminator neural network may be trained to distinguish noise images obtained based on the output of the generator neural network from reference noise images. The reference noise images may be generated in the same manner as the noise image 104 described in connection with fig. 1A and 1B. For example, the noise images may be generated based on empirical measurements of noise and/or simulations of noise.
In some embodiments, the discriminator neural network may compare the noise image derived from the generator output with the reference noise image and output classification information indicating a level of similarity between the two images. The classification information (e.g., classification values) may be used to generate a loss function configured to provide feedback to the generator neural network.
Following act 1202, process 1200 may proceed to act 1204, where a medical image of the subject may be obtained at act 1204. Medical images of a subject may be obtained from a medical imaging device (e.g., any medical imaging device as described herein). For example, a medical image of a subject may be obtained by collecting data using a medical imaging device (e.g., by imaging a patient) and then generating a medical image of the subject based on the collected data. Alternatively, obtaining the medical image of the subject may include accessing data collected by the medical imaging device from the computer storage and generating the medical image of the subject using the accessed data, or the medical image of the subject may be generated and accessed from the computer storage prior to the process 1200 beginning.
After act 1204, process 1200 may proceed to act 1206, where, using the medical image of the subject and the generator neural network, a denoised medical image corresponding to the medical image of the subject may be generated. In some embodiments, generating the denoised medical image using the generator neural network may include generating the denoised medical image directly by the generator neural network. Alternatively, in some embodiments, the generator neural network may generate denoising information that may be used to generate a denoising medical image. For example, the denoising information may indicate noise to be removed from the noise corrupted medical image such that generating the denoised medical image may be performed by subtracting the denoising information from the noise corrupted medical image.
After act 1206, process 1200 may proceed to act 1208, at which act 1208 a denoised medical image may be output. The denoised medical image may be output using any suitable method. For example, the denoised medical image may be output by saving for subsequent access, transmission over a network to a recipient, and/or display to a user of the medical imaging device.
Fig. 13 illustrates an example of MR images of a brain of a subject before (top) and after (bottom) denoising by a generator neural network, according to some embodiments of the technology described herein. MR images are acquired using diffusion imaging MRI techniques, and the denoised images clearly show boundaries between tissue structures of the brain of the subject.
Example MRI System
Some embodiments of the techniques described herein may be implemented using a portable low-field MRI system (aspects of which are described below with reference to figs. 14, 15A-15B, and 16A-16B). Some aspects of such portable low-field MRI systems are further described in U.S. patent 10,222,434, entitled "Portable Magnetic Resonance Imaging Methods and Apparatus," filed on January 24, 2018, which is incorporated by reference herein in its entirety.
Fig. 14 is a block diagram of example components of an MRI system 1400. In the illustrative example of fig. 14, MRI system 1400 includes workstation 1404, controller 1406, pulse sequence storage 1408, power management system 1410, and magnetic assembly 1420. It should be appreciated that the system 1400 is illustrative and that the MRI system may have one or more other components of any suitable type in addition to or instead of the components shown in fig. 14.
As shown in fig. 14, the magnetic assembly 1420 includes a B0 magnet 1422, shims 1424, RF transmit and receive coils 1426, and gradient coils 1428. The B0 magnet 1422 may be used to at least partially generate the main magnetic field B0. The B0 magnet 1422 may be any suitable type of magnet that can generate a main magnetic field and may include one or more B0 coils, correction coils, pole pieces, etc. In some embodiments, the B0 magnet 1422 may be a permanent magnet. For example, in some embodiments, as described herein including with reference to fig. 23, the B0 magnet 1422 may comprise a plurality of permanent magnet pieces organized in a biplanar arrangement of concentric permanent magnet rings. In some embodiments, the B0 magnet 1422 may be an electromagnet. In some embodiments, the B0 magnet 1422 may be a hybrid magnet comprising one or more permanent magnets and one or more electromagnets.
In some embodiments, the shims 1424 may be used to contribute magnetic field(s) to improve the uniformity of the B0 field generated by the magnet 1422. In some embodiments, the shims 1424 may be permanent magnet shims. In some embodiments, the shims 1424 may be electromagnetic and may include one or more shim coils configured to generate a shimming magnetic field. In some embodiments, the gradient coils 1428 may be arranged to provide gradient fields and may be arranged to generate gradients in the magnetic field in three substantially orthogonal directions (X, Y, Z), for example, to localize where MR signals are induced. In some embodiments, one or more components of the magnetic assembly 1420 (e.g., the shims 1424 and/or the gradient coils 1428) may be fabricated using laminate techniques.
In some embodiments, the RF transmit and receive coils 1426 may include one or more coils operable to generate RF pulses to induce an oscillating magnetic field B1. The transmit/receive coil(s) may be configured to generate any suitable type of RF pulse configured to excite an MR response in the subject and to detect the resulting MR signals emitted. The RF transmit and receive coils 1426 may include one or more transmit coils and one or more receive coils. The configuration of the transmit/receive coils varies by implementation and may include a single coil for both transmitting and receiving, separate coils for transmitting and receiving, multiple coils for transmitting and/or receiving, or any combination to achieve a single-channel or parallel MRI system. In some embodiments, the RF transmit and receive coils 1426 include multiple RF coils, which enable the MRI system 1400 to receive MR signals on multiple channels simultaneously.
The power management system 1410 includes electronics that provide operating power to one or more components of the low-field MRI system 1400. For example, the power management system 1410 may include one or more power sources, gradient power amplifiers, transmit coil amplifiers, and/or any other suitable power electronics required to provide suitable operating power to energize and operate the components of the low-field MRI system 1400.
As shown in fig. 14, the power management system 1410 includes a power supply 1412, one or more amplifiers 1414, a transmit/receive switch 1416, and thermal management components 1418. The power supply 1412 includes electronics to provide operating power to the magnetic components 1420 of the low-field MRI system 1400. For example, in some embodiments, the power supply 1412 may include electronics to provide operating power to one or more B0 coils (e.g., the B0 magnet 1422 when implemented as an electromagnet) to generate the main magnetic field for the low-field MRI system, to one or more shims 1424, and/or to one or more gradient coils 1428. In some embodiments, the power supply 1412 may be a unipolar continuous-wave (CW) power supply. The transmit/receive switch 1416 may be used to select whether the RF transmit coils or the RF receive coils are being operated.
In some embodiments, amplifier(s) 1414 may include: one or more RF receive (Rx) preamplifiers that amplify MR signals detected by the RF receive coil(s) (e.g., coils 1426); RF transmit (Tx) amplifier(s) configured to provide power to the RF transmit coil(s) (e.g., coils 1426); gradient power amplifier(s) configured to provide power to the gradient coil(s) (e.g., gradient coils 1428); and/or shim amplifier(s) configured to provide power to the shim coil(s) (e.g., shims 1424 in embodiments in which the shims 1424 include one or more shim coils).
In some embodiments, the thermal management components 1418 provide cooling for components of the low-field MRI system 1400, and may be configured to do so by facilitating the transfer of thermal energy generated by one or more components of the low-field MRI system 1400 away from those components. The thermal management components 1418 may include components to perform water-based or air-based cooling, which may be integrated with or disposed in close proximity to heat-generating MRI components, including but not limited to B0 coils, gradient coils, shim coils, and/or transmit/receive coils.
As shown in fig. 14, the low-field MRI system 1400 includes a controller 1406 (also referred to as a console) having control electronics that send instructions to and receive information from the power management system 1410. The controller 1406 may be configured to implement one or more pulse sequences for determining the instructions to send to the power management system 1410 to operate the magnetic components 1420 according to a desired sequence. For example, the controller 1406 may be configured to control the power management system 1410 to operate the magnetic components 1420 in accordance with a balanced steady-state free precession (bSSFP) pulse sequence, a low-field gradient echo pulse sequence, a low-field spin echo pulse sequence, a low-field inversion recovery pulse sequence, arterial spin labeling, diffusion-weighted imaging (DWI), and/or any other suitable pulse sequence.
In some embodiments, the controller 1406 may be configured to implement a pulse sequence by obtaining information related to the pulse sequence from a pulse sequence repository 1408, which stores information for each of one or more pulse sequences. The information stored by the pulse sequence repository 1408 for a particular pulse sequence may be any suitable information that enables the controller 1406 to implement that pulse sequence. For example, the information stored in the pulse sequence repository 1408 for a pulse sequence may include one or more parameters for operating the magnetic components 1420 according to the pulse sequence (e.g., parameters for operating the RF transmit and receive coils 1426, parameters for operating the gradient coils 1428, etc.), one or more parameters for operating the power management system 1410 according to the pulse sequence, one or more programs comprising instructions that, when executed by the controller 1406, cause the controller 1406 to control the system 1400 to operate according to the pulse sequence, and/or any other suitable information. The information stored in the pulse sequence repository 1408 may be stored on one or more non-transitory storage media.
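The repository described above can be sketched as a simple lookup table mapping a sequence name to the parameters the controller needs. The sketch below is purely illustrative: the sequence names, parameter names, and values are hypothetical stand-ins, not parameters from this disclosure.

```python
# Hypothetical sketch of a pulse-sequence repository: a mapping from sequence
# name to the parameters the controller would use. All values are illustrative.
PULSE_SEQUENCE_REPOSITORY = {
    "bSSFP": {
        "tr_ms": 6.0,          # repetition time (illustrative value)
        "te_ms": 3.0,          # echo time (illustrative value)
        "flip_angle_deg": 60,  # RF excitation flip angle
        "gradient_waveform": "balanced",
    },
    "low_field_spin_echo": {
        "tr_ms": 500.0,
        "te_ms": 20.0,
        "flip_angle_deg": 90,
        "gradient_waveform": "spin_echo",
    },
}

def get_sequence_parameters(name):
    """Look up the stored parameters for a named pulse sequence."""
    if name not in PULSE_SEQUENCE_REPOSITORY:
        raise KeyError(f"unknown pulse sequence: {name}")
    return PULSE_SEQUENCE_REPOSITORY[name]

params = get_sequence_parameters("bSSFP")
```

A controller implementation would then translate the returned parameter dictionary into instructions for the power management system.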
As shown in fig. 14, in some embodiments, the controller 1406 may interact with a computing device 1404 programmed to process received MR data (which may be spatial frequency domain MR data in some embodiments). For example, the computing device 1404 may process the received MR data using any suitable image reconstruction process (es) including any of the techniques described herein to generate one or more MR images. Additionally, the computing device 1404 may process one or more generated MR images to generate one or more denoised MR images. For example, the computing device 1404 may perform any of the processes described herein with reference to fig. 1A-1B, 5, 7A-7B, and 12. The controller 1406 may provide information related to one or more pulse sequences to the computing device 1404 for processing of data by the computing device. For example, the controller 1406 may provide information related to one or more pulse sequences to the computing device 1404, and the computing device may perform denoising of the MR image based at least in part on the provided information.
In some embodiments, the computing device 1404 may be any electronic device(s) configured to process acquired MR data and generate image(s) of the subject being imaged. The inventors have appreciated that it is advantageous for portable MRI systems to have sufficient on-board computing power to perform the neural network calculations that generate MR images from input spatial frequency data, because in many settings (e.g., hospitals) the network bandwidth available to offload spatial frequency MR data from the MRI machine for processing elsewhere (e.g., in the cloud) is limited. Thus, in some environments where the MRI system 1400 may be deployed, the inventors have recognized that it is advantageous for the MRI system to include hardware dedicated to neural network computation to perform some of the processes described herein.
Thus, in some embodiments, computing device 1404 may include one or more graphics processing units (GPUs) configured to perform the neural network computations required when implementing the neural network models described herein (e.g., trained neural network 110, generator neural network 704, discriminator neural network 714, and/or any other neural network). In some such embodiments, the computing device 1404 may be on-board (e.g., within the housing of the low-field MRI system 1400). Thus, in some embodiments, MRI system 1400 may include one or more GPUs, and the GPU(s) may be on-board, for example, by being housed within the same housing as one or more components of the power management system 1410. Additionally or alternatively, the computing device 1404 may include one or more hardware processors, FPGAs, and/or ASICs configured to process the acquired MR data and generate image(s) of the subject being imaged.
In some embodiments, the user 1402 can interact with the computing device 1404 to control aspects of the low-field MR system 1400 (e.g., program the system 1400 to operate according to a particular pulse sequence, adjust one or more parameters of the system 1400, etc.) and/or view images obtained by the low-field MR system 1400.
Fig. 15A and 15B illustrate diagrams of a portable MRI system 1500 according to some embodiments of the techniques described herein. The portable MRI system 1500 includes a B0 magnet 1510 formed in part by an upper magnet 1510a and a lower magnet 1510b, which are coupled with a yoke 1520 to increase the flux density within the imaging region. The B0 magnet 1510 may be housed in a magnet housing 1512 along with gradient coils 1515. The B0 magnet 1510 may be a permanent magnet and/or any other suitable type of magnet.
The exemplary portable MRI system 1500 also includes a base 1550 that houses electronics for operating the MRI system. For example, the base 1550 may house electronics including, but not limited to, one or more gradient power amplifiers, a system-level computer (e.g., including one or more GPUs to perform neural network calculations in accordance with some embodiments of the techniques described herein), a power distribution unit, one or more power supplies, and/or any other power components configured to operate the MRI system using mains power (e.g., via a connection to a standard wall outlet and/or a large appliance outlet). For example, the base 1550 may house low-power components such as those described herein, enabling the portable MRI system to be powered at least in part from readily available wall outlets. Thus, the portable MRI system 1500 may be brought to a patient and plugged into a wall outlet in his or her vicinity.
The portable MRI system 1500 also includes a movable slide 1560 that may be opened, closed, and positioned in various configurations. The slide 1560 includes an electromagnetic shield 1565, which may be made of any suitable conductive or magnetic material, to form a movable shield that attenuates electromagnetic noise in the operating environment of the portable MRI system, thereby shielding the imaging region from at least some electromagnetic noise.
In the portable MRI system 1500 shown in fig. 15A and 15B, the movable shield may be configured to provide shielding in different arrangements that may be adjusted as needed to accommodate a patient, to provide access to a patient, and/or in accordance with a given imaging protocol. For example, for an imaging procedure such as a brain scan, once the patient has been positioned, the slide 1560 may be closed, for example using the handle 1562, to provide electromagnetic shielding 1565 around the imaging region except for the opening that accommodates the patient's upper torso. As another example, for an imaging procedure such as a knee scan, the slide 1560 may be arranged with openings on both sides to accommodate one or both legs of the patient. Thus, the movable shield enables the shielding to be configured in an arrangement suitable for the imaging procedure while facilitating proper positioning of the patient within the imaging region. Electrical gaskets may be arranged to provide a continuous shield along the perimeter of the movable shield. For example, as shown in fig. 15B, electrical gaskets 1567a and 1567b may be provided at the interface between the slide 1560 and the magnet housing to maintain continuous shielding therealong. In some embodiments, the electrical gaskets are beryllium fingers, beryllium-copper fingers, or the like (e.g., aluminum gaskets) that maintain the electrical connection between the shield 1565 and ground as and after the slide 1560 is moved to a desired position about the imaging region.
Motorized assembly 1580 is provided for ease of transport to enable the portable MRI system to be driven from one location to another, for example, using controls such as a joystick or other control mechanism provided on or remote from the MRI system. In this way, the portable MRI system 1500 may be transported to a patient and maneuvered to the bedside for imaging.
Fig. 16A illustrates a portable MRI system 1600 that has been transported to a patient's bedside for a brain scan. Fig. 16B illustrates the portable MRI system 1600 transported to a patient's bedside for a knee scan. As shown in fig. 16B, the slide 1660 includes shielding 1665 having an electrical gasket 1667c.
FIG. 17 is a diagram of an exemplary computer system in which embodiments described herein may be implemented. An exemplary implementation of a computer system 1700 that may be used in connection with any of the embodiments of the disclosure provided herein is shown in fig. 17. For example, the processes described with reference to figs. 1A-1B, 5, 7A-7B, and 12 may be implemented on and/or using computer system 1700. As another example, computer system 1700 may be used to train and/or use any of the neural network models described herein. The computer system 1700 may include one or more processors 1710 and one or more articles of manufacture comprising non-transitory computer-readable storage media (e.g., memory 1720 and one or more non-volatile storage media 1730). The processor 1710 may control writing data to and reading data from the memory 1720 and the non-volatile storage 1730 in any suitable manner, as the aspects of the disclosure provided herein are not limited in this respect. To perform any of the functions described herein, the processor 1710 may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., memory 1720), which may serve as non-transitory computer-readable storage media storing processor-executable instructions for execution by the processor 1710.
Example denoising without ground truth: two-stage learning
Machine learning models (e.g., neural network models) for denoising noisy corrupted images have traditionally been trained using supervised learning techniques that rely on pairs of clean images and large datasets of noisy images ("training data"). Such data sets may be difficult or impossible to acquire for medical imaging techniques used in clinical settings. For example, in the context of MR imaging, large clinical datasets are generally available only for certain portions of the fully studied anatomy (e.g., brain) and certain types of MRI systems (e.g., high-field MRI systems). Accordingly, the inventors have appreciated that for certain types of medical imaging devices (e.g., new medical imaging devices, low-field MRI systems, etc.), there may be little or even no training data available with paired clean and noisy medical images.
As an alternative, unsupervised learning techniques have been used to train machine learning models for denoising noisy images. However, such techniques produce machine learning models that can only denoise images with independent and identically distributed (i.i.d.) noise. In practical medical imaging, the noise distribution in the acquired medical image is rarely i.i.d. For example, in the context of MRI, spatial frequency data is reconstructed into the image domain using a reconstruction process to generate MR images. The reconstruction process may introduce correlated, non-linear noise into the output MR image, which is inconsistent with the i.i.d. noise assumption underlying conventional unsupervised training techniques.
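The effect described above can be illustrated numerically: passing i.i.d. noise through a non-unitary linear reconstruction operator yields correlated output noise. The operator `A` below is an arbitrary stand-in for a non-unitary reconstruction step (e.g., gridding interpolation), not an operator from this disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 32
# i.i.d. Gaussian noise in the acquisition (k-space) domain: 10000 realizations.
noise = rng.standard_normal((n, 10000))

# A non-unitary linear "reconstruction" operator (stand-in for, e.g., a
# gridding interpolation matrix): each output mixes two adjacent inputs.
A = np.eye(n) + 0.5 * np.diag(np.ones(n - 1), k=1)

recon_noise = A @ noise
cov = np.cov(recon_noise)

# Off-diagonal covariance is non-zero: the reconstructed noise is correlated,
# violating the i.i.d. assumption of conventional unsupervised training.
off_diag = abs(cov[0, 1])
```

For this `A`, the population covariance A·Aᵀ has off-diagonal entry 0.5 between adjacent pixels, so the sample estimate `off_diag` is far from zero even though the input noise was i.i.d.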
The inventors have recognized that it is a challenge to obtain sufficient training data for certain medical imaging modalities. For example, obtaining sufficient training data acquired by a low-field MRI system using unique or unusual MRI system parameters and/or for different portions of the human anatomy may present challenges for developing a machine learning model for denoising medical images. To address the above challenges of training a machine learning model for denoising medical images, the inventors have recognized that in situations where clean training data is not available, approximate training data may be generated and substituted for the clean training data. In particular, the inventors have recognized and appreciated that approximate training data may be generated for a target domain based on data acquired from a source domain that is different from the target domain. In this way, images from a freely available large dataset may be used to generate approximate training data that may be used to train a denoising neural network model to denoise images generated using new or unusual imaging modalities.
Accordingly, the inventors developed a two-stage process to generate training data for training a machine learning model for denoising noisy medical images. In a first stage of the process, training data obtained from the source domain and noise data associated with the target domain are used to generate approximate training data in the target domain. In the second stage, the approximate training data is used to train the denoising neural network model. The machine learning techniques developed by the inventors provide improvements to medical imaging techniques in that they more effectively remove or suppress noise from medical images acquired using medical imaging techniques or devices for which large training data sets are not available. As a result, these techniques produce higher quality, clinically more relevant medical images (e.g., with better tissue contrast, clearer features, and/or limited noise artifacts).
One machine learning technique developed by the inventors for denoising medical images involves training a denoising neural network model in a two-stage process. In the first stage, training data for training a first neural network model may be generated using: (1) clean medical image data associated with the source domain (e.g., medical image data collected by a first type of medical imaging device such as a high-field MRI device, medical image data acquired using a first set of parameters such as a particular MRI pulse sequence, medical image data of a particular portion of patient anatomy such as the brain, etc.); and (2) first MR noise data associated with the target domain (e.g., MR noise data collected by a second type of medical imaging device, such as a low-field MRI device, different from the first type; MR noise data acquired using a second set of parameters, such as a different MRI pulse sequence; MR noise data collected while imaging a different portion of patient anatomy, such as knees, which may be imaged using a different RF receive coil than the coil used for brain imaging; etc.). The generated training data may then be used to train the first neural network model. In the second stage, training data for training the denoising neural network model may be generated by applying the first neural network model to a plurality of noisy medical images associated with the target domain to generate a plurality of denoised medical images. The denoising neural network model may then be trained using this training data. After training, the denoising neural network model may be provided with a noisy medical image as input and may generate a denoised medical image as output.
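The two-stage process above can be sketched with a toy stand-in for neural network training: a single shrinkage coefficient fit in closed form takes the place of each network. All data, noise levels, and the `train_scalar_denoiser` helper are hypothetical; the sketch shows only the data flow of the two stages, not the actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_scalar_denoiser(noisy, clean):
    """Closed-form least-squares fit of a single shrinkage coefficient a
    minimizing ||a * noisy - clean||^2 (a toy stand-in for network training)."""
    return float(np.sum(noisy * clean) / np.sum(noisy * noisy))

def add_target_noise(images, sigma, rng):
    """Add noise representative of the target domain."""
    return images + sigma * rng.standard_normal(images.shape)

sigma = 0.5
clean_source = rng.standard_normal((100, 64))  # "source domain" clean images

# Stage 1: synthesize noisy images from clean source data plus target-domain
# noise, and train the first denoiser on these paired examples.
noisy_stage1 = add_target_noise(clean_source, sigma, rng)
a1 = train_scalar_denoiser(noisy_stage1, clean_source)

# Stage 2: apply the first denoiser to (unpaired) noisy target-domain images
# to obtain approximate clean targets, then train the final denoiser on pairs
# built from those approximate targets.
noisy_target = add_target_noise(rng.standard_normal((100, 64)), sigma, rng)
pseudo_clean = a1 * noisy_target
renoised = add_target_noise(pseudo_clean, sigma, rng)
a2 = train_scalar_denoiser(renoised, pseudo_clean)

denoised = a2 * noisy_target  # the final "denoising model" applied at inference
```

With unit-variance signal and sigma = 0.5, the stage-1 fit lands near the Wiener-style shrinkage 1/(1 + 0.25) = 0.8, illustrating that the approximate pairs carry usable supervision.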
In some embodiments, the first training data includes a plurality of first noisy medical images and a corresponding plurality of clean medical images. To generate the first training data, the first noisy medical image data is generated using clean medical image data associated with the source domain (e.g., medical image data collected using a high-field MRI apparatus) and first medical image noise data associated with the target domain (e.g., simulated or collected noise data representing the type of noise present in the data collected by the low-field MRI apparatus). Thereafter, a plurality of first noisy medical images and a corresponding plurality of clean medical images are generated by applying a reconstruction process to the first noisy medical image data and the clean medical image data associated with the source domain, respectively.
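A minimal sketch of this pair-generation step, assuming a toy Cartesian setting where the reconstruction process is a plain inverse 2-D FFT: target-domain noise is added to the clean source data in k-space, and the same reconstruction is applied to both the noisy and the clean data to form a training pair. The noise level and image content are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

def reconstruct(kspace):
    """Toy reconstruction process: inverse 2-D FFT from k-space to image domain."""
    return np.fft.ifft2(kspace)

# A clean source-domain image (random stand-in for, e.g., a high-field brain image).
clean_image = rng.standard_normal((16, 16))
clean_kspace = np.fft.fft2(clean_image)

# Target-domain noise, added in the acquisition (k-space) domain.
sigma = 2.0
noise = sigma * (rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16)))
noisy_kspace = clean_kspace + noise

# Apply the same reconstruction to both, yielding a paired
# (noisy image, clean image) training example.
noisy_image = reconstruct(noisy_kspace)
clean_recon = reconstruct(clean_kspace)

pair_error = np.mean(np.abs(noisy_image - clean_recon) ** 2)
```

Because the reconstruction here is exactly invertible, `clean_recon` matches the original clean image, while `noisy_image` differs from it by the reconstructed noise.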
In some embodiments, the reconstruction process includes generating the medical image using a machine learning model (e.g., a neural network model), using compressed sensing, using at least one non-uniform transformation, and/or using at least one linear transformation. In some embodiments, for example, where the medical imaging device is an MRI system, the reconstruction process generates MR images from spatial frequency data acquired by the MRI system. In such embodiments, and where the reconstruction process includes at least one linear transformation, the reconstruction process may include the use of one or more of a coil decorrelation transformation, a gridding transformation, and/or a coil combination transformation. However, it should be appreciated that in some embodiments, the reconstruction process may be any suitable image reconstruction process, as aspects of the techniques described herein are not limited in this respect.
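A toy sketch of a linear reconstruction pipeline with the transformations named above. The noise covariance, whitening step, and uniform coil-combination weights are illustrative assumptions (a real system would estimate these from calibration data), and gridding is omitted by assuming the data already lie on a Cartesian grid.

```python
import numpy as np

rng = np.random.default_rng(3)

n_coils, nx, ny = 4, 8, 8
multicoil_kspace = (rng.standard_normal((n_coils, nx, ny))
                    + 1j * rng.standard_normal((n_coils, nx, ny)))

# Coil decorrelation (noise pre-whitening): whiten with the inverse Cholesky
# factor of an assumed coil-noise covariance matrix.
noise_cov = np.eye(n_coils) + 0.1 * np.ones((n_coils, n_coils))
L = np.linalg.cholesky(noise_cov)
whitened = np.einsum('ij,jxy->ixy', np.linalg.inv(L), multicoil_kspace)

# Gridding stand-in: data assumed already on a Cartesian grid, so per-coil
# reconstruction reduces to an inverse 2-D FFT.
coil_images = np.fft.ifft2(whitened, axes=(-2, -1))

# Coil combination: a linear weighted sum over coils (uniform weights here;
# a real system would weight by estimated coil sensitivities).
weights = np.full(n_coils, 1.0 / n_coils)
combined = np.einsum('i,ixy->xy', weights, coil_images)
```

Each step is a linear map, so the whole pipeline is itself a single linear transformation from multicoil k-space data to the combined image.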
In some embodiments, the technique includes generating second training data for training a second neural network model to denoise the medical image. To generate the second training data, a plurality of dual noisy medical images are generated using: (1) Second noisy medical image data associated with the target domain; and (2) second medical image noise data associated with the target domain. Thereafter, a second neural network model is trained using the second training data. In some embodiments, generating training data for training the denoising neural network model further comprises applying the second neural network model to the plurality of second noisy medical images.
In some embodiments, the second training data includes a plurality of dual noisy medical images and a plurality of second noisy medical images. To generate the second training data, dual noisy medical image data is generated using the second noisy medical image data associated with the target domain and the second medical image noise data associated with the target domain. Thereafter, a plurality of dual noisy medical images and a plurality of second noisy medical images are generated by applying a reconstruction process to the dual noisy medical image data and the second noisy medical image data associated with the target domain, respectively.
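This dual-noisy construction can be sketched as follows, again assuming a toy setting where the reconstruction process is an inverse 2-D FFT. A second, independent draw of target-domain noise is added on top of data that is already noisy, giving a (dual-noisy, noisy) training pair without requiring any clean data; the noise level is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(4)

sigma = 0.5

# Second noisy data associated with the target domain (no clean version exists):
# an underlying signal plus one realization of target-domain noise.
noisy_kspace = rng.standard_normal((16, 16)) + sigma * rng.standard_normal((16, 16))

# Add a second, independent draw of target-domain noise on top, producing
# "dual noisy" data paired with the original noisy data.
second_noise = sigma * rng.standard_normal((16, 16))
dual_noisy_kspace = noisy_kspace + second_noise

# Both are mapped to the image domain with the same reconstruction process.
noisy_image = np.fft.ifft2(noisy_kspace)
dual_noisy_image = np.fft.ifft2(dual_noisy_kspace)

extra_noise_power = np.mean(np.abs(dual_noisy_image - noisy_image) ** 2)
```

The second network is then trained to map `dual_noisy_image` back toward `noisy_image`, i.e., to remove one layer of target-domain noise.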
In some embodiments, generating training data for training the denoising neural network model further comprises: generating a plurality of augmented denoised medical images by applying a transform to images of the plurality of denoised medical images, and generating clean medical image data associated with the target domain by applying a non-uniform transform to images of the plurality of augmented denoised medical images.
In some embodiments, the training data for training the denoising neural network model includes a plurality of noisy medical training images and a plurality of clean medical training images. To generate this training data, clean medical training data is generated by combining clean medical image data associated with the source domain and clean medical image data associated with the target domain. Additionally, noisy medical training data is generated using the clean medical training data and third medical image noise data associated with the target domain. Then, the plurality of noisy medical training images and the plurality of clean medical training images are generated by applying a reconstruction process to the noisy medical training data and the clean medical training data, respectively.
In some embodiments, the source domain describes a source of clean medical image data used to construct training data for training the denoising neural network model. The target domain describes the source of noisy medical image data that is provided as input to the denoising neural network model for denoising. For example, in some embodiments, the clean medical image data associated with the source domain includes medical image data collected by imaging a first portion of the anatomy (e.g., brain), while the second noisy medical image data associated with the target domain includes medical image data collected by imaging a second portion of the anatomy (e.g., limb, joint, torso, pelvis, appendage) that is different from the first portion of the anatomy. As another example, in some embodiments, the clean medical image data associated with the source domain includes medical image data collected using a first type of medical imaging device (e.g., a high-field MRI system), and the second noisy medical image data associated with the target domain includes medical image data collected using a second type of medical imaging device (e.g., a low-field MRI system) that is different from the first type of medical imaging device. As another example, in some embodiments, the clean medical image data associated with the source domain includes medical image data collected using a first imaging procedure or protocol (e.g., a first type of pulse sequence), while the second noisy medical image data associated with the target domain includes medical image data collected using a second imaging procedure or protocol (e.g., a second type of pulse sequence) different from the first imaging procedure or protocol.
Fig. 18A is a diagram of an exemplary pipeline 1800 illustrating an image reconstruction and denoising process according to some embodiments of the techniques described herein. As shown in fig. 18A, the denoising module 1830 may be configured to accept as input a noisy medical image 1820. The denoising module 1830 may also be configured to denoise the noisy medical image 1820 and generate a denoised medical image 1840 and/or information that may be used to generate the denoised medical image 1840.
As described in connection with fig. 18A, the exemplary MR image reconstruction and denoising pipeline 1800 and training pipeline 1825 include a plurality of program modules configured to perform various corresponding functions. Each module may be implemented in software and as such may include processor-executable instructions that, when executed by at least one computer hardware processor, cause the at least one computer hardware processor to perform the function(s) of the module. Such modules are sometimes referred to herein as "program modules".
In some embodiments, the denoising module 1830 may include a denoising neural network model 1835. In some embodiments, the denoising neural network model 1835 may be implemented as a deep neural network model. For example, the denoising neural network model 1835 may include a plurality of layers. The layers may include one or more convolutional layers, one or more pooling layers (e.g., average pooling, max pooling, spectral pooling), one or more unpooling layers (e.g., average unpooling, max unpooling), one or more upsampling layers, one or more downsampling layers, one or more fully connected layers, and/or any other suitable type of layer. An exemplary architecture for the denoising neural network model 1835 is described herein in connection with fig. 19.
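As a deliberately minimal illustration of the convolutional layers mentioned above, the sketch below implements a two-layer conv → ReLU → conv stack from scratch. The `conv2d` helper performs the cross-correlation commonly called "convolution" in deep learning, with zero padding to preserve spatial size; the kernels are random stand-ins, not trained weights from any model in this disclosure.

```python
import numpy as np

rng = np.random.default_rng(5)

def conv2d(x, kernel):
    """'Same' 2-D cross-correlation of a single-channel image with one kernel,
    zero-padded so the output has the same spatial size as the input."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit nonlinearity."""
    return np.maximum(x, 0.0)

# A two-layer convolutional stack: conv -> ReLU -> conv.
image = rng.standard_normal((12, 12))
k1 = 0.1 * rng.standard_normal((3, 3))
k2 = 0.1 * rng.standard_normal((3, 3))
features = relu(conv2d(image, k1))
output = conv2d(features, k2)
```

A practical denoising model would stack many such layers (plus pooling/unpooling and skip connections) and learn the kernels from the training data described above.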
In some embodiments, the denoising neural network model 1835 may be trained using the training pipeline 1825. The training pipeline 1825 may include a training data generation module 1832 and a machine learning training module 1834. In some embodiments, the training data generation module 1832 may generate training data and the machine learning training module 1834 may train the denoising neural network model 1835 according to the exemplary processing described herein in connection with fig. 20A-20D.
In some embodiments, the noisy medical image 1820 may be generated using the image reconstruction module 1810. The image reconstruction module 1810 may generate a noisy medical image 1820 using the medical image data 1802. Medical image data 1802 may be acquired using a medical imaging device, such as an MRI system or any other suitable type of medical imaging device, examples of which are described herein. For example, the noisy medical image 1820 may be obtained by collecting medical image data 1802 using a medical imaging device (e.g., by imaging a patient) and then generating a noisy medical image 1820 based on the collected data using the image reconstruction module 1810. As shown in the example of fig. 18A, the noisy medical image 1820 may be an MR image generated using spatial frequency data 1802 acquired with an MRI system. As another example, obtaining the noisy medical image 1820 may include: medical image data 1802 previously collected by the medical imaging device is accessed from the storage device and a noisy medical image 1820 is generated using the accessed medical image data 1802 and the image reconstruction module 1810. As yet another example, the noisy medical image 1820 may be generated prior to the denoising process and accessed from storage for denoising.
In some embodiments, the image reconstruction module 1810 may be configured to generate the noisy medical image 1820 by applying a reconstruction process to the medical image data 1802. Such a procedure is generally applicable to medical imaging devices that capture data in domains other than the image domain. In the context of MRI, and as shown in fig. 18A, the reconstruction process is configured to use MR data collected in the spatial frequency domain (e.g., in k-space) to generate MR images in the image domain. In some embodiments, the image reconstruction module 1810 may be configured to generate the noisy medical image 1820 using a reconstruction process including compressed sensing.
In some embodiments, the image reconstruction module 1810 may be configured to generate the noisy medical image 1820 using a reconstruction process that includes a machine learning model. For example, the reconstruction process may include a machine learning model implemented as a deep neural network model. The deep neural network model may include multiple layers. The layers may include one or more convolutional layers, one or more pooling layers (e.g., average pooling, max pooling, spectral pooling), one or more unpooling layers (e.g., average unpooling, max unpooling), one or more upsampling layers, one or more downsampling layers, one or more fully connected layers, and/or any other suitable type of layer.
In some embodiments, the machine learning model may be the machine learning model described herein in connection with fig. 26A-26E. Fig. 26A is a diagram of an exemplary architecture of an example neural network model 2610 for generating MR images from input MR spatial frequency data, in accordance with some embodiments of the techniques described herein. As shown in fig. 26A, the neural network model 2610 reconstructs an output MR image 2615 from the input MR spatial frequency data 2605 by processing the input MR spatial frequency data in stages. First, the input MR spatial frequency data 2605 is processed using an initial processing block 2612 to produce an initial image 2614, and then the initial image 2614 is processed through a series of neural network blocks 2616-1, 2616-2, …, 2616-n.
In some embodiments, one or more of blocks 2616-1, 2616-2, ..., 2616-n may operate in the image domain. In some embodiments, one or more of blocks 2616-1, 2616-2, ..., 2616-n may transform the input data into a different domain (including, but not limited to, the spatial frequency domain), process it in that different domain, and then transform the result back into the image domain.
In some embodiments, the initialization block transforms the input MR spatial frequency data to the image domain to generate an initial image for subsequent processing by the neural network model 2610. The initialization block may be implemented in any suitable manner. For example, in some embodiments, the initialization block may apply the adjoint of a non-uniform Fourier transform to the input MR spatial frequency data to obtain the initial image. As another example, in some embodiments, the initialization block may apply a gridded reconstruction to the input MR spatial frequency data to obtain the initial image.
Exemplary architectures for the neural network block 2616 are shown in fig. 26B (corresponding to a non-uniform variational network) and fig. 26E (corresponding to a generalized non-uniform variational network). Thus, in some embodiments, at least one, at least some, or all of blocks 2616-1, 2616-2, …, 2616-n may have an architecture as shown for the exemplary block 2616-i in fig. 26B. As shown in fig. 26B, the neural network block 2616-i includes a data consistency block 2620 and a convolutional neural network block 2650, both of which are applied to an input x_i labeled 2621. The input x_i may represent the MR image reconstruction generated by the neural network model 2610 at the completion of the (i-1)th neural network block. In this example, the output 2635 of block 2616-i is obtained by applying the data consistency block 2620 to the input x_i to obtain a first result, applying the convolutional neural network block 2650 to x_i to obtain a second result, and subtracting from x_i a linear combination of the first result and the second result (where the linear combination is computed using a block-specific weight λ_i).
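One common form of the block update just described is x_{i+1} = x_i − λ_i·A^H(A x_i − y) − CNN(x_i). A minimal sketch follows, with placeholder callables standing in for the data consistency transforms and the learned CNN (the Cartesian FFT stand-in and all names are illustrative, not the patent's implementation):

```python
import numpy as np

def nvn_block(x_i, y0, forward, adjoint, cnn, lam):
    """One simplified variational-network block:
    x_{i+1} = x_i - lam * A^H(A x_i - y0) - CNN(x_i)."""
    dc = adjoint(forward(x_i) - y0)   # data consistency term (block 2620)
    return x_i - lam * dc - cnn(x_i)  # lam plays the role of lambda_i

# Toy check: with a consistent input and a zero CNN, the block is a no-op.
rng = np.random.default_rng(1)
x_true = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
forward = lambda x: np.fft.fft2(x, norm="ortho")   # stand-in for A
adjoint = lambda k: np.fft.ifft2(k, norm="ortho")  # stand-in for A^H
zero_cnn = lambda x: np.zeros_like(x)              # placeholder for CNN block 2650
y0 = forward(x_true)
x_next = nvn_block(x_true, y0, forward, adjoint, zero_cnn, lam=1.0)
```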
The data consistency block 2620 may be implemented in any of a number of ways. In some embodiments, the data consistency block 2620 may perform data consistency processing by transforming the input image, represented by x_i, into the spatial frequency domain using a non-uniform Fourier transform, comparing the result with the initial MR spatial frequency data 2605, and transforming the difference between the two back into the image domain using the adjoint of the non-uniform Fourier transform.
An exemplary implementation of the data consistency block 2620 is shown in fig. 26C. In the illustrative implementation of fig. 26C, the image-domain input 2622 (which may be the intermediate reconstruction x_i 2621) is transformed to the spatial frequency domain through a series of three transforms 2624, 2626 and 2628, which together constitute a non-uniform fast Fourier transform from the image domain to the spatial frequency domain. In particular, transform 2624 is a de-apodization and zero-padding transform D, transform 2626 is an oversampled FFT transform F_s, and transform 2628 is a gridding interpolation transform G. As described herein, the non-uniform fast Fourier transform A is given by the composition A = G F_s D of these transforms (with D applied first). Example implementations of these constituent transforms are described herein.
After the image-domain input 2622 has been transformed to the spatial frequency domain, it is compared with the initial MR spatial frequency data 2605, and the difference between the two is transformed back to the image domain using transforms 2630, 2632 and 2634, applied in that order. Transform 2630 is the adjoint G^H of the gridding interpolation transform 2628, transform 2632 is the adjoint F_s^H of the oversampled FFT transform 2626, and transform 2634 is the adjoint D^H of the de-apodization transform 2624. Thus, the composition of 2630, 2632 and 2634 (which may be written as D^H F_s^H G^H = A^H) represents the adjoint A^H of the non-uniform Fourier transform A.
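The composition of the three constituent transforms and their adjoints can be illustrated with simple stand-ins: elementwise weights plus zero-padding for D, an orthonormal FFT on the oversampled grid for F_s, and point sampling for the gridding interpolation G (a genuine implementation would use an interpolation kernel such as Kaiser-Bessel; these choices are illustrative only).

```python
import numpy as np

n, os_factor = 16, 2          # image size and oversampling factor
m = n * os_factor
rng = np.random.default_rng(2)
apod = 0.5 + rng.random((n, n))            # stand-in de-apodization weights
idx = rng.integers(0, m * m, size=100)     # stand-in "non-Cartesian" sample locations

def D(x):   # de-apodization + zero-padding onto the oversampled grid
    z = np.zeros((m, m), dtype=complex)
    z[:n, :n] = x * apod
    return z

def Fs(z):  # oversampled FFT
    return np.fft.fft2(z, norm="ortho")

def G(k):   # gridding interpolation (here: pick samples off the grid)
    return k.ravel()[idx]

def A(x):   # forward transform: A = G Fs D (D applied first)
    return G(Fs(D(x)))

def AH(y):  # adjoint: A^H = D^H Fs^H G^H (G^H applied first)
    k = np.zeros(m * m, dtype=complex)
    np.add.at(k, idx, y)                              # G^H: spread samples onto grid
    z = np.fft.ifft2(k.reshape(m, m), norm="ortho")   # Fs^H
    return z[:n, :n] * apod                           # D^H (apod is real)

x = rng.standard_normal((n, n)) + 0j
samples = A(x)       # "acquired" non-Cartesian samples
image = AH(samples)  # adjoint applied to those samples
```

With these stand-ins the adjoint identity ⟨y, A x⟩ = ⟨A^H y, x⟩ holds exactly, which is what the data consistency block relies on when mapping k-space residuals back to the image domain.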
The convolutional neural network block 2650 may be implemented in any of a number of ways. In some embodiments, block 2650 may have multiple layers, including one or more convolutional layers and one or more transposed convolutional layers. In some embodiments, block 2650 may have a U-net structure, whereby multiple convolutional layers downsample the data and subsequent transposed convolutional layers upsample the data, for example as shown in the exemplary U-net architecture of fig. 26D for block 2650.
As shown in fig. 26D, the input to the convolutional network block 2650 is processed by a downsampling path followed by an upsampling path. In the downsampling path, the input is processed by repeatedly applying two convolutions with 3×3 kernels, each followed by a non-linearity (e.g., a rectified linear unit or ReLU), with a 2×2 average pooling operation with stride 2 applied after each pair for downsampling. The number of feature channels doubles at each downsampling step, from 64 to 128 to 256. In the upsampling path, the data is processed by repeatedly upsampling the feature map using an operation that halves the number of feature channels, concatenating with the corresponding feature map from the downsampling path, and applying two 3×3 convolutions, each followed by a non-linearity (e.g., ReLU).
Fig. 26E is a diagram of another type of architecture of a block of the neural network model of fig. 26A, according to some embodiments of the technology described herein. A neural network model including blocks having an architecture such as the one shown in fig. 26E may be referred to as a "generalized non-uniform variational network" or "GNVN". It is "generalized" in the following sense: although data consistency blocks are not used directly, image features similar to those generated by such blocks may be usefully incorporated into the neural network model.
As shown in fig. 26E, the ith GNVN block 2660-i takes as input: (1) image-domain data x_i, labeled 2662; and (2) the initial MR spatial frequency data 2664. The input x_i may represent the MR image reconstruction generated by the neural network model 2610 at the completion of the (i-1)th GNVN block 2660-(i-1). These inputs to block 2660-i are then used to generate the inputs to a convolutional neural network block 2672 that is part of block 2660-i. In turn, CNN block 2672 generates from these inputs the next MR image reconstruction, represented by x_{i+1}.
In the embodiment of fig. 26E, inputs 2662 and 2664 are used to generate three inputs to CNN block 2672: (1) the reconstruction x_i itself, provided directly as input to the CNN block; (2) the result of applying to the reconstruction x_i a non-uniform Fourier transform 2666, followed by a spatial frequency domain convolutional neural network 2668, followed by an adjoint non-uniform Fourier transform 2670; and (3) the result of applying the spatial frequency domain convolutional neural network 2668 to the initial MR spatial frequency data 2664, followed by the adjoint non-uniform Fourier transform 2670.
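A sketch of how those three inputs might be assembled; placeholder callables stand in for the non-uniform Fourier transform 2666, its adjoint 2670, and the spatial frequency domain CNN 2668, and all names are illustrative rather than taken from the patent:

```python
import numpy as np

def gnvn_inputs(x_i, y0, forward, adjoint, kspace_cnn):
    """Assemble the three inputs fed to the CNN block 2672 of a GNVN block."""
    feat_x = adjoint(kspace_cnn(forward(x_i)))  # path (2): A^H K(A x_i)
    feat_y = adjoint(kspace_cnn(y0))            # path (3): A^H K(y0)
    return np.stack([x_i, feat_x, feat_y])      # path (1) is x_i itself

rng = np.random.default_rng(3)
x_i = rng.standard_normal((16, 16)) + 0j
y0 = np.fft.fft2(rng.standard_normal((16, 16)), norm="ortho")
forward = lambda x: np.fft.fft2(x, norm="ortho")   # stand-in for transform 2666
adjoint = lambda k: np.fft.ifft2(k, norm="ortho")  # stand-in for transform 2670
kspace_cnn = lambda k: 0.9 * k                     # placeholder for CNN 2668
stacked = gnvn_inputs(x_i, y0, forward, adjoint, kspace_cnn)
```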
In some embodiments, the non-uniform Fourier transform 2666 may be a transform A expressed as the composition of three transforms: a de-apodization transform D, an oversampled Fourier transform F_s, and a gridding interpolation transform G, such that A = G F_s D (with D applied first). Example implementations of these constituent transforms are described herein.
The spatial frequency domain CNN 2668 may be any suitable type of convolutional neural network. For example, CNN 2668 may be a five-layer convolutional neural network with residual connections. However, in other embodiments, the spatial frequency domain network 2668 may be any other type of neural network (e.g., a fully convolutional neural network, a recurrent neural network, and/or any other suitable type of neural network), as aspects of the techniques described herein are not limited in this respect.
Additional aspects related to machine learning models for image reconstruction are described below: U.S. patent application publication 2020/0034998 entitled "Deep Learning Techniques for Magnetic Resonance Image Reconstruction" filed on 7/29/2019, U.S. patent application publication 2020/0058106 entitled "Deep Learning Techniques for Suppressing Artefacts in Magnetic Resonance Images" filed on 8/15/2019, and U.S. patent application publication 2020/0289019 entitled "Deep Learning Techniques for Generating Magnetic Resonance Images from Spatial Frequency Data" filed on 12/3/2020, each of which is incorporated herein by reference in its entirety.
Returning to fig. 18A, in some embodiments, the image reconstruction module 1810 may be configured to generate a noisy medical image 1820 using a reconstruction process that includes applying at least one non-uniform transformation to the medical image data 1802. An example of a reconstruction procedure for an MR image and comprising at least one non-uniform transformation is described in connection with fig. 18B. Fig. 18B is a diagram of an exemplary pipeline 1850 including an exemplary image reconstruction module 1860 that includes at least one non-uniform transform, in accordance with some embodiments of the technology described herein. In some embodiments, the image reconstruction module 1860 may be used to generate a noisy medical image 1820 as described herein in connection with fig. 18A.
In some embodiments, and as shown in fig. 18B, the image reconstruction module 1860 is configured to take as input the medical image data 1802 acquired in the sensor domain. The medical image data 1802, denoted y, is related to the corresponding MR image x in the image domain by a sampling operator A. For a real-world medical imaging procedure, the medical image data 1802 includes noise (denoted herein by a tilde: ỹ), so that the medical image data 1802 is described by:

ỹ = A x + n

where n is correlated additive noise. For example, the noise n may be correlated along the RF receive coil dimension (e.g., RF receive coils 1426 of fig. 14). The image reconstruction module 1860 may be configured as a reconstruction pipeline comprising one or more transforms, denoted f_recon, such that x̃ = f_recon(ỹ).
In some embodiments, the image reconstruction module 1860 includes a coil decorrelation module 1862, a gridding module 1864, and a coil combination module 1866. The coil decorrelation module 1862, the gridding module 1864, and the coil combination module 1866 each apply a different mapping to the medical image data as it passes through the image reconstruction module 1860. For example, for a multi-coil, non-Cartesian MR image acquisition, the reconstruction transform f_recon may be expressed as:

x̃ = f_recon(ỹ) = abs(S^H A^H W P^H ỹ)

where S is the multichannel coil sensitivity matrix, A is the Fourier sampling matrix, P is the pre-whitening matrix for decorrelating the MR signals along the RF receive coil dimension, W is the sampling density compensation matrix for the non-uniform coverage of k-space acquired by non-Cartesian MR imaging, and abs(·) takes magnitude values element-wise.
In some embodiments, the coil decorrelation module 1862 may be configured to decorrelate the MR signals of the input medical image data 1802 along the RF receive coil dimension. To perform this decorrelation, the coil decorrelation module 1862 may apply the transform P^H (the Hermitian adjoint, or conjugate transpose, of the pre-whitening matrix P) to the input medical image data 1802, represented by ỹ. The coil decorrelation module 1862 may then output the decorrelated medical image data P^H ỹ to the next module of the image reconstruction module 1860.
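One standard way to build such a pre-whitening matrix, shown here for illustration (the patent does not spell out the construction), is from a Cholesky factorization of the measured coil noise covariance: with Σ = L L^H, applying L⁻¹ along the coil dimension yields unit-variance, decorrelated channels.

```python
import numpy as np

rng = np.random.default_rng(4)
n_coils, n_samples = 4, 20000

# Simulate correlated coil noise (mixing independent channels).
mix = rng.standard_normal((n_coils, n_coils))
noise = mix @ rng.standard_normal((n_coils, n_samples))
sigma = np.cov(noise)                 # estimated coil noise covariance

# Pre-whitening: with sigma = L L^H, applying L^{-1} decorrelates the coil
# dimension. Identifying this operator with the P^H of the text is our
# reading; the Cholesky construction is a standard choice, not quoted
# from the patent.
L = np.linalg.cholesky(sigma)
P_H = np.linalg.inv(L)
whitened = P_H @ noise
cov_w = np.cov(whitened)              # should be (approximately) identity
```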
In some embodiments, the gridding module 1864 may be configured to receive the decorrelated medical image data P^H ỹ and apply gridding to it to transform the decorrelated medical image data from the spatial frequency domain to the image domain. The gridding module 1864 may be configured to apply the transform A^H W to the decorrelated medical image data P^H ỹ. The transform A^H W may be configured to compensate for the sampling density arising from the non-Cartesian acquisition of the medical image data 1802 and to resample the decorrelated medical image data, so as to perform image reconstruction from the spatial frequency domain to the image domain. The gridding module 1864 may be configured to output a series of medical images A^H W P^H ỹ, wherein each image in the series corresponds to the set of MR signals acquired by one receive RF coil.
In some embodiments, the coil combination module 1866 may be configured to receive the series of medical images A^H W P^H ỹ and combine the series of medical images into a single noisy medical image. The coil combination module 1866 may be configured to apply the transform S^H to the series of medical images to combine the MR signal responses of the multiple receive RF coils into a single noisy medical image. The coil combination module 1866 may also be configured to apply the magnitude computation abs(·), such that x̃ = abs(S^H A^H W P^H ỹ).
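The whole pipeline x̃ = abs(S^H A^H W P^H ỹ) can be sketched end to end with a fully sampled Cartesian stand-in, in which P^H and W reduce to the identity (an illustrative simplification; the pipeline described above is non-Cartesian and pre-whitened, and all names here are chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_coils = 16, 4

# Synthetic ground-truth image and random stand-in coil sensitivities S.
x = np.abs(rng.standard_normal((n, n)))
S = rng.standard_normal((n_coils, n, n)) + 1j * rng.standard_normal((n_coils, n, n))

# Forward model (Cartesian stand-in): per-coil k-space plus correlated noise.
y = np.stack([np.fft.fft2(s * x, norm="ortho") for s in S])
mix = np.eye(n_coils) + 0.1 * rng.standard_normal((n_coils, n_coils))
noise = np.tensordot(mix, rng.standard_normal((n_coils, n, n)), axes=1) * 0.01
y_noisy = y + noise

# Reconstruction x_tilde = abs(S^H A^H W P^H y_noisy); P^H and W are identity
# stand-ins here (fully sampled Cartesian grid, uncorrelated assumption).
coil_imgs = np.stack([np.fft.ifft2(c, norm="ortho") for c in y_noisy])  # A^H W
x_tilde = np.abs(np.sum(np.conj(S) * coil_imgs, axis=0))                # S^H, abs
```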
In some embodiments, the image reconstruction module 1860 may be configured to output the noisy medical image x̃ for further processing. The output noisy medical image x̃ may contain spatially correlated, non-uniform noise. As described herein in connection with fig. 18A, the denoising module 1830 may be configured to denoise the output noisy medical image x̃ and generate the denoised medical image 1840, i.e., x̂ = f_θ(x̃).
FIG. 19 is a diagram of an exemplary architecture of an example denoising neural network model 1900 for generating a denoising MR image from an input noisy MR image, according to some embodiments of the techniques described herein. In some embodiments, the denoising neural network model 1900 may be used to implement the denoising neural network model 1835 of fig. 18A and 18B, although it should be understood that the denoising module 1830 may be implemented using any suitable neural network model, as aspects of the techniques described herein are not so limited.
In some embodiments, the denoising neural network model 1900 may be implemented as a deep convolutional neural network having multiple layers. The deep convolutional neural network may include convolutional layers, with a rectified linear unit (ReLU) applied for noise reduction after one or more of the convolutional layers. It should be appreciated that the convolutional layers may be two-dimensional convolutional layers (e.g., for processing two-dimensional images) or three-dimensional convolutional layers (e.g., for processing three-dimensional volumes), as aspects of the techniques described herein are not so limited.
In some embodiments, and as shown in fig. 19, the denoising neural network model 1900 includes an input 1902, a first convolution layer 1904, a second convolution layer 1906, a third convolution layer 1908, a summation layer 1910, and an output 1912. The summing layer 1910 may add the input 1902 to the output of the third convolution layer 1908 by skipping connection 1911. The denoising neural network model 1900 has the following architecture:
1. Input: (n_x, n_y, 1)
2. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
3. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
4. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
5. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
6. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
7. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
8. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
9. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
10. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
11. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
12. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
13. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
14. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
15. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
16. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
17. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
18. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
19. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
20. Convolution (kernel size=3×3, 64 filters, stride 1, with offset term), followed by ReLU
21. Convolution (kernel size=3×3,1 filter, stride 1, with offset term)
22. Summing layer (sum of layer (1) and layer (21))
23. Output: (n_x, n_y, 1)
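The residual structure of this architecture (a stack of convolution + ReLU layers whose output is added back to the input through the skip connection) can be sketched with a toy pure-NumPy implementation. For brevity this sketch uses 3 hidden layers of 8 filters rather than the 20 layers of 64 filters listed above, and random untrained weights; it illustrates the structure only.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same' 3x3 convolution: x is (H, W, C_in), w is (3, 3, C_in, C_out)."""
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[3]))
    for i in range(3):
        for j in range(3):
            out += np.einsum('hwc,cf->hwf', xp[i:i + H, j:j + W], w[i, j])
    return out + b

def denoiser(x, params):
    """Residual denoiser: conv+ReLU stack, final 1-filter conv, skip connection."""
    h = x
    for w, b in params[:-1]:
        h = np.maximum(conv2d(h, w, b), 0.0)   # convolution followed by ReLU
    w, b = params[-1]
    return x + conv2d(h, w, b)                 # summing layer: input + last conv

rng = np.random.default_rng(6)
n_filters, depth = 8, 3   # the model above uses 64 filters and 20 conv+ReLU layers
params = [(0.1 * rng.standard_normal((3, 3, 1, n_filters)), np.zeros(n_filters))]
params += [(0.1 * rng.standard_normal((3, 3, n_filters, n_filters)), np.zeros(n_filters))
           for _ in range(depth - 1)]
params += [(0.1 * rng.standard_normal((3, 3, n_filters, 1)), np.zeros(1))]
noisy = rng.standard_normal((24, 24, 1))
out = denoiser(noisy, params)   # same (n_x, n_y, 1) shape as the input
```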
Figs. 20A-20D are illustrative diagrams of a process for generating training data to train a denoising neural network model, according to some embodiments of the techniques described herein. The examples of figs. 20A-20D are described herein in the context of MR imaging, but it should be understood that the examples of figs. 20A-20D may be applied to other medical imaging techniques as described herein.
The goal of denoising a medical image is to restore the underlying clean image x from the noisy medical image x̃. If clean medical image data in the target domain were available, the denoising neural network model f_θ could be learned through an empirical risk minimization framework.
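Under the empirical risk minimization framework mentioned above, and assuming paired training data (x̃_j, x_j) and a squared-error loss (a common choice; the text here does not specify the loss), the training objective for f_θ may be written as:

```latex
\theta^{\ast} \;=\; \arg\min_{\theta} \; \frac{1}{N} \sum_{j=1}^{N} \bigl\lVert f_{\theta}(\tilde{x}_{j}) - x_{j} \bigr\rVert_{2}^{2}
```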
However, in some instances, it may be desirable to train the denoising neural network model f_θ on the target domain while only clean data in the source domain and noisy data in the target domain are available. Noisy data and images are denoted herein with a tilde (e.g., ỹ, x̃). To address the lack of clean training data in the target domain, the inventors developed a two-stage process for generating approximate training data in the target domain and training the denoising neural network model using the generated training data. Figs. 20A to 20D illustrate this two-stage process. The first stage may be performed in a variety of ways, including using process 2010 shown in fig. 20A, using process 2020 shown in fig. 20B, or using both processes 2010 and 2020. The second stage may be performed in a variety of ways, including using processes 2030 and 2040 shown in figs. 20C and 20D, respectively.
FIG. 20A is a diagram of an exemplary process 2010 for generating first training data to train a first neural network for denoising medical images, and for using the generated first training data to train the first neural network, in accordance with some embodiments of the techniques described herein. In process 2010, first training data is generated to train a first neural network model 2017 to denoise data in the source domain.
In some embodiments, the first training data includes a plurality of first noisy MR images 2014 and a plurality of clean MR images 2015. The clean MR data 2011 associated with the source domain and the first MR noise data 2012 associated with the target domain can be used to generate the first training data. The clean MR data 2011 associated with the source domain may be obtained, for example, from publicly available research databases (e.g., of high-field MR images) or from other clinical acquisitions of MR data in the source domain (e.g., using a source-type MRI system (e.g., a high-field MRI system), imaging a source portion of the anatomy (e.g., the brain), acquiring MR data using a source pulse sequence, etc.).
In some embodiments, process 2010 may begin by using the clean MR data 2011 associated with the source domain and the first MR noise data 2012 associated with the target domain to generate the first noisy MR data 2013. For example, the clean MR data 2011 associated with the source domain may be combined with (e.g., added to) the first MR noise data 2012 associated with the target domain to generate the first noisy MR data 2013.
In some embodiments, the first MR noise data 2012 associated with the target domain may be generated in a manner similar to the noise image 104 of figs. 1A and 1B. For example, the first MR noise data 2012 may be generated based on empirical measurements (e.g., by using the MRI system to measure noise within the MRI system in the absence of a patient). For example, the first MR noise data 2012 may be generated by using the MRI system to measure noise within the MRI system in the absence of a patient, using the same pulse sequence (e.g., a diffusion weighted imaging (DWI) pulse sequence) as the pulse sequence used to acquire the noisy MR data to be denoised. Alternatively or additionally, the first MR noise data 2012 may be generated in any other suitable manner. In some embodiments, the first MR noise data 2012 associated with the target domain may be generated prior to imaging the subject (e.g., prior to acquiring the noisy MR data 1802).
In some embodiments, after the first noisy MR data 2013 is generated, a plurality of clean MR images 2015 and a plurality of first noisy MR images 2014 may be generated. For example, and as shown in fig. 20A, the plurality of clean MR images 2015 may be generated by applying a reconstruction process 2016 to the clean MR data 2011 associated with the source domain. Similarly, the plurality of first noisy MR images 2014 may be generated by applying the reconstruction process 2016 to the first noisy MR data 2013. The reconstruction process 2016 may be any suitable type of reconstruction process configured to transform MR data from the spatial frequency domain to the image domain (e.g., as described herein in connection with figs. 18A and 18B).
In some embodiments, after the plurality of first noisy MR images 2014 and the plurality of clean MR images 2015 are generated, the first neural network model 2017 may be trained. For example, the first neural network model 2017 may be trained by providing related pairs of images from the plurality of first noisy MR images 2014 and the plurality of clean MR images 2015 as input, such that the first neural network model 2017 learns to map noisy images to clean images.
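Process 2010's pair construction can be sketched as follows, with a plain inverse-FFT magnitude standing in for the reconstruction process 2016 (an illustrative simplification; the function and variable names are ours):

```python
import numpy as np

def make_training_pairs(clean_kspace, noise_kspace):
    """Pair clean images with images reconstructed from noise-corrupted k-space.

    Mirrors process 2010: first noisy MR data = clean source-domain k-space
    plus target-domain noise; both are reconstructed with a stand-in for the
    reconstruction process 2016 (inverse FFT magnitude).
    """
    recon = lambda k: np.abs(np.fft.ifft2(k, norm="ortho"))
    clean_imgs = np.stack([recon(k) for k in clean_kspace])
    noisy_imgs = np.stack([recon(k + n) for k, n in zip(clean_kspace, noise_kspace)])
    return noisy_imgs, clean_imgs

rng = np.random.default_rng(7)
clean_k = rng.standard_normal((5, 16, 16)) + 1j * rng.standard_normal((5, 16, 16))
noise_k = 0.1 * (rng.standard_normal((5, 16, 16)) + 1j * rng.standard_normal((5, 16, 16)))
noisy_imgs, clean_imgs = make_training_pairs(clean_k, noise_k)
```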
FIG. 20B is a diagram of an exemplary process 2020 to generate second training data to train a second neural network for denoising MR images, according to some embodiments of the techniques described herein. In process 2020, second training data is generated to train a second neural network model 2027 to denoise dual noisy data in the target domain. It should be appreciated that the exemplary process 2020 is optional and may not be implemented in some embodiments.
In some embodiments, the second training data includes a plurality of dual noisy MR images 2024 and a plurality of second noisy MR images 2025. The second noisy MR data 2021 associated with the target domain and the second MR noise data 2022 associated with the target domain can be used to generate the second training data. The second noisy MR data 2021 associated with the target domain may, for example, be acquired clinically in the target domain (e.g., using a target-type MRI system (e.g., a low-field MRI system), imaging a target portion of the anatomy (e.g., a limb, joint, appendage, etc.), acquiring MR data using a target pulse sequence, etc.).
In some embodiments, process 2020 may begin by using the second noisy MR data 2021 associated with the target domain and the second MR noise data 2022 associated with the target domain to generate the dual noisy MR data 2023. For example, the second noisy MR data 2021 may be combined with (e.g., added to) the second MR noise data 2022 to generate the dual noisy MR data 2023.
In some embodiments, the second MR noise data 2022 associated with the target domain may be generated in the same manner as the first MR noise data 2012 associated with the target domain of fig. 20A, or may be the same noise data as the first MR noise data 2012. In some embodiments, the second MR noise data 2022 may be generated differently from the first MR noise data 2012, or may include data different from the first MR noise data 2012. In some embodiments, the second MR noise data 2022 may be generated prior to imaging the subject (e.g., prior to acquiring the noisy MR data 1802).
In some embodiments, after the dual noisy MR data 2023 is generated, a plurality of second noisy MR images 2025 and a plurality of dual noisy MR images 2024 may be generated. For example, and as shown in fig. 20B, the plurality of second noisy MR images 2025 may be generated by applying a reconstruction process 2026 to the second noisy MR data 2021 associated with the target domain. Similarly, the plurality of dual noisy MR images 2024 may be generated by applying the reconstruction process 2026 to the dual noisy MR data 2023. The reconstruction process 2026 may be any suitable type of reconstruction process configured to transform MR data from the spatial frequency domain to the image domain (e.g., as described herein in connection with figs. 18A and 18B).
In some embodiments, after the plurality of dual noisy MR images 2024 and the plurality of second noisy MR images 2025 are generated, the second neural network model 2027 may be trained. For example, the second neural network model 2027 may be trained by providing related pairs of images from the plurality of dual noisy MR images 2024 and the plurality of second noisy MR images 2025 as input.
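The pair construction of process 2020 differs from that of process 2010 only in that the "clean" side is itself already noisy: a second noise realization is added to k-space data that already contains noise. A sketch (this resembles a Noisier2Noise-style construction, which is our characterization rather than the patent's; the inverse-FFT reconstruction is again a stand-in for reconstruction process 2026):

```python
import numpy as np

def make_double_noisy_pairs(noisy_kspace, extra_noise):
    """Pair already-noisy target-domain images with dual noisy versions."""
    recon = lambda k: np.abs(np.fft.ifft2(k, norm="ortho"))
    noisy_imgs = np.stack([recon(k) for k in noisy_kspace])
    double_imgs = np.stack([recon(k + n) for k, n in zip(noisy_kspace, extra_noise)])
    return double_imgs, noisy_imgs

rng = np.random.default_rng(9)
noisy_k = rng.standard_normal((4, 16, 16)) + 1j * rng.standard_normal((4, 16, 16))
extra = 0.1 * (rng.standard_normal((4, 16, 16)) + 1j * rng.standard_normal((4, 16, 16)))
double_imgs, noisy_imgs = make_double_noisy_pairs(noisy_k, extra)
```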
FIG. 20C is a diagram of an exemplary process 2030 to generate clean MR data associated with a target domain, according to some embodiments of the technology described herein. After the first neural network model 2017, and optionally the second neural network model 2027, have been trained as described in connection with figs. 20A and 20B, process 2030 applies the first neural network model 2017, and optionally the second neural network model 2027, to noisy MR images associated with the target domain (e.g., to the plurality of second noisy MR images 2025) to generate corresponding denoised MR images and data.
In some embodiments, generating training data for training the denoising neural network model includes applying the first neural network model 2017 to the plurality of second noisy MR images 2025 to generate a plurality of denoised MR images 2031. Optionally, in some embodiments, generating the training data further comprises applying the second neural network model 2027 to the plurality of second noisy MR images 2025. In such embodiments, generating the plurality of denoised MR images 2031 may include combining the outputs of the first neural network model 2017 and the second neural network model 2027. For example, the plurality of denoised MR images 2031 may be expressed as the union of the denoised MR images output by the first neural network model 2017 and the denoised MR images output by the second neural network model 2027.
In some embodiments, after the plurality of denoised MR images 2031 is generated, process 2030 may transform the plurality of denoised MR images 2031 to generate a plurality of enhanced denoised MR images 2032, thereby ensuring that the generated training data contains a sufficient number of images for training the denoising neural network model. For example, a sharpening transform configured to sharpen the denoised MR images 2031 may be applied to the plurality of denoised MR images 2031 to generate sharpened MR images. Thereafter, the sharpened MR images may be added to the plurality of denoised MR images 2031 to generate the plurality of enhanced denoised MR images 2032. Alternatively or additionally, transforms such as rotation, cropping, horizontal and/or vertical flipping, or any other suitable transforms may be applied to the denoised MR images 2031 to generate the plurality of enhanced denoised MR images 2032. As another example, the brightness and/or contrast of an image may be changed to generate new images for the plurality of enhanced denoised MR images 2032. Additionally, and as another example, a complex conjugate transform may be applied to the spatial frequency data to make the matrices symmetric, or one or more matrices may be replaced with their complex conjugate transposes in the spatial frequency domain, to generate new images for the plurality of enhanced denoised MR images 2032. These transforms may be used alone or in combination with other transforms, including the transforms described above and/or any other suitable transforms.
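A few of the augmentation transforms described above can be sketched as follows. The flips are straightforward; the conjugate variant shown is one illustrative reading of the spatial-frequency-domain conjugate transform, not necessarily the patent's exact construction.

```python
import numpy as np

def augment(images):
    """Grow a set of images via flips and a k-space complex-conjugate variant."""
    out = list(images)
    for img in images:
        out.append(img[:, ::-1])          # horizontal flip
        out.append(img[::-1, :])          # vertical flip
        k = np.fft.fft2(img, norm="ortho")
        conj_img = np.abs(np.fft.ifft2(np.conj(k), norm="ortho"))
        out.append(conj_img)              # conjugate in the spatial frequency domain
    return np.stack(out)

rng = np.random.default_rng(8)
imgs = np.abs(rng.standard_normal((3, 16, 16)))
aug = augment(imgs)   # 3 originals + 3 variants each = 12 images
```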
In some embodiments, process 2030 may then include applying a transform 2034 to the plurality of enhanced denoised MR images 2032 to transform the enhanced denoised MR images 2032 from the image domain to the spatial frequency domain, generating clean MR data 2033 associated with the target domain. The transform 2034 may be, for example, a non-uniform transform configured to transform the enhanced denoised MR images 2032 from the image domain to the spatial frequency domain. It should be appreciated that the transform 2034 may be any other suitable transform from the image domain to the spatial frequency domain, as aspects of the techniques described herein are not limited in this respect.
FIG. 20D is a diagram of an exemplary process 2040 to generate training data for training a denoising neural network model, according to some embodiments of the techniques described herein. Process 2040 may begin by generating clean MR training data 2041 from the clean MR data 2033 associated with the target domain. For example, the clean MR data 2033 associated with the target domain may be combined with the clean MR data 2011 associated with the source domain to generate the clean MR training data 2041. In some embodiments, the two data sets may be combined by taking their union.
In some embodiments, the training data for training the denoising neural network model 2047 includes a plurality of noisy MR training images 2044 and a plurality of clean MR training images 2045. The clean MR training data 2041 and the third MR noise data 2042 associated with the target domain can be used to generate the training data for training the denoising neural network model 2047.
In some embodiments, process 2040 may continue by using the clean MR training data 2041 and the third MR noise data 2042 associated with the target domain to generate the noisy MR training data 2043. For example, the clean MR training data 2041 may be combined with (e.g., added to) the third MR noise data 2042 to generate the noisy MR training data 2043.
In some embodiments, the third MR noise data 2042 associated with the target domain may be generated in the same manner as the first MR noise data 2012 associated with the target domain of fig. 20A or the second MR noise data 2022 associated with the target domain of fig. 20B, or may be the same noise data as the first MR noise data 2012 or the second MR noise data 2022. In some embodiments, the third MR noise data 2042 may be generated differently from, or may include data different from, the first MR noise data 2012 or the second MR noise data 2022. In some embodiments, the third MR noise data 2042 may be generated prior to imaging the subject (e.g., prior to acquiring the noisy MR data 1802).
In some embodiments, after the noisy MR training data 2043 is generated, a plurality of clean MR training images 2045 and a plurality of noisy MR training images 2044 may be generated. For example, and as shown in FIG. 20D, the plurality of clean MR training images 2045 may be generated by applying a reconstruction process 2046 to the clean MR training data 2041. Similarly, the plurality of noisy MR training images 2044 may be generated by applying the reconstruction process 2046 to the noisy MR training data 2043. The reconstruction process 2046 may be any suitable type of reconstruction process configured to transform MR data from the spatial frequency domain to the image domain (e.g., as described herein in connection with FIGS. 18A and 18B).
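Although the description above does not fix a particular implementation, the role of a reconstruction process such as 2046 can be sketched, for fully sampled Cartesian data, as a centered inverse 2D Fourier transform. The function name and shapes below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def reconstruct(kspace):
    """Transform MR data from the spatial frequency (k-space) domain to
    the image domain via a centered inverse 2D FFT (toy reconstruction)."""
    shifted = np.fft.ifftshift(kspace, axes=(-2, -1))
    image = np.fft.ifft2(shifted, axes=(-2, -1))
    return np.fft.fftshift(image, axes=(-2, -1))

# Round trip: forward-transform an image to k-space, reconstruct, recover it.
image = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
recon = reconstruct(kspace)
assert np.allclose(recon, image)
```

In practice the reconstruction process of FIGS. 18A and 18B may be far more involved (non-Cartesian sampling, coil combination); this sketch only captures the frequency-domain-to-image-domain mapping.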
In some embodiments, after the plurality of noisy MR training images 2044 and the plurality of clean MR training images 2045 are generated, the denoising neural network model 2047 may be trained using the generated training data. For example, corresponding pairs of images from the plurality of noisy MR training images 2044 and the plurality of clean MR training images 2045 may be used as input to train the denoising neural network model 2047 in a supervised manner.
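The supervised pairing described above can be illustrated with a deliberately tiny stand-in for the network: a single learned scaling weight fitted to noisy/clean image pairs by gradient descent on a mean-squared-error loss. All names, shapes, and values here are illustrative, not the patent's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired training data: clean images and noisy counterparts.
clean = rng.normal(size=(64, 16))                    # clean MR training images
noisy = clean + 0.3 * rng.normal(size=clean.shape)   # paired noisy images

# "Model" is a single scalar weight applied to the noisy input.
w = 0.0
lr = 0.1
for _ in range(200):
    pred = w * noisy
    grad = np.mean(2 * (pred - clean) * noisy)       # d(MSE)/dw
    w -= lr * grad

# The learned weight shrinks the noisy input toward the clean target,
# approaching var(clean) / (var(clean) + var(noise)).
assert 0.5 < w < 1.0
```

A real denoising network replaces the scalar `w` with a deep convolutional model, but the supervised loop (predict from noisy, compare against clean, update) has the same shape.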
Fig. 21 is a flowchart of an exemplary process 2100 for generating a denoised MR image of a subject using a denoised neural network model, in accordance with some embodiments of the techniques described herein. Process 2100 may be performed using any suitable computing device. For example, in some embodiments, process 2100 may be performed by a computing device co-located with (e.g., in the same room as) a medical imaging device. As another example, in some embodiments, process 2100 may be performed by one or more processors located at a location remote from the medical imaging device (e.g., as part of a cloud computing environment). It should be appreciated that although process 2100 is described in connection with an MR image and an MRI system, process 2100 may be applied to any suitable type of medical image and medical imaging device, as aspects of the techniques described herein are not limited in this respect.
In some embodiments, process 2100 may begin with action 2102. In act 2102, a noisy MR image of a subject associated with a target domain may be obtained. Noisy MR images of a subject may be obtained from an MRI system (e.g., any MRI system as described herein). For example, a noisy MR image of the subject may be obtained by collecting noisy MR data using an MRI system (e.g., by imaging the subject) and then generating a noisy MR image of the subject based on the collected noisy MR data (e.g., as described herein in connection with fig. 18A and 18B). In some embodiments, noisy MR data may be collected by an MRI system using DWI pulse sequences, among other things. It should be appreciated that the noisy MR image may be a noisy MR image of any suitable subject anatomy (e.g., brain, neck, spine, knee, etc.), as aspects of the techniques described herein are not limited in this respect.
Alternatively, in some embodiments, obtaining the noisy MR image of the subject may include accessing data collected by the MRI system from a computer storage device and generating the noisy MR image of the subject using the accessed data. Alternatively, a noisy MR image of the subject may be generated and accessed from computer storage prior to the beginning of process 2100.
In some embodiments, noisy MR images of the subject may be associated with a suitable target domain. The target domain may describe a source of noisy MR image data provided as input to a denoising neural network model for denoising, and the source domain may describe a source of MR image data used to generate training data for training the denoising neural network model. For example, the clean MR image data associated with the source domain may include MR data collected by imaging a first portion of the anatomy (e.g., brain), while the noisy MR image data associated with the target domain may include MR data collected by imaging a second portion of the anatomy (e.g., limb, joint, torso, pelvis, appendage, etc.) that is different from the first portion of the anatomy. As another example, clean MR image data associated with the source domain may include MR data collected using a first type of MRI system (e.g., a high-field MRI system), while noisy MR data associated with the target domain may include MR data collected using a second type of MRI system (e.g., a low-field MRI system) that is different from the first type of MRI system. As another example, clean MR data associated with the source domain may include MR data collected using a first pulse sequence (e.g., a fast spin echo (FSE) pulse sequence, a fluid-attenuated inversion recovery (FLAIR) pulse sequence, a diffusion-weighted imaging (DWI) pulse sequence, a steady-state free precession (SSFP) pulse sequence, or any other suitable pulse sequence), while noisy MR data associated with the target domain may include MR data collected using a second pulse sequence different from the first pulse sequence.
After act 2102, in some embodiments, process 2100 may proceed to act 2104. In act 2104, the noisy MR image of the subject can be denoised using a denoised neural network model to obtain a denoised MR image. The denoising neural network model may include multiple layers (e.g., convolutional layers in some embodiments). For example, the denoising neural network model may be trained as described herein in connection with fig. 20A-20D and/or fig. 22.
In some embodiments, denoising the noisy MR image using the denoising neural network model may include directly generating the denoising MR image using the denoising neural network model. Alternatively, in some embodiments, the denoising neural network model may generate denoising information that may be used to generate a denoising MR image. For example, the denoising information may indicate which noise to remove from the noisy MR image such that generating the denoised MR image may be performed by subtracting the denoising information from the noisy MR image.
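The residual variant described above, in which the denoised image is formed by subtracting the model's noise estimate from the noisy input, can be sketched as follows. The oracle noise estimator below is a hypothetical stand-in for a trained network:

```python
import numpy as np

def denoise_residual(noisy_image, noise_estimator):
    """Form the denoised image by subtracting the estimated denoising
    information (noise map) from the noisy input."""
    return noisy_image - noise_estimator(noisy_image)

rng = np.random.default_rng(1)
clean = rng.normal(size=(32, 32))
noise = 0.2 * rng.normal(size=(32, 32))
noisy = clean + noise

# Hypothetical "oracle" estimator standing in for a trained network:
# it returns exactly the noise present in the input.
oracle = lambda x: x - clean
assert np.allclose(denoise_residual(noisy, oracle), clean)
```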
After performing act 2104, in some embodiments, process 2100 may proceed to act 2106. In act 2106, the denoised MR image can be output. The denoised MR image may be output using any suitable method. For example, the denoised MR image may be output by saving it for subsequent access, transmitting it over a network to a recipient, and/or displaying it to a user of the MRI system.
FIG. 22 is a flowchart of an exemplary process 2200 for training a denoising neural network model, according to some embodiments of the techniques described herein. Process 2200 can be performed using any suitable computing device. For example, in some embodiments, the process 2200 may be performed by a computing device co-located with (e.g., in the same room as) the medical imaging device. As another example, in some embodiments, the process 2200 may be performed by one or more processors located at a location remote from the medical imaging device (e.g., as part of a cloud computing environment). It should be appreciated that although process 2200 is described in connection with an MR image and MRI system, process 2200 may be applied to any suitable type of medical image and medical imaging device, as aspects of the techniques described herein are not limited in this respect.
In some embodiments, process 2200 begins with act 2202. In act 2202, first training data for training a first neural network model to denoise an MR image may be generated. The first training data may include a plurality of first noisy MR images and a corresponding plurality of clean MR images. The first training data may be generated at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with the source domain and (2) first MR noise data associated with the target domain.
For example, in some embodiments, generating the first training data may include generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain. Thereafter, a plurality of first noisy MR images and a plurality of clean MR images may be generated by applying a reconstruction process to the first noisy MR data and the clean MR data associated with the source domain, respectively. The reconstruction process may be any suitable reconstruction process configured to transform MR data from the spatial frequency domain to the image domain. For example, the reconstruction process may be any reconstruction process as described herein in connection with fig. 18A and 18B.
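Under the simplifying assumptions that the noise is additive in the spatial frequency domain and that the reconstruction is a plain inverse FFT (both illustrative stand-ins for the processes described above), generating the paired first training data might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_first_training_pairs(clean_kspace, noise_std):
    """Pair each clean acquisition with a synthetically corrupted copy:
    noisy k-space = clean k-space + target-domain noise, then both are
    reconstructed (here, a plain inverse FFT) to the image domain."""
    noise = noise_std * (rng.normal(size=clean_kspace.shape)
                         + 1j * rng.normal(size=clean_kspace.shape))
    noisy_kspace = clean_kspace + noise
    clean_images = np.fft.ifft2(clean_kspace, axes=(-2, -1))
    noisy_images = np.fft.ifft2(noisy_kspace, axes=(-2, -1))
    return noisy_images, clean_images

kspace = np.fft.fft2(rng.normal(size=(4, 16, 16)))
noisy_imgs, clean_imgs = make_first_training_pairs(kspace, noise_std=0.05)
assert noisy_imgs.shape == clean_imgs.shape == (4, 16, 16)
assert not np.allclose(noisy_imgs, clean_imgs)
```

The function name and noise model are assumptions for illustration; the patent allows any suitable reconstruction process and any suitable target-domain noise data.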
After act 2202, in some embodiments, process 2200 may proceed to act 2204. In act 2204, a first neural network model may be trained using the first training data. For example, the first neural network model may be trained in a supervised manner by providing the first neural network model with respective pairs of MR images (e.g., pairs of images of the plurality of first noisy MR images and the plurality of clean MR images) of the first training data.
In some embodiments, actions 2202 and 2204 may optionally include: second training data is generated to train the second neural network model, and the second neural network model is trained. The second training data may include a plurality of second noisy MR images and a plurality of dual noisy MR images. The second training data may be generated by generating a plurality of dual noisy MR images using (1) the second noisy MR data associated with the target domain and (2) the second MR noise data associated with the target domain.
For example, in some embodiments, generating the second training data may be performed by first generating dual noisy MR data using the second noisy MR data associated with the target domain and the second MR noise data associated with the target domain. Thereafter, a plurality of dual noisy MR images and a plurality of second noisy MR images may be generated by applying a reconstruction process to the dual noisy MR data and the second noisy MR data associated with the target domain, respectively.
After generating the second training data, process 2200 may optionally proceed to train the second neural network model using the second training data. For example, the second neural network model may be trained by providing the second neural network model with respective pairs of MR images of the second training data (e.g., image pairs of a plurality of second noisy MR images and a plurality of dual noisy MR images).
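The optional second training data, in which a further independent draw of target-domain noise is layered onto already-noisy data, can be sketched the same way (again assuming additive k-space noise and an inverse-FFT reconstruction; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_dual_noisy_pairs(noisy_kspace, noise_std):
    """Add a second, independent draw of target-domain noise to already-noisy
    data, yielding (noisy, doubly-noisy) image pairs for training."""
    extra = noise_std * (rng.normal(size=noisy_kspace.shape)
                         + 1j * rng.normal(size=noisy_kspace.shape))
    dual_kspace = noisy_kspace + extra
    return (np.fft.ifft2(noisy_kspace, axes=(-2, -1)),
            np.fft.ifft2(dual_kspace, axes=(-2, -1)))

noisy_k = np.fft.fft2(rng.normal(size=(2, 8, 8)))
noisy_imgs, dual_imgs = make_dual_noisy_pairs(noisy_k, noise_std=0.05)
assert noisy_imgs.shape == dual_imgs.shape == (2, 8, 8)
```

During training, the doubly-noisy image serves as the input and the singly-noisy image as the target, so no clean target-domain data is needed.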
After act 2204, in some embodiments, process 2200 may proceed to act 2206. In act 2206, training data may be generated for training a denoising neural network model. Training data for training the denoising neural network model may be generated by applying the first neural network model to a plurality of second noisy MR images to generate a corresponding plurality of denoised MR images.
In some embodiments, in which the second training data is optionally generated and the second neural network model is optionally trained, the training data for training the denoising neural network model may be further generated by applying the second neural network model to the plurality of second noisy MR images. Applying the second neural network model to the plurality of second noisy MR images may generate another corresponding plurality of denoised MR images. In such embodiments, training data for training the denoised neural network model may be generated by combining the plurality of denoised MR images generated by the first neural network model with the plurality of denoised MR images generated by the second neural network model. For example, training data for training the denoised neural network model may be generated as a union of a plurality of denoised MR images generated by the first neural network model and a plurality of denoised MR images generated by the second neural network model.
After act 2206, in some embodiments, process 2200 may proceed to act 2208. In act 2208, the denoising neural network model may be trained using training data for training the denoising neural network model. For example, the denoising neural network model may be trained by providing the denoising neural network model with respective pairs of MR images for training data of the denoising neural network model.
To test the effectiveness of the training methods described herein with reference to FIGS. 18A-22, a simulation-based study was performed. Three-dimensional brain MR images were randomly selected from the Human Connectome Project, of which 505 were T1-weighted and 125 were T2-weighted. The selected images were resampled to 1.5×1.5×1.5 mm³ to simulate the resolution found in a clinical setting. Two-dimensional non-Cartesian multicoil data acquisition was considered. The coil sensitivity profiles S were generated analytically, the coil correlation matrix P was generated randomly, and a variable-density sampling pattern was used for the sampling matrix A. Additive Gaussian noise was added to each image such that the standard deviation of the noise in k-space after the pre-whitening step was set to σ.
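A simplified version of this simulation pipeline, with the coil correlation matrix P omitted for brevity and all shapes and values chosen purely for illustration, could be written as:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_multicoil_kspace(image, sens, mask, sigma):
    """Toy forward model y_c = M * (F(S_c * x) + n): per-coil sensitivity
    maps S_c, Fourier transform F, sampling mask M, and additive complex
    Gaussian noise n of standard deviation sigma (coil correlations omitted)."""
    coil_images = sens * image[None]                  # S_c * x
    kspace = np.fft.fft2(coil_images, axes=(-2, -1))  # F
    noise = sigma * (rng.normal(size=kspace.shape)
                     + 1j * rng.normal(size=kspace.shape))
    return mask[None] * (kspace + noise)              # apply sampling mask

img = rng.normal(size=(16, 16))
sens = rng.normal(size=(4, 16, 16)) + 1j * rng.normal(size=(4, 16, 16))
mask = (rng.random((16, 16)) < 0.5).astype(float)     # sampling-pattern stand-in
y = simulate_multicoil_kspace(img, sens, mask, sigma=0.05)
assert y.shape == (4, 16, 16)
assert np.allclose(y[:, mask == 0], 0)                # unsampled entries are zero
```

The study described above uses non-Cartesian variable-density sampling and analytically generated sensitivities; this Cartesian sketch only shows the structure of the forward model.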
In the first experiment, a scenario was designed in which a large number of source images but only a small number of noisy target images were available. For the source domain, 500 T1-weighted MR volumes were used, and for the target domain, 20 noisy T2-weighted MR volumes were used. Five cases were used for the validation set, and 100 T2-weighted volumes were reserved for testing. Training was performed with σ=0.05, and the denoising neural network model was evaluated at σ=0.05 and σ=0.1.
In a second experiment, the training method described herein with reference to FIGS. 18A-22 was applied to denoise diffusion-weighted MR images acquired at 64 mT using the MRI system described herein in connection with FIGS. 14-17. Diffusion-weighted images were acquired using an eight-channel RF receive coil array, three-dimensional Cartesian sampling, and variable-density sampling along both phase-encoding directions (b=860). The resolution of the acquired images was 2.2×2.2×6 mm³. The source domain dataset was obtained from the Human Connectome Project and included 400 T1-weighted and T2-weighted images. The target domain dataset included 400 T1-weighted, T2-weighted, and FLAIR images acquired at 64 mT using an MRI system as described in connection with FIGS. 14-17.
For comparison, denoising neural network models were prepared using the following training methods: Noise2Self (N2S), Noisier2Noise (Nr2N), supervised learning using T1-weighted images (Sup-T1), and supervised learning using T2-weighted images (Sup-T2). For Nr2N, a denoising neural network model was trained to predict an image with σ=0.05 from an input with a noise level of σ=0.1. For all denoising neural network models, an unbiased U-Net was used, and for the supervised models, an ℓ1 loss was used. All models were trained for 150,000 iterations using a batch size of 32 and Adam with α=3×10⁻⁴.
For the training methods described herein with reference to FIGS. 18A-22, two training data generation methods were implemented. The first model (Sup-FT-S) was trained using a training dataset generated with the trained Sup-T1 model. The Sup-T1 model was used to denoise the 20 available noisy T2-weighted images, which were then added to the training dataset. For data augmentation, image sharpening using a random Gaussian kernel was used. The second model (Sup-FT-N) was trained in the same way as Sup-FT-S but using the output of Nr2N. Each training run took about 17 hours, and all methods were implemented in TensorFlow.
In Table 1, the mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index metric (SSIM) in the foreground of the test images are reported for Noise2Self (N2S), Noisier2Noise (Nr2N), supervised learning using T1-weighted images (Sup-T1), supervised learning using T2-weighted images (Sup-T2), two-stage learning with training images generated by Sup-T1 (Sup-FT-S), and two-stage learning with training images generated by Nr2N (Sup-FT-N).
Table 1: quantitative results for 100T 2 weighted test images with real MRI noise. MSE is scaled by 10 6 . All losses were calculated in the front Jing Zhongji of the image.
An example of a denoised MR image and corresponding noise map for each of these models is shown in FIG. 23. The MR images are T2-weighted images generated with σ=0.05. The upper two rows show MR images, including the noise-corrupted images ("Noisy") provided to the respective methods for denoising and the ground-truth images ("GT") used to generate the noise-corrupted images. MR images output by the denoising neural network models trained by N2S, Nr2N, Sup-T1, Sup-FT-S, Sup-FT-N, and Sup-T2 are also shown. Corresponding noise maps for the individual MR images are provided in the bottom two rows of FIG. 23. Qualitatively, a checkerboard pattern can be observed in the MR images generated by the N2S-trained model, and the Nr2N-trained model amplifies the background noise. In both Sup-FT-S and Sup-FT-N, these artifacts are significantly reduced.
FIGS. 24A-24D illustrate examples of denoised MR images acquired using DWI pulse sequences and generated using different denoising techniques, and their corresponding noise maps. FIGS. 24A-24D each illustrate a noisy MR image generated from MR data acquired using a DWI pulse sequence ("Noisy") and a set of four denoised MR images generated from the noisy MR image using four different denoising methods. The inset in each MR image shows an enlarged version of a portion of the brain. The four methods include block matching and 3D filtering (BM3D), which is described in K. Dabov et al., "Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering," IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080-2095, Aug. 2007, which is incorporated herein by reference in its entirety. The four methods also include Nr2N and supervised learning (Sup) based on training image pairs as described above. The four methods further include sequential semi-supervised learning (SeqSSL), which is the two-stage training process described herein in connection with FIGS. 20A-20D. Qualitatively, the Nr2N-trained model produces a more blurred image, while the Sup-trained model yields an overly smooth image. The proposed training method provides the best compromise and reduces the over-smoothing that occurs in the Sup-trained model.
As an alternative to the training process described herein in connection with FIGS. 18A-22, the inventors have recognized and appreciated that approximate training data may additionally be generated to train a reconstructed neural network model for image reconstruction and denoising of noisy images. FIGS. 25A-25D are schematic illustrations of a two-stage process to generate training data to train a single reconstructed neural network model for image reconstruction and denoising of noisy images, according to some embodiments of the techniques described herein. The examples of FIGS. 25A-25D are described herein in the context of MR imaging, but it should be understood that the examples of FIGS. 25A-25D are applicable to other medical imaging techniques as described herein.
FIG. 25A is a diagram of an exemplary process 2510, according to some embodiments of the techniques described herein, to generate first training data to train a first neural network for reconstructing and denoising MR images. In process 2510, first training data is generated to train a first neural network model 2514 to reconstruct an image from data in the source domain.
In some embodiments, the first training data includes a plurality of clean MR images 2511 associated with the source domain and noisy MR data 2513. The plurality of clean MR images 2511 associated with the source domain may be obtained, for example, from a publicly available research database (e.g., of high-field MR images) or from other clinical acquisitions of MR data in the source domain (e.g., using a source-type MRI system (e.g., a high-field MRI system), imaging a source portion of an anatomical structure (e.g., brain, knee, neck, etc.), acquiring MR data using a source pulse sequence, etc.).
In some embodiments, the plurality of clean MR images 2511 associated with the source domain and first MR noise data 2512 associated with the target domain may be used to generate the noisy MR data 2513. For example, a transform 2515 may be used to transform the plurality of clean MR images 2511 associated with the source domain from the image domain to the spatial frequency domain. The transformed image data may then be combined with the first MR noise data 2512 associated with the target domain to generate the noisy MR data 2513.
In some embodiments, the first MR noise data 2512 associated with the target domain may be generated in a manner similar to the noise image 104 of FIGS. 1A and 1B. For example, the first MR noise data 2512 associated with the target domain may be generated based on empirical measurements (e.g., by using the MRI system to measure noise within the MRI system in the absence of a patient). As another example, the first MR noise data 2512 associated with the target domain may be generated by using the MRI system to measure noise within the MRI system in the absence of a patient and by using the same pulse sequence (e.g., a diffusion-weighted imaging (DWI) pulse sequence) as is used to acquire the noisy MR data to be denoised. Alternatively or additionally, the first MR noise data 2512 associated with the target domain may be generated by simulating noise as described herein. In some embodiments, the first MR noise data 2512 associated with the target domain may be generated prior to imaging the subject (e.g., prior to acquiring the noisy MR data 1802).
In some embodiments, after the noisy MR data 2513 is generated, the first neural network model 2514 may be trained. For example, corresponding pairs of the plurality of clean MR images 2511 associated with the source domain and the noisy MR data 2513 may be provided as input, so that the first neural network model 2514 may be trained in a supervised manner.
FIG. 25B is a diagram of an exemplary process 2520 to generate second training data to train a second neural network for reconstructing and denoising MR images, according to some embodiments of the techniques described herein. In process 2520, second training data is generated to train a second neural network model 2524 to reconstruct an image from data in the target domain. It should be appreciated that the exemplary process 2520 is optional and may not be implemented in some embodiments.
In some embodiments, the second training data includes a plurality of second noisy MR images 2521 and dual noisy MR data 2523. The plurality of second noisy MR images 2521 may be obtained, for example, by clinical acquisition of MR data in the target domain (e.g., using an MRI system of the target type (e.g., a low-field MRI system), imaging a target portion of the anatomy (e.g., a limb, joint, appendage, etc.), acquiring MR data using a target pulse sequence, etc.).
In some embodiments, the plurality of second noisy MR images 2521 and second MR noise data 2522 associated with the target domain may be used to generate the dual noisy MR data 2523. For example, a transform 2525 may be used to transform the plurality of second noisy MR images 2521 from the image domain to the spatial frequency domain. The transformed image data may then be combined with the second MR noise data 2522 associated with the target domain to generate the dual noisy MR data 2523.
In some embodiments, the second MR noise data 2522 associated with the target domain may be generated in the same way as the first MR noise data 2512 associated with the target domain of FIG. 25A, or may be the same noise data as the first MR noise data 2512. In some embodiments, the second MR noise data 2522 associated with the target domain may be generated differently from the first MR noise data 2512, or may comprise data different from the first MR noise data 2512. In some embodiments, the second MR noise data 2522 associated with the target domain may be generated prior to imaging the subject (e.g., prior to acquiring the noisy MR data 1802).
In some embodiments, after the dual noisy MR data 2523 is generated, the second neural network model 2524 may be trained. For example, corresponding pairs of the plurality of second noisy MR images 2521 and the dual noisy MR data 2523 may be provided as input to train the second neural network model 2524.
FIG. 25C is a diagram of an exemplary process 2530 to generate clean MR training images associated with a target domain, according to some embodiments of the techniques described herein. After the first neural network model 2514 and optionally the second neural network model 2524 are trained as described in connection with FIGS. 25A and 25B, process 2530 applies the first neural network model 2514 and/or the second neural network model 2524 to noisy MR data associated with the target domain (e.g., to the second noisy MR data 2531) to generate corresponding denoised MR images.
In some embodiments, generating training data for training the reconstructed neural network model includes applying the first neural network model 2514 to the second noisy MR data 2531 to generate a plurality of denoised MR images 2532. Optionally, in some embodiments, generating the training data further comprises applying the second neural network model 2524 to the second noisy MR data 2531. In such embodiments, generating the plurality of denoised MR images 2532 may include combining the outputs of the first neural network model 2514 and the second neural network model 2524. For example, the plurality of denoised MR images 2532 may be represented as the union of the denoised MR images output by the first neural network model 2514 and the denoised MR images output by the second neural network model 2524.
In some embodiments, after the plurality of denoised MR images 2532 are generated, process 2530 may then include transforming the plurality of denoised MR images 2532 to generate a plurality of enhanced denoised MR images 2533, to ensure that there are a sufficient number of images in the generated training data for training the reconstructed neural network model. For example, a transformation configured to sharpen the denoised MR images 2532 may be applied to generate a plurality of sharpened MR images. Thereafter, the plurality of sharpened MR images may be added to the plurality of denoised MR images 2532 to generate the plurality of enhanced denoised MR images 2533. Alternatively or additionally, transformations such as rotation, cropping, horizontal and/or vertical flipping, and/or any other suitable augmentation may be applied to the denoised MR images 2532 to generate the plurality of enhanced denoised MR images 2533. As another example, the brightness and/or contrast of the images may be changed to generate new images for the plurality of enhanced denoised MR images 2533. Additionally, and as another example, a complex conjugate transformation may be applied to the spatial frequency data to make the matrices symmetric, or to replace one or more of the matrices with its complex conjugate transpose in the spatial frequency domain, to generate new images for the plurality of enhanced denoised MR images 2533. Some of these transforms may be used alone or in combination with other transforms, including the transforms described above and/or any other suitable transforms.
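A few of the geometric augmentations mentioned above can be sketched as follows (flips and a 90-degree rotation; sharpening, brightness/contrast changes, and conjugate-symmetry transforms would follow the same pattern — all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def augment(images):
    """Enlarge a set of denoised images with simple geometric transforms:
    horizontal flip, vertical flip, and a 90-degree rotation, concatenated
    with the originals along the batch axis."""
    out = [images,
           images[..., ::-1],                    # horizontal flip
           images[..., ::-1, :],                 # vertical flip
           np.rot90(images, axes=(-2, -1))]      # 90-degree rotation
    return np.concatenate(out, axis=0)

imgs = rng.normal(size=(3, 16, 16))
aug = augment(imgs)
assert aug.shape == (12, 16, 16)                 # 4x the original set
```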
FIG. 25D is a diagram of an exemplary process 2540 to generate training data for training a reconstructed neural network model, according to some embodiments of the techniques described herein. The reconstructed neural network model 2534 may be any suitable neural network model configured to perform a reconstruction and/or denoising process. For example, the reconstructed neural network model 2534 may be any suitable neural network model as described herein in connection with FIG. 18A and/or FIGS. 26A-26E.
Process 2540 may begin by generating a plurality of clean MR training images 2541 from the plurality of enhanced denoised MR images 2533. For example, the plurality of enhanced denoised MR images 2533 may be combined with the plurality of clean MR images 2511 associated with the source domain to generate the plurality of clean MR training images 2541. In some embodiments, the plurality of enhanced denoised MR images 2533 and the plurality of clean MR images 2511 associated with the source domain may be combined by taking the union of the two data sets.
In some embodiments, the training data used to train the reconstructed neural network model 2534 comprises the plurality of clean MR training images 2541 and noisy MR training data 2543. The plurality of clean MR training images 2541 and third MR noise data 2542 associated with the target domain may be used to generate the noisy MR training data 2543. For example, a transform 2535 may be used to transform the plurality of clean MR training images 2541 from the image domain to the spatial frequency domain. The transformed image data may then be combined with the third MR noise data 2542 associated with the target domain to generate the noisy MR training data 2543.
In some embodiments, the third MR noise data 2542 associated with the target domain may be generated in the same way as the first MR noise data 2512 associated with the target domain of FIG. 25A, or may be the same noise data as the first MR noise data 2512, or may be generated in the same way as the second MR noise data 2522 associated with the target domain of FIG. 25B, or may be the same noise data as the second MR noise data 2522. In some embodiments, the third MR noise data 2542 associated with the target domain may be generated differently from the first MR noise data 2512 or may comprise data different from the first MR noise data 2512, or may be generated differently from the second MR noise data 2522 or may comprise data different from the second MR noise data 2522. In some embodiments, the third MR noise data 2542 associated with the target domain may be generated prior to imaging the subject (e.g., prior to acquiring the noisy MR data 1802).
In some embodiments, after the noisy MR training data 2543 is generated, the reconstructed neural network model 2534 may be trained using the training data. For example, corresponding pairs of the plurality of clean MR training images 2541 and the noisy MR training data 2543 may be used as input to train the reconstructed neural network model 2534.
It should also be appreciated that although FIGS. 25A-25D are described herein as being used to train a single reconstructed neural network model for both image reconstruction and denoising, aspects of the techniques described herein are not limited in this respect. For example, the two-stage training process of FIGS. 25A-25D may be adapted in some embodiments to train the reconstructed neural network model to perform only image reconstruction, by setting all MR noise data associated with the target domain to zero. As another example, the two-stage training process of FIGS. 25A-25D may be used to train a reconstructed neural network model, and that model may then serve as the reconstruction process while a denoising neural network model is trained using the two-stage training process of FIGS. 20A-20D, so that image reconstruction and denoising of noisy images are performed sequentially.
Having thus described several aspects and embodiments of the technology set forth in this disclosure, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the technology described herein. For example, one of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the functions and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the embodiments described herein. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, embodiments of the invention may be practiced otherwise than as specifically described. In addition, if features, systems, articles, materials, kits, and/or methods described herein are not mutually inconsistent, any combination of two or more such features, systems, articles, materials, kits, and/or methods is included within the scope of the present invention.
The above-described embodiments may be implemented in any of a variety of ways. One or more aspects and embodiments of the present invention that relate to the performance of a process or method may utilize program instructions executable by a device (e.g., a computer, processor, or other device) to perform the process or method or to control the performance of the process or method. In this regard, the various inventive concepts may be embodied as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, one or more floppy discs, compact discs, optical discs, magnetic tapes, flash memories, circuit structures in field programmable gate arrays or other semiconductor devices, or other tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods for implementing one or more of the various embodiments described above. The computer readable medium or media may be transportable, such that the one or more programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention. In some embodiments, the computer readable medium may be a non-transitory medium.
The term "program" or "software" as used herein refers in a generic sense to any type of computer code or set of computer-executable instructions that can be used to program a computer or other processor to implement the various aspects as discussed above. In addition, it should be appreciated that, according to one aspect, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention.
Computer-executable instructions may take many forms, such as program modules, being executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Generally, the functionality of the program modules may be combined or distributed as desired in various embodiments.
Additionally, the data structures may be stored in any suitable form in a computer readable medium. For simplicity of illustration, the data structure may be shown with fields related by location in the data structure. Again, such relationships may be implemented by assigning fields to storage having locations in a computer-readable medium for communicating relationships between fields. However, any suitable mechanism may be used to establish relationships between information in fields of a data structure, including through the use of pointers, tags or other mechanisms for establishing relationships between data elements.
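As a generic illustration, the pointer/reference mechanism mentioned above can be sketched as follows; the record types and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class NoiseRecord:
    std: float

@dataclass
class ScanRecord:
    # The relationship to the noise record is conveyed by a reference
    # (a pointer in lower-level languages), not by field location.
    label: str
    noise: NoiseRecord

rec = ScanRecord(label="target-domain scan", noise=NoiseRecord(std=0.05))
```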
When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether disposed in a single computer or distributed among multiple computers.
Further, it should be appreciated that the computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer, as non-limiting examples. In addition, a computer may be embedded in a device that is not generally regarded as a computer, but that has suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or stationary electronic device.
In addition, a computer may have one or more input and output devices. These devices may be used to present user interfaces, etc. Examples of output devices that may be used to provide a user interface include: a printer or display screen for visual presentation of the output and a speaker or other sound generating device for audible presentation of the output. Examples of input devices that may be used for the user interface include: keyboards and pointing devices, such as mice, touchpads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network and an intelligent network (IN) or the internet, etc. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks, or fiber optic networks.
Additionally, as described, some aspects may be embodied as one or more methods. Acts performed as part of a method may be ordered in any suitable way. Thus, embodiments may be constructed in which acts are performed in a different order than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in the illustrative embodiments.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
In the description and claims, the indefinite articles "a" and "an" as used herein are to be understood as meaning "at least one" unless explicitly indicated to the contrary.
In the specification and claims, the phrase "and/or" as used herein should be understood to refer to "either or both" of the elements so combined (i.e., elements that are combined in some cases and separately presented in other cases). The use of "and/or" of a plurality of elements listed should be interpreted in the same manner, i.e. "one or more than one" of such elements combined. Other elements may optionally be present in addition to the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, references to "a and/or B" when used in conjunction with an open language such as "comprising," etc., may refer, in one embodiment, to a alone (optionally including elements other than B); in another embodiment, only B (optionally including elements other than a) may be referred to; in yet another embodiment, both a and B (optionally including other elements) may be referred to; etc.
In the description and claims, the phrase "at least one" as used herein when referring to a list of one or more elements is understood to mean at least one element selected from any one or more of the elements of the list of elements, but does not necessarily include at least one element of each of the elements specifically listed within the list of elements, and does not exclude any combination of elements in the list of elements. The definition also allows that elements other than the specifically identified element within the list of elements referred to by the phrase "at least one" may optionally be present, whether related or unrelated to the specifically identified element. Thus, as a non-limiting example, "at least one of a and B" (or equivalently "at least one of a or B", or equivalently "at least one of a and/or B") may refer in one embodiment to optionally including more than one of at least one a without B (and optionally including elements other than B); in another embodiment may refer to optionally including more than one at least one B without a (and optionally including elements other than a); in yet another embodiment may refer to at least one a optionally including more than one and at least one B optionally including more than one (and optionally including other elements); etc.
In the claims, and in the description above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively.
The terms "approximately," "substantially," and "about" may be used in some embodiments to mean within ±20% of the target value, in some embodiments within ±10% of the target value, in some embodiments within ±5% of the target value, and in some embodiments within ±2% of the target value. The terms "about" and "approximately" may include the target value.

Claims (191)

1. A method for denoising a magnetic resonance image, i.e., an MR image, the method comprising:
using at least one computer hardware processor to:
obtaining a noisy MR image of the subject, the noisy MR image being associated with a target domain;
denoising a noisy MR image of a subject using a denoising neural network model to obtain a denoised MR image, the denoising neural network model being trained by:
Generating first training data for training a first neural network model to denoise the MR images at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
generating training data for training the denoised neural network model at least in part by applying the first neural network model to a plurality of second denoised MR images and generating a corresponding plurality of denoised MR images, and
training the denoising neural network model using training data for training the denoising neural network model; and

outputting the denoised MR image.
2. The method of claim 1, wherein the first training data comprises the plurality of first noisy MR images and a corresponding plurality of clean MR images, and wherein generating first training data comprises:
generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain;
generating the plurality of first noisy MR images by applying a reconstruction process to the first noisy MR data; and
The plurality of clean MR images are generated by applying the reconstruction process to clean MR data associated with the source domain.
3. The method of claim 2 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: a machine learning model is used to generate MR images from the first noisy MR data.
4. The method of claim 2 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: MR images are generated from the first noisy MR data using compressed sensing.
5. The method of claim 2 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: at least one linear transformation is used to generate an MR image from the first noisy MR data.
6. The method of claim 5 or any other preceding claim, wherein the at least one linear transformation comprises:
coil decorrelation transformation;
gridding transformation; and
a coil combination transformation.
7. The method of claim 1 or any other preceding claim, further comprising:
generating second training data for training a second neural network model to denoise the MR image at least in part by generating a plurality of dual noisy MR images using both:
(1) Second noisy MR data associated with the target domain, and
(2) Second MR noise data associated with the target domain; and
the second neural network model is trained using the second training data.
8. The method of claim 7 or any other preceding claim, wherein the second training data comprises the plurality of dual noisy MR images and the plurality of second noisy MR images, wherein generating the second training data comprises:
generating dual noisy MR data using second noisy MR data associated with the target domain and second MR noise data associated with the target domain;
generating the plurality of dual noisy MR images by applying a reconstruction process to the dual noisy MR data; and
the plurality of second noisy MR images are generated by applying the reconstruction process to second noisy MR data associated with the target domain.
9. The method of claim 7 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises: the second neural network model is applied to the plurality of second noisy MR images.
10. The method of claim 1 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises:
a plurality of enhanced denoised MR images are generated by:
applying one or more transforms to an image of the plurality of denoised MR images to generate a plurality of transformed MR images, and
combining the plurality of transformed MR images with the plurality of denoised MR images to generate the plurality of enhanced denoised MR images; and
clean MR data associated with the target domain is generated by applying a non-uniform transformation to an image of the plurality of enhanced denoised MR images.
11. The method of claim 10 or any other preceding claim, wherein the training data for training the denoising neural network model comprises a plurality of noisy MR training images and a plurality of clean MR training images, wherein generating the training data for training the denoising neural network model further comprises:
generating clean MR training data by combining clean MR data associated with the source domain and clean MR data associated with the target domain;
generating noisy MR training data using the clean MR training data and third MR noise data associated with the target domain;
Generating the plurality of noisy MR training images by applying a reconstruction process to the noisy MR training data; and
the plurality of clean MR training images are generated by applying the reconstruction process to clean MR training data associated with the target domain.
12. The method of claim 1 or any other preceding claim, wherein the denoising neural network model comprises a plurality of convolutional layers.
13. The method of claim 12 or any other preceding claim, wherein the plurality of convolution layers comprises a two-dimensional convolution layer.
14. The method of claim 12 or any other preceding claim, wherein the plurality of convolution layers comprises a three-dimensional convolution layer.
15. The method of claim 1 or any other preceding claim, wherein the first MR noise data is generated prior to obtaining the first noisy MR image.
16. The method of claim 15 or any other preceding claim, further comprising: generating the first MR noise data at least in part by empirical measurements of noise in the target domain.
17. The method of claim 15 or any other preceding claim, further comprising: generating the first MR noise data by simulating the first MR noise data using at least one noise model associated with the target domain.
18. The method of claim 17 or any other preceding claim, wherein simulating the first MR noise data is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t distribution.
19. The method of claim 1 or any other preceding claim, wherein obtaining a noisy MR image of the subject comprises accessing the noisy MR image.
20. The method of claim 1 or any other preceding claim, wherein obtaining a noisy MR image of the subject comprises:
collecting first noisy MR data by imaging a subject using a magnetic resonance imaging system, i.e., an MRI system; and
the noisy MR image of the subject is generated using the collected first noisy MR data.
21. The method of claim 20 or any other preceding claim, wherein the first noisy MR data was previously collected using the MRI system, and wherein obtaining a noisy MR image of the subject comprises:
accessing the first noisy MR data; and
the noisy MR image is generated using the accessed first noisy MR data.
22. The method of claim 20 or any other preceding claim, wherein the first noisy MR data is collected by the MRI system using a diffusion weighted imaging pulse sequence, DWI pulse sequence.
23. The method of claim 22 or any other preceding claim, wherein the first MR noise data is generated by empirical measurements of noise within the MRI system during operation of the MRI system using the DWI pulse sequence.
24. The method according to claim 1 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a magnetic resonance imaging system, i.e., an MRI system, having a main magnetic field strength of 0.5T or greater,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using an MRI system having a main magnetic field strength greater than or equal to 20mT and less than or equal to 0.2T.
25. The method according to claim 1 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected by imaging a first portion of the anatomy of the subject,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected by imaging a second portion of the anatomy different from the first portion of the anatomy of the subject.
26. The method according to claim 1 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a first pulse sequence,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using a second pulse sequence different from the first pulse sequence.
27. The method of claim 1 or any other preceding claim, further comprising: training the denoising neural network model by:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating the plurality of first noisy MR images using (1) clean MR data associated with the source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
Generating training data for training the denoising neural network model at least in part by applying the first neural network model to the plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and
the denoising neural network model is trained using training data for training the denoising neural network model.
28. A magnetic resonance imaging system, i.e., an MRI system, comprising:
a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and
at least one processor configured to:
obtaining a noisy MR image of the subject, the noisy MR image being associated with a target domain;
denoising a noisy MR image of a subject using a denoising neural network model to obtain a denoised MR image, the denoising neural network model being trained by:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
Generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images, and
training the denoising neural network model using training data for training the denoising neural network model; and
outputting the denoised MR image.
29. The MRI system of claim 28, wherein the first training data comprises the plurality of first noisy MR images and a corresponding plurality of clean MR images, and wherein generating first training data comprises:
generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain;
generating the plurality of first noisy MR images by applying a reconstruction process to the first noisy MR data; and
the plurality of clean MR images are generated by applying the reconstruction process to clean MR data associated with the source domain.
30. The MRI system of claim 29 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: a machine learning model is used to generate MR images from the first noisy MR data.
31. The MRI system of claim 29 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: MR images are generated from the first noisy MR data using compressed sensing.
32. The MRI system of claim 29 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: at least one linear transformation is used to generate an MR image from the first noisy MR data.
33. The MRI system of claim 32 or any other preceding claim, wherein the at least one linear transformation comprises:
coil decorrelation transformation;
gridding transformation; and
a coil combination transformation.
34. The MRI system of claim 28 or any other preceding claim, further comprising:
generating second training data for training a second neural network model to denoise the MR image at least in part by generating a plurality of dual noisy MR images using both:
(1) Second noisy MR data associated with the target domain, and
(2) Second MR noise data associated with the target domain; and
the second neural network model is trained using the second training data.
35. The MRI system of claim 34 or any other preceding claim, wherein the second training data comprises the plurality of dual noisy MR images and the plurality of second noisy MR images, wherein generating the second training data comprises:
generating dual noisy MR data using second noisy MR data associated with the target domain and second MR noise data associated with the target domain;
generating the plurality of dual noisy MR images by applying a reconstruction process to the dual noisy MR data; and
the plurality of second noisy MR images are generated by applying the reconstruction process to second noisy MR data associated with the target domain.
36. The MRI system of claim 34 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises: the second neural network model is applied to the plurality of second noisy MR images.
37. The MRI system of claim 28 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises:
a plurality of enhanced denoised MR images are generated by:
Applying one or more transforms to an image of the plurality of denoised MR images to generate a plurality of transformed MR images, and
combining the plurality of transformed MR images with the plurality of denoised MR images to generate the plurality of enhanced denoised MR images; and
clean MR data associated with the target domain is generated by applying a non-uniform transformation to an image of the plurality of enhanced denoised MR images.
38. The MRI system of claim 37 or any other preceding claim, wherein the training data for training the denoising neural network model comprises a plurality of noisy MR training images and a plurality of clean MR training images, wherein generating the training data for training the denoising neural network model further comprises:
generating clean MR training data by combining clean MR data associated with the source domain and clean MR data associated with the target domain;
generating noisy MR training data using the clean MR training data and third MR noise data associated with the target domain;
generating the plurality of noisy MR training images by applying a reconstruction process to the noisy MR training data; and
the plurality of clean MR training images are generated by applying the reconstruction process to clean MR training data associated with the target domain.
39. The MRI system of claim 28 or any other preceding claim, wherein the denoising neural network model comprises a plurality of convolution layers.
40. The MRI system of claim 39 or any other preceding claim, wherein the plurality of convolution layers comprises a two-dimensional convolution layer.
41. The MRI system of claim 39 or any other preceding claim, wherein the plurality of convolution layers comprises a three-dimensional convolution layer.
42. The MRI system of claim 28 or any other preceding claim, wherein the first MR noise data is generated prior to obtaining the first noisy MR image.
43. An MRI system according to claim 42 or any other preceding claim, further comprising: generating the first MR noise data at least in part by empirical measurements of noise in the target domain.
44. An MRI system according to claim 42 or any other preceding claim, further comprising: generating the first MR noise data by simulating the first MR noise data using at least one noise model associated with the target domain.
45. The MRI system of claim 44 or any other preceding claim, wherein simulating the first MR noise data is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t distribution.
46. The MRI system of claim 28 or any other preceding claim, wherein obtaining a noisy MR image of the subject comprises accessing the noisy MR image.
47. The MRI system of claim 28 or any other preceding claim, wherein obtaining a noisy MR image of the subject comprises:
collecting first noisy MR data by imaging a subject using a magnetic resonance imaging system, i.e., an MRI system; and
the noisy MR image of the subject is generated using the collected first noisy MR data.
48. The MRI system of claim 47 or any other preceding claim, wherein the first noisy MR data was previously collected using the MRI system, and wherein obtaining a noisy MR image of the subject comprises:
accessing the first noisy MR data; and
the noisy MR image is generated using the accessed first noisy MR data.
49. The MRI system of claim 47 or any other preceding claim, wherein the first noisy MR data is collected by the MRI system using a diffusion weighted imaging pulse sequence, DWI pulse sequence.
50. An MRI system according to claim 49 or any other preceding claim, wherein the first MR noise data is generated by empirical measurements of noise within another MRI system of the same type as the MRI system during operation of the other MRI system using the DWI pulse sequence.
51. An MRI system according to claim 28 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a magnetic resonance imaging system, i.e., an MRI system, having a main magnetic field strength of 0.5T or greater,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using the MRI system having a main magnetic field strength greater than or equal to 20mT and less than or equal to 0.2T.
52. An MRI system according to claim 28 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected by imaging a first portion of the anatomy of the subject,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected by imaging a second portion of the anatomy different from the first portion of the anatomy of the subject.
53. An MRI system according to claim 28 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a first pulse sequence,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using a second pulse sequence different from the first pulse sequence.
54. The MRI system of claim 28 or any other preceding claim, further comprising: training the denoising neural network model by:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating the plurality of first noisy MR images using (1) clean MR data associated with the source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
generating training data for training the denoising neural network model at least in part by applying the first neural network model to the plurality of second noisy MR images and generating a corresponding plurality of denoised MR images; and
The denoising neural network model is trained using training data for training the denoising neural network model.
55. At least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for denoising a magnetic resonance image, i.e., an MR image, the method comprising:
obtaining a noisy MR image of the subject, the noisy MR image being associated with a target domain;
denoising a noisy MR image of a subject using a denoising neural network model to obtain a denoised MR image, the denoising neural network model being trained by:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with a source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images, and
Training the denoising neural network model using training data for training the denoising neural network model; and
outputting the denoised MR image.
56. The at least one non-transitory computer-readable storage medium of claim 55, wherein the first training data comprises the plurality of first noisy MR images and a corresponding plurality of clean MR images, and wherein generating first training data comprises:
generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain;
generating the plurality of first noisy MR images by applying a reconstruction process to the first noisy MR data; and
generating the plurality of clean MR images by applying the reconstruction process to the clean MR data associated with the source domain.
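Claim 56's pair generation can be illustrated concretely: corrupt clean k-space with target-domain-style noise, then reconstruct both the noisy and the clean data. This is a toy sketch under stated assumptions: the phantom, the complex-Gaussian noise, and the magnitude-inverse-FFT "reconstruction process" are all simplified stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def reconstruct(kspace):
    """Toy reconstruction process: inverse 2-D FFT magnitude (a stand-in
    for a full MR reconstruction chain)."""
    return np.abs(np.fft.ifft2(kspace))

# Clean MR data (source domain), simulated here as the k-space of a phantom.
phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0
clean_k = np.fft.fft2(phantom)

# First MR noise data (target domain): complex Gaussian as one plausible model.
noise = 2.0 * (rng.standard_normal(clean_k.shape)
               + 1j * rng.standard_normal(clean_k.shape))

noisy_k = clean_k + noise            # first noisy MR data
noisy_image = reconstruct(noisy_k)   # one of the first noisy MR images
clean_image = reconstruct(clean_k)   # the corresponding clean MR image
# (noisy_image, clean_image) forms one supervised training pair.
```

Because the same reconstruction is applied to both the noisy and the clean k-space, the pair differs only by the injected target-domain noise.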
57. The at least one non-transitory computer-readable storage medium of claim 56 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: using a machine learning model to generate MR images from the first noisy MR data.
58. The at least one non-transitory computer-readable storage medium of claim 56 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating MR images from the first noisy MR data using compressed sensing.
59. The at least one non-transitory computer-readable storage medium of claim 56 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating an MR image from the first noisy MR data using at least one linear transformation.
60. The at least one non-transitory computer-readable storage medium of claim 59 or any other preceding claim, wherein the at least one linear transformation comprises:
coil decorrelation transformation;
gridding transformation; and
coil combination transformation.
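The three linear transformations listed in claim 60 can be sketched as a short pipeline. Everything here is illustrative: the multi-coil data and noise covariance are simulated, Cartesian sampling reduces the gridding step to a per-coil inverse FFT (non-Cartesian data would need an NUFFT/gridding kernel), and uniform combination weights stand in for estimated coil sensitivities.

```python
import numpy as np

rng = np.random.default_rng(2)
ncoils, N = 4, 16

# Hypothetical multi-coil k-space data with correlated coil noise.
kspace = (rng.standard_normal((ncoils, N, N))
          + 1j * rng.standard_normal((ncoils, N, N)))
noise_cov = 0.9 * np.eye(ncoils) + 0.1  # assumed measured noise covariance

# 1. Coil decorrelation: whiten with the inverse Cholesky factor.
L = np.linalg.cholesky(noise_cov)
white_k = np.einsum('ij,jxy->ixy', np.linalg.inv(L), kspace)

# 2. Gridding: for Cartesian data this is just an inverse FFT per coil.
coil_images = np.fft.ifft2(white_k, axes=(-2, -1))

# 3. Coil combination: a linear, weighted sum over coils.
weights = np.ones(ncoils) / ncoils
image = np.einsum('c,cxy->xy', weights, coil_images)
```

Each step is a linear map, so the whole chain is itself one linear transformation from raw k-space to the combined image.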
61. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein the method further comprises:
generating second training data for training a second neural network model to denoise the MR image at least in part by generating a plurality of dual noisy MR images using both:
(1) second noisy MR data associated with the target domain, and
(2) second MR noise data associated with the target domain; and
training the second neural network model using the second training data.
62. The at least one non-transitory computer-readable storage medium of claim 61 or any other preceding claim, wherein the second training data comprises the plurality of dual noisy MR images and the plurality of second noisy MR images, wherein generating the second training data comprises:
generating dual noisy MR data using second noisy MR data associated with the target domain and second MR noise data associated with the target domain;
generating the plurality of dual noisy MR images by applying a reconstruction process to the dual noisy MR data; and
generating the plurality of second noisy MR images by applying the reconstruction process to the second noisy MR data associated with the target domain.
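The dual-noisy construction in claim 62 resembles Noisier2Noise-style training: add a second, independent noise realization on top of already-noisy data, and use the doubly-noisy image as input with the singly-noisy image as target. The sketch below is a toy illustration with simulated data, not the claimed system.

```python
import numpy as np

rng = np.random.default_rng(3)

def reconstruct(k):
    """Toy reconstruction: inverse 2-D FFT magnitude."""
    return np.abs(np.fft.ifft2(k))

# Second noisy MR data: a real target-domain acquisition (simulated here
# as a phantom's k-space plus acquisition noise).
noisy_k = (np.fft.fft2(rng.random((16, 16)))
           + rng.standard_normal((16, 16))
           + 1j * rng.standard_normal((16, 16)))

# Second MR noise data: an independent noise realization from the same domain.
extra_noise = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))

dual_noisy_k = noisy_k + extra_noise          # dual noisy MR data
dual_noisy_image = reconstruct(dual_noisy_k)  # network input
noisy_image = reconstruct(noisy_k)            # network target
# Training dual-noisy -> noisy teaches the second model to remove one noise
# level without ever observing a clean image.
```

This lets the second model learn target-domain denoising from noisy acquisitions alone, which is exactly the regime where no clean ground truth exists.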
63. The at least one non-transitory computer-readable storage medium of claim 61 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises: applying the second neural network model to the plurality of second noisy MR images.
64. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises:
generating a plurality of enhanced denoised MR images by:
applying one or more transforms to an image of the plurality of denoised MR images to generate a plurality of transformed MR images, and
combining the plurality of transformed MR images with the plurality of denoised MR images to generate the plurality of enhanced denoised MR images; and
generating clean MR data associated with the target domain by applying a non-uniform transformation to an image of the plurality of enhanced denoised MR images.
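The augmentation step of claim 64 can be sketched as follows. The choice of flips and rotations is an assumption (the claim leaves the transform set open), and a plain FFT stands in for the non-uniform transformation (e.g. an NUFFT onto the target sampling trajectory).

```python
import numpy as np

# One denoised MR image (toy 4x4 example).
denoised = np.arange(16.0).reshape(4, 4)

# Apply simple geometric transforms; flips and 90-degree rotations are
# common augmentation choices, assumed here for illustration.
transformed = [np.flipud(denoised), np.fliplr(denoised), np.rot90(denoised)]

# Combine the originals with their transforms into the enhanced denoised set.
enhanced = [denoised] + transformed

# Map each enhanced image back to MR (k-space) data; np.fft.fft2 is a
# Cartesian stand-in for the claimed non-uniform transformation.
clean_target_data = [np.fft.fft2(img) for img in enhanced]
```

Mapping the augmented images back to k-space yields synthetic "clean MR data associated with the target domain" from which fresh noisy/clean training pairs can be generated.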
65. The at least one non-transitory computer-readable storage medium of claim 64 or any other preceding claim, wherein the training data for training the denoising neural network model comprises a plurality of noisy MR training images and a plurality of clean MR training images, and wherein generating training data for training the denoising neural network model further comprises:
generating clean MR training data by combining clean MR data associated with the source domain and clean MR data associated with the target domain;
generating noisy MR training data using the clean MR training data and third MR noise data associated with the target domain;
generating the plurality of noisy MR training images by applying a reconstruction process to the noisy MR training data; and
generating the plurality of clean MR training images by applying the reconstruction process to the clean MR training data.
66. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein the denoising neural network model comprises a plurality of convolutional layers.
67. The at least one non-transitory computer-readable storage medium of claim 66 or any other preceding claim, wherein the plurality of convolutional layers comprises a two-dimensional convolutional layer.
68. The at least one non-transitory computer-readable storage medium of claim 66 or any other preceding claim, wherein the plurality of convolutional layers comprises a three-dimensional convolutional layer.
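Claims 66-68 recite a denoising network built from 2-D or 3-D convolutional layers. As a minimal illustration of the building block (not the patented architecture), here is a two-layer 2-D sketch with hypothetical fixed kernels standing in for learned weights; a 3-D variant would slide the kernel over a volume instead of a slice.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode single-channel 2-D convolution: the basic operation of a
    convolutional layer, written out explicitly for clarity."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.default_rng(4).random((8, 8))  # toy noisy MR slice
k1 = np.full((3, 3), 1 / 9.0)  # hypothetical 3x3 kernels (smoothing weights
k2 = np.full((3, 3), 1 / 9.0)  # stand in for trained parameters)

hidden = np.maximum(conv2d(img, k1), 0.0)  # conv -> ReLU
out = conv2d(hidden, k2)                   # second conv layer
```

Stacking such layers (with learned kernels, nonlinearities, and typically padding to preserve image size) yields the kind of convolutional denoiser the claims describe.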
69. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein the first MR noise data is generated prior to obtaining the first noisy MR image.
70. The at least one non-transitory computer-readable storage medium of claim 69 or any other preceding claim, wherein the method further comprises: generating the first MR noise data at least in part by empirical measurements of noise in the target domain.
71. The at least one non-transitory computer-readable storage medium of claim 69 or any other preceding claim, wherein the method further comprises: generating the first MR noise data by simulating the first MR noise data using at least one noise model associated with the target domain.
72. The at least one non-transitory computer-readable storage medium of claim 71 or any other preceding claim, wherein simulating the first MR noise data is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
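The three noise models named in claim 72 are all directly available in NumPy's random generator; the parameter values below (unit variance, rate 4, 3 degrees of freedom) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (16, 16)

# Three interchangeable models for simulating target-domain MR noise.
gaussian = rng.normal(0.0, 1.0, shape)
poisson = rng.poisson(4.0, shape) - 4.0       # zero-centered Poisson counts
student_t = rng.standard_t(df=3, size=shape)  # heavier tails than Gaussian

# Complex-valued k-space noise can pair two independent real draws.
complex_gauss = rng.normal(0, 1, shape) + 1j * rng.normal(0, 1, shape)
```

A heavier-tailed choice such as the Student's t-distribution can better match occasional large interference spikes seen in low-field systems, while Gaussian noise models thermal noise.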
73. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein obtaining a noisy MR image of the subject comprises accessing the noisy MR image.
74. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein obtaining a noisy MR image of the subject comprises:
collecting first noisy MR data by imaging the subject using a magnetic resonance imaging (MRI) system; and
generating the noisy MR image of the subject using the collected first noisy MR data.
75. The at least one non-transitory computer-readable storage medium of claim 74 or any other preceding claim, wherein the first noisy MR data was previously collected using the MRI system, and wherein obtaining a noisy MR image of the subject comprises:
Accessing the first noisy MR data; and
generating the noisy MR image using the accessed first noisy MR data.
76. The at least one non-transitory computer-readable storage medium of claim 74 or any other preceding claim, wherein the first noisy MR data is collected by the MRI system using a diffusion weighted imaging (DWI) pulse sequence.
77. The at least one non-transitory computer readable storage medium of claim 76 or any other preceding claim, wherein the first MR noise data is generated by empirical measurements of noise within the MRI system during operation of the MRI system using the DWI pulse sequence.
78. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a magnetic resonance imaging (MRI) system having a main magnetic field strength of 0.5 T or greater,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using an MRI system having a main magnetic field strength greater than or equal to 20 mT and less than or equal to 0.2 T.
79. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected by imaging a first portion of the anatomy of the subject,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected by imaging a second portion of the anatomy of the subject different from the first portion of the anatomy of the subject.
80. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a first pulse sequence,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using a second pulse sequence different from the first pulse sequence.
81. The at least one non-transitory computer-readable storage medium of claim 55 or any other preceding claim, wherein the method further comprises training the denoising neural network model by:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating the plurality of first noisy MR images using (1) clean MR data associated with the source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
generating training data for training the denoising neural network model at least in part by applying the first neural network model to the plurality of second noisy MR images and generating a corresponding plurality of denoised MR images, and
training the denoising neural network model using the training data for training the denoising neural network model.
82. A method for training a denoising neural network model to denoise a magnetic resonance (MR) image of a subject, the method comprising:
using at least one computer hardware processor to:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with the source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images, and
training the denoising neural network model using the training data for training the denoising neural network model.
83. The method of claim 82, wherein the first training data includes the plurality of first noisy MR images and a corresponding plurality of clean MR images, and wherein generating first training data includes:
generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain;
generating the plurality of first noisy MR images by applying a reconstruction process to the first noisy MR data; and
generating the plurality of clean MR images by applying the reconstruction process to the clean MR data associated with the source domain.
84. The method of claim 83 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: using a machine learning model to generate MR images from the first noisy MR data.
85. The method of claim 83 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating MR images from the first noisy MR data using compressed sensing.
86. The method of claim 83 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating an MR image from the first noisy MR data using at least one linear transformation.
87. The method of claim 86 or any other preceding claim, wherein the at least one linear transformation comprises:
coil decorrelation transformation;
gridding transformation; and
coil combination transformation.
88. The method of claim 82 or any other preceding claim, further comprising:
generating second training data for training a second neural network model to denoise the MR image at least in part by generating a plurality of dual noisy MR images using both:
(1) second noisy MR data associated with the target domain, and
(2) second MR noise data associated with the target domain; and
training the second neural network model using the second training data.
89. The method of claim 88 or any other preceding claim, wherein the second training data comprises the plurality of dual noisy MR images and the plurality of second noisy MR images, wherein generating the second training data comprises:
generating dual noisy MR data using second noisy MR data associated with the target domain and second MR noise data associated with the target domain;
generating the plurality of dual noisy MR images by applying a reconstruction process to the dual noisy MR data; and
generating the plurality of second noisy MR images by applying the reconstruction process to the second noisy MR data associated with the target domain.
90. The method of claim 88 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises: applying the second neural network model to the plurality of second noisy MR images.
91. The method of claim 82 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises:
generating a plurality of enhanced denoised MR images by:
applying one or more transforms to an image of the plurality of denoised MR images to generate a plurality of transformed MR images, and
combining the plurality of transformed MR images with the plurality of denoised MR images to generate the plurality of enhanced denoised MR images; and
generating clean MR data associated with the target domain by applying a non-uniform transformation to an image of the plurality of enhanced denoised MR images.
92. The method of claim 91 or any other preceding claim, wherein the training data for training the denoising neural network model includes a plurality of noisy MR training images and a plurality of clean MR training images, and wherein generating training data for training the denoising neural network model further comprises:
generating clean MR training data by combining clean MR data associated with the source domain and clean MR data associated with the target domain;
generating noisy MR training data using the clean MR training data and third MR noise data associated with the target domain;
generating the plurality of noisy MR training images by applying a reconstruction process to the noisy MR training data; and
generating the plurality of clean MR training images by applying the reconstruction process to the clean MR training data.
93. The method of claim 91 or any other preceding claim, wherein the denoising neural network model comprises a plurality of convolutional layers.
94. The method of claim 93 or any other preceding claim, wherein the plurality of convolutional layers comprises a two-dimensional convolutional layer.
95. The method of claim 93 or any other preceding claim, wherein the plurality of convolution layers comprises a three-dimensional convolution layer.
96. The method of claim 82 or any other preceding claim, wherein the first MR noise data is generated prior to obtaining the first noisy MR image.
97. The method of claim 96 or any other preceding claim, further comprising: generating the first MR noise data at least in part by empirical measurements of noise in the target domain.
98. The method of claim 96 or any other preceding claim, further comprising: generating the first MR noise data by simulating the first MR noise data using at least one noise model associated with the target domain.
99. The method of claim 98 or any other preceding claim, wherein simulating the first MR noise data is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
100. The method of claim 82 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a magnetic resonance imaging (MRI) system having a main magnetic field strength of 0.5 T or greater,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using an MRI system having a main magnetic field strength greater than or equal to 20 mT and less than or equal to 0.2 T.
101. The method of claim 82 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected by imaging a first portion of the anatomy of the subject,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected by imaging a second portion of the anatomy of the subject different from the first portion of the anatomy of the subject.
102. The method of claim 82 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a first pulse sequence,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using a second pulse sequence different from the first pulse sequence.
103. A magnetic resonance imaging (MRI) system, comprising:
a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and
at least one processor configured to:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with the source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images, and
training the denoising neural network model using the training data for training the denoising neural network model.
104. The MRI system of claim 103, wherein the first training data comprises the plurality of first noisy MR images and a corresponding plurality of clean MR images, and wherein generating first training data comprises:
generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain;
generating the plurality of first noisy MR images by applying a reconstruction process to the first noisy MR data; and
generating the plurality of clean MR images by applying the reconstruction process to the clean MR data associated with the source domain.
105. The MRI system of claim 104 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: using a machine learning model to generate MR images from the first noisy MR data.
106. The MRI system of claim 104 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating MR images from the first noisy MR data using compressed sensing.
107. The MRI system of claim 104 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating an MR image from the first noisy MR data using at least one linear transformation.
108. The MRI system of claim 107 or any other preceding claim, wherein the at least one linear transformation comprises:
coil decorrelation transformation;
gridding transformation; and
coil combination transformation.
109. The MRI system of claim 103 or any other preceding claim, wherein the at least one processor is further configured to perform:
generating second training data for training a second neural network model to denoise the MR image at least in part by generating a plurality of dual noisy MR images using both:
(1) second noisy MR data associated with the target domain, and
(2) second MR noise data associated with the target domain; and
training the second neural network model using the second training data.
110. The MRI system of claim 109 or any other preceding claim, wherein the second training data comprises the plurality of dual noisy MR images and the plurality of second noisy MR images, wherein generating the second training data comprises:
Generating dual noisy MR data using second noisy MR data associated with the target domain and second MR noise data associated with the target domain;
generating the plurality of dual noisy MR images by applying a reconstruction process to the dual noisy MR data; and
generating the plurality of second noisy MR images by applying the reconstruction process to the second noisy MR data associated with the target domain.
111. The MRI system of claim 109 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises: applying the second neural network model to the plurality of second noisy MR images.
112. The MRI system of claim 103 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises:
generating a plurality of enhanced denoised MR images by:
applying one or more transforms to an image of the plurality of denoised MR images to generate a plurality of transformed MR images, and
combining the plurality of transformed MR images with the plurality of denoised MR images to generate the plurality of enhanced denoised MR images; and
generating clean MR data associated with the target domain by applying a non-uniform transformation to an image of the plurality of enhanced denoised MR images.
113. The MRI system of claim 112 or any other preceding claim, wherein the training data for training the denoising neural network model comprises a plurality of noisy MR training images and a plurality of clean MR training images, and wherein generating training data for training the denoising neural network model further comprises:
generating clean MR training data by combining clean MR data associated with the source domain and clean MR data associated with the target domain;
generating noisy MR training data using the clean MR training data and third MR noise data associated with the target domain;
generating the plurality of noisy MR training images by applying a reconstruction process to the noisy MR training data; and
generating the plurality of clean MR training images by applying the reconstruction process to the clean MR training data.
114. The MRI system of claim 103 or any other preceding claim, wherein the denoising neural network model comprises a plurality of convolution layers.
115. The MRI system of claim 114 or any other preceding claim, wherein the plurality of convolution layers comprises a two-dimensional convolution layer.
116. The MRI system of claim 114 or any other preceding claim, wherein the plurality of convolution layers comprises a three-dimensional convolution layer.
117. The MRI system of claim 103 or any other preceding claim, wherein the first MR noise data is generated prior to obtaining the first noisy MR image.
118. The MRI system of claim 117 or any other preceding claim, wherein the at least one processor is further configured to generate the first MR noise data at least in part by empirical measurements of noise in the target domain.
119. The MRI system of claim 117 or any other preceding claim, wherein the at least one processor is further configured to generate the first MR noise data by simulating the first MR noise data using at least one noise model associated with the target domain.
120. The MRI system of claim 119 or any other preceding claim, wherein simulating the first MR noise data is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
121. The MRI system of claim 103 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using an MRI system having a main magnetic field strength of 0.5 T or greater,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using an MRI system having a main magnetic field strength greater than or equal to 20 mT and less than or equal to 0.2 T.
122. The MRI system of claim 103 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected by imaging a first portion of the anatomy of the subject,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected by imaging a second portion of the anatomy of the subject different from the first portion of the anatomy of the subject.
123. The MRI system of claim 103 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a first pulse sequence,
the plurality of second noisy MR images are generated using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using a second pulse sequence different from the first pulse sequence.
124. At least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for training a denoising neural network model to denoise a magnetic resonance (MR) image of a subject, the method comprising:
using at least one computer hardware processor to:
generating first training data for training a first neural network model to denoise the MR images at least in part by generating a plurality of first noisy MR images using (1) clean MR data associated with the source domain and (2) first MR noise data associated with the target domain,
training the first neural network model using the first training data,
generating training data for training the denoising neural network model at least in part by applying the first neural network model to a plurality of second noisy MR images and generating a corresponding plurality of denoised MR images, and
training the denoising neural network model using the training data for training the denoising neural network model.
125. The at least one non-transitory computer-readable storage medium of claim 124, wherein the first training data includes the plurality of first noisy MR images and a corresponding plurality of clean MR images, and wherein generating first training data comprises:
generating first noisy MR data using clean MR data associated with the source domain and first MR noise data associated with the target domain;
generating the plurality of first noisy MR images by applying a reconstruction process to the first noisy MR data; and
generating the plurality of clean MR images by applying the reconstruction process to the clean MR data associated with the source domain.
126. The at least one non-transitory computer-readable storage medium of claim 125 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: using a machine learning model to generate MR images from the first noisy MR data.
127. The at least one non-transitory computer-readable storage medium of claim 125 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating MR images from the first noisy MR data using compressed sensing.
128. The at least one non-transitory computer-readable storage medium of claim 125 or any other preceding claim, wherein applying a reconstruction process to the first noisy MR data comprises: generating an MR image from the first noisy MR data using at least one linear transformation.
129. The at least one non-transitory computer-readable storage medium of claim 128 or any other preceding claim, wherein the at least one linear transformation comprises:
coil decorrelation transformation;
gridding transformation; and
coil combination transformation.
130. The at least one non-transitory computer-readable storage medium of claim 124 or any other preceding claim, wherein the method further comprises:
generating second training data for training a second neural network model to denoise the MR image at least in part by generating a plurality of dual noisy MR images using both:
(1) second noisy MR data associated with the target domain, and
(2) second MR noise data associated with the target domain; and
training the second neural network model using the second training data.
131. The at least one non-transitory computer-readable storage medium of claim 130 or any other preceding claim, wherein the second training data comprises the plurality of dual noisy MR images and the plurality of second noisy MR images, wherein generating the second training data comprises:
generating dual noisy MR data using second noisy MR data associated with the target domain and second MR noise data associated with the target domain;
generating the plurality of dual noisy MR images by applying a reconstruction process to the dual noisy MR data; and
generating the plurality of second noisy MR images by applying the reconstruction process to the second noisy MR data associated with the target domain.
132. The at least one non-transitory computer-readable storage medium of claim 130 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises: applying the second neural network model to the plurality of second noisy MR images.
133. The at least one non-transitory computer-readable storage medium of claim 124 or any other preceding claim, wherein generating training data for training the denoising neural network model further comprises:
a plurality of enhanced de-noised MR images are generated by:
applying one or more transforms to an image of the plurality of denoised MR images to generate a plurality of transformed MR images, and
combining the plurality of transformed MR images with the plurality of denoised MR images to generate the plurality of enhanced denoised MR images; and
generating clean MR data associated with the target domain by applying a non-uniform transformation to an image of the plurality of enhanced denoised MR images.
134. The at least one non-transitory computer-readable storage medium of claim 133 or any other preceding claim, wherein the training data for training the denoising neural network model includes a plurality of noisy MR training images and a plurality of clean MR training images, wherein generating the training data for training the denoising neural network model further comprises:
generating clean MR training data by combining clean MR data associated with the source domain and clean MR data associated with the target domain;
generating noisy MR training data using the clean MR training data and third MR noise data associated with the target domain;
generating the plurality of noisy MR training images by applying a reconstruction process to the noisy MR training data; and
generating the plurality of clean MR training images by applying the reconstruction process to the clean MR training data associated with the target domain.
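The four steps of claim 134 can be sketched as follows. This is a toy numpy version under stated assumptions: the reconstruction process is again simplified to a magnitude inverse FFT, and small random complex arrays stand in for the clean k-space data of the two domains.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(kspace):
    """Toy stand-in for the reconstruction process."""
    return np.abs(np.fft.ifft2(kspace))

# Hypothetical clean k-space data from the source and target domains.
clean_source = [rng.normal(size=(16, 16)) + 0j for _ in range(2)]
clean_target = [rng.normal(size=(16, 16)) + 0j for _ in range(2)]

# Step 1: combine clean data from both domains into one training pool.
clean_training_data = clean_source + clean_target

# Step 2: corrupt the pooled clean data with target-domain noise.
noisy_training_data = [
    k + 0.1 * (rng.normal(size=k.shape) + 1j * rng.normal(size=k.shape))
    for k in clean_training_data
]

# Steps 3-4: reconstruct both sets into paired training images.
noisy_images = [reconstruct(k) for k in noisy_training_data]
clean_images = [reconstruct(k) for k in clean_training_data]
```

Each (noisy image, clean image) pair then supervises the denoising neural network model.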
135. The at least one non-transitory computer-readable storage medium of claim 133 or any other preceding claim, wherein the denoising neural network model comprises a plurality of convolutional layers.
136. The at least one non-transitory computer-readable storage medium of claim 135 or any other preceding claim, wherein the plurality of convolutional layers comprises a two-dimensional convolutional layer.
137. The at least one non-transitory computer-readable storage medium of claim 135 or any other preceding claim, wherein the plurality of convolutional layers comprises a three-dimensional convolutional layer.
138. The at least one non-transitory computer-readable storage medium of claim 124 or any other preceding claim, wherein the first MR noise data is generated prior to obtaining the first noisy MR image.
139. The at least one non-transitory computer-readable storage medium of claim 138 or any other preceding claim, further comprising: generating the first MR noise data at least in part by empirical measurements of noise in the target domain.
140. The at least one non-transitory computer-readable storage medium of claim 138 or any other preceding claim, further comprising: generating the first MR noise data by simulating the first MR noise data using at least one noise model associated with the target domain.
141. The at least one non-transitory computer-readable storage medium of claim 140 or any other preceding claim, wherein simulating the first MR noise data is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
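As an illustration of the noise simulation recited in claims 140–141, the sketch below draws complex-valued noise from each of the named distributions. The `simulate_noise` helper and its parameterization (scale, Poisson rate, degrees of freedom) are hypothetical choices, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

def simulate_noise(model="gaussian", scale=1.0, size=shape):
    """Draw a synthetic complex-valued k-space noise sample from one of
    the distributions named in the claim (hypothetical parameterization)."""
    if model == "gaussian":
        real = rng.normal(0.0, scale, size)
        imag = rng.normal(0.0, scale, size)
    elif model == "poisson":
        # Zero-centered, scaled Poisson counts (rate 4 chosen arbitrarily).
        real = scale * (rng.poisson(4.0, size) - 4.0)
        imag = scale * (rng.poisson(4.0, size) - 4.0)
    elif model == "student-t":
        # Heavy-tailed noise via Student's t with 3 degrees of freedom.
        real = scale * rng.standard_t(df=3, size=size)
        imag = scale * rng.standard_t(df=3, size=size)
    else:
        raise ValueError(f"unknown noise model: {model}")
    return real + 1j * imag

noise = simulate_noise("student-t", scale=0.05)
```

Simulated noise drawn this way can be added to clean k-space data to synthesize the first noisy MR images.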
142. The at least one non-transitory computer-readable storage medium of claim 124 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a magnetic resonance imaging (MRI) system having a main magnetic field strength of 0.5T or greater,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using an MRI system having a main magnetic field strength greater than or equal to 20mT and less than or equal to 0.2T.
143. The at least one non-transitory computer-readable storage medium of claim 124 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected by imaging a first portion of the anatomy of the subject,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected by imaging a second portion of the anatomy different from the first portion of the anatomy of the subject.
144. The at least one non-transitory computer-readable storage medium of claim 124 or any other preceding claim, wherein,
the clean MR data associated with the source domain includes MR data collected using a first pulse sequence,
generating the plurality of second noisy MR images using second noisy MR data associated with the target domain, and
the second noisy MR data associated with the target domain includes MR data collected using a second pulse sequence different from the first pulse sequence.
145. A method for denoising a medical image of a subject, the medical image being generated using data collected by a medical imaging apparatus, the method comprising:
using at least one computer hardware processor to:
obtaining a medical image of the subject,
combining the medical image of the subject with a noise image to obtain a noise corrupted medical image of the subject,
generating a denoised medical image corresponding to the noise corrupted medical image of the subject using the noise corrupted medical image of the subject and a trained neural network, and
outputting the denoised medical image.
146. The method of claim 145, wherein the trained neural network is trained using training data comprising image pairs, a first pair of the image pairs comprising a first image generated using data collected by the medical imaging device and a second image generated by combining the first image and a noise image.
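The image-pair construction of claims 146–148 can be sketched as follows. The device images and the bank of pre-generated noise images are random stand-ins (hypothetical data, not from the patent), and the random selection of a noise image mirrors claim 148.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: images reconstructed from data the device
# actually collected, and a bank of pre-generated noise images
# (e.g. empirical noise measurements).
device_images = [rng.random((32, 32)) for _ in range(4)]
noise_images = [0.05 * rng.normal(size=(32, 32)) for _ in range(8)]

# Each training pair: the noise-corrupted image is the network input,
# the original device image is the target; the noise image is chosen
# at random from the bank.
training_pairs = []
for img in device_images:
    noise = noise_images[rng.integers(len(noise_images))]
    training_pairs.append((img + noise, img))
```

Because the targets are themselves real device images (not noise-free ground truth), this resembles a Noisier2Noise-style scheme in which the network learns to remove the added noise.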
147. The method of claim 145 or any other preceding claim, wherein the method comprises obtaining the noise image by selecting the noise image from a plurality of noise images.
148. The method of claim 147 or any other preceding claim, wherein selecting the noise image from the plurality of noise images comprises randomly selecting the noise image from the plurality of noise images.
149. The method of claim 147 or any other preceding claim, wherein the plurality of noise images are generated prior to obtaining a medical image of the subject.
150. The method of claim 145 or any other preceding claim, further comprising:
generating the noise image at least in part by making one or more empirical measurements of noise using the medical imaging device and/or at least one medical imaging device of the same type as the medical imaging device.
151. The method of claim 150 or any other preceding claim, wherein generating the noise image comprises scaling at least a portion of the one or more empirical measurements of noise relative to a maximum intensity value of the medical image of the subject.
152. The method of claim 151 or any other preceding claim, wherein scaling at least a portion of the one or more empirical measurements of noise relative to the maximum intensity value of the medical image of the subject comprises scaling the selected noise measurements to within a range from 2% to 30% of the maximum intensity value of the medical image of the subject.
153. The method of claim 152 or any other preceding claim, wherein scaling at least a portion of the one or more empirical measurements of noise relative to the maximum intensity value of the medical image of the subject comprises scaling the selected noise measurements to 5%, 10%, or 20% of the maximum intensity value of the medical image of the subject.
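Claims 151–153 describe scaling empirically measured noise relative to the image's maximum intensity. A minimal sketch of one plausible implementation follows; the `scale_noise` helper is hypothetical, and random arrays stand in for the subject's image and the empirical noise measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_noise(noise_sample, image, fraction=0.10):
    """Scale a measured noise sample so its peak magnitude equals
    `fraction` (e.g. 0.05, 0.10, 0.20) of the image's maximum intensity."""
    target_peak = fraction * np.max(np.abs(image))
    return noise_sample * (target_peak / np.max(np.abs(noise_sample)))

image = rng.random((32, 32))              # medical image of the subject
noise_sample = rng.normal(size=(32, 32))  # empirical noise measurement
scaled = scale_noise(noise_sample, image, fraction=0.10)  # 10% of max intensity
```

Varying `fraction` over the claimed 2%–30% range yields training examples spanning a range of signal-to-noise ratios.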
154. The method of claim 145 or any other preceding claim, further comprising:
generating the noise image by simulating the noise image using at least one noise model associated with the medical imaging device.
155. The method of claim 154 or any other preceding claim, wherein simulating the noise image is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
156. The method of claim 145 or any other preceding claim, wherein the medical imaging device is one of an ultrasound imaging device, an elastography device, an X-ray imaging device, a functional near-infrared spectroscopy imaging device, an endoscopic imaging device, a positron emission tomography (PET) imaging device, a computed tomography (CT) imaging device, and a single photon emission computed tomography (SPECT) imaging device.
157. The method of claim 145 or any other preceding claim, wherein the medical imaging device is a magnetic resonance imaging (MRI) system.
158. The method of claim 157 or any other preceding claim, further comprising: generating the noise image using an image reconstruction technique used by the MRI system to generate a magnetic resonance (MR) image from MR data acquired by the MRI system in the spatial frequency domain.
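One plausible reading of claim 158 is that measured noise-only k-space data is passed through the same reconstruction pipeline the MRI system uses for ordinary scans, so the resulting noise image has realistic spatial statistics. A toy sketch under that assumption (a magnitude inverse FFT stands in for the system's actual reconstruction technique, and random data stands in for a noise-only acquisition):

```python
import numpy as np

rng = np.random.default_rng(1)

def mri_reconstruct(kspace):
    """Stand-in for the image reconstruction the MRI system itself uses."""
    return np.abs(np.fft.ifft2(kspace))

# Hypothetical noise-only acquisition: spatial-frequency-domain samples
# containing only receive-chain noise (e.g. acquired without excitation).
noise_kspace = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))

# Reconstructing the measured noise with the same pipeline yields a
# noise image suitable for corrupting clean training images.
noise_image = mri_reconstruct(noise_kspace)
```

Using the system's own reconstruction (rather than raw noise) captures any coloring the reconstruction imposes on the noise.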
159. The method of claim 145 or any other preceding claim, wherein obtaining a medical image of the subject comprises:
collecting the data by imaging a subject using the medical imaging device; and
generating the medical image using the collected data.
160. The method of claim 145 or any other preceding claim, wherein the data was previously collected using the medical imaging device, and wherein obtaining a medical image of the subject comprises:
accessing the data; and
generating the medical image using the accessed data.
161. The method of claim 145 or any other preceding claim, wherein obtaining a medical image of the subject includes accessing the medical image.
162. The method of claim 145 or any other preceding claim, wherein the data is collected by the MRI system using a diffusion-weighted imaging (DWI) pulse sequence.
163. The method of claim 162 or any other preceding claim, wherein the noise image is generated by empirical measurement of noise within the MRI system using the DWI pulse sequence.
164. The method of claim 145 or any other preceding claim, wherein the trained neural network comprises a plurality of convolutional layers.
165. The method of claim 164 or any other preceding claim, wherein the plurality of convolutional layers have a U-net structure.
166. At least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for denoising an image of a subject, the image generated using data collected by a medical imaging system, the method comprising:
obtaining an image of a subject;
combining the image of the subject with a noise image to obtain a noise corrupted image of the subject;
generating a denoised image corresponding to the noise corrupted image of the subject using the noise corrupted image of the subject and a trained neural network; and
outputting the denoised image.
167. A magnetic resonance imaging (MRI) system, comprising:
a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and
at least one processor configured to perform a method for denoising an image of a subject, the image being generated using data collected by the MRI system, the method comprising:
obtaining an image of a subject;
combining the image of the subject with a noise image to obtain a noise corrupted image of the subject;
generating a denoised image corresponding to the noise corrupted image of the subject using the noise corrupted image of the subject and a trained neural network; and
outputting the denoised image.
168. A method for denoising a medical image of a subject, the medical image being generated using data collected by a medical imaging apparatus, the method comprising:
using at least one computer hardware processor to:
obtaining a medical image of the subject,
generating a denoised medical image corresponding to the medical image of the subject using the medical image of the subject and a trained neural network, the trained neural network having been trained using training data comprising image pairs, a first pair of the image pairs comprising a first image generated using data collected by the medical imaging device and a second image generated by combining the first image and a noise image, and
outputting the denoised medical image.
169. The method of claim 168, wherein the method comprises obtaining the noise image by selecting the noise image from a plurality of noise images.
170. The method of claim 169 or any other preceding claim, further comprising:
generating the noise image at least in part by making one or more empirical measurements of noise using the medical imaging device and/or at least one medical imaging device of the same type as the medical imaging device.
171. The method of claim 169 or any other preceding claim, further comprising:
generating the noise image by simulating the noise image using at least one noise model associated with the medical imaging device.
172. A method for denoising a medical image of a subject, the medical image being generated using data collected by a medical imaging apparatus, the method comprising:
using at least one computer hardware processor to:
obtaining a medical image of the subject,
generating a denoised medical image corresponding to the medical image of the subject using the medical image of the subject and a generator neural network, wherein the generator neural network is trained using a discriminator neural network trained to distinguish a first noise image obtained using an output of the generator neural network from a second noise image, and
outputting the denoised medical image.
173. The method of claim 172, wherein the first noise image is obtained by subtracting the denoised medical image, generated using the output of the generator neural network, from the corresponding medical image of the subject.
174. The method of claim 172 or any other preceding claim, wherein generating the denoised medical image comprises subtracting a residual image output by the generator neural network from the medical image of the subject.
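Claims 173–174 describe a residual formulation: the generator predicts a noise residual, the denoised image is the input minus that residual, and the first noise image shown to the discriminator is the input minus the denoised output. A numpy sketch of the arithmetic (a crude local-mean high-pass stands in for the trained generator network; all data are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(image):
    """Hypothetical generator output: a residual (noise estimate).
    A 3x3 local-mean high-pass stands in for a trained network."""
    padded = np.pad(image, 1, mode="edge")
    smooth = np.zeros_like(image)
    for i in range(3):
        for j in range(3):
            smooth += padded[i:i + image.shape[0], j:j + image.shape[1]]
    smooth /= 9.0
    return image - smooth  # residual: image minus its local mean

medical_image = rng.random((32, 32))
residual = generator(medical_image)

# Claim 174: denoised image = input minus the generator's residual output.
denoised = medical_image - residual

# Claim 173: the first noise image fed to the discriminator is the input
# minus the denoised output -- which recovers exactly the residual.
first_noise_image = medical_image - denoised
```

In the adversarial setup, the discriminator compares such extracted residuals against independently obtained second noise images (e.g. empirical measurements), pushing the generator's residuals toward realistic noise statistics.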
175. The method of claim 172 or any other preceding claim, wherein the second noise image is generated prior to obtaining a medical image of the subject.
176. The method of claim 172 or any other preceding claim, wherein the second noise image is generated without using the generator neural network.
177. The method of claim 172 or any other preceding claim, further comprising: generating the second noise image at least in part by making one or more empirical measurements of noise using the medical imaging device and/or at least one medical imaging device of the same type as the medical imaging device.
178. The method of claim 172 or any other preceding claim, further comprising: generating the second noise image by simulating the second noise image using at least one noise model associated with the medical imaging device.
179. The method of claim 178 or any other preceding claim, wherein simulating the second noise image is performed using one or more of a Gaussian distribution, a Poisson distribution, and/or a Student's t-distribution.
180. The method of claim 172 or any other preceding claim, wherein the medical imaging device is one of an ultrasound imaging device, an elastography device, an X-ray imaging device, a functional near-infrared spectroscopy imaging device, an endoscopic imaging device, a positron emission tomography (PET) imaging device, a computed tomography (CT) imaging device, and a single photon emission computed tomography (SPECT) imaging device.
181. The method of claim 172 or any other preceding claim, wherein the medical imaging device is a magnetic resonance imaging (MRI) system.
182. The method of claim 181 or any other preceding claim, wherein generating the second noise image further comprises using an image reconstruction technique used by the MRI system to generate a magnetic resonance (MR) image from MR data acquired in the spatial frequency domain by the MRI system.
183. The method of claim 181 or any other preceding claim, wherein the data is collected by the MRI system using a diffusion-weighted imaging (DWI) pulse sequence.
184. The method of claim 181 or any other preceding claim, wherein the second noise image is generated by empirically measuring noise within the MRI system using the DWI pulse sequence.
185. The method of claim 172 or any other preceding claim, wherein obtaining a medical image of the subject comprises:
collecting the data by imaging a subject using the medical imaging device; and
generating the medical image using the collected data.
186. The method of claim 172 or any other preceding claim, wherein the data was previously collected using the medical imaging device, and wherein obtaining a medical image of the subject comprises:
accessing the data; and
generating the medical image using the accessed data.
187. The method of claim 172 or any other preceding claim, wherein obtaining a medical image of a subject includes accessing the medical image.
188. The method of claim 172 or any other preceding claim, wherein the generator neural network comprises a plurality of convolutional layers.
189. The method of claim 188 or any other preceding claim, wherein the plurality of convolutional layers have a U-net structure.
190. At least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for denoising an image of a subject, the image generated using data collected by a medical imaging system, the method comprising:
obtaining an image of a subject;
generating a denoised image corresponding to the image of the subject using the image of the subject and a generator neural network, wherein the generator neural network is trained using a discriminator neural network trained to distinguish a first noise image obtained using an output of the generator neural network from a second noise image; and
outputting the denoised image.
191. A magnetic resonance imaging (MRI) system, comprising:
a magnetic system having a plurality of magnetic components to generate a magnetic field for performing MRI; and
at least one processor configured to perform a method for denoising an image of a subject, the image being generated using data collected by the MRI system, the method comprising:
obtaining an image of the subject;
generating a denoised image corresponding to the image of the subject using the image of the subject and a generator neural network, wherein the generator neural network is trained using a discriminator neural network trained to distinguish a first noise image obtained using an output of the generator neural network from a second noise image; and
outputting the denoised image.
CN202180082192.3A 2020-10-07 2021-10-07 Deep learning method for noise suppression in medical imaging Pending CN116745803A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/088,672 2020-10-07
US202163155696P 2021-03-02 2021-03-02
US63/155,696 2021-03-02
PCT/US2021/053918 WO2022076654A1 (en) 2020-10-07 2021-10-07 Deep learning methods for noise suppression in medical imaging

Publications (1)

Publication Number Publication Date
CN116745803A true CN116745803A (en) 2023-09-12

Family

ID=87908349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180082192.3A Pending CN116745803A (en) 2020-10-07 2021-10-07 Deep learning method for noise suppression in medical imaging

Country Status (1)

Country Link
CN (1) CN116745803A (en)

Similar Documents

Publication Publication Date Title
US11300645B2 (en) Deep learning techniques for magnetic resonance image reconstruction
US11185249B2 (en) Self ensembling techniques for generating magnetic resonance images from spatial frequency data
CN111513716B (en) Method and system for magnetic resonance image reconstruction using an extended sensitivity model and a deep neural network
Knoll et al. Deep-learning methods for parallel magnetic resonance imaging reconstruction: A survey of the current approaches, trends, and issues
US20220107378A1 (en) Deep learning methods for noise suppression in medical imaging
CN113436290A (en) Method and system for selectively removing streak artifacts and noise from images using a deep neural network
US10895622B2 (en) Noise suppression for wave-CAIPI
Moreno López et al. Evaluation of MRI denoising methods using unsupervised learning
US20230342886A1 (en) Method and system for low-field mri denoising with a deep complex-valued convolutional neural network
Jacob et al. Improved model-based magnetic resonance spectroscopic imaging
Qu et al. Radial magnetic resonance image reconstruction with a deep unrolled projected fast iterative soft-thresholding network
CN116745803A (en) Deep learning method for noise suppression in medical imaging
CN113567901A (en) Spin lattice relaxation imaging method and system under magnetic resonance rotating coordinate system
Adibpour Discrete Fourier transform techniques to improve diagnosis accuracy in biomedical applications
FERNANDES MULTICHANNEL DENOISING STRATEGIES FOR ULTRAFAST MAGNETIC RESONANCE IMAGING
Balachandrasekaran Structured low rank approaches for exponential recovery-application to MRI
Chang A Study of Nonlinear Approaches to Parallel Magnetic Resonance Imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination