CN111630572A - Image figure of merit prediction based on deep learning


Info

Publication number: CN111630572A (application CN201980009536.0A)
Authority: CN (China)
Prior art keywords: imaging, merit, figures, training, parameters
Other languages: Chinese (zh)
Inventors: 张滨, B·潘达, 白传勇, 胡志强
Original/Current Assignee: Koninklijke Philips NV
Priority date: 2018-01-22; Filing date: 2019-01-15; Publication date: 2020-09-04
Legal status: Pending (legal status is an assumption by Google, not a legal conclusion)
Application filed by Koninklijke Philips NV

Classifications

    • G06T11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • A61B6/03 Computed tomography [CT]; A61B6/037 Emission tomography
    • A61B6/507 Radiation diagnosis apparatus specially adapted for specific clinical applications, for determination of haemodynamic parameters, e.g. perfusion CT
    • A61B6/5205 Devices using data or image processing specially adapted for radiation diagnosis, involving processing of raw data to produce diagnostic data
    • A61B6/5294 Devices using data or image processing specially adapted for radiation diagnosis, involving using additional data, e.g. patient information, image labeling, acquisition parameters
    • G06N3/02 Neural networks; G06N3/08 Learning methods
    • G06T2207/10104 Positron emission tomography [PET]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2210/41 Medical (indexing scheme for image generation)
    • G06T2211/424 Iterative (computed tomography image generation)


Abstract

A non-transitory computer readable medium stores instructions readable and executable by a workstation (18) comprising at least one electronic processor (20) to perform an imaging method (100). The method includes: estimating one or more figures of merit for a reconstructed image by applying a trained deep learning transform (30) to input data, the input data including at least imaging parameters but not the reconstructed image; selecting values for the imaging parameters based on the estimated one or more figures of merit; generating the reconstructed image using the selected values for the imaging parameters; and displaying the reconstructed image.

Description

Image figure of merit prediction based on deep learning
Technical Field
The following relates generally to medical imaging, medical image interpretation, image reconstruction, and related fields.
Background
Positron Emission Tomography (PET) imaging provides key information for diagnosis and treatment planning in oncology and cardiology. Two types of figures of merit are important for clinical use of PET images: qualitative figures of merit (e.g., the noise level of the image) and quantitative figures of merit (e.g., the standardized uptake value (SUV) and the contrast recovery of a lesion). In PET imaging, these figures of merit are measured on images reconstructed from an acquired data set. A figure of merit measured on the final image is thus an end result of the imaging chain, providing little or no feedback to the chain that generated the image.
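For concreteness (an illustrative aside, not part of the original patent text): the SUV referenced above is conventionally computed as the measured activity concentration normalized by the injected dose per unit body weight. A minimal sketch in Python, assuming the usual convention that tissue density is approximately 1 g/mL:

```python
def suv(activity_bq_per_ml: float, injected_dose_bq: float,
        body_weight_g: float) -> float:
    """Standardized uptake value: measured activity concentration
    normalized by the injected dose per gram of body weight.
    Assumes the usual convention of tissue density ~1 g/mL."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Example: 5 kBq/mL in a lesion, 370 MBq injected, 70 kg patient.
print(suv(5.0e3, 370.0e6, 70.0e3))  # ~0.95
```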
In the event that some parameters change (e.g., the patient's weight, the scan time, or the reconstruction parameters), the user typically cannot predict how much a given figure of merit will change. A common approach to this problem is "trial and error": performing multiple reconstructions for each individual case and learning the associations over many attempts. However, a high resolution reconstruction takes about 5 to 10 minutes, so this process costs considerable time and effort.
The difficulty is even greater when the imaging parameters, including the image acquisition parameters, are to be adjusted. For an imaging modality such as ultrasound, acquiring imaging data and reconstructing an image is fast, so adjusting ultrasound imaging data acquisition parameters based on a reconstructed ultrasound image is practical. For PET, however, adjusting the acquisition parameters by such a trial-and-error approach is impractical. This is because the acquisition of PET imaging data must be timed to coincide with the residence time of the administered radiopharmaceutical in the tissue of the patient to be imaged. The PET imaging data acquisition time window may be narrow, depending on the half-life of the radiopharmaceutical and/or the rate at which the radiopharmaceutical is cleared by the kidneys or other bodily functions. Furthermore, it is often necessary to keep the radiopharmaceutical dose low to avoid exposing the patient to excessive radiation, which in turn requires a relatively long imaging data acquisition time in order to acquire sufficient counts to reconstruct a clinical quality PET image. These factors effectively preclude the trial-and-error cycle of acquiring PET imaging data, reconstructing a PET image, adjusting the PET imaging data acquisition parameters based on the reconstructed PET image, and repeating.
The following discloses a new and improved system and method that overcomes these problems.
Disclosure of Invention
In one disclosed aspect, a non-transitory computer readable medium stores instructions readable and executable by a workstation comprising at least one electronic processor to perform an imaging method. The method includes: estimating one or more figures of merit for a reconstructed image by applying a trained deep learning transform to input data, the input data including at least imaging parameters but not the reconstructed image; selecting values for the imaging parameters based on the estimated one or more figures of merit; generating the reconstructed image using the selected values for the imaging parameters; and displaying the reconstructed image.
In another disclosed aspect, an imaging system includes: a Positron Emission Tomography (PET) image acquisition device configured to acquire PET imaging data; and at least one electronic processor programmed to: estimate one or more figures of merit for a reconstructed image by applying a trained deep learning transform to input data that includes at least image acquisition parameters and statistical information of imaging data but not the reconstructed image; select values for the image reconstruction parameters based on the estimated one or more figures of merit; generate the reconstructed image by reconstructing the imaging data using the selected values for the image reconstruction parameters; and control a display device to display the reconstructed image.
In another disclosed aspect, an imaging system includes: a Positron Emission Tomography (PET) image acquisition device configured to acquire PET imaging data; and at least one electronic processor programmed to: estimate one or more figures of merit for a reconstructed image by applying a trained deep learning transform to input data, the input data including at least image acquisition parameters but not the reconstructed image; select values for the image acquisition parameters based on the estimated one or more figures of merit; generate the reconstructed image by acquiring imaging data with the selected values for the image acquisition parameters using the image acquisition device and reconstructing the acquired imaging data to generate the reconstructed image; and control a display device to display the reconstructed image.
One advantage resides in providing an imaging system that generates a priori predictions of target figures of merit (e.g., overall image noise level, standardized uptake value (SUV) recovery) before computational resources are consumed in performing complex image reconstructions.
Another advantage resides in designing an imaging protocol using a target figure of merit.
Another advantage resides in the ability to evaluate the figures of merit achievable with different reconstruction methods and parameters without performing a complex image reconstruction of the data set.
Another advantage resides in rapid prediction of imaging outcomes when patient specifications change (e.g., weight loss).
A given embodiment may provide none, one, two, more, or all of the aforementioned advantages, and/or may provide other advantages as will become apparent to those skilled in the art upon reading and understanding the present disclosure.
Drawings
The disclosure may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the disclosure.
FIG. 1 schematically illustrates an imaging system according to one aspect;
FIG. 2 illustrates exemplary flowchart operations of the system of FIG. 1;
FIG. 3 illustrates an exemplary flow chart of a training operation of the system of FIG. 1; and
FIGS. 4 and 5 illustrate exemplary flowchart operations of the system of FIG. 1.
Detailed Description
Current high resolution image reconstruction requires 5-10 minutes per image data set, and the acquisition itself takes even longer. Typically, an imaging session employs default imaging acquisition parameters (e.g., a default radiopharmaceutical dose per unit weight, a default waiting time between administration of the radiopharmaceutical and commencement of PET imaging data acquisition, a default acquisition time per frame, etc.) and default reconstruction parameters. Image quality figures of merit, such as the noise level in the liver, the mean standardized uptake value (SUV mean) in a tumor, the contrast recovery of lesions, and so forth, are expected to fall within certain target ranges. If they do not, either clinical interpretation is performed with a reconstructed image that fails to meet the targets, or the image reconstruction (or even the acquisition) must be repeated with improved parameters. Furthermore, it may be difficult to determine in which direction to adjust a given parameter to improve the image figure(s) of merit. When image reconstruction parameters are adjusted, the image reconstruction is repeated after each adjustment, requiring, as noted above, 5-10 minutes per iteration. In the case of imaging data acquisition parameters, it is generally not advisable to repeat the PET imaging data acquisition, as such repetition would require administration of a second radiopharmaceutical dose.
The embodiments disclosed below utilize a deep learning transform, e.g. a Support Vector Machine (SVM) or a Neural Network (NN), that is trained to predict the figure(s) of merit based on standard inputs that do not include the reconstructed image.
In some embodiments disclosed herein, the input to the SVM or neural network includes only information available prior to imaging data acquisition, such as the patient's weight and/or Body Mass Index (BMI) and the expected (default) imaging parameters (e.g., acquisition parameters such as dose and wait time, and image reconstruction parameters). The SVM or neural network is trained on training examples, each comprising such inputs for a training PET imaging session paired with the actual figure(s) of merit measured on the corresponding reconstructed training image. The training optimizes the SVM or neural network to output figure(s) of merit that best match the values measured on the actual reconstructed training images. In application, the available inputs for a scheduled clinical PET imaging session are fed to the trained SVM or neural network, which outputs a prediction of the figure(s) of merit. In a manual approach, the predicted figure(s) of merit are displayed, and if the predicted values are not acceptable, the clinician can adjust the default imaging parameters and re-run the SVM or neural network iteratively until the desired figure(s) of merit are achieved. Thereafter, PET imaging is performed using the adjusted imaging parameters, so that the resulting reconstructed image is highly likely to exhibit the desired figure(s) of merit.
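By way of illustration only (the patent does not specify a network architecture or feature encoding; the topology, feature layout, and values below are assumptions), such a predictor might be sketched as a small fully connected network:

```python
import torch
import torch.nn as nn

# Assumed feature layout (illustrative, not from the patent):
# [weight_kg, bmi, dose_mbq, uptake_time_min, scan_time_min,
#  n_iterations, n_subsets, regularization]
N_FEATURES = 8
N_FOMS = 2  # e.g., liver noise level and lesion SUV recovery

class FomPredictor(nn.Module):
    """Fully connected network mapping imaging parameters to
    predicted figures of merit; no reconstructed image input."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_FOMS),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Predict figures of merit for a planned session; in practice the
# weights would come from the training procedure of FIG. 3.
model = FomPredictor()
params = torch.tensor([[75.0, 24.2, 370.0, 60.0, 20.0, 3.0, 17.0, 0.1]])
noise_level, suv_recovery = model(params)[0]
```

If the clinician rejects the predicted values, the parameter vector is edited and the forward pass repeated, which costs milliseconds rather than the 5-10 minutes of a full reconstruction.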
In other embodiments disclosed herein, the figure of merit prediction is performed after the imaging data acquisition but before the image reconstruction. In these embodiments, the input to the SVM or neural network also includes statistical information of the already-acquired imaging data set, e.g., total counts, counts per minute, and so forth. The training likewise uses this additional statistical information for the training imaging data sets. The resulting trained SVM or neural network is again applied after the imaging data acquisition but before the image reconstruction starts, and, owing to the additionally provided statistical information, can provide more accurate figure of merit estimate(s). In this case, the imaging parameters to be optimized are limited to the image reconstruction parameters, since the imaging data have already been acquired.
The disclosed approaches improve imaging and computational efficiency by enabling imaging parameters (e.g., acquisition parameters and/or reconstruction parameters) to be optimized before any actual image reconstruction is performed, and in some embodiments even before image acquisition.
Although described herein with respect to a PET imaging system, the disclosed methods can also be employed in a Computed Tomography (CT) imaging system, a hybrid PET/CT imaging system, a Single Photon Emission Computed Tomography (SPECT) imaging system, a hybrid SPECT/CT imaging system, a Magnetic Resonance (MR) imaging system, a hybrid PET/MR imaging system, a functional CT imaging system, a functional MR imaging system, and the like.
Referring to FIG. 1, an illustrative medical imaging system 10 is shown. As shown in FIG. 1, the system 10 includes an image acquisition device or imaging device 12. In one example, the image acquisition device 12 can comprise a PET imaging device. An illustrative example is a PET/CT imaging device further comprising a CT gantry 13 for acquiring anatomical information and generating an attenuation map from the CT image to correct for absorption in the PET reconstruction. In other examples, the image acquisition device 12 can be any other suitable image acquisition device (e.g., MR, CT, SPECT, a hybrid device, etc.). A patient table 14 is arranged to load a patient into an examination region 16 of the PET gantry 12.
The system 10 also includes a computer or workstation or other electronic data processing device 18 having typical components such as at least one electronic processor 20, at least one user input device (e.g., a mouse, keyboard, trackball, etc.) 22, and a display device 24. In some embodiments, the display device 24 can be a separate component from the computer 18. The workstation 18 can also include one or more non-transitory storage media 26 (e.g., magnetic disks, RAID or other magnetic storage media; solid state drives, flash drives, Electronically Erasable Read Only Memory (EEROM) or other electronic memory; optical disks or other optical storage devices; various combinations thereof, etc.). The display device 24 is configured to display a Graphical User Interface (GUI) 28 including one or more fields to receive user input from the user input device 22.
The at least one electronic processor 20 is operatively connected to the one or more non-transitory storage media 26, which store instructions readable and executable by the at least one electronic processor 20 to perform the disclosed operations, including performing the imaging method or process 100. In some examples, the imaging method or process 100 may be performed at least in part by cloud processing. The non-transitory storage media 26 also store the trained deep learning transform 30 (e.g., an SVM or a NN) and the information used to train it.
Referring to FIG. 2, an illustrative embodiment of the imaging method 100 is shown in flow chart form. At 102, the at least one electronic processor 20 is programmed to estimate one or more figures of merit for a reconstructed image by applying the trained deep learning transform 30 to input data that includes at least the imaging parameters but not the reconstructed image. In some embodiments, the trained deep learning transform is a trained SVM or a trained neural network. In one example, the one or more figures of merit include a standardized uptake value (SUV) for an anatomical region. In another example, the one or more figures of merit include a noise level for an anatomical region. Since the trained deep learning transform 30 does not use the reconstructed image as an input, the figure of merit prediction 102 can advantageously be performed before the computationally intensive image reconstruction.
The input data can include patient parameters (e.g., weight, height, gender, etc.) and imaging data acquisition parameters (e.g., scan duration, uptake time, activity, etc.); in some embodiments, the input data can also include reconstruction parameters (e.g., the iterative reconstruction algorithm, the number of iterations to be performed, the number of subsets where an ordered subset expectation maximization (OSEM) reconstruction is used, regularization parameters in the case of regularized image reconstruction, smoothing parameters of an applied smoothing filter, etc.). One plausible encoding of these heterogeneous inputs into a fixed-length numeric vector is sketched below.
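The following sketch flattens such parameters into the feature layout assumed in the earlier network sketch; the field names and ordering are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SessionInputs:
    # Patient parameters (names are illustrative assumptions)
    weight_kg: float
    bmi: float
    # Imaging data acquisition parameters
    dose_mbq: float
    uptake_time_min: float
    scan_time_min: float
    # Image reconstruction parameters
    n_iterations: int
    n_subsets: int        # e.g., for an OSEM reconstruction
    regularization: float

def to_feature_vector(s: SessionInputs) -> list[float]:
    """Flatten the mixed parameters into the fixed-length numeric
    vector fed to the trained deep learning transform."""
    return [s.weight_kg, s.bmi, s.dose_mbq, s.uptake_time_min,
            s.scan_time_min, float(s.n_iterations),
            float(s.n_subsets), s.regularization]
```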
In one embodiment, the input data includes imaging parameters comprising at least image reconstruction parameters, statistical information of the imaging data (e.g., total counts, counts per minute, etc.), information available prior to imaging data acquisition (e.g., the patient's weight and/or Body Mass Index (BMI)), and expected (default) imaging parameters (e.g., acquisition parameters such as dose and wait time, the type of imaging system, and imaging system specifications such as crystal geometry, crystal type, and crystal size). The input data does not, however, include the imaging data itself. In this embodiment, the generating comprises generating the reconstructed image by reconstructing the imaging data using the selected values for the image reconstruction parameters.
In another embodiment, the input data includes at least the image acquisition parameters and information available prior to the imaging data acquisition (e.g., the patient's weight and/or BMI), and includes neither the acquired imaging data nor statistical information of the acquired imaging data. In this embodiment, the generating comprises: acquiring imaging data with the selected values for the image acquisition parameters using the imaging device 12, and reconstructing the acquired imaging data to generate the reconstructed image.
Existing methods for generating a corrected reconstructed image typically require the reconstructed image as input data for the correction operation. As previously mentioned, this can be problematic in certain imaging modalities such as PET. In modalities such as ultrasound, the acquisition and reconstruction of imaging data is fast, and there are generally no constraints preventing multiple acquisitions of imaging data. By contrast, PET image reconstruction is computationally complex, in some cases requiring around 5-10 minutes, and the acquisition of PET imaging data must be timed to coincide with the residence time of the radiopharmaceutical in the tissue to be imaged, which can severely limit the time window during which imaging data acquisition can occur; acquisition is also typically a slow process due to the low count rate produced by the low radiopharmaceutical dose prescribed for patient safety. Advantageously, embodiments disclosed herein utilize the trained deep learning transform 30 to predict the target figures of merit (e.g., overall image noise level, standardized uptake value (SUV) recovery) a priori, before computational resources are consumed in performing complex image reconstructions (and in some embodiments even before acquiring imaging data). In addition, the figures of merit achievable with different reconstruction methods and parameters can be estimated by the trained deep learning transform 30 without performing a complex image reconstruction of the data set. In other words, the trained deep learning transform 30 is able to estimate the figures of merit without taking the reconstructed image as a necessary input (and in some embodiments even without the acquired imaging data).
At 104, the at least one electronic processor 20 is programmed to select values for the imaging parameters based on the estimated one or more figures of merit. To this end, the at least one electronic processor 20 is programmed to compare the estimated one or more figures of merit to target values for the one or more figures of merit (e.g., target values stored in the one or more non-transitory storage media 26), to adjust the imaging parameters based on the comparison, and to repeat the estimation of the one or more figures of merit by applying the trained deep learning transform 30 to input data comprising at least the adjusted imaging parameters. Again, the input data does not include a reconstructed image. A sketch of this compare-adjust-repeat loop follows.
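In the sketch below, `model.predict`, the target-range format, and the adjustment heuristic are all assumptions made for illustration; the patent leaves the adjustment policy open (it may also be manual, as in FIG. 4):

```python
def select_parameters(model, params, targets, max_iters=20):
    """Step 104 sketch: re-estimate the figures of merit with
    adjusted parameters until each prediction falls within its
    target range; no image reconstruction is performed."""
    for _ in range(max_iters):
        predicted = model.predict(params)
        if all(lo <= p <= hi
               for p, (lo, hi) in zip(predicted, targets)):
            return params  # acceptable: proceed to reconstruction
        params = adjust(params, predicted, targets)
    return params  # best effort after max_iters

def adjust(params, predicted, targets):
    """Toy heuristic (an assumption, not from the patent): if the
    predicted noise exceeds its upper target, lengthen the scan
    time (index 4 in the feature layout sketched earlier)."""
    new_params = list(params)
    _, hi_noise = targets[0]
    if predicted[0] > hi_noise:
        new_params[4] *= 1.1
    return new_params
```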
At 106, the at least one electronic processor 20 is programmed to generate a reconstructed image by performing image reconstruction of the acquired imaging data using the selected values for the imaging parameters. If the figure of merit prediction/ optimization 102, 104 is performed prior to imaging data acquisition, step 106 includes acquiring PET imaging data and then performing reconstruction. On the other hand, if the figure of merit prediction/ optimization 102, 104 is performed after the imaging data acquisition (where the statistics of the imaging data are input to the SVM or NN 30), step 106 includes performing image reconstruction. Step 106 suitably employs the imaging parameters adjusted by the figure of merit prediction/ optimization 102, 104.
At 108, the at least one electronic processor 20 is programmed to control the display device 24 to display the reconstructed image. Additionally, step 108 may perform a figure of merit evaluation on the reconstructed image to determine, for example, a noise figure in the liver, a SUV value in the lesion, and/or other figures of merit. Due to the figure of merit prediction/ optimization 102, 104, the likelihood that the figure of merit(s) evaluated from the reconstructed image will approach the expected value is greatly increased.
Referring to FIG. 3, an illustrative embodiment of a training method 200 for the trained deep learning transform 30 is shown in flow chart form. At 202, the at least one electronic processor 20 is programmed to reconstruct training imaging data to generate corresponding training images. At 204, the at least one electronic processor 20 is programmed to determine values of one or more figures of merit for the training images by processing the training images. At 206, the at least one electronic processor 20 is programmed to estimate the one or more figures of merit for the training imaging data by applying the deep learning transform 30 to input data that includes at least the image reconstruction parameters and statistical information of the training imaging data. At 208, the at least one electronic processor 20 is programmed to train the deep learning transform 30 to match the estimates of the one or more figures of merit for the training imaging data to the determined values. The training 208 may train a deep learning transform comprising a neural network using, for example, known backpropagation techniques. In the case of a deep learning transform comprising a Support Vector Machine (SVM), a known method for optimizing the hyperplane parameters of the SVM is employed. A minimal training sketch for the neural network case follows.
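A minimal sketch of step 208 for the neural network case, fitting the transform's estimates to the figures of merit measured at step 204 by mean-squared-error backpropagation (tensor shapes and hyperparameters are assumptions):

```python
import torch
import torch.nn as nn

def train_fom_predictor(model: nn.Module,
                        inputs: torch.Tensor,         # (n, N_FEATURES)
                        measured_foms: torch.Tensor,  # (n, N_FOMS)
                        epochs: int = 200,
                        lr: float = 1e-3) -> nn.Module:
    """Fit the deep learning transform so that its estimates match
    the figures of merit measured on reconstructed training images
    (steps 206-208), using backpropagation as noted in the text."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), measured_foms)
        loss.backward()
        optimizer.step()
    return model
```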
It should be noted that in the training process of fig. 3, the reconstruction 202 and the figure of merit determination 204 may be performed as part of a clinical task in some embodiments. For example, the training of fig. 3 may employ a historical PET imaging session stored in a Picture Archiving and Communication System (PACS). Each such PET imaging session typically includes reconstructed images and the figure of merit(s) extracted from those images as part of the clinical assessment of the PET imaging session. Thus, these data may be effectively "pre-computed" as part of routine clinical practice, and may be identified and retrieved from PACS for use in training the deep learning transformation 30.
FIGS. 4 and 5 show more detailed flow charts of two embodiments of the imaging method 100. FIG. 4 illustrates an imaging method 400 in which the input data does not include imaging data. The inputs can include image acquisition parameter data (e.g., the portion of the subject to be imaged) 402, acquisition process data (e.g., dose and wait time, type of imaging system, imaging system specifications, etc.) 404, and reconstruction parameters 406. These inputs are provided to the trained deep learning transform 30 (e.g., a neural network). At 408, the trained neural network 30 estimates one or more figures of merit (e.g., noise, SUV mean, etc.) based on the inputs 402-406. At 410, the user's desired figures of merit are input to the trained neural network 30 of FIG. 4 (e.g., via the one or more user input devices 22 of FIG. 1). At 412, the at least one electronic processor 20 is programmed to determine whether the estimated figures of merit are acceptable relative to the user's desired figures of merit. If not, the acquisition parameters 402 are adjusted at 414 and operations 402-412 are repeated. If the figures of merit are acceptable, at 416 the at least one electronic processor 20 is programmed to control the image acquisition device 12 to acquire imaging data and to perform reconstruction of the PET image using the reconstruction parameters 406.
FIG. 5 shows another embodiment, an imaging method 500, in which the input data to the neural network includes statistical information of the acquired imaging data (but still not any reconstructed image). At 502, statistical information (e.g., total counts, counts per minute, etc.) is derived from the acquired list mode PET imaging data and input to the neural network 30. Operations 504-512 of FIG. 5 substantially correspond to operations 404-412 of FIG. 4 and are not repeated here for brevity. At 514, if the figures of merit are unacceptable, the reconstruction parameters are adjusted and the estimation is repeated using the already-acquired list mode PET imaging data. If the figures of merit are acceptable, at 516 the at least one electronic processor 20 is programmed to perform reconstruction of the PET image using the reconstruction parameters 506. A sketch of the statistics derivation at 502 follows.
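The statistics at 502 can be derived directly from the list mode event stream; the statistics named below are those given in the text (total counts, counts per minute), while the function layout is an assumption:

```python
def count_statistics(event_times_s: list[float]) -> dict[str, float]:
    """Derive summary statistics of acquired list mode PET data
    for input to the trained deep learning transform."""
    if not event_times_s:
        return {"total_counts": 0.0, "counts_per_minute": 0.0}
    total = float(len(event_times_s))
    duration_min = (max(event_times_s) - min(event_times_s)) / 60.0
    return {
        "total_counts": total,
        "counts_per_minute": total / duration_min if duration_min else 0.0,
    }
```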
The present disclosure has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (20)

1. A non-transitory computer readable medium storing instructions readable and executable by a workstation (18) comprising at least one electronic processor (20) to perform an imaging method (100), the method comprising:
estimating one or more figures of merit for a reconstructed image by applying a trained deep learning transform (30) to input data, the input data including at least imaging parameters but not the reconstructed image;
selecting values for the imaging parameters based on the estimated one or more figures of merit;
generating the reconstructed image using the selected values for the imaging parameters; and
displaying the reconstructed image.
2. The non-transitory computer-readable medium of claim 1,
the input data comprises imaging parameters, and the imaging parameters at least comprise image reconstruction parameters and statistical information of imaging data; and is
The generating comprises generating the reconstructed image by reconstructing the imaging data using the selected values for the image reconstruction parameters.
3. The non-transitory computer-readable medium of claim 2, wherein the input data does not include the imaging data.
4. The non-transitory computer readable medium of any one of claims 2-3, wherein the method further comprises:
reconstructing training imaging data to generate corresponding training images;
determining values of the one or more figures of merit for the training images by processing the training images;
estimating the one or more figures of merit for the training imaging data by applying the deep learning transform (30) to input data comprising at least the image reconstruction parameters and statistical information of the training imaging data; and
training the deep learning transform to match the estimates of the one or more figures of merit for the training imaging data to the determined values.
5. The non-transitory computer-readable medium of claim 1,
the input data comprises imaging parameters, the imaging parameters comprising at least image acquisition parameters; and is
The generating comprises: acquiring imaging data with the selected values for the image acquisition parameters using an image acquisition device (12), and reconstructing the acquired imaging data to generate the reconstructed image.
6. The non-transitory computer-readable medium of claim 5, wherein the input data does not include the acquired imaging data and does not include statistical information of the acquired imaging data.
7. The non-transitory computer readable medium of any one of claims 5 and 6, wherein the method further comprises:
reconstructing training imaging data to generate corresponding training images;
determining values of the one or more figures of merit for the training images by processing the training images;
estimating the one or more figures of merit for the training imaging data by applying the deep learning transform (30) to input data including at least the image acquisition parameters; and
training the deep learning transform to match the estimates of the one or more figures of merit for the training imaging data to the determined values.
8. The non-transitory computer-readable medium of any one of claims 1-7, wherein the selecting comprises:
comparing the estimated one or more figures of merit to target values for the one or more figures of merit;
adjusting the imaging parameters based on the comparing; and
repeating the estimating of the one or more figures of merit for the reconstructed image by applying the trained deep learning transform (30) to input data including at least the adjusted imaging parameters but not the reconstructed image.
9. The non-transitory computer-readable medium of any one of claims 1-8, wherein the one or more figures of merit include a standardized uptake value (SUV) for an anatomical region.
10. The non-transitory computer-readable medium of any one of claims 1-9, wherein the one or more figures of merit include a noise level for an anatomical region.
11. The non-transitory computer-readable medium of any one of claims 1-10, wherein the trained deep learning transform is a trained Support Vector Machine (SVM) or a trained neural network.
12. An imaging system (10), comprising:
a Positron Emission Tomography (PET) image acquisition device (12) configured to acquire PET imaging data; and
at least one electronic processor (20) programmed to:
estimating one or more figures of merit for a reconstructed image by applying a trained deep learning transform (30) to input data, the input data including at least image acquisition parameters and statistical information of imaging data but not the reconstructed image;
selecting values for image reconstruction parameters based on the estimated one or more figures of merit;
generating the reconstructed image by reconstructing the imaging data using the selected values for the image reconstruction parameters; and
controlling a display device (24) to display the reconstructed image.
13. The imaging system (10) of claim 12, wherein the input data does not include the imaging data.
14. The imaging system (10) according to either one of claims 12 and 13, wherein the at least one electronic processor (20) is programmed to:
reconstructing training imaging data to generate corresponding training images;
determining values of the one or more figures of merit for the training images by processing the training images;
estimating the one or more figures of merit for the training imaging data by applying the deep learning transform (30) to input data comprising at least the image reconstruction parameters and statistical information of the training imaging data; and
training the deep learning transform to match the estimates of the one or more figures of merit for the training imaging data to the determined values.
15. The imaging system (10) according to any one of claims 12-14, wherein the selecting includes:
comparing the estimated one or more figures of merit to target values for the one or more figures of merit;
adjusting the imaging parameters based on the comparing; and
repeating the estimating of the one or more figures of merit for the reconstructed image by applying the trained deep learning transform (30) to input data including at least the adjusted imaging parameters but not the reconstructed image.
16. The imaging system (10) according to any one of claims 12-15, wherein the one or more figures of merit include at least one of: standardized uptake values (SUVs) for anatomical regions, and noise levels for anatomical regions.
17. An imaging system (10), comprising:
a Positron Emission Tomography (PET) image acquisition device (12) configured to acquire PET imaging data; and
at least one electronic processor (20) programmed to:
estimating one or more figures of merit for a reconstructed image by applying a trained deep learning transform (30) to input data, the input data including at least image acquisition parameters but not the reconstructed image;
selecting values for the image acquisition parameters based on the estimated one or more figures of merit;
generating the reconstructed image by: acquiring imaging data with the selected values for the image acquisition parameters using the image acquisition device (12), and reconstructing the acquired imaging data to generate the reconstructed image; and
controlling a display device (24) to display the reconstructed image.
18. The imaging system (10) according to claim 17, wherein the input data does not include the acquired imaging data and does not include statistical information of the acquired imaging data.
19. The imaging system (10) according to either one of claims 17 and 18, wherein the at least one electronic processor (20) is programmed to:
reconstructing training imaging data to generate corresponding training images;
determining values of the one or more figures of merit for the training images by processing the training images;
estimating the one or more figures of merit for the training imaging data by applying the deep learning transform (30) to input data including at least the image acquisition parameters; and
training the deep learning transform to match the estimates of the one or more figures of merit for the training imaging data to the determined values.
20. The imaging system (10) according to any one of claims 17-19, wherein the selecting includes:
comparing the estimated one or more figures of merit to target values for the one or more figures of merit;
adjusting the imaging parameters based on the comparing; and
repeating the estimating of the one or more figures of merit for the reconstructed image by applying the trained deep learning transform (30) to input data including at least the adjusted imaging parameters but not the reconstructed image.
CN201980009536.0A 2018-01-22 2019-01-15 Image figure of merit prediction based on deep learning Pending CN111630572A (en)

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US201862620091P | 2018-01-22 | 2018-01-22 |
US62/620,091 | 2018-01-22 | |
PCT/EP2019/050869 (WO2019141651A1) | | 2019-01-15 | Deep learning based image figure of merit prediction

Publications (1)

Publication Number | Publication Date
CN111630572A | 2020-09-04

Family

ID=65031081

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201980009536.0A (CN111630572A, pending) | Image figure of merit prediction based on deep learning | 2018-01-22 | 2019-01-15

Country Status (4)

Country Link
US (1) US20200388058A1 (en)
EP (1) EP3743890A1 (en)
CN (1) CN111630572A (en)
WO (1) WO2019141651A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11037338B2 (en) * 2018-08-22 2021-06-15 Nvidia Corporation Reconstructing image data
CN110477937B (en) * 2019-08-26 2023-07-25 上海联影医疗科技股份有限公司 Scattering estimation parameter determination method, device, equipment and medium
US11776679B2 (en) * 2020-03-10 2023-10-03 The Board Of Trustees Of The Leland Stanford Junior University Methods for risk map prediction in AI-based MRI reconstruction
DE102020216040A1 (en) * 2020-12-16 2022-06-23 Siemens Healthcare Gmbh Method of determining an adjustment to an imaging study


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9152761B2 (en) * 2014-01-10 2015-10-06 Heartflow, Inc. Systems and methods for identifying medical image acquisition parameters
CN106659452B (en) * 2014-06-23 2020-05-15 美国西门子医疗解决公司 Reconstruction using multiple photoelectric peaks in quantitative single photon emission computed tomography
WO2016137972A1 (en) * 2015-02-23 2016-09-01 Mayo Foundation For Medical Education And Research Methods for optimizing imaging technique parameters for photon-counting computed tomography

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130136328A1 (en) * 2011-11-30 2013-05-30 General Electric Company Methods and systems for enhanced tomographic imaging
CN105144241A (en) * 2013-04-10 2015-12-09 皇家飞利浦有限公司 Image quality index and/or imaging parameter recommendation based thereon
CN106466188A (en) * 2015-08-20 2017-03-01 通用电气公司 For the quantitative system and method for emission tomography imaging
US20170351937A1 (en) * 2016-06-03 2017-12-07 Siemens Healthcare Gmbh System and method for determining optimal operating parameters for medical imaging
US20170337713A1 (en) * 2016-08-12 2017-11-23 Siemens Healthcare Gmbh Method and data processing unit for optimizing an image reconstruction algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GEORGIOS PAVLAKOS ET AL.: "Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose", CVPR 2017, pages 7025-7034 *

Also Published As

Publication number | Publication date
EP3743890A1 | 2020-12-02
US20200388058A1 | 2020-12-10
WO2019141651A1 | 2019-07-25


Legal Events

Code | Title | Date
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
AD01 | Patent right deemed abandoned | Effective date of abandoning: 2024-04-19