CN107374657B - Method for correcting CT scanning data and CT scanning system


Info

Publication number
CN107374657B
Authority
CN
China
Prior art keywords
scanning
neural network
network model
training
scan
Prior art date
Legal status
Active
Application number
CN201710524197.2A
Other languages
Chinese (zh)
Other versions
CN107374657A
Inventor
Liu Yanyan (刘炎炎)
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN201710524197.2A
Publication of CN107374657A
Priority to US15/954,953 (US10977843B2)
Priority to US17/228,690 (US11908046B2)
Application granted
Publication of CN107374657B
Priority to US18/437,210 (US20240185486A1)
Legal status: Active


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/58 Testing, adjusting or calibrating thereof
    • A61B 6/582 Calibration
    • A61B 6/583 Calibration using calibration phantoms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computed tomography [CT]
    • A61B 6/032 Transmission computed tomography [CT]

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Pulmonology (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a method for correcting CT scanning data and a CT scanning system. The method comprises the following steps: determining a first scanning parameter; obtaining a feature value based at least in part on the first scanning parameter; obtaining a trained neural network model; and inputting the feature value into the trained neural network model to obtain a correction coefficient corresponding to the first scanning parameter.

Description

Method for correcting CT scanning data and CT scanning system
Technical Field
The present invention relates to the field of medical imaging devices, and in particular, to a method for correcting CT scan data and a CT scanning system.
Background
Because a CT (Computed Tomography) apparatus actually uses multi-energy-spectrum X-rays, soft rays (low-energy rays) are attenuated more than hard rays (high-energy rays) as the penetration depth of the rays increases. This effect, known as beam hardening, causes the energy spectrum of the rays to change continuously, whereas image reconstruction in a CT apparatus is based on the attenuation of single-energy X-rays; spectral correction is therefore required to compensate for the change in energy spectrum. Currently, beam hardening correction is mostly performed by scanning air and a standard phantom separately under specific scan parameters, and then calculating correction coefficients according to a specific algorithm to correct the beam projection values. The correction coefficients obtained in this way are accurate, but apply only to those specific scan parameters. When clinical scan parameters change, for example when the voltage of the X-ray tube changes, the energy spectrum of the emitted X-rays also changes; the air and the phantom must then be rescanned and the correction coefficients recalculated, which is unfavorable for practical application.
Disclosure of Invention
In view of the above problems, the present invention provides a method for rapidly obtaining a correction coefficient under any scanning parameters.
To achieve this purpose, the invention provides the following technical solution:
the invention discloses a method for correcting CT scanning data, which comprises the following steps: determining a first scanning parameter; obtaining a feature value based at least in part on the first scanning parameter; obtaining a trained neural network model; and inputting the feature value into the trained neural network model to obtain a correction coefficient corresponding to the first scanning parameter. The feature value comprises at least one of the first scanning parameter and a first projection value, and the first projection value comprises a projection value obtained by scanning air under the first scanning parameter.
according to some embodiments of the invention, the obtaining the trained neural network model comprises: and constructing a neural network model, and training the neural network model to obtain the trained neural network model.
According to some embodiments of the invention, the training the neural network model comprises: acquiring one or more training samples, wherein the training samples comprise at least one of second scanning parameters and second projection values, and the second projection values comprise projection values obtained by scanning air under the second scanning parameters; and training the neural network model using the one or more training samples.
According to some embodiments of the invention, the training of the neural network model comprises: using min ‖{a_i} - {a_i}_ideal‖ as the optimization objective function, where {a_i} is the output value obtained after the training sample is input into the neural network model, and {a_i}_ideal is the reference output value of the training sample.
According to some embodiments of the invention, the training of the neural network model using the one or more training samples comprises: and judging whether the trained neural network model meets a preset condition, and stopping training the neural network model if the trained neural network model meets the preset condition.
According to some embodiments of the invention, the method further comprises the steps of: acquiring CT scanning data under the first scanning parameters; and correcting the CT scan data using the correction coefficient.
According to some embodiments of the invention, correcting the CT scan data using the correction factor comprises using a correction function as follows:
y = Σ_{i=1}^{j} a_i x^i
where j is the correction order, x is the CT scan data, a_i are the correction coefficients, and y is the corrected CT scan data.
The invention also discloses a CT scanning system, comprising: a processing component configured to determine first scan parameters; a correction coefficient calculation component configured to obtain a feature value based at least in part on the first scanning parameter, obtain a trained neural network model, and input the feature value into the trained neural network model to obtain a correction coefficient corresponding to the first scanning parameter.
According to some embodiments of the invention, the system further comprises a scanning component configured to acquire CT scan data based on the first scan parameters; wherein the processing component is further configured to correct the CT scan data using the correction factor.
According to some embodiments of the invention, the system further comprises a sample acquisition module and a model training module. The sample acquisition module is configured to acquire one or more training samples including at least one of a second scan parameter and a second projection value, the second projection value including a projection value obtained by scanning air under the second scan parameter. The model training module is configured to train the neural network model using the one or more training samples.
Drawings
FIG. 1 is a schematic diagram of a CT scanning system according to the present invention;
FIG. 2 is a schematic diagram of a neural network model provided in accordance with some embodiments of the present invention;
FIG. 3 is a schematic flow chart of a method for obtaining correction coefficients based on a neural network model according to the present invention;
FIG. 4 is a schematic flow chart of a method of training a neural network model to obtain a trained neural network model in accordance with the present invention;
FIG. 1 labels: 100 is the CT scanning system, 110 is the scanning assembly, 111 is the gantry, 112 is the scanning bed, 113 is the X-ray source, 114 is the X-ray detector, 120 is the network, 130 is the processing assembly, 140 is the storage assembly, 150 is the correction coefficient calculation assembly, 152 is the sample acquisition module, 154 is the storage module, 156 is the model training module, and 158 is the correction coefficient calculation module;
FIG. 2 labels: 201 is the input of the neural network model, 202 is the neural network layer, and 203 is the output.
Detailed Description
The invention is further described by means of specific embodiments in conjunction with the accompanying drawings.
Fig. 1 is a schematic diagram of a CT scanning system 100 according to some embodiments of the present invention. As shown in FIG. 1, the CT scanning system 100 includes a scanning component 110, a processing component 130, a storage component 140, and a correction factor calculation component 150. The various components of the CT scanning system 100 may be interconnected by a network 120.
The scanning assembly 110 may include a gantry 111, a scanning couch 112, an X-ray source 113, and an X-ray detector 114. The gantry 111 contains a hollow chamber as a scanning chamber that at least partially houses the scanning couch 112. The gantry 111 is rotatable and the X-ray source 113 thereon is capable of generating X-rays, scanning the object to be scanned from different angles and obtaining projection values, which can be used for image reconstruction to obtain a CT image.
The scanning bed 112 may support an object to be scanned. The object to be scanned may be a patient or a phantom, but also other scanned objects. The scanning couch 112 may be parallel to the floor.
The X-ray source 113 may generate an X-ray beam that passes through the object to be scanned to the X-ray detector 114. The X-ray detector 114 receives the attenuated X-ray beam passing through the object to be scanned and obtains actual projection values.
The X-ray source 113 may include a high-voltage tube (not shown) therein for generating an X-ray beam. The high-voltage tube may emit multi-energy-spectrum X-rays, which may include soft rays (low-energy rays) and hard rays (high-energy rays). When scanning the object to be scanned, the X-ray beam can penetrate the object. As the penetration depth of the X-ray beam increases, the soft rays (low-energy rays) are attenuated more than the hard rays (high-energy rays), i.e., the beam hardens, resulting in a continuous change in the energy spectrum of the X-ray beam. Because image reconstruction in CT scanning systems is based on the attenuation of monoenergetic X-rays, spectral correction is required to compensate for these spectral variations. The goal of the spectral correction is to obtain a set of correction coefficients that can correct the actual projection values received by the X-ray detector 114. In some embodiments, the correction of the actual projection values by the correction coefficients may be implemented by the function shown in equation (1):
y = Σ_{i=1}^{j} a_i x^i    (1)
In equation (1), j is a configurable correction order; for example, j may be 3 or 4. In some embodiments, the correction order may be determined by the user. x is the actual projection value received by the X-ray detector 114, a_i are the correction coefficients, and y is the corrected projection value, which can be used for CT image reconstruction.
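As an illustration of how the polynomial correction of equation (1) can be applied, the following minimal sketch evaluates y = Σ_{i=1}^{j} a_i x^i for a measured projection value. The function name and the sample coefficient values are illustrative assumptions, not taken from the patent.

```python
def correct_projection(x, coeffs):
    """Apply the beam-hardening correction y = sum_{i=1}^{j} a_i * x**i.

    coeffs[0] corresponds to a_1, coeffs[1] to a_2, and so on,
    so len(coeffs) equals the correction order j.
    """
    return sum(a * x ** (i + 1) for i, a in enumerate(coeffs))

# With a_1 = 1 and all higher-order terms zero, the projection is unchanged.
print(correct_projection(2.0, [1.0, 0.0, 0.0]))  # → 2.0
```

A small nonzero a_2 or a_3 then bends the response, which is exactly how the higher-order terms compensate for beam hardening.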
The network 120 may connect the components of the CT scanning system 100 to allow data exchange between the components of the CT scanning system 100. The network 120 may be a wired network or a wireless network, or a combination thereof.
The processing component 130 is a control and data processing portion of the CT scanning system 100 and may be configured to process data, generate control signals to control the operation of the CT scanning system 100, and the like.
The storage component 140 may be configured to store data for the CT scanning system 100. For example, the storage component 140 can store a CT scan protocol, scan parameters, scan projection values, CT images, ideal correction coefficients corresponding to particular scan parameters, and the like.
The correction coefficient calculation component 150 may be configured to calculate beam hardening correction coefficients based on a neural network model. The neural network model is a nonlinear algorithm comprising a plurality of parameters; after training, it can extract features of input data and produce outputs based on the extracted features. A more specific description of neural network models according to the present application may be found in FIG. 2 and its corresponding description. The correction coefficient calculation component 150 may include a sample acquisition module 152, a storage module 154, a model training module 156, and a correction coefficient calculation module 158, which are described in detail below. In some embodiments, the correction coefficient calculation component 150 may be a separate component coupled to the processing component 130 and/or the network 120. For example, the correction coefficient calculation component 150 may be a computing device, such as a personal computer, a server, a tablet computer, or a mobile phone. In some embodiments, the correction coefficient calculation component 150 may be integrated into the processing component 130 and/or the CT scanning system 100.
The sample acquisition module 152 may acquire training samples. The training samples are used to train the neural network model. The training samples may include input values and reference output values. The input values of the training samples may include at least one of air projection values and/or scan parameters. For example, in some embodiments, the input values for the training samples may include a scan parameter and an air projection value corresponding to the scan parameter. In other embodiments, the input values for the training samples may include scan parameters. In still other embodiments, the input values for the training samples may include air projection values corresponding to certain scan parameters. The scan parameter may be a high voltage tube voltage and/or current value. The air projection values may be projection values received at the X-ray detector 114 after the CT scanning system 100 scans the air under certain scanning parameters. For example, no object is placed on the scanning bed 112, and then the X-ray source 113 is controlled to emit X-rays for scanning and the X-rays are received by the X-ray detector 114, so as to obtain air projection values. The reference output value of the sample may comprise one or more ideal correction coefficients. In some embodiments, the ideal correction coefficients may be calculated after scanning a standard phantom. Methods for obtaining the ideal correction factor by scanning a standard phantom have been described in the prior art and are not specifically set forth in the present invention.
The sample acquisition module 152 may acquire data from other components of the CT scanning system 100 to obtain training samples. For example, the sample acquisition module 152 acquires the scan parameters, the air projection values, and the ideal correction coefficients directly from the storage component 140 to obtain the training samples. For another example, the sample acquiring module 152 may acquire the scan parameters from the processing component 130, acquire the air projection values under the scan parameters from the X-ray detector 114, and acquire the ideal correction coefficients under the scan parameters from the storage component 140 to obtain the training samples. The sample acquisition module 152 may store the obtained training samples in the storage module 154, or may store the training samples in the storage component 140.
The storage module 154 may store data, which may include training sample sets, neural network models, trained neural network models, data used in training the models, and the like, as will be described below.
The model training module 156 may train the neural network model. Training the neural network model means inputting the input values of a training sample into the model, which computes them to obtain output values. The current parameters of the model are then adjusted in reverse using the optimization function. In some embodiments, this reverse adjustment may be an iterative process: after each training pass with a training sample, the parameters in the neural network model change and serve as the "initialization parameters" for the next training sample. For example, in some embodiments, the parameters of the model may be adjusted in reverse using the function shown in equation (2):
min ‖{a_i} - {a_i}_ideal‖    (2)
where {a_i} is the output value of the neural network model and {a_i}_ideal is the reference output value in the training sample. The objective of the optimization function is to adjust the parameters of the model so as to minimize the difference between the output value of the neural network model and the reference output value. After training with a plurality of training samples, the parameter values in the model reach their optimal values, i.e., the difference between the output value of the neural network model and the reference output value is minimal, and the training is finished. After the training is finished, the trained neural network model, whose parameter values are fixed, is obtained. The trained neural network model may be sent to the storage module 154 and/or the storage component 140 for storage, and may also be sent to the correction coefficient calculation module 158.
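The reverse adjustment driven by equation (2) can be sketched with a tiny one-layer "network" W whose output is pulled toward the reference coefficients {a_i}_ideal by gradient descent on ‖{a_i} - {a_i}_ideal‖. The layer shape, learning rate, and sample values below are illustrative assumptions standing in for the real model and training samples.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))          # 2 input features -> 3 correction coefficients

x = np.array([120.0, 9.5])           # e.g. a tube voltage (kV) and an air projection value
a_ideal = np.array([1.0, 0.02, 0.001])  # reference output {a_i}_ideal

lr = 1e-6
for _ in range(5000):
    a_out = W @ x                        # forward pass: predicted coefficients {a_i}
    grad = np.outer(a_out - a_ideal, x)  # gradient of 0.5 * ||a_out - a_ideal||^2
    W -= lr * grad                       # reverse adjustment of the parameters

print(np.linalg.norm(W @ x - a_ideal) < 1e-3)  # → True
```

In the patent's setting this update would run over many samples with different scan parameters, so the model generalizes rather than memorizing one pair.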
The correction coefficient calculation module 158 may calculate the correction coefficients using the trained neural network model. The correction coefficient calculation module 158 may obtain the feature values, input the feature values as input values into the trained neural network model, and obtain output values as correction coefficients. Wherein the characteristic values may include air projection values obtained by scanning air by the CT scanning system 100, and may further include scanning parameters. In some embodiments, the correction coefficient calculation module 158 may obtain the characteristic values from the scanning component 110, the processing component 130, and/or the storage component 140.
Fig. 2 is a schematic structural diagram of a neural network model provided according to some embodiments of the present invention. The neural network model may include an input 201, a neural network layer 202, and an output 203. The input 201 of the neural network model is used to receive input values and pass them to the neural network layer. The input may include one or more values, such as x_kV1, x_kV2, …, x_kVN. The neural network layer 202 may process the input values, extract and identify their features, and output an operation result. The neural network layer 202 may include one or more operation layers, each of which may include one or more nodes, and each node may include one or more parameters. Each node in the neural network layer 202 may receive the outputs of all nodes in the previous operation layer as its input and, after its operation, output its result to all nodes in the subsequent operation layer. The nodes of the first operation layer receive all the input values from the input, and the operation results of the nodes of the last operation layer are output to the output 203. The output 203 is used to receive the operation result of the neural network layer and take it as the output value of the neural network model. For example, if the operation results of the one or more nodes in the last operation layer of the neural network layer 202 are a_1, a_2, …, a_M, then the output 203 outputs a_1, a_2, …, a_M as the output values of the neural network model. The user can thus determine the number of output values of the neural network model by adjusting the number of nodes of the last operation layer in the neural network layer 202.
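The fully connected structure described for Fig. 2 can be sketched as a simple forward pass in which every node consumes all outputs of the previous layer. The layer sizes and the tanh activation are assumptions for the sketch, not taken from the patent; the point is that the node count of the last layer fixes the number of output values.

```python
import numpy as np

def forward(x, layers):
    """Propagate input x through fully connected layers given as (W, b) pairs."""
    out = np.asarray(x, dtype=float)
    for W, b in layers:
        out = np.tanh(W @ out + b)   # every node sees every previous-layer output
    return out

rng = np.random.default_rng(1)
layers = [
    (rng.normal(size=(8, 2)), np.zeros(8)),   # first layer: 2 inputs (e.g. x_kV1, x_kV2)
    (rng.normal(size=(4, 8)), np.zeros(4)),   # last layer: 4 nodes -> outputs a_1 … a_4
]
a = forward([0.5, -0.2], layers)
print(a.shape)  # → (4,)
```

Changing the last tuple to `size=(M, 8)` changes the number of output coefficients to M, matching the remark about adjusting the last operation layer.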
Fig. 3 is a flowchart illustrating a method 300 for obtaining a correction coefficient based on a neural network model according to the present invention.
As shown in fig. 3, in step 310, a first scan parameter may be set. When a user performs a CT scan, first scan parameters may be set in the processing component 130.
In step 320, the processing component 130 can determine whether the set first scan parameter has a corresponding ideal correction coefficient. For example, after receiving the first scan parameter set by the user, the processing component 130 may search the storage component 140 to determine whether it contains a scan parameter identical to the first scan parameter together with a corresponding ideal correction coefficient. If the storage component 140 already contains such a scan parameter and its corresponding ideal correction coefficient, the flow proceeds to step 330. In step 330, the processing component 130 may obtain the ideal correction coefficient corresponding to the first scan parameter from the storage component 140 as the first correction coefficient.
In step 340, the CT scanning system 100 may scan the object to be scanned by using the set first scan parameter to obtain an actual projection value.
If the storage component 140 contains no scan parameter identical to the first scan parameter, or no corresponding ideal correction coefficient, the flow proceeds to step 350.
In step 350, the user may control the CT scanning system 100 to acquire feature values based on the set first scanning parameters. The characteristic values may include at least one of first scan parameters and first projection values, wherein the first projection values include air projection values obtained by scanning air at the first scan parameters. For example, in some embodiments, the feature value may comprise a first projection value. In other embodiments, the characteristic value may comprise a first scanning parameter. In still other embodiments, the feature values may include first scan parameters and first projection values. The characteristic value may be sent to the processing component 130 and/or the storage component 140.
In step 360, the correction coefficient calculation module 158 in the correction coefficient calculation component 150 may obtain the feature values obtained in step 350 and input them as input values into the trained neural network model, resulting in one or more second correction coefficients. Wherein the trained neural network model is a neural network model obtained after training is completed. The specific process of training the neural network model to obtain the trained neural network model will be described below. After obtaining the second correction coefficient, the correction coefficient calculation component 150 may send the second correction coefficient to the processing component 130 and/or the storage component 140.
In step 370, the processing component 130 may correct the actual projection values by using the first correction coefficient or the second correction coefficient, so as to obtain corrected projection values for CT image reconstruction. It is to be understood that, when the determination in step 320 is "yes," the actual projection value may be corrected using the first correction coefficient in step 370; when the determination in step 320 is "no," the actual projection values may be corrected using the second correction coefficient.
It will be appreciated by those skilled in the art that the above-described procedure is merely an exemplary illustration of correcting the projection values, and that other variations are possible. For example, in some embodiments, steps 320 and 330 may be omitted. In addition, the execution order of each step in the above-described flow may vary, for example, step 340 may be executed before step 350.
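The lookup-then-predict flow of Fig. 3 can be summarized as a short sketch: use a stored ideal coefficient when the scan parameter is known (steps 320/330), otherwise fall back to the trained model (steps 350/360). The table contents and the model stub below are illustrative assumptions.

```python
# Hypothetical lookup table: scan parameter (tube voltage in kV) -> ideal coefficients.
stored_coefficients = {120.0: [1.0, 0.021, 0.0008]}

def predict_with_model(scan_parameter):
    """Stand-in for the trained neural network of step 360 (illustrative only)."""
    return [1.0, 0.02 * scan_parameter / 120.0, 0.001]

def get_correction_coefficients(scan_parameter):
    if scan_parameter in stored_coefficients:          # step 320: a match is found
        return stored_coefficients[scan_parameter]     # step 330: first correction coefficient
    return predict_with_model(scan_parameter)          # step 360: second correction coefficient

print(get_correction_coefficients(120.0))  # stored ideal coefficients
print(get_correction_coefficients(100.0))  # falls back to the model prediction
```

Either branch then feeds step 370, where the chosen coefficients correct the actual projection values.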
FIG. 4 is a flow diagram illustrating a method 400 of training a neural network model to obtain a trained neural network model in accordance with the present invention.
As shown in fig. 4, in step 410, a set of training samples is obtained. Those skilled in the art will appreciate that training a neural network model requires training samples. The set of training samples includes one or more training samples, which may be obtained, for example, using sample acquisition module 152. In some embodiments, the sample acquisition module 152 may acquire the second scan parameter, the second projection value, and the ideal correction coefficient to obtain a training sample; in other embodiments, the sample acquisition module 152 may acquire the second projection values and the ideal correction coefficients to obtain training samples; in still other embodiments, the sample acquiring module 152 may acquire the second scanning parameter and the ideal correction coefficient corresponding to the second scanning parameter to obtain the training sample. Wherein the second projection values comprise air projection values obtained by scanning air under the second scanning parameters. The sample acquisition module 152 may send the training samples to the storage module 154 for storage.
Different second projection values and ideal correction coefficients can be obtained by using different second scanning parameters, for example, by changing the voltage or current value of the high-voltage tube in the X-ray source 113, i.e., different training samples can be obtained. After one or more training samples are obtained, a training sample set is obtained. The training sample set may be stored in the storage module 154 and/or the storage component 140.
In step 420, the model training module 156 obtains a training sample from the training sample set; the reference output value of the sample is the ideal correction coefficient.
In step 430, the model training module 156 may train the neural network model using the training sample. The model training module 156 may receive the neural network model from the storage module 154 or may construct the neural network model itself, and at least some parameters of the neural network model may be initialized, for example, randomly. The model training module 156 inputs the input value of the training sample to the neural network model, which processes it, including feature extraction and feature recognition, and finally obtains a set of output values. The model training module 156 then uses the optimization function to adjust the parameters of the model in reverse. For example, in some embodiments, the optimization function min ‖{a_i} - {a_i}_ideal‖ shown in equation (2) may be used to adjust the parameters of the model in reverse, where {a_i} is the output value of the neural network model and {a_i}_ideal is the reference output value of the training sample; the objective of the optimization function is to adjust the parameters of the model in the reverse direction to minimize the difference between the output value of the neural network model and the reference output value. After each training pass, the parameters in the neural network model change and serve as the "initialization parameters" for the next training sample input.
In step 440, the model training module 156 determines whether the trained neural network model satisfies a preset condition, which may be determined by the user. For example, in some embodiments, the preset condition may be that the number of training samples used for training reaches a preset value; in other embodiments, the preset condition may be that the trained neural network model passes a test. If the determination result is "yes", then in step 450 the trained neural network model is obtained and sent to the storage module 154 and/or the correction coefficient calculation module 158; the trained neural network model may also be sent to the storage component 140. If the result is "no", the flow returns to step 420 to obtain further training samples for training, as described above.
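The loop of Fig. 4 (draw a sample, train, check the preset condition, repeat) can be sketched as follows, using the text's example condition that the number of trained samples reaches a preset value. The `train_step` stub is an assumption standing in for the real parameter update of step 430.

```python
def train_step(sample):
    """Placeholder for step 430: forward pass plus reverse parameter adjustment."""
    pass

def train_until_condition(samples, preset_count):
    trained = 0
    for sample in samples:           # step 420: fetch the next training sample
        train_step(sample)           # step 430: train on this sample
        trained += 1
        if trained >= preset_count:  # step 440: preset condition satisfied?
            break                    # step 450: model is considered trained
    return trained

print(train_until_condition(range(100), preset_count=10))  # → 10
```

A test-based preset condition would simply replace the counter check with an evaluation of the model on held-out samples.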
It is to be understood that if the second scan parameter is not included in the training sample obtained by the sample obtaining module 152 in step 410, the feature value input to the trained neural network model in step 360 of the method 300 may not include the first scan parameter; if the second projection value is not included in the training sample obtained by the sample obtaining module 152 in step 410, the feature value input to the trained neural network model in step 360 of the method 300 may not include the first projection value.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method of correcting CT scan data, comprising:
determining a first scanning parameter;
obtaining a feature value based at least in part on the first scanning parameter;
obtaining a trained neural network model;
inputting the feature value into the trained neural network model to obtain a correction coefficient corresponding to the first scanning parameter, wherein
the feature value comprises at least one of the first scanning parameter and a first projection value, and the first projection value comprises a projection value obtained by scanning air under the first scanning parameter;
the trained neural network model comprises a neural network model trained using one or more training samples comprising at least one of second scan parameters and second projection values.
2. The method of claim 1, wherein the second projection values comprise projection values obtained by scanning air under the second scan parameters.
3. The method of claim 1, wherein training the neural network model comprises: using min |{a_i} - {a_i}_ideal| as the optimization objective function, where {a_i} is the set of output values obtained after the training sample is input into the neural network model, and {a_i}_ideal is the set of reference output values of the training sample.
4. The method of claim 1, wherein training the neural network model using one or more training samples comprises: determining whether the trained neural network model satisfies a preset condition, and stopping training the neural network model if the preset condition is satisfied.
5. The method of claim 1, further comprising:
acquiring CT scanning data under the first scanning parameters; and
correcting the CT scan data using the correction coefficient.
6. The method of claim 5, wherein correcting the CT scan data using the correction coefficient comprises using a correction function as follows:
y = Σ_{i=0}^{j} a_i · x^i
where j is the correction order, x is the CT scan data, a_i are the correction coefficients, and y is the corrected CT scan data.
7. A CT scanning system, comprising:
a processing component configured to determine first scan parameters;
a correction coefficient calculation component configured to obtain a feature value based at least in part on the first scan parameter;
obtaining a trained neural network model;
inputting the feature value into the trained neural network model to obtain a correction coefficient corresponding to the first scan parameters;
wherein the feature value comprises at least one of the first scan parameters and a first projection value, the first projection value comprising a projection value obtained by scanning air under the first scan parameters;
the trained neural network model comprises a neural network model trained using one or more training samples comprising at least one of second scan parameters and second projection values.
8. The system of claim 7, further comprising:
a scanning component configured to acquire CT scan data based on the first scan parameters; wherein the processing component is further configured to correct the CT scan data using the correction coefficient.
9. The system of claim 7, wherein the second projection values comprise projection values obtained by scanning air under the second scan parameters.
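The correction function recited in claim 6 is a simple polynomial evaluation. The following is a minimal sketch with hypothetical coefficient values; in the claimed method the coefficients a_i are produced by the trained neural network model.

```python
# Sketch of the claim 6 correction: a raw scan datum x is corrected to
# y = sum_{i=0}^{j} a_i * x**i, with correction order j.

def correct_scan_datum(x, coefficients):
    """Polynomial correction of order j = len(coefficients) - 1."""
    return sum(a_i * x**i for i, a_i in enumerate(coefficients))

# usage: with hypothetical coefficients a = [0.5, 1.2, -0.01],
# a raw value x = 10.0 is corrected to 0.5 + 12.0 - 1.0 = 11.5
corrected = correct_scan_datum(10.0, [0.5, 1.2, -0.01])
```

With coefficients [0, 1] the correction is the identity, which corresponds to scan data that needs no adjustment.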
CN201710524197.2A 2017-06-28 2017-06-30 Method for correcting CT scanning data and CT scanning system Active CN107374657B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201710524197.2A CN107374657B (en) 2017-06-30 2017-06-30 Method for correcting CT scanning data and CT scanning system
US15/954,953 US10977843B2 (en) 2017-06-28 2018-04-17 Systems and methods for determining parameters for medical image processing
US17/228,690 US11908046B2 (en) 2017-06-28 2021-04-12 Systems and methods for determining processing parameter for medical image processing
US18/437,210 US20240185486A1 (en) 2017-06-28 2024-02-08 Systems and methods for determining parameters for medical image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710524197.2A CN107374657B (en) 2017-06-30 2017-06-30 Method for correcting CT scanning data and CT scanning system

Publications (2)

Publication Number Publication Date
CN107374657A CN107374657A (en) 2017-11-24
CN107374657B true CN107374657B (en) 2021-05-11

Family

ID=60334713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710524197.2A Active CN107374657B (en) 2017-06-28 2017-06-30 Method for correcting CT scanning data and CT scanning system

Country Status (1)

Country Link
CN (1) CN107374657B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3746979B1 (en) * 2018-01-31 2023-11-15 Koninklijke Philips N.V. Image quality improved virtual non-contrast images generated by a spectral computed tomography (ct) scanner
CN109523024A (en) * 2018-11-22 2019-03-26 Tianjin University Energy spectrum correction method for medical X-ray detectors
CN111935892B (en) * 2019-05-13 2022-11-22 Zhongke Zhiyun Technology Co., Ltd. Method and apparatus for measuring plasma state
CN112014870A (en) * 2019-05-31 2020-12-01 Canon Medical Systems Corporation Radiation detection device, energy correction method, and program

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1743835A (en) * 2004-07-14 2006-03-08 Toshiba Corporation X-ray computed tomography imaging system and method for correcting data of the system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03102477A (en) * 1989-06-26 1991-04-26 Fuji Photo Film Co Ltd Radiation image processing device
US6422751B1 (en) * 1998-08-07 2002-07-23 General Electric Company Method and system for prediction of exposure and dose area product for radiographic x-ray imaging
US6950493B2 (en) * 2003-06-25 2005-09-27 Besson Guy M Dynamic multi-spectral CT imaging
CN101336828B (en) * 2007-07-06 2010-10-13 GE Medical Systems Global Technology Co., Ltd. Acquisition method and device of CT value correction paper
CN102768759B (en) * 2012-07-04 2014-11-26 Shenzhen Anke High-tech Co., Ltd. Intraoperative CT image beam hardening artifact correction method and device
CN103961125B (en) * 2013-01-31 2016-01-20 Northeastern University CT value correction method for cone-beam CT
US9918701B2 (en) * 2014-09-03 2018-03-20 Contextvision Ab Methods and systems for automatic control of subjective image quality in imaging of objects
CN106327495A (en) * 2016-08-26 2017-01-11 Mu Dawen Biological bone recognition method, device and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1743835A (en) * 2004-07-14 2006-03-08 Toshiba Corporation X-ray computed tomography imaging system and method for correcting data of the system

Also Published As

Publication number Publication date
CN107374657A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN107374657B (en) Method for correcting CT scanning data and CT scanning system
CN106413236B (en) Exposure parameter adjustment method and device
US10417794B2 (en) Reconstructing CT image
CN106683146B (en) Image reconstruction method and parameter determination method of image reconstruction algorithm
CN104103055B (en) Automatic acquisition of optimal output data
CN110811663A (en) Multi-region scanning method, device, equipment and storage medium
JP6446361B2 (en) X-ray CT apparatus and correction processing apparatus
JPWO2014041889A1 (en) X-ray CT apparatus and X-ray CT image processing method
WO2017045620A1 (en) Computed tomography method and system
CN111627083B (en) Bone hardening artifact correction method, device, computer equipment and readable storage medium
CN110916703B (en) Scanning dose modulation method, scanning device, scanning apparatus and storage medium
CN108670282A (en) Bone hardening artifact correction method
KR102472464B1 (en) Image Processing Method and Image Processing Device using the same
JP2014000407A (en) Method and system for concurrent update successive approximation reconstruction (ir)
US11158050B2 (en) Bone suppression for chest radiographs using deep learning
JP6083990B2 (en) Radiation imaging apparatus, control method thereof, and program
CN105374014B (en) Method for correcting image and device, medical image generation method and device
CN113096210A (en) Image reconstruction method and device, electronic equipment and storage medium
CN110811662A (en) Method, device and equipment for modulating scanning dose and storage medium
EP4123572A2 (en) An apparatus and a method for x-ray image restoration
CN113838161B (en) Sparse projection reconstruction method based on graph learning
CN114638910A (en) Scattering correction method and system and readable storage medium
CN113627492B (en) Method and device for determining size of scanning object, and electronic equipment
KR101834458B1 (en) Image correction method and device using an optimization threshold
CN113539440B (en) CT image reconstruction method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 2258, Chengbei Road, Jiading District, Shanghai 201807

Applicant after: Shanghai Lianying Medical Technology Co., Ltd

Address before: No. 2258, Chengbei Road, Jiading District, Shanghai 201807

Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

GR01 Patent grant