CN111340903B - Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image - Google Patents

Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image

Info

Publication number
CN111340903B
CN111340903B (application CN202010084751.1A)
Authority
CN
China
Prior art keywords
image
attenuation
pet
discriminator
corrected pet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010084751.1A
Other languages
Chinese (zh)
Other versions
CN111340903A (en)
Inventor
胡战利
梁栋
李永昌
郑海荣
杨永峰
刘新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010084751.1A priority Critical patent/CN111340903B/en
Publication of CN111340903A publication Critical patent/CN111340903A/en
Application granted granted Critical
Publication of CN111340903B publication Critical patent/CN111340903B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating

Abstract

The invention discloses a method and a system for generating a synthetic PET-CT image based on a non-attenuation-corrected PET image. The method comprises the following steps: constructing a first generative adversarial network and a second generative adversarial network; training the first generative adversarial network to obtain the mapping relationship between the non-attenuation-corrected PET image and the attenuation-corrected PET image; training the second generative adversarial network to obtain the mapping relationship between the attenuation-corrected PET image and the CT image; and generating a synthetic PET/CT image using the obtained mapping relationships. The invention can synthesize high-quality PET/CT images directly from non-attenuation-corrected PET images, which reduces the patient's medical costs and minimizes the radiation dose received during the examination.

Description

Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
Technical Field
The present invention relates to the field of medical image processing technology, and more particularly, to a method and system for generating a synthetic PET-CT image based on a non-attenuation corrected PET image.
Background
Positron Emission Tomography (PET) is a non-invasive imaging technique. After a specific tracer such as 18F-fluorodeoxyglucose (FDG) is injected, data are acquired with a PET scanner and processed through a series of subsequent steps; the reconstructed PET image clearly reflects the metabolic level of the patient's tissues or organs, so the technique can be used clinically for early screening of tumors and for post-operative staging. However, PET imaging reflects only functional information of the tissue or organ; an additional Computed Tomography (CT) scan is still required to perform attenuation correction of the PET data and to provide the anatomical information needed to localize lesions, which inevitably exposes the patient to extra ionizing radiation and may increase the risk of developing cancer. An effective method for generating pseudo-CT from medical images of another modality, such as Magnetic Resonance Imaging (MRI), to replace the additional anatomical imaging in a PET scan would reduce the radiation dose received by the patient and the high cost of PET/CT examinations, and therefore has important scientific significance and application prospects.
In the prior art, schemes for generating CT images mainly suffer from the following problems: a direct PET/CT examination is expensive, and the patient receives a larger radiation dose during the examination; methods that estimate CT-based attenuation correction for PET from MRI require a long time to acquire the MRI sequence, and the accuracy of manual data registration directly affects the quality of the resulting images; and involuntary movement of the patient's limbs during a PET/MRI examination may cause truncation artifacts, among other issues.
Disclosure of Invention
The invention aims to provide a method and a system for generating a synthetic PET-CT image based on a non-attenuation-corrected PET image, namely a new technical scheme for synthesizing a PET/CT image from a non-attenuation-corrected PET image with a deep learning method.
According to a first aspect of the present invention there is provided a method of generating a synthetic PET-CT image based on a non-attenuation corrected PET image, comprising the steps of:
constructing a first generative adversarial network comprising a first generator and a first discriminator, and constructing a second generative adversarial network comprising a second generator and a second discriminator;
performing feature extraction with a non-attenuation-corrected PET image as the input of the first generator to obtain a synthetic attenuation-corrected PET image, taking the synthetic attenuation-corrected PET image as the input of the first discriminator, and training with the attenuation-corrected PET image as the reference image of the first discriminator, to obtain a mapping relationship G1 between the non-attenuation-corrected PET image and the attenuation-corrected PET image;
performing feature extraction with the attenuation-corrected PET image as the input of the second generator to obtain a synthetic CT image, taking the synthetic CT image as the input of the second discriminator, and training with the CT image from the training data as the reference image of the second discriminator, to obtain a mapping relationship G2 between the attenuation-corrected PET image and the CT image;
generating a synthetic PET/CT image from the non-attenuation-corrected PET image using the mapping relationships G1 and G2.
In one embodiment, the first generative adversarial network and the second generative adversarial network have the same or different network structures.
In one embodiment, the first generator and the second generator have the same network structure, comprising in sequence a plurality of convolution and pooling layers, a plurality of residual blocks, and a plurality of deconvolution layers corresponding to the convolution layers, wherein each convolution layer is connected to the corresponding deconvolution layer by a skip connection.
In one embodiment, the first and second discriminators employ a fully connected neural network to determine whether an input image is from the output of a corresponding generator or from a reference image.
In one embodiment, the first discriminator and the second discriminator compare the distribution similarity between the output image from the corresponding generator and the reference image based on the distance.
In one embodiment, the first and second generative adversarial networks are Wasserstein generative adversarial networks, and the adversarial loss is defined as:
\mathcal{L}_{adv} = \mathbb{E}_{x}\left[ D(G(x)) \right] - \mathbb{E}_{y}\left[ D(y) \right] + \lambda \, \mathbb{E}_{\hat{x}}\left[ \left( \left\| \nabla_{\hat{x}} D(\hat{x}) \right\|_{2} - 1 \right)^{2} \right]
where λ represents a hyper-parameter, x represents the input image, G(x) represents the output of the generator, y represents the reference image, E(·) denotes the expectation used to compute the EM distance, x̂ represents samples drawn at random between the generator output and the reference image, and D(·) represents the discrimination process of the discriminator.
In one embodiment, during the training process, the overall objective function is set to include an adversarial loss term and one or more of a mean-square-error loss term, an image gradient loss term, a multi-scale content loss term, and a structural similarity loss term.
In one embodiment, during training, the overall objective function is set to:
\mathcal{L}_{total} = \lambda_{0} \mathcal{L}_{adv} + \lambda_{mse} \mathcal{L}_{mse}(I_{AC}, I_{sAC}) + \lambda_{gdl} \mathcal{L}_{gdl}(I_{AC}, I_{sAC}) + \lambda_{content} \mathcal{L}_{content}(I_{AC}, I_{sAC}) + \lambda_{ssim} \mathcal{L}_{ssim}(I_{AC}, I_{sAC})
wherein I_AC represents the true attenuation-corrected PET image, I_sAC represents the synthetic attenuation-corrected PET image, λ_0 represents the weight of the adversarial loss, λ_mse the weight of the mean square error loss, λ_gdl the weight of the image gradient loss, λ_content the weight of the multi-scale content loss, and λ_ssim the weight of the structural similarity loss.
According to a second aspect of the present invention, a system for generating a synthetic PET-CT image based on a non-attenuation corrected PET image is provided. The system comprises:
a network model construction unit: for constructing a first generative adversarial network comprising a first generator and a first discriminator, and a second generative adversarial network comprising a second generator and a second discriminator;
a first training unit: for performing feature extraction with a non-attenuation-corrected PET image as the input of the first generator to obtain a synthetic attenuation-corrected PET image, and for training with the synthetic attenuation-corrected PET image as the input of the first discriminator and the attenuation-corrected PET image as the reference image of the first discriminator, to obtain a mapping relationship G1 between the non-attenuation-corrected PET image and the attenuation-corrected PET image;
a second training unit: for performing feature extraction with the attenuation-corrected PET image as the input of the second generator to obtain a synthetic CT image, and for training with the synthetic CT image as the input of the second discriminator and the CT image from the training data as the reference image of the second discriminator, to obtain a mapping relationship G2 between the attenuation-corrected PET image and the CT image;
an image synthesis unit: for generating a synthetic PET/CT image from the non-attenuation-corrected PET image using the mapping relationships G1 and G2.
Compared with the prior art, the invention has the following advantages: a synthetic PET/CT image is generated directly from the existing non-attenuation-corrected PET image, which removes the need to synthesize CT from MRI for attenuation correction of PET imaging and provides a new idea for subsequent practical applications; at a time when PET/MRI is not yet as widespread as PET/CT, the invention provides an effective shortcut for synthesizing PET/CT, which not only reduces the patient's medical costs but also minimizes the radiation dose received during the examination.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart of a method of generating a synthetic PET-CT image based on a non-attenuation corrected PET image in accordance with one embodiment of the invention;
FIG. 2 is a network model diagram of a method for generating a synthetic PET-CT image from a non-attenuation-corrected PET image in accordance with one embodiment of the invention;
FIG. 3 is a graph of experimental results according to one embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Briefly, the present invention designs a two-stage network for solving the attenuation-correction problem of PET images and reducing the radiation dose received during patient examination. The first stage performs self-attenuation correction on the non-attenuation-corrected PET image and produces a synthetic attenuation-corrected PET image; the second stage uses the synthetic attenuation-corrected PET image obtained in the first stage to estimate the corresponding CT image. Through these two stages, a set of uncorrected PET images can be used to generate a PET/CT image.
To achieve the above object, in one embodiment, a synthetic PET/CT image is generated from a non-attenuation-corrected PET image using a modified Wasserstein generative adversarial network. As shown in FIG. 1, the method of the present invention specifically comprises the following steps.
Step S110, a generative adversarial network model comprising a generator and a discriminator is constructed.
The network model provided by the embodiment of the invention comprises two generative adversarial networks, each consisting of a generator network (or simply generator) and a discriminator network (or simply discriminator). Generation of the synthetic PET/CT image is divided into two phases: the first phase is the training phase, in which the training network consists of the generators and discriminators; the second phase is the testing phase, in which the synthetic PET/CT image is generated from the non-attenuation-corrected PET image using the networks trained in the first phase.
Specifically, referring to fig. 2, in one embodiment the generator network comprises, from left to right, 5 convolution layers (each followed by a pooling layer, e.g. max pooling), 9 residual blocks, 5 deconvolution layers, and 1 convolution layer, and the discriminator network comprises, in order, four convolution layers (each with a pooling layer) and 2 fully connected layers, wherein each residual block comprises 2 convolution layers.
Referring to fig. 2, the input image passes through the 5 convolution layers (the convolution operation extracts features, the max-pooling operation reduces the image size, and an activation function adds nonlinearity to the network) to extract features repeatedly; the data then pass through the 9 residual blocks and the 5 deconvolution layers, which restore the resolution to the size of the input image, and the result is converted into the output image. For example, the convolution kernels of the entire network are of size 3×3, and the numbers of filters in the encoding part are 64, 128, 256, 512, and 512, respectively.
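The layer arrangement described above can be illustrated with a minimal sketch, assuming a PyTorch implementation (the patent does not name a framework); the decoder filter counts, activation choices, deconvolution parameters, and all identifiers below are illustrative assumptions rather than the patented design.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut (2 conv layers per block, as described)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Encoder (5 conv + max-pool stages) -> 9 residual blocks -> 5 deconv stages -> 1 conv."""
    def __init__(self, in_ch=1, out_ch=1, filters=(64, 128, 256, 512, 512)):
        super().__init__()
        self.enc = nn.ModuleList()
        prev = in_ch
        for f in filters:
            self.enc.append(nn.Sequential(
                nn.Conv2d(prev, f, 3, padding=1),   # convolution extracts features
                nn.ReLU(inplace=True),              # activation adds nonlinearity
                nn.MaxPool2d(2)))                   # max pooling halves the image size
            prev = f
        self.res = nn.Sequential(*[ResidualBlock(prev) for _ in range(9)])
        self.dec = nn.ModuleList()
        dec_filters = list(reversed(filters[:-1])) + [filters[0]]  # assumed mirror of the encoder
        for f in dec_filters:
            self.dec.append(nn.Sequential(
                nn.ConvTranspose2d(prev, f, 3, stride=2, padding=1, output_padding=1),
                nn.ReLU(inplace=True)))
            prev = f
        self.out = nn.Conv2d(prev, out_ch, 3, padding=1)

    def forward(self, x):
        skips = []
        for stage in self.enc:
            x = stage(x)
            skips.append(x)
        x = self.res(x)
        for stage, skip in zip(self.dec, reversed(skips)):
            x = stage(x + skip)   # skip connection between conv stage and matching deconv stage
        return self.out(x)
```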
The discriminator network employs, for example, a fully connected neural network (FCN) to determine whether the input image comes from a generator or is a reference image. In embodiments of the invention the discriminator does not output a classification probability; instead, it uses a feature-based distance to measure the difference between the output image of the generator and the reference image. Specifically, the discriminator does not use an activation function for real/fake classification, but directly receives and outputs high-level features through two fully connected layers and then computes, on the basis of these high-level features, a distance that is used to compare how close the distribution of the generator output is to that of the reference image, and that provides meaningful gradient information.
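A matching sketch of the discriminator described in this paragraph follows, again assuming PyTorch; the channel counts, LeakyReLU activations, 256-unit hidden layer, and single-score output are assumptions, the only constraints taken from the text being the four convolution/pooling stages, the two fully connected layers, and the absence of a real/fake classification activation.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Four conv + pool stages followed by two fully connected layers; no sigmoid at the output."""
    def __init__(self, in_ch=1, image_size=256):
        super().__init__()
        chans = (64, 128, 256, 512)                  # channel counts are assumptions
        layers, prev = [], in_ch
        for c in chans:
            layers += [nn.Conv2d(prev, c, 3, padding=1),
                       nn.LeakyReLU(0.2, inplace=True),
                       nn.MaxPool2d(2)]
            prev = c
        self.features = nn.Sequential(*layers)
        feat_dim = prev * (image_size // 2 ** len(chans)) ** 2
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 256),                # two fully connected layers pass on
            nn.LeakyReLU(0.2, inplace=True),         # high-level features ...
            nn.Linear(256, 1))                       # ... and an unbounded score (no sigmoid)

    def forward(self, x):
        return self.head(self.features(x))
```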
For the first generative adversarial network, feature extraction is performed with the non-attenuation-corrected PET images from the training set as the input of the generator to obtain synthetic attenuation-corrected PET images, and the synthetic attenuation-corrected PET images are used as the input of the discriminator, with the attenuation-corrected PET images from the training set as the reference images of the discriminator.
For the second generative adversarial network, feature extraction is performed with the attenuation-corrected PET images from the training set as the input of the generator to obtain synthetic CT images, and the synthetic CT images are used as the input of the discriminator, with the CT images from the training set as the reference images of the discriminator.
It should be noted that the first and second generative adversarial networks may be constructed with the same or different network structures; for example, the two generator networks may be configured with different numbers of convolution layers, different numbers of residual blocks, and so on, and the two discriminators may use different classification strategies. Those skilled in the art may modify the network model appropriately according to factors such as the requirements on processing speed and accuracy, and the invention is not limited in this regard.
In step S120, the loss functions of the generative adversarial networks are designed.
Since a Wasserstein generative adversarial network is adopted in the embodiment of the present invention, the adversarial loss of the network can be defined as:
\mathcal{L}_{adv} = \mathbb{E}_{x}\left[ D(G(x)) \right] - \mathbb{E}_{y}\left[ D(y) \right] + \lambda \, \mathbb{E}_{\hat{x}}\left[ \left( \left\| \nabla_{\hat{x}} D(\hat{x}) \right\|_{2} - 1 \right)^{2} \right]
where λ represents a hyper-parameter, e.g. empirically set to 10, x represents the input image, G(x) represents the output of the generator, y represents the reference image, E(·) denotes the expectation used to compute the EM (Earth-Mover) distance, x̂ represents samples drawn at random, in a certain proportion, between the generator output and the reference image, and D(·) represents the discrimination process of the discriminator.
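This adversarial loss corresponds to the widely used gradient-penalty formulation of the Wasserstein GAN. The sketch below shows one possible implementation, assuming PyTorch and reusing the `Generator`/`Critic` sketches above; function names are illustrative and the split into critic and generator terms is the conventional one, not a detail stated in the patent.

```python
import torch

def gradient_penalty(critic, real, fake, device):
    """Penalty term (||grad_xhat D(xhat)||_2 - 1)^2 on random interpolations xhat."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    score = critic(x_hat)
    grads, = torch.autograd.grad(outputs=score, inputs=x_hat,
                                 grad_outputs=torch.ones_like(score),
                                 create_graph=True)
    grads = grads.view(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(critic, real, fake, device, lam=10.0):
    """E[D(G(x))] - E[D(y)] + lam * gradient penalty; minimized when training the critic."""
    fake = fake.detach()
    return (critic(fake).mean() - critic(real).mean()
            + lam * gradient_penalty(critic, real, fake, device))

def generator_adv_loss(critic, fake):
    """Adversarial term seen by the generator: it tries to raise the critic's score."""
    return -critic(fake).mean()
```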
In another embodiment, in order to compensate for the image blurring caused by the conventional L2 distance, multi-scale content loss, image gradient loss, and structural similarity loss are introduced. For example, the overall objective function is defined as follows:
\mathcal{L}_{total} = \lambda_{0} \mathcal{L}_{adv} + \lambda_{mse} \mathcal{L}_{mse}(I_{AC}, I_{sAC}) + \lambda_{gdl} \mathcal{L}_{gdl}(I_{AC}, I_{sAC}) + \lambda_{content} \mathcal{L}_{content}(I_{AC}, I_{sAC}) + \lambda_{ssim} \mathcal{L}_{ssim}(I_{AC}, I_{sAC})
where I_AC represents the true attenuation-corrected PET image and I_sAC represents the synthetic attenuation-corrected PET image; λ_0 represents the weight of the adversarial loss, λ_mse the weight of the mean square error loss, λ_gdl the weight of the image gradient loss, λ_content the weight of the multi-scale content loss, and λ_ssim the weight of the structural similarity loss. These hyper-parameters can be set to appropriate values according to the results of multiple experiments.
In the training process, the optimal solution is obtained by minimizing the total objective function, i.e. a high quality output image is obtained.
It should be noted that a person skilled in the art may change the objective function according to the practical application; for example, in addition to the adversarial loss, one or more of the mean square error loss term, the image gradient loss term, the multi-scale content loss term, and the structural similarity loss term may be selected, and it is not necessary to include all of the loss terms, as illustrated in the sketch below.
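The following sketch assembles the overall objective as a weighted sum of the terms named above, assuming PyTorch; the gradient-difference term is written out, while the multi-scale content and SSIM terms are left as callables because the patent does not give their exact form, and all weights are placeholders.

```python
import torch
import torch.nn.functional as F

def image_gradient_loss(pred, target):
    """Gradient-difference term: compares horizontal and vertical intensity gradients."""
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    return F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)

def total_loss(critic, fake, real, content_loss_fn, ssim_fn,
               w_adv=1.0, w_mse=1.0, w_gdl=1.0, w_content=1.0, w_ssim=1.0):
    """Weighted sum of the loss terms named in the text; weights are placeholders."""
    adv = -critic(fake).mean()                # adversarial term (generator side)
    mse = F.mse_loss(fake, real)              # mean-square-error term
    gdl = image_gradient_loss(fake, real)     # image gradient term
    content = content_loss_fn(fake, real)     # multi-scale content term (form not given in the patent)
    ssim = 1.0 - ssim_fn(fake, real)          # structural-similarity term (form not given in the patent)
    return (w_adv * adv + w_mse * mse + w_gdl * gdl
            + w_content * content + w_ssim * ssim)
```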
Step S130, the generative adversarial networks are trained to obtain the mapping relationship between the non-attenuation-corrected PET image and the attenuation-corrected PET image, and the mapping relationship between the attenuation-corrected PET image and the CT image.
For example, a batch of paired images is extracted from the data set of non-attenuation-corrected PET, attenuation-corrected PET, and CT images as network input, and the generators and discriminators are trained respectively. Through training, the mapping relationships G1 and G2 from the input images to the reference images are obtained, where G1 represents the mapping from the non-attenuation-corrected PET image to the attenuation-corrected PET image, and G2 represents the mapping from the attenuation-corrected PET image to the CT image.
When jointly training the generator and the discriminator of each generative adversarial network, an existing optimizer can be used; for example, the Adam optimization algorithm is used with an exponentially decaying learning rate.
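A possible training routine for one generator/critic pair is outlined below, assuming PyTorch and reusing `critic_loss` and `total_loss` from the earlier sketches; the same routine would be run for both stages. The learning rate, decay factor, critic-update ratio, and the stand-in content/SSIM callables are assumptions, not values given in the patent.

```python
import torch
import torch.nn.functional as F

def train(generator, critic, loader, device, epochs=100, lr=1e-4, n_critic=1):
    """Alternating critic/generator updates with Adam and an exponentially decaying learning rate."""
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.9))
    opt_d = torch.optim.Adam(critic.parameters(), lr=lr, betas=(0.5, 0.9))
    sched_g = torch.optim.lr_scheduler.ExponentialLR(opt_g, gamma=0.97)
    sched_d = torch.optim.lr_scheduler.ExponentialLR(opt_d, gamma=0.97)

    for _ in range(epochs):
        for inputs, targets in loader:                 # paired (input, reference) image batches
            inputs, targets = inputs.to(device), targets.to(device)

            for _ in range(n_critic):                  # update the critic first
                with torch.no_grad():
                    fake = generator(inputs)
                loss_d = critic_loss(critic, targets, fake, device)
                opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            fake = generator(inputs)                   # then update the generator
            loss_g = total_loss(critic, fake, targets,
                                content_loss_fn=F.l1_loss,  # stand-in for the multi-scale content term
                                ssim_fn=lambda a, b: torch.tensor(1.0, device=device))  # stand-in for SSIM
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        sched_g.step(); sched_d.step()                 # exponential learning-rate decay per epoch
```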
This way of optimizing the network, combining deep learning with multiple loss functions, achieves self-attenuation correction of the non-attenuation-corrected PET image with good results.
Step S140, a synthetic PET/CT image is generated based on the obtained mapping relationships.
After the mapping relationships G1 and G2 have been obtained, a synthetic PET/CT image can be generated directly from an existing non-attenuation-corrected PET image: the non-attenuation-corrected PET image is input into the trained model G1 to obtain a synthetic attenuation-corrected PET image, and the synthetic attenuation-corrected PET image is then input into the trained model G2 to obtain a synthetic CT image (see also fig. 2).
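The inference stage can be summarized by the following sketch (PyTorch assumed, variable names illustrative): the trained models G1 and G2 are simply cascaded.

```python
import torch

@torch.no_grad()
def synthesize_pet_ct(g1, g2, nac_pet):
    """Cascade the two trained mappings: uncorrected PET -> synthetic AC PET -> synthetic CT."""
    g1.eval(); g2.eval()
    sac_pet = g1(nac_pet)     # G1: non-attenuation-corrected PET -> synthetic attenuation-corrected PET
    synth_ct = g2(sac_pet)    # G2: synthetic attenuation-corrected PET -> synthetic CT
    return sac_pet, synth_ct  # together they form the synthetic PET/CT pair
```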
Accordingly, the present invention provides a system for generating a synthetic PET-CT image based on a non-attenuation-corrected PET image, for implementing one or more aspects of the above method. For example, the system includes: a network model construction unit for constructing a first generative adversarial network comprising a first generator and a first discriminator and a second generative adversarial network comprising a second generator and a second discriminator; a first training unit for performing feature extraction with a non-attenuation-corrected PET image as the input of the first generator to obtain a synthetic attenuation-corrected PET image, and for training with the synthetic attenuation-corrected PET image as the input of the first discriminator and the attenuation-corrected PET image as the reference image of the first discriminator, to obtain a mapping relationship G1 between the non-attenuation-corrected PET image and the attenuation-corrected PET image; a second training unit for performing feature extraction with the attenuation-corrected PET image as the input of the second generator to obtain a synthetic CT image, and for training with the synthetic CT image as the input of the second discriminator and the CT image from the training data as the reference image of the second discriminator, to obtain a mapping relationship G2 between the attenuation-corrected PET image and the CT image; and an image synthesis unit for generating a synthetic PET/CT image from the non-attenuation-corrected PET image using the mapping relationships G1 and G2. In the system provided by the invention, each module may be implemented with a processor or a logic circuit.
It should be noted that, in addition to replacing an existing PET/CT system, the present invention can also be applied to a PET/MRI system with appropriate modification.
In summary, the invention uses an improved deep Wasserstein generative adversarial network (coupled with a residual network) to train two networks separately, learning the end-to-end nonlinear mappings from the non-attenuation-corrected PET image to the attenuation-corrected PET image and from the attenuation-corrected PET image to the CT image, and constrains the output with a combination of several loss functions (such as image gradient loss and content loss). This effectively reduces the distortion commonly produced by generative adversarial networks and largely preserves image detail (such as edges), encouraging the network to generate high-quality images. The invention not only achieves self-attenuation correction of the PET image but also provides more accurate anatomical localization for diagnostic PET imaging.
It has been verified that clearer, high-quality images can be obtained with the present invention; see the comparison of experimental results on different slices shown in fig. 3, in which (a) is the reference CT image, (b) the synthetic CT image, (c) the reference PET image, and (d) the synthetic PET image.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., an optical pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk and C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, and the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A method of generating a synthetic PET-CT image based on a non-attenuation corrected PET image, comprising the steps of:
constructing a first generative adversarial network comprising a first generator and a first discriminator, and constructing a second generative adversarial network comprising a second generator and a second discriminator;
performing feature extraction with a non-attenuation-corrected PET image as the input of the first generator to obtain a synthetic attenuation-corrected PET image, taking the synthetic attenuation-corrected PET image as the input of the first discriminator, and training with the attenuation-corrected PET image as the reference image of the first discriminator, to obtain a mapping relationship G1 between the non-attenuation-corrected PET image and the attenuation-corrected PET image;
performing feature extraction with the attenuation-corrected PET image as the input of the second generator to obtain a synthetic CT image, taking the synthetic CT image as the input of the second discriminator, and training with the CT image from the training data as the reference image of the second discriminator, to obtain a mapping relationship G2 between the attenuation-corrected PET image and the CT image;
generating a synthetic PET/CT image from the non-attenuation-corrected PET image using the mapping relationships G1 and G2.
2. The method of claim 1, wherein the first generative adversarial network and the second generative adversarial network have the same or different network structures.
3. The method of claim 1, wherein the first generator and the second generator have the same network structure, and sequentially comprise a plurality of convolution layers and pooling layers, a plurality of residual blocks, and a plurality of deconvolution layers corresponding to the plurality of convolution layers, wherein the convolution layers are connected with the corresponding deconvolution layers in a skip manner.
4. The method of claim 1, wherein the first and second discriminators employ a fully connected neural network to determine whether an input image is from an output of a corresponding generator or from a reference image.
5. The method of claim 4, wherein the first and second discriminators compare a distribution similarity between an output image from a corresponding generator and a reference image based on distance.
6. The method of claim 1, wherein the first and second generative adversarial networks are Wasserstein generative adversarial networks, and the adversarial loss is defined as:
\mathcal{L}_{adv} = \mathbb{E}_{x}\left[ D(G(x)) \right] - \mathbb{E}_{y}\left[ D(y) \right] + \lambda \, \mathbb{E}_{\hat{x}}\left[ \left( \left\| \nabla_{\hat{x}} D(\hat{x}) \right\|_{2} - 1 \right)^{2} \right]
where λ represents a hyper-parameter, x represents the input image, G(x) represents the output of the generator, y represents the reference image, E(·) denotes the expectation used to compute the EM distance, x̂ represents samples drawn at random between the generator output and the reference image, and D(·) represents the discrimination process of the discriminator.
7. The method of claim 6, wherein during training, the overall objective function is set to include an adversarial loss term and one or more of a mean-square-error loss term, an image gradient loss term, a multi-scale content loss term, and a structural similarity loss term.
8. The method of claim 7, wherein during training, the overall objective function is set to:
\mathcal{L}_{total} = \lambda_{0} \mathcal{L}_{adv} + \lambda_{mse} \mathcal{L}_{mse}(I_{AC}, I_{sAC}) + \lambda_{gdl} \mathcal{L}_{gdl}(I_{AC}, I_{sAC}) + \lambda_{content} \mathcal{L}_{content}(I_{AC}, I_{sAC}) + \lambda_{ssim} \mathcal{L}_{ssim}(I_{AC}, I_{sAC})
wherein I_AC represents the true attenuation-corrected PET image, I_sAC represents the synthetic attenuation-corrected PET image, λ_0 represents the weight of the adversarial loss, λ_mse the weight of the mean square error loss, λ_gdl the weight of the image gradient loss, λ_content the weight of the multi-scale content loss, and λ_ssim the weight of the structural similarity loss.
9. A system for generating a synthetic PET-CT image based on a non-attenuation corrected PET image, comprising:
a network model construction unit: for constructing a first generative adversarial network comprising a first generator and a first discriminator, and a second generative adversarial network comprising a second generator and a second discriminator;
a first training unit: for performing feature extraction with a non-attenuation-corrected PET image as the input of the first generator to obtain a synthetic attenuation-corrected PET image, and for training with the synthetic attenuation-corrected PET image as the input of the first discriminator and the attenuation-corrected PET image as the reference image of the first discriminator, to obtain a mapping relationship G1 between the non-attenuation-corrected PET image and the attenuation-corrected PET image;
a second training unit: for performing feature extraction with the attenuation-corrected PET image as the input of the second generator to obtain a synthetic CT image, and for training with the synthetic CT image as the input of the second discriminator and the CT image from the training data as the reference image of the second discriminator, to obtain a mapping relationship G2 between the attenuation-corrected PET image and the CT image;
an image synthesis unit: for generating a synthetic PET/CT image from the non-attenuation-corrected PET image using the mapping relationships G1 and G2.
10. A computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor realizes the steps of the method according to any of claims 1 to 8.
CN202010084751.1A 2020-02-10 2020-02-10 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image Active CN111340903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010084751.1A CN111340903B (en) 2020-02-10 2020-02-10 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010084751.1A CN111340903B (en) 2020-02-10 2020-02-10 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image

Publications (2)

Publication Number Publication Date
CN111340903A CN111340903A (en) 2020-06-26
CN111340903B true CN111340903B (en) 2023-04-25

Family

ID=71185358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010084751.1A Active CN111340903B (en) 2020-02-10 2020-02-10 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image

Country Status (1)

Country Link
CN (1) CN111340903B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022094779A1 (en) * 2020-11-04 2022-05-12 深圳先进技术研究院 Deep learning framework and method for generating ct image from pet image
CN112419173A (en) * 2020-11-04 2021-02-26 深圳先进技术研究院 Deep learning framework and method for generating CT image from PET image
WO2022120731A1 (en) * 2020-12-10 2022-06-16 深圳先进技术研究院 Mri-pet image modality conversion method and system based on cyclic generative adversarial network
CN112509093A (en) * 2020-12-17 2021-03-16 深圳高性能医疗器械国家研究院有限公司 Image attenuation correction method and application thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697741A (en) * 2018-12-28 2019-04-30 上海联影智能医疗科技有限公司 A kind of PET image reconstruction method, device, equipment and medium
CN110021037A (en) * 2019-04-17 2019-07-16 南昌航空大学 A kind of image non-rigid registration method and system based on generation confrontation network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10593071B2 (en) * 2017-04-14 2020-03-17 Siemens Medical Solutions Usa, Inc. Network training and architecture for medical imaging

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697741A (en) * 2018-12-28 2019-04-30 上海联影智能医疗科技有限公司 A kind of PET image reconstruction method, device, equipment and medium
CN110021037A (en) * 2019-04-17 2019-07-16 南昌航空大学 A kind of image non-rigid registration method and system based on generation confrontation network

Also Published As

Publication number Publication date
CN111340903A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
Armanious et al. MedGAN: Medical image translation using GANs
US10964071B2 (en) Systems and methods for data and model-driven image reconstruction and enhancement
CN110809782B (en) Attenuation correction system and method
Zhang et al. Applications of nonlocal means algorithm in low‐dose X‐ray CT image processing and reconstruction: a review
CN112770838B (en) System and method for image enhancement using self-focused deep learning
CN112424835B (en) System and method for image reconstruction
CN112368738B (en) System and method for image optimization
CN109215014B (en) Training method, device and equipment of CT image prediction model and storage medium
CN111540025B (en) Predicting images for image processing
Cheng et al. Applications of artificial intelligence in nuclear medicine image generation
EP3338636B1 (en) An apparatus and associated method for imaging
Bai et al. Probabilistic self‐learning framework for low‐dose CT denoising
EP3555850A1 (en) System and method for image segmentation using a joint deep learning model
CN108038840B (en) Image processing method and device, image processing equipment and storage medium
CN111899315B (en) Method for reconstructing low-dose image by using multi-scale feature perception depth network
KR20200025598A (en) Apparatus and Method for Fusing Using Weighted Least Squares Filter and Sparse Respresentation
Zhao et al. Dual-energy CT imaging using a single-energy CT data is feasible via deep learning
Amirkolaee et al. Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation
Karimi et al. Reducing streak artifacts in computed tomography via sparse representation in coupled dictionaries
Li et al. A comprehensive survey on deep learning techniques in CT image quality improvement
Min et al. Lossless medical image compression based on anatomical information and deep neural networks
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
Mangalagiri et al. Toward generating synthetic CT volumes using a 3D-conditional generative adversarial network
Poonkodi et al. 3d-medtrancsgan: 3d medical image transformation using csgan

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant