CN112508928A - Image synthesis method and application thereof - Google Patents


Info

Publication number
CN112508928A
CN112508928A
Authority
CN
China
Prior art keywords
image
data
convolution
synthesis method
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011492925.4A
Other languages
Chinese (zh)
Inventor
郑海荣
江洪伟
李彦明
万丽雯
薛恒志
胡战利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Original Assignee
Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Priority to CN202011492925.4A
Publication of CN112508928A
Legal status: Pending

Links

Images

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10104 Positron emission tomography [PET]
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application belongs to the technical field of medical imaging, and particularly relates to an image synthesis method and its application. In synthesizing a PET image from a CT image, the convolutions used in the prior art cannot adequately extract the feature information in the CT image. The application provides an image synthesis network based on a self-correcting convolutional neural network: the network applies the self-correcting convolutional neural network to a first image to obtain a second image, so that the diagnostic information in the second image is obtained at the time the first image is scanned, which facilitates accurate treatment by the doctor.

Description

Image synthesis method and application thereof
Technical Field
The application belongs to the technical field of medical imaging, and particularly relates to an image synthesis method and its application.
Background
Positron emission tomography (PET) plays an indispensable role in early disease discovery and postoperative staging diagnosis, and is a functional imaging method widely applied in neuroscience research. The increased accumulation, relative to normal tissue, of the fluoro-D-glucose (FDG) used in PET is a useful marker of many cancers and can help detect and localize malignant tumors. While PET imaging has many advantages, it also has drawbacks that limit its use in treatment. The radioactive tracer may pose a risk to pregnant or lactating patients. In addition, PET is a relatively new medical procedure and can be expensive, so most medical centers in the world do not yet offer it. The difficulty of providing PET imaging as part of therapy has increased the need for an alternative, inexpensive, fast and easy-to-use PET-like imaging method. Synthesizing a PET image from the information in an existing CT image is therefore very meaningful, with important scientific significance and application prospects in the field of medical diagnosis.
Avi Ben-Cohen et al., in the article "Virtual PET Images from CT Data Using Deep Convolutional Networks: Initial Results" published in 2017 at the International Workshop on Simulation and Synthesis in Medical Imaging, derived PET data from CT data using a fully convolutional network (FCN) and a conditional generative adversarial network (GAN). The fully convolutional network, which estimates a PET image from the CT image, consumes substantial computing resources; the feature maps extracted from the CT image are converted into a synthesized PET image, the synthesized and real PET images are passed through a discriminator to obtain the discriminator loss, and reducing this loss regularizes the generator so that the generated image is closer to the real one.
However, in synthesizing a PET image from a CT image, the convolutions used in the prior art cannot adequately extract the feature information in the CT image.
Disclosure of Invention
1. Technical problem to be solved
Based on the problem that, in synthesizing a PET image from a CT image, the convolutions used in the prior art cannot adequately extract the feature information in the CT image, the present application provides an image synthesis method and an application thereof.
2. Technical scheme
In order to achieve the above object, the present application provides an image synthesis method, comprising the following steps:
Step 1: constructing a generator network;
Step 2: dividing the data into first data and second data, performing self-correction processing on the first data with the generator network to obtain processed first data, performing convolution and batch normalization on the second data to obtain processed second data, and merging the two to obtain the input data;
Step 3: optimizing the generator network;
Step 4: using image I and image II as the input and training labels of the generator network;
Step 5: training the network to obtain the mapping relation from image I to image II;
Step 6: synthesizing image I into image II through the trained generator network.
Another embodiment provided by the present application is: the generator network in step 1 is constructed as a codec (encoder-decoder) network, and the codec comprises a plurality of 2D convolutions.
Another embodiment provided by the present application is: the generator network comprises a convolution layer, a self-correcting convolution layer, a deconvolution layer, a batch normalization layer and an activation function layer; the first data is processed by the self-correcting convolution layer to obtain the processed first data, and the second data is processed by the convolution layer and the batch normalization layer to obtain the processed second data.
Another embodiment provided by the present application is: in step 2, the data is divided equally along the channel dimension into a first channel and a second channel.
Another embodiment provided by the present application is: the input data has the same size as the original data.
Another embodiment provided by the present application is: the second data is multiplied by the processed first data, and convolution and batch normalization are applied to obtain self-correcting convolution data serving as the input data.
Another embodiment provided by the present application is: in step 3, the Adam optimization algorithm is used to optimize the proposed self-correcting convolution, and the loss function is a mean square error loss function.
Another embodiment provided by the present application is: the generator network comprises 2 convolution layers, 2 deconvolution layers and 10 self-correcting convolution layers, and each layer further comprises a batch normalization layer and an activation function layer.
Another embodiment provided by the present application is: all convolution strides are 1.
The application also provides an application of the image synthesis method: the image synthesis method is applied to the synthesis of positron emission tomography images, the synthesis of CT images, noise reduction of undersampled MRI images, and noise reduction of low-count positron emission tomography images.
3. Advantageous effects
Compared with the prior art, the image synthesis method provided by the application has the beneficial effects that:
the application provides an image synthesis method, a method for synthesizing a PET image from a CT image based on self-correcting convolution, belonging to medical Positron Emission Tomography (PET) scanning.
The image synthesis method is a deep learning PET image synthesis method based on CT data, convolution is changed into self-correction convolution in an original convolution neural network, the receptive field of the image is increased, a more comprehensive characteristic image is obtained, and the extraction of CT image characteristics by the network is improved.
Compared with the PET image synthesis in the prior art, the image synthesis method provided by the application can obtain the image which is clearer, and the PET image synthesis capability of the application is shown.
According to the image synthesis method, the PET image synthesis network based on the self-correcting convolution neural network obtains the PET image by using the self-correcting convolution neural network on the CT image, obtains diagnosis information in the PET image while scanning CT, and provides help for accurate treatment of doctors.
The image synthesis method improves the quality of the synthesis image of the PET image.
According to the image synthesis method, the self-correcting convolution is used for replacing the original convolution, the range of the characteristic diagram receptive field is improved, and the representation capability of the characteristic diagram on the input image is improved.
According to the image synthesis method provided by the application, the characteristic image extracted by self-correction convolution has a larger receptive field and contains more information of an input image (CT).
The application of the image synthesis method can better extract the features of the image in the feature extraction part of the generator and better estimate the PET image when the PET image is synthesized by the CT image.
Drawings
FIG. 1 is a schematic diagram of a generator network for the image synthesis method of the present application;
fig. 2 is a schematic diagram of a self-correcting convolution of the image synthesis method of the present application.
Detailed Description
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, and it will be apparent to those skilled in the art from this detailed description that the present application can be practiced. Features from different embodiments may be combined to yield new embodiments, or certain features may be substituted for certain embodiments to yield yet further preferred embodiments, without departing from the principles of the present application.
Referring to fig. 1 to 2, the present application provides an image synthesis method, comprising the following steps:
Step 1: constructing a generator network;
Step 2: dividing the data into first data and second data, performing self-correction processing on the first data with the generator network to obtain processed first data, performing convolution and batch normalization on the second data to obtain processed second data, and merging the two to obtain the input data;
Step 3: optimizing the generator network;
Step 4: using image I and image II as the input and training labels of the generator network;
Step 5: training the network to obtain the mapping relation from image I to image II;
Step 6: synthesizing image I into image II through the trained generator network.
Further, the generator network in step 1 is constructed as a codec (encoder-decoder) network, and the codec network comprises a plurality of 2D convolutions.
Further, the generator network comprises a convolution layer, a self-correcting convolution layer, a deconvolution layer, a batch normalization layer and an activation function layer; the first data is processed by the self-correcting convolution layer to obtain the processed first data, and the second data is processed by the convolution layer and the batch normalization layer to obtain the processed second data.
Further, in step 2, the data is divided equally along the channel dimension into a first channel and a second channel.
Further, the input data has the same size as the original data.
Further, the second data is multiplied by the processed first data, and convolution and batch normalization are applied to obtain self-correcting convolution data serving as the input data.
Further, in step 3, the Adam optimization algorithm is used to optimize the proposed self-correcting convolution, and the loss function is a mean square error loss function.
Further, the generator network comprises 2 convolution layers, 2 deconvolution layers and 10 self-correcting convolution layers, and each layer further comprises a batch normalization layer and an activation function layer.
Further, all convolution strides are 1.
The application also provides an application of the image synthesis method: the image synthesis method is applied to the synthesis of positron emission tomography images, the synthesis of CT images, noise reduction of undersampled MRI images, and noise reduction of low-count positron emission tomography images.
Examples
The method is described below taking the synthesis of PET images from CT images as an example.
Step one: design the generator network structure for synthesizing a PET image from the CT image.
The generator is a codec (encoder-decoder) network in which the encoder and decoder are composed of a series of 2D convolutions. The network comprises 14 layers in total: 2 convolution layers, 2 deconvolution layers, and 10 self-correcting convolution layers. Each layer consists of a convolution/self-correcting convolution/deconvolution, a batch normalization layer, and an activation function layer. All convolution kernels are 3 x 3; the numbers of convolution kernels are 32, 64, 128, 64, 32 and 1 in sequence; and all convolution strides are 1.
A U-shaped network is designed to realize the synthesis from the CT image to the PET image.
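The layer specification above can be sketched in PyTorch as follows. This is a non-authoritative sketch: the 10 middle layers are the patent's self-correcting convolutions, represented here by plain convolution blocks, and the exact mapping of the channel sequence 32, 64, 128, 64, 32, 1 onto the 14 layers is an assumption.

```python
import torch
import torch.nn as nn

def block(c_in, c_out, deconv=False):
    # one layer = (de)convolution + batch normalization + activation,
    # 3 x 3 kernel, stride 1, as stated in the text
    conv_cls = nn.ConvTranspose2d if deconv else nn.Conv2d
    return nn.Sequential(conv_cls(c_in, c_out, kernel_size=3, padding=1),
                         nn.BatchNorm2d(c_out),
                         nn.ReLU(inplace=True))

def build_generator():
    layers = [block(1, 32), block(32, 64)]               # 2 convolution layers
    layers += [block(64, 128)]                           # 10 middle layers (the
    layers += [block(128, 128) for _ in range(8)]        # patent's self-correcting
    layers += [block(128, 64)]                           # convolutions)
    layers += [block(64, 32, deconv=True)]               # 2 deconvolution layers;
    layers += [nn.ConvTranspose2d(32, 1, 3, padding=1)]  # the last maps to 1 channel
    return nn.Sequential(*layers)
```

With a 3 x 3 kernel, stride 1 and padding 1, both the convolution and the transposed convolution preserve spatial size, so the output has the same height and width as the input CT slice.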
Step two: the self-correcting convolution in step one is designed. The number of channels is equally divided into an upper channel, i.e., a second channel, and a lower channel, i.e., a first channel, with respect to input data. And processing the data in the upper layer by a self-correction module, combining the data in the lower channel with the data in the upper layer after convolution and batch normalization, and obtaining the data with the same size as the input data as input.
The self-correction module is designed as follows. The dimension of the input data is C/2 x H x W. The upper data is downsampled with a sampling rate r, set to 4 in this application, giving data of dimension C/2 x H/r x W/r. Convolution and batch normalization are then applied, after which upsampling at rate r restores the original dimensions; the result is added to the input data and passed through a Sigmoid activation function.
The result of applying convolution and batch normalization to the lower data is multiplied by the previous result, and convolution and batch normalization are applied to the product to obtain the final self-correcting convolution output.
The self-correcting convolution is used to improve the richness of CT image feature extraction during synthesis, so that information hidden in the CT image is extracted and used for synthesizing the PET image.
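One plausible reading of step two, sketched in PyTorch. The branch names and the placement of the conv+BN blocks are assumptions where the text is ambiguous; the sketch follows the general shape of self-calibrated convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    """Sketch of the self-correcting convolution of step two: the input is
    split channel-wise into two halves, one half goes through the
    self-correction (attention) branch, the other through a plain
    convolution + batch-normalization branch."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        assert channels % 2 == 0, "channel count must split evenly"
        c = channels // 2
        self.r = r  # down/up-sampling rate, set to 4 in the application

        def conv_bn(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.BatchNorm2d(c_out))

        self.calib = conv_bn(c, c)  # applied at the down-sampled resolution
        self.feat = conv_bn(c, c)   # multiplied by the sigmoid attention map
        self.fuse = conv_bn(c, c)   # final conv+BN of the calibrated half
        self.plain = conv_bn(c, c)  # plain conv+BN for the other half

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)  # split channels into two halves
        # down-sample by r, conv+BN, up-sample back, add the input, Sigmoid
        t = F.avg_pool2d(x1, self.r)
        t = F.interpolate(self.calib(t), size=x1.shape[2:],
                          mode='bilinear', align_corners=False)
        attn = torch.sigmoid(x1 + t)
        # conv+BN result multiplied by the attention map, then a final conv+BN
        y1 = self.fuse(self.feat(x1) * attn)
        y2 = self.plain(x2)                # the plain branch
        return torch.cat([y1, y2], dim=1)  # same size as the input
```

Because both halves keep C/2 channels and all convolutions preserve spatial size, the concatenated output has exactly the dimensions of the input, as the text requires.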
Step three: when the network is optimized, the Adam optimization algorithm is adopted to optimize the proposed self-correcting convolution, and the optimized loss function is a mean square error loss function. The following is the mean square error loss function.
MSE = (1 / (w * d)) * Σ (y − G(x))²
where w and d are the height and width of the image respectively, y is the label PET image, x is the input CT image, G(x) is the synthesized PET image, and (·)² denotes squaring its argument.
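The loss can be written out directly; a minimal sketch (the function name `mse_loss` is chosen here, not taken from the patent):

```python
import numpy as np

def mse_loss(y, g_x):
    """MSE = (1 / (w * d)) * sum((y - G(x))^2), where y is the label PET
    image, x the input CT image, and G(x) the synthesized PET image."""
    y = np.asarray(y, dtype=float)
    g_x = np.asarray(g_x, dtype=float)
    return float(np.sum((y - g_x) ** 2) / y.size)

# example: 2x2 images differing in one pixel by 2 -> loss = 4 / 4
print(mse_loss([[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 6.0]]))  # 1.0
```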
Step four: the CT images and the corresponding attenuation-corrected PET images are used as input to the network and as labels for training.
Step five: and training a generator network to obtain the mapping relation from the CT image to the PET image.
And finally, synthesizing the CT image with the PET image through the trained network to obtain the PET image which can help a doctor to diagnose.
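Steps three to five amount to a standard supervised training loop; a sketch assuming a PyTorch setting. The learning rate and the batch handling are illustrative assumptions, since the text fixes only the optimizer (Adam) and the loss (mean square error).

```python
import torch
import torch.nn as nn

def train_generator(model, ct_batches, pet_batches, epochs=1, lr=1e-4):
    # Adam optimizer and mean-square-error loss, as in step three;
    # lr and batching are illustrative assumptions
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for ct, pet in zip(ct_batches, pet_batches):
            opt.zero_grad()
            loss = loss_fn(model(ct), pet)  # compare G(x) with the PET label
            loss.backward()
            opt.step()
    return model
```

After training, a CT slice is simply passed through the returned model to obtain the synthesized PET image.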
The self-correcting convolution improves the generator's extraction of CT image content and increases the receptive field of the convolution layers (the receptive field is the region of the original input image onto which a pixel of a layer's output feature map is mapped in a convolutional neural network (CNN)). With a larger receptive field, each pixel of the extracted feature maps reflects more of the information in the original image, so the PET image is better synthesized from the CT image.
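As an illustration of the receptive-field remark above, the receptive field of a stack of convolution layers can be computed layer by layer; a small sketch using the generator's 3 x 3, stride-1 layers as the example:

```python
def receptive_field(layers):
    """Receptive field of stacked convolution layers.
    `layers` is a list of (kernel_size, stride) pairs, input to output.
    Per layer: rf += (k - 1) * jump; jump *= stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# fourteen 3x3 stride-1 layers, as in the generator described above:
# each layer adds 2 pixels, so rf = 1 + 14 * 2
print(receptive_field([(3, 1)] * 14))  # 29
```

The self-calibration branch enlarges this further: its down-sample/up-sample path lets each output pixel see an r-times wider neighborhood than a plain convolution at the same depth.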
Although the present application has been described above with reference to specific embodiments, those skilled in the art will recognize that many changes may be made in the configuration and details of the present application within the principles and scope of the present application. The scope of protection of the application is determined by the appended claims, and all changes that come within the meaning and range of equivalency of the technical features are intended to be embraced therein.

Claims (10)

1. An image synthesis method, characterized in that it comprises the following steps:
Step 1: constructing a generator network;
Step 2: dividing the data into first data and second data, performing self-correction processing on the first data with the generator network to obtain processed first data, performing convolution and batch normalization on the second data to obtain processed second data, and merging the two to obtain the input data;
Step 3: optimizing the generator network;
Step 4: using image I and image II as the input and training labels of the generator network;
Step 5: training the generator network to obtain the mapping relation from image I to image II;
Step 6: synthesizing image I into image II through the trained generator network.
2. An image synthesis method according to claim 1, characterized in that: the generator network in step 1 is constructed as a codec (encoder-decoder) network, and the codec comprises a plurality of 2D convolutions.
3. An image synthesis method according to claim 1, characterized in that: the generator network comprises a convolution layer, a self-correcting convolution layer, a deconvolution layer, a batch normalization layer and an activation function layer; the first data is processed by the self-correcting convolution layer to obtain the processed first data, and the second data is processed by the convolution layer and the batch normalization layer to obtain the processed second data.
4. An image synthesis method according to claim 1, characterized in that: in step 2, the data is divided equally along the channel dimension into a first channel and a second channel.
5. An image synthesis method according to claim 1, characterized in that: the input data has the same size as the original data.
6. An image synthesis method according to claim 3, characterized in that: the second data is multiplied by the processed first data, and convolution and batch normalization are applied to obtain self-correcting convolution data serving as the input data.
7. An image synthesis method according to claim 1, characterized in that: in step 3, the Adam optimization algorithm is used to optimize the proposed self-correcting convolution, and the loss function is a mean square error loss function.
8. An image synthesis method according to any one of claims 1 to 7, characterized in that: the generator network comprises 2 convolution layers, 2 deconvolution layers and 10 self-correcting convolution layers, and each layer further comprises a batch normalization layer and an activation function layer.
9. An image synthesis method according to claim 6, characterized in that: all convolution strides are 1.
10. An application of an image synthesis method, characterized in that: the image synthesis method according to any one of claims 1 to 9 is applied to the synthesis of positron emission tomography images, the synthesis of CT images, noise reduction of undersampled MRI images, and noise reduction of low-count positron emission tomography images.
CN202011492925.4A 2020-12-17 2020-12-17 Image synthesis method and application thereof Pending CN112508928A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011492925.4A CN112508928A (en) 2020-12-17 2020-12-17 Image synthesis method and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011492925.4A CN112508928A (en) 2020-12-17 2020-12-17 Image synthesis method and application thereof

Publications (1)

Publication Number Publication Date
CN112508928A true CN112508928A (en) 2021-03-16

Family

ID=74921687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011492925.4A Pending CN112508928A (en) 2020-12-17 2020-12-17 Image synthesis method and application thereof

Country Status (1)

Country Link
CN (1) CN112508928A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035356A (en) * 2018-07-05 2018-12-18 四川大学 A kind of system and method based on PET pattern imaging
US20200034948A1 (en) * 2018-07-27 2020-01-30 Washington University Ml-based methods for pseudo-ct and hr mr image estimation
CN111340903A (en) * 2020-02-10 2020-06-26 深圳先进技术研究院 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN111882503A (en) * 2020-08-04 2020-11-03 深圳高性能医疗器械国家研究院有限公司 Image noise reduction method and application thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035356A (en) * 2018-07-05 2018-12-18 四川大学 A kind of system and method based on PET pattern imaging
US20200034948A1 (en) * 2018-07-27 2020-01-30 Washington University Ml-based methods for pseudo-ct and hr mr image estimation
CN111340903A (en) * 2020-02-10 2020-06-26 深圳先进技术研究院 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN111882503A (en) * 2020-08-04 2020-11-03 深圳高性能医疗器械国家研究院有限公司 Image noise reduction method and application thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马金林; 邓媛媛; 马自萍: "A survey of deep-learning segmentation methods for CT images of liver tumors" (肝脏肿瘤CT图像深度学习分割方法综述), Journal of Image and Graphics (中国图象图形学报), no. 10, 16 October 2020 (2020-10-16) *

Similar Documents

Publication Publication Date Title
CN110506278B (en) Target detection in hidden space
Gao et al. Deep residual inception encoder–decoder network for medical imaging synthesis
US7489825B2 (en) Method and apparatus for creating a multi-resolution framework for improving medical imaging workflow
CN107492071A (en) Medical image processing method and equipment
Huang et al. Self-supervised transfer learning based on domain adaptation for benign-malignant lung nodule classification on thoracic CT
CN109727197B (en) Medical image super-resolution reconstruction method
WO2022257959A1 (en) Multi-modality and multi-scale feature aggregation for synthesizing spect image from fast spect scan and ct image
CN112819914A (en) PET image processing method
Zhan et al. D2FE-GAN: Decoupled dual feature extraction based GAN for MRI image synthesis
Xu et al. Bg-net: Boundary-guided network for lung segmentation on clinical ct images
Wu et al. Unsupervised positron emission tomography tumor segmentation via GAN based adversarial auto-encoder
Amirkolaee et al. Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation
Shi et al. Metabolic anomaly appearance aware U-Net for automatic lymphoma segmentation in whole-body PET/CT scans
Zhang et al. Automatic parotid gland segmentation in MVCT using deep convolutional neural networks
CN114066798A (en) Brain tumor nuclear magnetic resonance image data synthesis method based on deep learning
Xu et al. Improved cascade R-CNN for medical images of pulmonary nodules detection combining dilated HRNet
CN113491529B (en) Single-bed PET (positron emission tomography) delayed imaging method without concomitant CT (computed tomography) radiation
CN115984257A (en) Multi-modal medical image fusion method based on multi-scale transform
CN112508928A (en) Image synthesis method and application thereof
CN113379863B (en) Dynamic double-tracing PET image joint reconstruction and segmentation method based on deep learning
Yousefi et al. ASL to PET translation by a semi-supervised residual-based attention-guided convolutional neural network
Alamin et al. Improved framework for breast cancer detection using hybrid feature extraction technique and ffnn
Xiao et al. Contrast-enhanced CT image synthesis of thyroid based on transfomer and texture branching
Abdulwahhab et al. A review on medical image applications based on deep learning techniques
CN118298069B (en) Method, system, equipment and storage medium for acquiring PET synthetic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination