CN113476064B - BCD-ED-based single-scanning double-tracer PET signal separation method

Info

Publication number
CN113476064B
CN113476064B CN202110840914.9A
Authority
CN
China
Prior art keywords
tracer, PET, layer, dynamic image, reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110840914.9A
Other languages
Chinese (zh)
Other versions
CN113476064A (en)
Inventor
刘华锋 (Liu Huafeng)
童珺怡 (Tong Junyi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110840914.9A priority Critical patent/CN113476064B/en
Publication of CN113476064A publication Critical patent/CN113476064A/en
Application granted granted Critical
Publication of CN113476064B publication Critical patent/CN113476064B/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5229: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/5235: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/48: Diagnostic techniques
    • A61B 6/481: Diagnostic techniques involving the use of contrast agents

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a single-scanning double-tracer PET signal separation method based on BCD-ED, which combines a traditional iterative reconstruction algorithm with deep learning and can accurately separate two single-tracer PET images from a mixed dual-tracer image in a data-driven way. The BCD-ED framework adopted by the invention comprises three modules: a reconstruction module, a denoising module and a separation module. The mixed-tracer sinogram acquired by PET is reconstructed with a maximum likelihood estimation algorithm, and the reconstructed image is denoised with a low-rank regularization model, so that the shape of the reconstructed mixed concentration map is better constrained and the noise is smaller; an encoder-decoder model then learns the mapping between the mixed-tracer concentration map and the two full-dose single-tracer maps, so that the detail of the single-tracer concentration maps is recovered more clearly.

Description

BCD-ED-based single-scanning double-tracer PET signal separation method
Technical Field
The invention belongs to the technical field of PET signal separation, and particularly relates to a single-scanning double-tracer PET signal separation method based on BCD-ED (block coordinate descent-encoding and decoding).
Background
Positron emission tomography (Positron Emission Tomography, PET) is a typical emission computed tomography technique, usually used together with tracers labeled with isotopes such as ^11C, ^18F, ^15O and ^13N, and has the advantages of high sensitivity to the tracer and non-invasiveness. The dynamic change of the tracer during a PET scan can characterize and quantify the function of in vivo tissue, yielding physiological indices such as glucose metabolism, blood flow and hypoxia of the imaged region, which is why PET is used to study diseases such as tumors, heart disease and neurological disorders. The nuclides commonly used for labeling can be classified by radioactive half-life into short, medium and long half-life nuclides. The half-life affects the synthesis, transport, dose, scan duration and performance of the tracer as well as the sensitivity required of the PET detector, so the nuclide must be chosen according to the actual situation. Short half-life nuclides (^82Rb, ^15O, ^13N, ^62Cu, ^11C) allow multiple scans within a short time, but require an on-site cyclotron, a high injected dose, a short synthesis time, or a more sensitive PET detector. Long half-life nuclides (^64Cu, ^124I) allow longitudinal long-term study of physiological activity and are applicable at experimental sites far from a cyclotron. Medium half-life nuclides are mainly ^18F and ^68Ga; their duration is moderate, so they are frequently used. Because ^18F has a lower positron energy and range, a medium half-life, a higher branching ratio and easier labeling of biomolecules compared with other nuclides, it is the nuclide most widely used in scientific research and the clinic.
Compared with single-tracer PET imaging, which can only obtain physiological characteristics of one aspect, so that the information is limited and accurate diagnosis is difficult, multi-tracer PET can provide complementary information by imaging radioactive tracers sensitive to different physiological changes, characterizing the disease state more completely, reducing the possibility of misdiagnosis and guiding physicians toward more effective treatment. Early dual-tracer imaging injected and acquired the two tracers separately, i.e. a dual injection-dual scanning mode, so that the two tracers would not interfere with each other within their respective decay periods; this scanning mode causes great discomfort to the patient. Because it requires a long time, Koeppe et al. then proposed a dual injection-single scan mode, i.e. a single unified scan of the dual tracers, reducing the signal-superposition effect of the two tracers by injecting them at a short interval, e.g. 10-20 minutes, and separating the different tracer signals by analyzing the pixel time-activity curve (TAC) or fitting the target region of interest (ROI) with nonlinear least squares (NLS). Although this method merges the two scans into one and reduces the scan time to some extent, an injection interval of 10-20 minutes is still not a perfect scanning mode.
To realize a completely interval-free scanning mode, many researchers have made great efforts. Most current interval-free dual-tracer imaging methods use prior information, such as TAC data and compartment-model data, to separate the different tracers; however, separation based on prior information places high demands on the accuracy of that information and on the signal-to-noise ratio of the dual-tracer data, which limits practical application. Therefore, how to distinguish tracers by their intrinsic features has become an important research direction in dual-tracer imaging.
Disclosure of Invention
In view of the above, the present invention provides a BCD-ED based single-scan dual-tracer PET signal separation method that uses deep learning, a powerful feature-extraction tool, to accurately separate two single-tracer PET images from a mixed dual-tracer PET image in a data-driven way.
A single-scanning double-tracer PET signal separation method based on BCD-ED comprises the following steps:
(1) Inject the mixed dual tracer into biological tissue and perform a single dynamic PET scan to obtain the PET dynamic sinogram sequence y of the mixed dual tracer; the mixed dual tracer consists of two isotope-labeled tracers, tracer I and tracer II;
(2) Inject tracer I and tracer II separately into the same biological tissue and perform separate dynamic PET scans to obtain the corresponding PET dynamic sinogram sequences y_I and y_II;
(3) Calculate the PET dynamic image sequences x̂_I and x̂_II corresponding to y_I and y_II using a PET reconstruction algorithm, and superimpose x̂_I and x̂_II to obtain the ground truth x_true of the mixed dual-tracer PET dynamic image sequence;
(4) Repeat the above steps many times to obtain a large number of samples and divide them into a training set and a test set, where each sample group contains y, y_I, y_II, x̂_I, x̂_II and x_true;
(5) Construct a BCD-ED network consisting of a reconstruction module, a denoising module and a separation module, and train the network with the training-set samples to obtain a joint reconstruction-separation model of the dynamic dual-tracer PET signal;
(6) Input the test-set samples into the joint model one by one to reconstruct the PET dynamic image sequence of the mixed dual tracer, then denoise and separate it to obtain the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II.
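As a small illustration of step (3), the ground-truth mixed sequence is formed by pixel-wise superposition of the two single-tracer reconstructions. The sketch below uses toy integer "images" and illustrative names, not the patent's actual data:

```python
# Hypothetical sketch: x_true is the pixel-wise sum of the two reconstructed
# single-tracer dynamic image sequences (frames x pixels, toy sizes).
def superimpose(x_I, x_II):
    return [[a + b for a, b in zip(f1, f2)] for f1, f2 in zip(x_I, x_II)]

x_I = [[1, 2], [0, 1]]    # tracer I: 2 frames x 2 pixels
x_II = [[2, 3], [1, 4]]   # tracer II: same shape
x_true = superimpose(x_I, x_II)
```

The same superposition is what makes the separation problem non-trivial: both isotopes emit indistinguishable 511 keV photon pairs, so the scanner records only the sum.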
Further, the reconstruction module of the BCD-ED network uses a maximum likelihood estimation algorithm to solve for the PET dynamic image sequence x corresponding to y in the input sample; a regularization term is added to strengthen the constraint of the reconstruction problem, and during the solution several neural-network convolution kernels convolve x in different directions to generate sparse images X_k, which have a low-rank property.
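A minimal sketch of the kind of maximum-likelihood EM (MLEM) update such a reconstruction module builds on, assuming a Poisson model with system matrix G (toy sizes; illustrative, not the patent's implementation):

```python
# One MLEM iteration: multiplicative update x_j <- x_j / s_j * sum_m G[m][j] * y[m] / ybar[m]
def mlem_step(x, y, G):
    M, J = len(G), len(x)
    # forward projection: ybar_m = sum_j G[m][j] * x[j]
    ybar = [sum(G[m][j] * x[j] for j in range(J)) for m in range(M)]
    # sensitivity image: s_j = sum_m G[m][j]
    s = [sum(G[m][j] for m in range(M)) for j in range(J)]
    return [x[j] / s[j] * sum(G[m][j] * y[m] / ybar[m] for m in range(M))
            for j in range(J)]

G = [[1.0, 0.0], [0.0, 1.0]]   # toy identity system matrix (2 bins, 2 pixels)
y = [4.0, 9.0]                 # measured sinogram counts
x = [1.0, 1.0]                 # uniform initial image
for _ in range(3):
    x = mlem_step(x, y, G)
```

With an identity system matrix the fixed point is simply x = y; a real G couples many pixels per detector bin and the iteration converges gradually instead.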
Further, the denoising module of the BCD-ED network first decomposes the sparse images X_k obtained by the reconstruction module into a low-rank matrix L_k and a Poisson-noise matrix W_k, and then solves the following objective function with a singular value thresholding algorithm:

$$\min_{x,\{L_k\}} \; f(x) + \sum_{k=1}^{K}\Big(\lambda_k\|L_k\|_* + \frac{\beta}{2}\|c_k * x - L_k\|_2^2\Big)$$

wherein: c_k denotes the k-th convolution kernel, λ_k is a threshold parameter controlling the sparsity of L_k, β is a hyperparameter controlling the smoothness of the image, K is the number of convolution kernels used during reconstruction, and ‖·‖_* is the nuclear norm.

The denoised PET dynamic image sequence u is then obtained by applying the deconvolution kernels c̃_k of c_k to the thresholded low-rank components, where f(x) is obtained by converting the maximization of the likelihood of the PET dynamic image sequence x given the observed data y into the minimization of the negative log-likelihood, ‖·‖_2 is the L2 norm, and the superscripts i and i+1 denote the iteration number (i a natural number).
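The shrinkage that singular value thresholding applies to each singular value is the operator (σ − λ/β)_+. The sketch below shows just this operator on a given list of singular values (the SVD step itself is omitted; names and values are illustrative):

```python
# Soft-threshold shrinkage of singular values, as used inside SVT:
# each sigma is reduced by lam/beta and clipped at zero.
def soft_threshold(sigmas, lam, beta):
    t = lam / beta
    return [max(s - t, 0.0) for s in sigmas]

shrunk = soft_threshold([5.0, 1.5, 0.2], lam=1.0, beta=1.0)
```

Small singular values, which mostly encode noise, are zeroed out entirely, which is exactly how the nuclear-norm penalty enforces the low-rank property of L_k.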
Further, the separation module of the BCD-ED network merges the encoding-decoding idea with same-layer skip connections and consists of an encoding part and a decoding part. The encoding part is formed, from input to output, by a downsampling block D1, a pooling layer C1, a downsampling block D2, a pooling layer C2, a downsampling block D3, a pooling layer C3 and a downsampling block D4 connected in sequence; the decoding part is formed, from input to output, by an upsampling block U1, a deconvolution layer E1, an upsampling block U2, a deconvolution layer E2, an upsampling block U3 and a deconvolution layer E3 connected in sequence, wherein:
Each of the downsampling blocks D1-D4 comprises three sequentially connected layers: the first is a convolution layer with 3×3 kernels that extracts features; the second is a BatchNorm layer that normalizes the output of the previous layer; the third is a ReLU layer that applies the activation function to the output of the previous layer. D1-D4 generate 64, 128, 256 and 512 feature maps respectively. After the encoding part, the network reaches the bottom layer with the maximum number of channels; at this point the original image has been downsampled to a very small size, so a large amount of original feature information has been extracted.
The pooling layers C1-C3 all use 2×2 kernels and halve the size of the input feature maps to reduce the amount of convolution computation. Because the feature maps shrink, convolution kernels of the same size can extract features over a larger range of the original image and are more robust, and more resistant to overfitting, against small perturbations of the image such as offset and rotation.
The upsampling blocks U1-U3 each comprise three sequentially connected layers: a convolution layer with 3×3 kernels that extracts features; a BatchNorm layer that normalizes the previous output; and a ReLU layer that applies the activation function. The upsampling block U3 further includes a fourth layer, a convolution layer with 1×1 kernels, which reduces the number of channels to the number required for the output.
The input of U1 is the concatenation, along the channel dimension, of the outputs of D3 and D4; the input of U2 is the concatenation of the outputs of D2 and E1; the input of U3 is the concatenation of the outputs of D1 and E2. U1-U3 generate 256, 128 and 64 feature maps respectively. Because part of the image information is lost by downsampling in the encoding stage, image detail is hard to recover during decoding; therefore, the same-layer skip connections introduce the encoder feature maps of the same spatial size during decoding, and the two feature maps are concatenated to fuse features, so that the network can also use original information not discarded by the pooling layers and recover a clearer image.
The deconvolution layers E1-E3 double the size of the input feature maps, restoring the feature-map size and thereby countering the resolution loss caused by the preceding convolution operations.
The encoding stage of the BCD-ED network reduces the image size through the downsampling blocks and pooling layers and extracts shallow features, while deep features are acquired through the deconvolution layers and upsampling blocks of the decoding stage. Meanwhile, skip-layer operations between the downsampling and upsampling blocks combine the feature maps obtained in encoding with those obtained in decoding, merging deep and shallow features, refining the image and performing the predicted separation on the resulting feature maps. The high-resolution information passed directly from the encoder to the high-level decoder through the skip connections provides finer features for separation.
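The channel/size progression described above can be sanity-checked with a small bookkeeping sketch, assuming a 128×128 input and the stated 64-to-512 channel doubling (illustrative only; the patent does not state the input size):

```python
# Track (channels, spatial size) through the encoder: D1..D4 double channels,
# and the pooling layers C1..C3 halve the spatial size (no pool after D4).
def encoder_shapes(size, ch0=64, depth=4):
    shapes = []
    ch = ch0
    for d in range(depth):
        shapes.append((ch, size))
        if d < depth - 1:      # C1-C3 follow D1-D3 only
            size //= 2
        ch *= 2
    return shapes

enc = encoder_shapes(128)
```

The decoder mirrors this progression (256, 128, 64 feature maps for U1-U3), and each skip concatenation doubles the channel count fed into the corresponding upsampling block.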
Further, the training process of the BCD-ED network structure in the step (5) is as follows:
5.1 Initialize the network parameters, including the bias vectors and weight matrices between the network layers, the learning rate and the maximum number of iterations;
5.2 Take y in the training-set sample as the input of the reconstruction module and, jointly with the denoising module, compute the denoised PET dynamic image sequence u; then compute the difference between u and the ground truth x_true with the loss function loss1;
5.3 Input u into the separation module and output the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II; then compute the differences between S_I and x̂_I and between S_II and x̂_II with the loss function loss2;
5.4 Perform supervised training of the whole network with the combined loss function loss = loss1 + loss2, using the mean squared error (MSE) as the loss to guide back-propagation and gradient descent, until loss converges or the maximum number of iterations is reached, completing the training and obtaining the joint reconstruction-separation model of the dynamic dual-tracer PET signal.
Further, the loss function loss1 is expressed as follows:

$$loss1 = \frac{1}{N}\sum_{n=1}^{N}\left(u_n^{i+1} - x_{true,n}\right)^2$$

wherein: u_n^{i+1} is the concentration value at the n-th pixel of the PET dynamic image sequence u obtained at iteration i+1, N is the number of pixels of the PET dynamic image sequence, and x_{true,n} is the concentration value of the ground truth x_true at the n-th pixel.
Further, the loss function loss2 is expressed as follows:

$$loss2 = \frac{1}{N}\sum_{n=1}^{N}\left[\left(S_{I,n} - \hat{x}_{I,n}\right)^2 + \left(S_{II,n} - \hat{x}_{II,n}\right)^2\right]$$

wherein: S_{I,n} and S_{II,n} are the concentration values at the n-th pixel of the PET dynamic image sequences S_I and S_II, x̂_{I,n} and x̂_{II,n} are the concentration values at the n-th pixel of the PET dynamic image sequences x̂_I and x̂_II, and N is the number of pixels of the PET dynamic image sequence.
The invention realizes the reconstruction and separation of the mixed-tracer dynamic PET concentration distribution images through the BCD-ED network, jointly reconstructing and separating the mixed dynamic dual-tracer PET signal from the dynamic sinogram sequence. The BCD-ED network is based on a traditional low-rank regularization model and an encoder-decoder structure and can restore more single-tracer image detail with fewer parameters: the mixed-tracer sinogram acquired by PET is reconstructed with a maximum likelihood estimation algorithm, noise is removed from the reconstructed image with the low-rank regularization model, and finally the encoder-decoder model learns the mapping between the mixed-tracer concentration map and the two full-dose single-tracer maps, so that the detail of the single-tracer concentration maps is recovered more clearly.
The invention is a direct separation algorithm. Its advantages are that the traditional iterative reconstruction algorithm is combined with deep learning, so that the shape of the reconstructed mixed concentration map is better constrained and the noise is smaller; the encoder-decoder separation module learns the mapping between the mixed concentration map and the single-tracer concentration maps, further removing the traditional reconstruction algorithm from the dynamic dual-tracer PET separation problem; joint reconstruction and separation are performed directly from the sinogram, giving dual tracers more possibility of clinical application.
Drawings
FIG. 1 is a schematic flow chart of a dynamic dual tracer PET signal separation method of the invention.
FIG. 2 is a schematic diagram of a BCD-ED network framework according to the present invention.
FIG. 3(a) is the true concentration distribution image of frame 21 of the mixed tracer ^18F-BCPP-FE + ^18F-FDG.
FIG. 3(b) is the predicted image of frame 21 of the mixed tracer ^18F-BCPP-FE + ^18F-FDG under the BCD-ED network.
FIG. 3(c) is the predicted image of frame 21 of the mixed tracer ^18F-BCPP-FE + ^18F-FDG under the FBP algorithm.
FIG. 3(d) is the predicted image of frame 21 of the mixed tracer ^18F-BCPP-FE + ^18F-FDG under the MLEM algorithm.
FIG. 3(e) is the predicted image of frame 21 of the mixed tracer ^18F-BCPP-FE + ^18F-FDG under the UNET network.
FIG. 3(f) is the predicted image of frame 21 of the mixed tracer ^18F-BCPP-FE + ^18F-FDG under the FBP-CNN network.
FIG. 4(a) is the true concentration distribution image of frame 21 of ^18F-FDG.
FIG. 4(b) is the predicted image of frame 21 of ^18F-FDG under the BCD-ED network.
FIG. 4(c) is the predicted image of frame 21 of ^18F-FDG under the UNET network.
FIG. 4(d) is the predicted image of frame 21 of ^18F-FDG under the FBP-CNN network.
FIG. 5(a) is the true concentration distribution image of frame 21 of ^18F-BCPP-FE.
FIG. 5(b) is the predicted image of frame 21 of ^18F-BCPP-FE under the BCD-ED network.
FIG. 5(c) is the predicted image of frame 21 of ^18F-BCPP-FE under the UNET network.
FIG. 5(d) is the predicted image of frame 21 of ^18F-BCPP-FE under the FBP-CNN network.
Detailed Description
In order to more particularly describe the present invention, the following detailed description of the technical scheme of the present invention is provided with reference to the accompanying drawings and the specific embodiments.
As shown in fig. 1, the single-scanning dynamic dual-tracer PET signal separation method based on a pre-trained BCD-ED network of the invention comprises the following steps:
(1) Training set data is prepared.
1.1 Inject the mixed dual tracer into biological tissue and perform a single dynamic PET scan to obtain the PET dynamic sinogram sequence y of the mixed dual tracer; the mixed dual tracer consists of two isotope-labeled tracers, tracer I and tracer II;
1.2 Inject tracer I and tracer II separately into the same biological tissue and perform separate dynamic PET scans to obtain the corresponding PET dynamic sinogram sequences y_I and y_II;
1.3 Calculate the PET dynamic image sequences x̂_I and x̂_II corresponding to y_I and y_II with a PET reconstruction algorithm, and superimpose x̂_I and x̂_II to obtain the ground truth x_true of the mixed dual-tracer PET dynamic image sequence;
1.4 Repeat the above steps many times to obtain a large number of PET dynamic sinogram sequences y, y_I, y_II and PET dynamic image sequences x_true, x̂_I, x̂_II:

y = [y_1, y_2, ..., y_d],  y_I = [y_I,1, ..., y_I,d],  y_II = [y_II,1, ..., y_II,d]
x_true = [x_true,1, ..., x_true,d],  x̂_I = [x̂_I,1, ..., x̂_I,d],  x̂_II = [x̂_II,1, ..., x̂_II,d]

wherein: y_1 to y_d are the mixed dual-tracer sinograms of frames 1 to d of y, y_I,1 to y_I,d the single-tracer sinograms of frames 1 to d of y_I, and y_II,1 to y_II,d the single-tracer sinograms of frames 1 to d of y_II; x_true,1 to x_true,d are the ground-truth mixed dual-tracer concentration maps of frames 1 to d, x̂_I,1 to x̂_I,d the single-tracer concentration maps of frames 1 to d of x̂_I, and x̂_II,1 to x̂_II,d those of x̂_II; d is the number of dynamic PET scan frames.
(2) Training set and test set data are prepared.
From the samples (y, x_true, x̂_I, x̂_II), 4/5 are randomly taken as the training set and the remaining 1/5 as the test set; no sample in the test set appears in the training set.
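A small sketch of the 4/5 : 1/5 random split with disjoint train and test sets (indices stand in for the (y, x_true, x̂_I, x̂_II) sample tuples; the seed is illustrative):

```python
# Shuffle sample indices with a fixed seed, then cut at 80%.
import random

def split(indices, train_frac=0.8, seed=0):
    idx = list(indices)
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * train_frac)
    return idx[:cut], idx[cut:]

train, test = split(range(10))
```

Splitting by index guarantees the stated property that no test-set sample appears in the training set.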
(3) Construct the BCD-ED network shown in fig. 2. The network framework has three modules: reconstruction, denoising and separation, introduced as follows:
in the initialization process, a dynamic sinogram sequence y and a system matrix G of the mixed dual tracer agent acquired from PET are input, a maximum likelihood estimation algorithm is selected in a reconstruction module to solve a dynamic concentration diagram sequence x of PET, and the expectation of obtaining a reconstructed concentration diagram x is as follows:
the frame strengthens the constraint of the reconstruction problem by adding the regularization term, and can reduce the difference between adjacent pixels in the reconstructed image after adding the constraint, so that the reconstructed image is smoother; the kernel c is convolved in this process using K neural networks k Convolving the image X in different directions to produce a sparse image X k The method comprises the steps of carrying out a first treatment on the surface of the Sparse feature images can be better learned with a large amount of data, and these image matrices have low rank properties.
X_k = c_k * x
In practice, X_k generally contains some noise. X_k can be decomposed into a low-rank matrix L_k and a Poisson-noise matrix W_k, and adding the nuclear norm ‖L_k‖_* achieves the purpose of image denoising:

X_k = L_k + W_k
The hyperparameter β controls the smoothness of the image and λ_k is a threshold parameter controlling the sparsity of L_k; after initializing the parameters λ_k and β, the subproblem can be solved by the singular value thresholding method:

$$L_k^{i+1} = U \,\mathrm{diag}\big((\sigma_p - \lambda_k/\beta)_+\big) V^T, \qquad U\Sigma V^T = \mathrm{SVD}(c_k * x^i)$$

wherein: the nuclear norm of L_k is represented by the sum of its singular values, σ_p is the p-th largest singular value, i is the iteration number, and (x)_+ = max(x, 0) performs the soft-threshold shrinkage. The convolution-kernel filters c_k of the neural network extract image features; since the K convolution kernels c_k extract the features of the concentration map x, the extracted features can represent x equivalently. The soft threshold thus yields the concentration map x^{i+1}, and applying the deconvolution kernels c̃_k of c_k then gives the denoised image u^{i+1}, where f̂(x) is obtained by converting the maximization of the likelihood of the PET concentration map x given the observed data y into the minimization of the negative log-likelihood. This u^{i+1}, obtained by convolutional feature extraction, soft-threshold shrinkage and deconvolution of the concentration map x, is the denoised concentration map.
(4) The training set is input into the network for training, and the training process is as follows:
4.1 Initialize the BCD-ED network, including setting the numbers of layers of the input, hidden and output layers; the parameters to initialize include the number of iterations, the number of convolution kernels and the learning rate.
4.2 Input the dynamic sinogram sequence y of the mixed dual tracer acquired by PET and the system matrix G into the denoising-reconstruction module of the BCD-ED network for training; the loss between x_true,n and u^{i+1} is computed by the following error function, and the bias vectors and weight matrices between the layers of the neural network are corrected and updated by gradient descent:

$$loss1 = \frac{1}{N}\sum_{n=1}^{N}\left(u_n^{i+1} - x_{true,n}\right)^2$$

wherein: u^{i+1} is the reconstructed, denoised mixed-tracer concentration map, x_true,n is the true concentration value of the mixed tracer at the n-th pixel of the image, and N is the total number of pixels in the image.
4.3 Input the three-dimensional reconstructed concentration map with time-frame information obtained from the denoising-reconstruction module into the separation module of the BCD-ED network. The module has a symmetric structure; except for the first and last convolution layers, every convolution layer is followed by a BatchNorm layer and a ReLU activation layer. Within each level the network extracts image features through convolution layers, and then a 2×2 max-pooling layer halves the feature-map size to reduce the amount of convolution computation. Because the feature maps shrink, convolution kernels of the same size can extract features over a larger range of the original image and are more robust, and more resistant to overfitting, against small perturbations such as offset and rotation. The upsampling part decodes the feature maps back to the original image size with 2×2 deconvolutions. Since downsampling during encoding loses part of the image information, image detail is hard to recover during decoding; therefore same-layer skip connections introduce the encoder feature maps of the same size, and the two feature maps are concatenated to fuse features, so that the network can also use original information not discarded by the pooling layers and recover a clearer image.
The separation module uses the mean squared error (MSE), shown below, as the loss to guide back-propagation and gradient descent, and finally outputs the two separated, denoised tracer concentration maps:

$$loss2 = \frac{1}{N}\sum_{n=1}^{N}\sum_{i\in\{I,II\}}\left(S_{i,n} - \hat{x}_{i,n}\right)^2$$

wherein: S_{i,n} and x̂_{i,n} respectively denote the predicted and true concentration values of tracer i at the n-th pixel of the image.
4.4 Obtain the loss function loss2 of the separation module and add it to the loss function loss1 of step 4.2 to form the combined loss function; then jointly train the denoising part of the block-coordinate-descent network and the encoder-decoder separation module, training for M epochs and saving the model parameters. During testing, the trained network is used to separate the test-set tracers.
(5) And (5) evaluating results.
The reconstruction-separation results are typically evaluated using the peak signal-to-noise ratio (Peak Signal to Noise Ratio, PSNR) and mean structural similarity (Mean Structural Similarity, MSSIM) indices:

$$PSNR = 10 \log_{10}\!\left(\frac{MAX^2}{\frac{1}{N}\sum_{n=1}^{N}\left(S_{i,n} - \hat{x}_{i,n}\right)^2}\right)$$

$$MSSIM = \frac{1}{K}\sum_{k=1}^{K}\frac{\left(2\mu_{S}\mu_{\hat{x}} + C_1\right)\left(2\sigma_{S\hat{x}} + C_2\right)}{\left(\mu_{S}^2 + \mu_{\hat{x}}^2 + C_1\right)\left(\sigma_{S}^2 + \sigma_{\hat{x}}^2 + C_2\right)}$$

wherein: μ_S and μ_x̂ denote the mean values of the predicted image and the truth map, σ_S and σ_x̂ their standard deviations, σ_Sx̂ the covariance, K the total number of image blocks, MAX the maximum value in the image, C_1 = (0.01·MAX)^2 and C_2 = (0.03·MAX)^2 are constants, S_{i,n} and x̂_{i,n} respectively denote the predicted and true concentration values of tracer i at the n-th pixel of the image, and N is the total number of pixels in the image.
(6) Acquisition and comparison of experimental data.
In the real experiment, five male rhesus monkeys (macaque) weighing 4.7–8.7 kg underwent dynamic PET scans on a high-resolution small-animal PET scanner (SHR-38000; Hamamatsu Photonics K.K., Hamamatsu, Japan). Each monkey received an intravenous injection into the right lower limb of about 150 MBq of 18F-FDG before the first scan and about 240 MBq of 18F-BCPP-FE before the second scan, with an interval of more than one week between the two scans to ensure complete metabolism of the tracer in the body. During scanning, to activate the hand region of the left-hemisphere sensory cortex, a vibrator (mini MASSAGER G-2; Kawasaki-Seiji Co., Ltd., Tokyo, Japan) applied a 93 ± 2 Hz tactile stimulus to the monkey's right forepaw. Each scan lasted 120 minutes in total with a sampling protocol of 6 × 10 s, 2 × 30 s, 8 × 60 s, 10 × 300 s and 6 × 600 s, finally yielding 32-frame dynamic PET data of image size 124 × 148 × 108, from which 80 slices were selected. After the two tracer concentration maps were superposed, the mixed concentration map was projected into a sinogram; this was done with the simple strip-integral system model of the Michigan Image Reconstruction Toolbox (Fessler 1994) using 200 projection angles and 200 detectors, giving a 200 × 200 sinogram to which Poisson noise was added. One of the five monkey brain datasets was randomly chosen as the test set and the other four as the training set, i.e. a training-to-test ratio of 4:1.
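The projection and noise step can be sketched with a toy rotate-and-sum projector standing in for the Michigan IRT strip-integral system model (the projector and count scale are illustrative assumptions; the 200 angles and 200 detectors follow the text):

```python
import numpy as np
from scipy.ndimage import rotate

def simple_sinogram(image, n_angles=200):
    """Toy line-integral projector: rotate the image and sum columns per angle.
    A stand-in for the strip-integral system model, not a replica of it."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def add_poisson_noise(sinogram, scale=1e4, rng=None):
    """Scale to an assumed expected-count level, draw Poisson counts, scale back."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.clip(sinogram, 0, None)
    return rng.poisson(s / s.max() * scale) * s.max() / scale

mixed = np.random.rand(200, 200)   # stand-in for one superposed dual-tracer frame
sino = simple_sinogram(mixed)      # (200 angles, 200 detector bins)
noisy = add_poisson_noise(sino)
```

Because Poisson noise is count-dependent, the `scale` parameter (a hypothetical total-count level) controls how noisy the simulated sinogram is.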
Figures 3(a) to 3(f) compare the reconstruction results of the conventional methods and the neural networks. Owing to the introduction of the underlying reconstruction model and the system matrix, the BCD-ED network and the two conventional methods, FBP and MLEM, constrain the image contour better than the other two neural-network methods, so their reconstructions are closer to the truth map; the conventional methods, however, leave relatively more noise in the result. Although the recently proposed FBP-CNN network incorporates deep learning, the number of parameters to be trained is huge, training is difficult, and its final reconstruction clearly contains more noise and lacks image detail. The result of the UNET network looks better in image detail, but its concentration values are too high. By comparison, the BCD-ED network has the fewest training parameters, its reconstruction is closest to the truth in both image detail and concentration values, and the image is smoother thanks to the denoising module.
As Figures 4(a) to 4(d) and 5(a) to 5(d) make clear, although the separation modules of the three networks all use a basic encoder-decoder structure, the BCD-ED network is closer to the true values in both shape detail and concentration than the other two methods, because the model constraints introduced in the preceding reconstruction module give the separation module a higher-quality reconstructed mixed concentration map as input. In addition, the separation module of the BCD-ED network uses same-level skip connections; compared with the FBP-CNN network, which lacks such connections, this feature-fusion approach is more conducive to restoring image detail.
The foregoing description of the embodiments is provided to enable a person of ordinary skill in the art to make and use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles described herein may be applied to other embodiments without inventive effort. The present invention is therefore not limited to the embodiments described above; improvements and modifications made by those skilled in the art on the basis of this disclosure shall fall within the protection scope of the present invention.

Claims (4)

1. A single-scanning double-tracer PET signal separation method based on BCD-ED comprises the following steps:
(1) Carrying out a single dynamic PET scan on biological tissue injected with the mixed dual tracer to obtain the PET dynamic sinogram sequence y corresponding to the mixed dual tracer; the mixed dual tracer consists of a tracer I and a tracer II labeled with the same isotope;
(2) Respectively injecting tracer I and tracer II into the same biological tissue and separately performing dynamic PET scans to obtain the PET dynamic sinogram sequences y_I and y_II corresponding to tracer I and tracer II;
(3) Computing the PET dynamic image sequences x_I and x_II corresponding to y_I and y_II with a PET reconstruction algorithm, and superposing x_I and x_II to obtain the true value x_true of the PET dynamic image sequence of the mixed dual tracer;
(4) Repeating the above steps multiple times to obtain a large number of samples and dividing them into a training set and a test set, wherein each group of samples comprises y, y_I, y_II, x_I, x_II and x_true;
(5) Constructing a BCD-ED network consisting of a reconstruction module, a denoising module and a separation module, and training the BCD-ED network by using a training set sample to obtain a reconstruction-separation combined model of a dynamic dual-trace PET signal;
the reconstruction module solves the PET dynamic image sequence x corresponding to y in an input sample with a maximum-likelihood estimation algorithm, strengthens the constraint of the reconstruction problem by adding a regularization term, and, during the reconstruction, convolves x in different directions with several neural-network convolution kernels to generate sparse images X_k having a low-rank property;
the denoising module first decomposes the sparse images X_k obtained from the reconstruction module into a low-rank matrix L_k plus a Poisson noise matrix W_k, and then solves the following objective function with a singular-value-thresholding algorithm:

min_{L_k} Σ_{k=1}^{K} [ λ_k·||L_k||_* + (β/2)·||C_k x − L_k − W_k||_2² ]

wherein C_k denotes the k-th convolution kernel, λ_k is the threshold parameter controlling L_k, β is the hyper-parameter controlling image smoothness, K is the number of convolution kernels used in the reconstruction, and ||·||_* is the nuclear norm;
the method comprises the following steps:
wherein:c is k Is used for deconvolution,/->Representing the estimation result obtained by converting the maximized likelihood function of the PET dynamic image sequence x under the given observation data y into the minimized likelihood function negative logarithm 2 For 2 norms, u is a PET dynamic image sequence output after denoising, the superscripts i and i+1 represent iteration times, and i is a natural number;
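A minimal NumPy sketch of the singular-value-thresholding step on which the denoising module relies (SVT is the proximal operator of the nuclear norm; the threshold values and the exact split into L_k and W_k here are illustrative assumptions):

```python
import numpy as np

def svt(X, lam):
    """Singular value thresholding: minimiser of lam*||L||_* + 0.5*||X - L||_F^2,
    obtained by soft-thresholding the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def denoise_step(X_k_list, lam_list):
    """One denoising pass: shrink each sparse code X_k toward a low-rank L_k;
    the residual W_k = X_k - L_k absorbs the (Poisson-like) noise."""
    L = [svt(X, lam) for X, lam in zip(X_k_list, lam_list)]
    W = [X - Lk for X, Lk in zip(X_k_list, L)]
    return L, W
```

With lam = 0 the input passes through unchanged, while a very large lam collapses L_k to zero, so λ_k directly trades off rank reduction against data fidelity.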
the separation module comprises an encoding part and a decoding part, wherein the encoding part is formed by sequentially connecting a downsampling block D1, a pooling layer C1, a downsampling block D2, a pooling layer C2, a downsampling block D3, a pooling layer C3 and a downsampling block D4 from input to output, and the decoding part is formed by sequentially connecting an upsampling block U1, a deconvolution layer E1, an upsampling block U2, a deconvolution layer E2, an upsampling block U3 and a deconvolution layer E3 from input to output, wherein:
each of the down-sampling blocks D1–D4 comprises a three-layer structure connected in sequence: the first layer is a convolution layer with a 3×3 kernel that extracts features; the second layer is a BatchNorm layer that normalizes the output of the previous layer; the third layer is a ReLU layer that applies the activation function to the output of the previous layer; D1–D4 generate 64, 128, 256 and 512 feature maps respectively;
the pooling layers C1–C3 all use 2×2 kernels and halve the size of the input feature maps so as to reduce the amount of convolution computation;
the up-sampling blocks U1–U3 each comprise a three-layer structure connected in sequence: the first layer is a convolution layer with a 3×3 kernel that extracts features; the second layer is a BatchNorm layer that normalizes the output of the previous layer; the third layer is a ReLU layer that applies the activation function to the output of the previous layer; the up-sampling block U3 further comprises a fourth layer, a convolution layer with a 1×1 kernel, which reduces the number of channels to produce the output result;
the input of U1 is the result of splicing the outputs of D3 and D4 in the channel dimension, the input of U2 is the result of splicing the outputs of D2 and E1 in the channel dimension, the input of U3 is the result of splicing the outputs of D1 and E2 in the channel dimension, and U1–U3 generate 256, 128 and 64 feature maps respectively;
the deconvolution layers E1–E3 double the size of the input feature maps and reduce the number of feature maps;
(6) Inputting the test-set samples one by one into the combined model reconstructs the PET dynamic image sequence of the mixed dual tracer, which is then denoised and separated to obtain the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II.
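For orientation only, the channel/size bookkeeping of the separation module can be traced under two stated assumptions: a hypothetical 128×128 input, and the standard U-Net reading in which each deconvolution doubles the spatial size and reduces the channel count by half before the same-level splice:

```python
def trace_shapes(h=128, w=128):
    """Trace (channels, H, W) through the encoder-decoder of claim 1.
    Channel counts follow D1-D4 (64/128/256/512) and U1-U3 (256/128/64);
    the input spatial size is an illustrative assumption."""
    s = {}
    # encoder: conv blocks keep H and W; each 2x2 pooling halves them
    s["D1"] = (64, h, w)
    s["D2"] = (128, h // 2, w // 2)
    s["D3"] = (256, h // 4, w // 4)
    s["D4"] = (512, h // 8, w // 8)
    # decoder: deconvolve (double H, W; halve channels), then splice
    s["U1_in"] = (256 + 256, h // 4, w // 4)   # deconvolved D4 ++ D3
    s["U1"] = (256, h // 4, w // 4)
    s["U2_in"] = (128 + 128, h // 2, w // 2)   # deconvolved U1 ++ D2
    s["U2"] = (128, h // 2, w // 2)
    s["U3_in"] = (64 + 64, h, w)               # deconvolved U2 ++ D1
    s["U3"] = (64, h, w)                       # 1x1 conv then cuts channels
    return s
```

Each splice doubles the channel count of the decoder branch, which is why the decoder blocks halve it again before the next level.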
2. The single scan dual tracer PET signal separation method of claim 1 wherein: the training process of the BCD-ED network structure in the step (5) is as follows:
5.1 initializing the network parameters, including the bias vectors and weight matrices between network layers, the learning rate and the maximum number of iterations;
5.2 taking y from a training-set sample as the input of the reconstruction module and computing, jointly with the denoising module, the denoised PET dynamic image sequence u, then computing the difference between u and the true value x_true with the loss function loss1;
5.3 inputting u into the separation module, whose outputs are the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II, then computing with the loss function loss2 the differences between S_I and x_I and between S_II and x_II;
5.4 performing supervised training of the whole network with the combined loss function loss = loss1 + loss2, using the mean squared error (MSE) as the loss to guide back-propagation and gradient descent, until the loss function loss converges or the maximum number of iterations is reached, thereby completing training and obtaining the reconstruction-separation combined model of the dynamic dual-tracer PET signal.
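A minimal sketch of the combined loss of step 5.4, with the MSE of each module summed as described (array names are illustrative):

```python
import numpy as np

def mse(pred, truth):
    """Mean squared error, the per-module loss used to guide back-propagation."""
    return float(np.mean((pred - truth) ** 2))

def joint_loss(u, x_true, s1, s2, x1_true, x2_true):
    """loss = loss1 (denoised mixed sequence u vs x_true)
            + loss2 (separated sequences S_I, S_II vs their truths)."""
    loss1 = mse(u, x_true)
    loss2 = mse(s1, x1_true) + mse(s2, x2_true)
    return loss1 + loss2
```

Because the two terms are simply added, gradients from the separation loss also flow back through the denoising part, which is what makes the training joint.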
3. The single-scan dual-tracer PET signal separation method of claim 2, wherein the loss function loss1 is expressed as follows:

loss1 = (1/N) Σ_{n=1}^{N} ( u_n^{i+1} − x_true,n )²

wherein u_n^{i+1} is the concentration value at the n-th pixel of the PET dynamic image sequence u obtained at the (i+1)-th iteration, N is the number of pixels of the PET dynamic image sequence, and x_true,n is the concentration value of the true PET dynamic image sequence x_true at the n-th pixel.
4. The single-scan dual-tracer PET signal separation method of claim 2, wherein the loss function loss2 is expressed as follows:

loss2 = (1/N) Σ_{n=1}^{N} [ ( S_I,n − x_I,n )² + ( S_II,n − x_II,n )² ]

wherein S_I,n and S_II,n are the concentration values at the n-th pixel of the PET dynamic image sequences S_I and S_II respectively, x_I,n and x_II,n are the concentration values at the n-th pixel of the PET dynamic image sequences x_I and x_II respectively, and N is the number of pixels of the PET dynamic image sequence.
CN202110840914.9A 2021-07-23 2021-07-23 BCD-ED-based single-scanning double-tracer PET signal separation method Active CN113476064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110840914.9A CN113476064B (en) 2021-07-23 2021-07-23 BCD-ED-based single-scanning double-tracer PET signal separation method


Publications (2)

Publication Number Publication Date
CN113476064A CN113476064A (en) 2021-10-08
CN113476064B true CN113476064B (en) 2023-09-01

Family

ID=77943715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110840914.9A Active CN113476064B (en) 2021-07-23 2021-07-23 BCD-ED-based single-scanning double-tracer PET signal separation method

Country Status (1)

Country Link
CN (1) CN113476064B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising
CN114998249B (en) * 2022-05-30 2024-07-02 浙江大学 Double-tracing PET imaging method constrained by space-time attention mechanism

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109009179A (en) * 2018-08-02 2018-12-18 浙江大学 Identical isotope labelling dual tracer PET separation method based on depth confidence network
CN109993825A (en) * 2019-03-11 2019-07-09 北京工业大学 A kind of three-dimensional rebuilding method based on deep learning
CN110490832A (en) * 2019-08-23 2019-11-22 哈尔滨工业大学 A kind of MR image reconstruction method based on regularization depth image transcendental method
CN111127356A (en) * 2019-12-18 2020-05-08 清华大学深圳国际研究生院 Image blind denoising system
CN111166368A (en) * 2019-12-19 2020-05-19 浙江大学 Single-scanning double-tracer PET signal separation method based on pre-training GRU
CN111640075A (en) * 2020-05-23 2020-09-08 西北工业大学 Underwater image occlusion removing method based on generation countermeasure network

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9036885B2 (en) * 2012-10-28 2015-05-19 Technion Research & Development Foundation Limited Image reconstruction in computed tomography
US9734601B2 (en) * 2014-04-04 2017-08-15 The Board Of Trustees Of The University Of Illinois Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms


Non-Patent Citations (1)

Title
Review of positron emission tomography reconstruction algorithms; Ye Huajun et al.; Journal of Biomedical Engineering, No. 19; full text *


Similar Documents

Publication Publication Date Title
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
CN111627082B (en) PET image reconstruction method based on filtering back projection algorithm and neural network
CN109598722B (en) Image analysis method based on recurrent neural network
CN113476064B (en) BCD-ED-based single-scanning double-tracer PET signal separation method
CN111429379B (en) Low-dose CT image denoising method and system based on self-supervision learning
CN109615674B (en) Dynamic double-tracing PET reconstruction method based on mixed loss function 3D CNN
CN109636869B (en) Dynamic PET image reconstruction method based on non-local total variation and low-rank constraint
CN112258456B (en) Three-dimensional image segmentation method based on convolutional neural network supervision
CN113160347B (en) Low-dose double-tracer PET reconstruction method based on attention mechanism
WO2024011797A1 (en) Pet image reconstruction method based on swin-transformer regularization
CN112819914A (en) PET image processing method
Xia et al. Physics-/model-based and data-driven methods for low-dose computed tomography: A survey
Feng et al. Rethinking PET image reconstruction: ultra-low-dose, sinogram and deep learning
CN116645283A (en) Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network
CN116503506B (en) Image reconstruction method, system, device and storage medium
CN111920436A (en) Dual-tracer PET (positron emission tomography) separation method based on multi-task learning three-dimensional convolutional coding and decoding network
CN115984401A (en) Dynamic PET image reconstruction method based on model-driven deep learning
CN116757982A (en) Multi-mode medical image fusion method based on multi-scale codec
CN116245969A (en) Low-dose PET image reconstruction method based on deep neural network
CN115272100A (en) Low-dose SPECT chord map preprocessing and image reconstruction method based on teacher-student dual model
CN110335327A (en) A kind of medical image method for reconstructing directly solving inverse problem
Wang et al. 3D multi-modality Transformer-GAN for high-quality PET reconstruction
CN116152373A (en) Low-dose CT image reconstruction method combining neural network and convolutional dictionary learning
CN113379863B (en) Dynamic double-tracing PET image joint reconstruction and segmentation method based on deep learning
CN115423892A (en) Attenuation-free correction PET reconstruction method based on maximum expectation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant