CN109767410B - Lung CT and MRI image fusion algorithm - Google Patents


Info

Publication number
CN109767410B
CN109767410B (application CN201811575740.2A)
Authority
CN
China
Prior art keywords
image
fusion
loss function
convex
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811575740.2A
Other languages
Chinese (zh)
Other versions
CN109767410A (en)
Inventor
郑忍成
王鹤
单飞
杨舒一
施裕新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201811575740.2A priority Critical patent/CN109767410B/en
Publication of CN109767410A publication Critical patent/CN109767410A/en
Application granted granted Critical
Publication of CN109767410B publication Critical patent/CN109767410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention belongs to the technical field of medical image processing and computer vision, and particularly relates to a lung CT and MRI image fusion algorithm. The method casts image fusion as an inverse problem, constructs a loss function under a variational framework, and obtains the optimal solution by minimizing the loss function. Exploiting the advantages of representing images in the wavelet domain, the invention places the wavelet coefficients in the loss function and applies non-convex regularization to them to increase their sparsity, thereby obtaining a better image recovery effect; the convexity of the overall loss function is preserved by adjusting parameters, so that a global optimal solution can be obtained by convex optimization, and the fused image is finally obtained by the inverse wavelet transform. The invention combines the advantages of CT imaging for lung texture with the advantages of MRI imaging sequences for lung lesions, enriches the information content of the fused image, and makes it easier for doctors to see on one fused image the complementary information that the two imaging sensors provide about the lung, so that a more accurate judgment can be made in a shorter time.

Description

Lung CT and MRI image fusion algorithm
Technical Field
The invention belongs to the technical field of medical image processing and computer vision, and particularly relates to a lung CT and MRI image fusion algorithm.
Background
The lung is a vital organ of the human respiratory system, and lung diseases are among the major diseases threatening human life today; common examples are lung cancer and tuberculosis, of which lung cancer is one of the malignant tumors with the highest incidence and mortality and the greatest threat to human health and life. Over the past 50 years, many countries have reported a marked increase in the incidence and mortality of lung cancer: in men it ranks first among all malignant tumors in both incidence and mortality, while in women it ranks second in incidence and second in mortality. For the detection of lung cancer and other diseases, lung CT scanning is usually adopted: a CT scan captures most of the texture characteristics of the lung, but does not display the features of the lesion area clearly, so in clinical practice a nuclear magnetic resonance (MRI) scan is usually performed on the patient to analyze the lesion area in depth, make a diagnosis and give corresponding treatment recommendations. Pulmonary MRI scans typically use T2-weighted sequences and DWI sequences to better image lung lesions. The clinician therefore needs to repeatedly locate and compare the CT and MRI scans in order to combine the significant information displayed by the two kinds of images, which is time-consuming and labor-intensive and affects both the quality and the efficiency of diagnosis. Aiming at this problem, the invention provides a lung CT and MRI image fusion algorithm, so that the clinician can directly obtain the needed key information from the registered and fused images, thereby improving diagnostic accuracy and saving a large amount of time otherwise spent locating and comparing images across different modalities.
Research on medical image fusion has mostly addressed the fusion of head CT and MRI images to display different structural features, whereas studies on other parts such as the lung are scarce, with no specific application data reported. Medical image fusion algorithms mainly include the traditional pixel-level methods, with minor improvements to the pixel-value selection scheme: no transform-domain processing is applied to the images, and the corresponding pixels of the source images are combined directly by average weighting, gray-value selection and other simple operations to form a new image. Another class of methods performs a multi-level decomposition of the images in the wavelet domain, builds an image wavelet pyramid, and treats the wavelet coefficients of different levels differently; this has been a significant breakthrough in medical image fusion and achieves better results than the traditional fusion algorithms in some scenarios, but it also has certain limitations: it sometimes causes loss of image detail and is easily disturbed by noise.
The image fusion algorithm used in the invention casts image fusion as an inverse problem, constructs a loss function under a variational framework, and computes the optimal solution by minimizing the loss function. The algorithm exploits the advantages of representing images in the wavelet domain: the wavelet coefficients are placed in the loss function and non-convex regularization is applied to them to increase their sparsity, thereby obtaining a better image recovery effect. The convexity of the overall loss function is preserved by adjusting parameters, so that a global optimal solution can be obtained by convex optimization, and the fused image is finally obtained by the inverse wavelet transform.
Disclosure of Invention
The invention aims to provide a lung CT and MRI image fusion algorithm that helps clinicians make an accurate diagnosis from the fused images in a short time.
The lung CT and MRI image fusion algorithm provided by the invention combines the advantages of CT imaging for lung texture with the advantages of MRI imaging sequences for lung lesions, enriches the information content of the fused image, and makes it easier for doctors to see on one fused image the complementary information that the two imaging sensors provide about the lung, so that a more accurate judgment can be made in a shorter time. Specifically, the method automatically fuses registered CT and MRI T1, T2 and DWI images under a variational model and comprises the following steps:
(1) Initial dataset registration (image preprocessing):
(1-1) Taking the T1 or T2 sequence image as the reference and setting reference points, first register the lung CT image to the T1 or T2 sequence image by the normalized mutual information maximization criterion (see the sketch after this list), and check the registration effect on the three-dimensional images;
(1-2) register the b = 0 sequence of the DWI images to the T1 or T2 sequence image by the normalized cross-correlation method, check the registration effect on the three-dimensional images, and verify that the registration between the DWI sequence and the CT image is accurate;
(1-3) finally register the sequences with b values from 100 to 800 in the DWI images to the T1 or T2 sequence image using the same transformation obtained for b = 0, and check the registration effect;
(1-4) after registration, perform preprocessing such as denoising and pixel-value normalization on each image.
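Step (1-1) relies on the normalized mutual information criterion. The following is a minimal numpy sketch of that criterion alone, not of the full registration procedure of the invention; the function name and the choice of 64 histogram bins are illustrative assumptions.

```python
import numpy as np

def normalized_mutual_information(fixed, moving, bins=64):
    """Normalized mutual information NMI = (H(F) + H(M)) / H(F, M).

    fixed, moving: 2-D arrays of equal shape, e.g. a CT slice resampled onto
    the T1/T2 grid under a candidate transform. Registration maximizes NMI
    over the transform parameters.
    """
    joint_hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A rigid or affine search that maximizes this value over candidate transforms would reproduce the criterion of step (1-1).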
(2) Constructing a model of image fusion:
(2-1) image fusion basic expression:
Y_1 = A_1 X + N_1;   Y_2 = A_2 X + N_2
where Y_1, Y_2 denote the source images to be fused, X denotes the final fused image, A_1, A_2 are linear operators (which can model a transform domain, a convolution, etc.), and N_1, N_2 are additive noise;
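For illustration, the basic expression above can be simulated as in the sketch below, in which the linear operator A_i is modelled as an optional Gaussian blur (identity when the blur width is zero); this choice of operator is an assumption made for demonstration and is not prescribed by the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def observe(x, sigma_blur=0.0, noise_std=0.01, rng=None):
    """Simulate one source image Y = A X + N for a latent image X.

    A is modelled as an optional Gaussian blur (identity when sigma_blur == 0);
    N is additive Gaussian noise with standard deviation noise_std.
    """
    rng = np.random.default_rng() if rng is None else rng
    y = gaussian_filter(x, sigma_blur) if sigma_blur > 0 else x.copy()
    return y + noise_std * rng.standard_normal(x.shape)

# Example: two simulated observations of the same latent image X
x = np.random.default_rng(0).random((384, 288))
y1 = observe(x, sigma_blur=0.0)   # e.g. a sharp, CT-like channel
y2 = observe(x, sigma_blur=1.0)   # e.g. a blurrier, MRI-like channel
```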
(2-2) Image processing inverse problem framework: the image processing problem can be viewed as an inverse problem in the form of maximizing a posterior probability. According to Bayesian theory, the maximum a posteriori (MAP) estimate can be expressed as:
Â = argmax_A P(A|B) = argmax_A P(B|A)P(A)/P(B)
where, in the image processing inverse framework, B represents the observed degraded image and A represents the original image before degradation that needs to be estimated. Since P(B) is a constant, taking logarithms gives:
Â = argmax_A [ log P(B|A) + log P(A) ]
The cost function of the image processing inverse problem can therefore be expressed as:
J(x) = f(x) + λ g(x)
where f(x) is called the data fidelity term, g(x) is called the penalty term, and λ is the regularization coefficient, used to control the weight between the data fidelity term and the penalty term;
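For completeness, the link between the MAP estimate and the cost function above can be written as a short derivation (a restatement of the standard Bayesian argument in this section's notation; the identification of f and λg is the usual negative-log correspondence):

```latex
\hat{A} = \arg\max_A \bigl[\log P(B\mid A) + \log P(A)\bigr]
        = \arg\min_A \bigl[-\log P(B\mid A) - \log P(A)\bigr]
\;\Longrightarrow\;
J(x) = \underbrace{-\log P(B\mid A)}_{f(x)} \;+\; \underbrace{\lambda\,g(x)}_{\text{from } -\log P(A)}
```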
(2-3) In the present image fusion model, the likelihood term P(B|A) can be expressed as:
P(B|A) = P({Y_i} | W_i x, H_i, β_i)
where Y_i is a source image to be fused, x is the sparse coefficient vector of the fused image in the wavelet domain, W = {W_i} is the discrete inverse wavelet transform, H = {H_i} is a linear operator that can represent, for example, the point spread function (PSF) of the blurring process, and β = {β_i} are the sensor coefficients. Since the images scanned by the different sensors are independent of each other, the above can be factorized as:
P({Y_i} | W_i x, H_i, β_i) = P(Y_1 | W_1 x, H_1, β_1) P(Y_2 | W_2 x, H_2, β_2) ⋯ P(Y_K | W_K x, H_K, β_K)
k is the number of sensors; from the above derivation, in the present algorithm, the loss function of image fusion is defined as:
Because the fusion scenario is that the CT image is fused with each MRI sequence image in turn, the loss function comprises two data fidelity terms and one penalty term ψ_B(x); λ is the regularization coefficient, controlling the relative weight between the data fidelity terms and the penalty term;
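The loss function above can be evaluated numerically as in the following sketch, which assumes H_i is the identity, uses PyWavelets for the inverse wavelet transform W, and leaves the penalty as a user-supplied callable; the function names and the 'db4' wavelet are illustrative assumptions.

```python
import numpy as np
import pywt

def synthesize(coeff_arr, coeff_slices, shape, wavelet="db4"):
    """W x: map the wavelet-coefficient array back to the image domain."""
    coeffs = pywt.array_to_coeffs(coeff_arr, coeff_slices, output_format="wavedec2")
    img = pywt.waverec2(coeffs, wavelet)
    return img[:shape[0], :shape[1]]          # crop possible boundary padding

def fusion_loss(coeff_arr, coeff_slices, y1, y2, beta, lam, penalty):
    """J(x) = beta_1/2 ||y1 - Wx||^2 + beta_2/2 ||y2 - Wx||^2 + lam * psi(x)."""
    img = synthesize(coeff_arr, coeff_slices, y1.shape)
    data1 = 0.5 * beta[0] * np.sum((y1 - img) ** 2)
    data2 = 0.5 * beta[1] * np.sum((y2 - img) ** 2)
    return data1 + data2 + lam * penalty(coeff_arr)

# coeff_arr / coeff_slices would come from a forward decomposition, e.g.:
#   coeff_arr, coeff_slices = pywt.coeffs_to_array(pywt.wavedec2(initial_guess, "db4", level=2))
```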
(2-4) Penalty term selection
The algorithm adopts sparse regularization, corresponding to the sparse representation of the image in the wavelet domain. Sparse regularization usually adopts L1-norm regularization (Tibshirani, R., Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B, 1996. 58: p. 267-288.); the classical basis pursuit denoising method combines sparse signal representation with L1-norm regularization (S. Chen, D. L. Donoho, M. A. Saunders, "Atomic decomposition by basis pursuit", SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33-61, 1998.). The L1-norm regularization term is convex and the data fidelity term is also a convex function, so the whole loss function can be shown to be convex and its global optimal solution can be found by convex optimization. The limitation of L1-norm regularization, however, is that it under-estimates the high-frequency content of the signal. The present algorithm therefore uses GMC (generalized minimax concave) regularization (Selesnick, I., Sparse Regularization via Convex Analysis. IEEE Transactions on Signal Processing, 2017. 65(17): p. 4481-4494.), whose sparsity-promoting behaviour is better than that of the L1 norm. The expression of the GMC regularizer is as follows:
ψ_B(x) = ‖x‖_1 − S_B(x)
where S_B(x) is the generalized Huber function defined by infimal convolution (B ∈ R^{M×N}). The generalized Huber function S_B is a proper, lower semi-continuous convex function, whose infimal-convolution expression is:
S_B(v) = inf_{u ∈ R^N} { ‖u‖_1 + (1/2)‖B(v − u)‖_2^2 }
where B is an M×N real matrix and v is an N-dimensional real vector.
GMC regularization is itself non-convex, but by adjusting the matrix B so that λ B^T B is dominated by the Hessian of the data fidelity terms, i.e.
B^T B ⪯ (1/λ) (β_1 W_1^T H_1^T H_1 W_1 + β_2 W_2^T H_2^T H_2 W_2) (in the sense of positive semidefiniteness),
the overall loss function still maintains its convexity. γ is the parameter controlling the degree of non-convexity of the penalty term ψ_B(x), and usually takes a value of 0.5–0.8;
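A numerical sketch of the GMC penalty and of the generalized Huber function is given below; the inner minimization defining S_B is solved approximately with a fixed number of ISTA iterations, and B is assumed to be available as an explicit matrix. Both simplifications are for illustration only.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def generalized_huber(x, B, n_iter=100):
    """S_B(x) = min_u ||u||_1 + 0.5 ||B (x - u)||^2, solved approximately by ISTA."""
    BtB = B.T @ B
    step = 1.0 / max(np.linalg.norm(BtB, 2), 1e-12)    # 1 / Lipschitz constant of the quadratic
    u = np.zeros_like(x)
    for _ in range(n_iter):
        grad = BtB @ (u - x)                            # gradient of 0.5 ||B(x - u)||^2 w.r.t. u
        u = soft(u - step * grad, step)                 # proximal (soft-threshold) step
    return np.sum(np.abs(u)) + 0.5 * np.sum((B @ (x - u)) ** 2)

def gmc_penalty(x, B, n_iter=100):
    """psi_B(x) = ||x||_1 - S_B(x)."""
    return np.sum(np.abs(x)) - generalized_huber(x, B, n_iter)
```

When B = 0 the penalty reduces to the L1 norm, and increasing the size of B makes the penalty more strongly non-convex, which is the role played by the parameter γ described above.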
(2-5) Sensor selection coefficient β: many methods are available for determining the sensor selection coefficient, also called the sensor gain; the present invention uses PCA (principal component analysis).
(3) Convex optimization of the loss function. Since the loss function constructed above retains convexity according to step (2-4), its global optimal solution can be found by convex optimization. The invention uses the forward-backward splitting algorithm (FBS), an iterative algorithm whose basic model is:
x* = argmin_x { f_1(x) + f_2(x) }
where f_1(x) is convex and differentiable with a ρ-Lipschitz continuous gradient, ρ is the Lipschitz constant, and f_2(x) may be any lower semi-continuous convex function. The basic iteration of the FBS algorithm is:
x^(i+1) = prox_{μ f_2}( x^(i) − μ ∇f_1(x^(i)) )
where μ is the step size and i denotes the i-th iteration; after the iterations are completed, the optimal solution x* is obtained.
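The FBS iteration can be written generically as in the sketch below; grad_f1 and prox_f2 are placeholders supplied by the caller, and the fixed iteration count and the example in the comments are illustrative choices.

```python
import numpy as np

def fbs(grad_f1, prox_f2, x0, mu, n_iter=30):
    """Forward-backward splitting: x <- prox_{mu f2}(x - mu * grad_f1(x)).

    grad_f1: gradient of the smooth convex term f1
    prox_f2: proximal operator of the (possibly non-smooth) convex term f2,
             called as prox_f2(v, mu)
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_f2(x - mu * grad_f1(x), mu)
    return x

# Example with f1(x) = 0.5 ||x - c||^2 and f2(x) = lam ||x||_1:
#   grad_f1 = lambda x: x - c
#   prox_f2 = lambda v, mu: np.sign(v) * np.maximum(np.abs(v) - mu * lam, 0.0)
```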
Applying the FBS algorithm to the loss function established above: in fact, the loss function optimization problem of the present invention can be regarded as a saddle-point problem,
(x*, v*) = arg min_x max_v { (β_1/2)‖Y_1 − H_1W_1x‖_2^2 + (β_2/2)‖Y_2 − H_2W_2x‖_2^2 + λ‖x‖_1 − λ‖v‖_1 − (λ/2)‖B(x − v)‖_2^2 }
where (x*, v*) is the optimal solution in (x, v). Saddle-point problems of this kind belong to the class of monotone inclusion problems, so they can be solved by the FBS algorithm.
The optimal solution x* obtained in this way is the sparse coefficient vector of the fused image in the wavelet domain.
(4) The final fused image is obtained by the discrete inverse wavelet transform.
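Step (4) relies on the inverse wavelet transform being an (essentially) lossless inverse of the decomposition used earlier. A quick PyWavelets round-trip check, with a random test array and the 'db4' wavelet as illustrative choices:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = rng.random((384, 288))                      # stand-in for a fused-image estimate

coeffs = pywt.wavedec2(img, "db4", level=2)       # 2-level decomposition
arr, slices = pywt.coeffs_to_array(coeffs)        # flatten coefficients into one array

rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
rec = rec[:img.shape[0], :img.shape[1]]           # crop possible boundary padding

print(np.max(np.abs(rec - img)))                  # ~0: decomposition and reconstruction are inverse
```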
The invention has the beneficial effects that:
the invention constructs the loss function based on the variation model under the image processing inverse problem frame, and obtains the optimal fusion effect diagram by minimizing the loss function, and the method has the advantages that:
(1) The preprocessing is convenient, and the image of CT and DWI with b value of 0 can directly enter the algorithm system after being registered with the T1 or T2 image respectively;
(2) The processing speed is high, the optimal solution can be converged after 30 times of iterative operation, and the running time of the whole algorithm system is high;
(3) Through image sparse representation, non-convex regularization is carried out by using a GMC penalty term smaller than an L1 norm, and through adjusting corresponding parameters in the non-convex regularization term, the whole loss function is ensured to have convex properties, so that a global optimal solution can be obtained through a convex optimization method; meanwhile, due to the sparse processing, the algorithm has a certain denoising effect;
(4) The whole algorithm model can be constructed into a joint problem model, such as deblurring and super resolution, by using a method for constructing a loss function.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 shows an example of fusing a CT image of a patient's lung with a T2 sequence image and with a DWI sequence image. Here, (a) is the CT source scan, (b) is the MRI T2 sequence source scan, (c) is the MRI DWI sequence source scan (b = 800), (d) is the fusion result of the CT and T2 sequence images, and (e) is the fusion result of the CT and DWI sequence images (b = 800).
Fig. 3 compares the fusion results obtained for the case of Fig. 2 using different fusion algorithms. Here, (a) is the CT source scan, (b) is the MRI T2 sequence source scan, (c) is the MRI DWI sequence source scan (b = 800), (d) and (e) are the CT/T2 and CT/DWI fusion results of the original-space fusion algorithm (method one), (f) and (g) are the CT/T2 and CT/DWI fusion results of the wavelet-domain fusion algorithm (method two), and (h) and (i) are the CT/T2 and CT/DWI fusion results of the fusion algorithm provided by the invention.
Fig. 4 compares the fusion results obtained for the case of Fig. 2 after degradation processing, using different fusion algorithms. The source scans were degraded by adding Gaussian blur (size [3 3], sigma = 1) and Gaussian noise (mean = 0, variance = 0.0002); (a) is the degraded CT source scan, (b) is the degraded MRI T2 sequence source scan, (c) is the degraded MRI DWI sequence source scan (b = 800), (d) and (e) are the CT/T2 and CT/DWI fusion results of the original-space fusion algorithm (method one) on the degraded images, (f) and (g) are the CT/T2 and CT/DWI fusion results of the wavelet-domain fusion algorithm (method two) on the degraded images, and (h) and (i) are the CT/T2 and CT/DWI fusion results of the fusion algorithm provided by the invention on the degraded images.
Detailed Description
The invention provides an automatic image fusion algorithm for lung CT images and MRI sequence images, which is further described in detail below with reference to the accompanying drawings and embodiments:
Initial data preparation: scan data of fifty lung cancer patients were collected, comprising CT scan images and MRI T1 or T2 and DWI sequence scan images for each patient; three attending physicians of the hospital's radiology department selected several slices of greater significance for fusion, and later gave comments on the fusion results for these slices.
(1) Initial dataset registration (image preprocessing):
(1-1) Taking the T1 or T2 sequence image as the reference and setting reference points, first register the lung CT image to the T1 or T2 sequence image by the normalized mutual information maximization criterion, and check the registration effect on the three-dimensional images.
(1-2) Register the b = 0 sequence of the DWI images to the T1 or T2 sequence image by the normalized cross-correlation method, check the registration effect on the three-dimensional images, and verify that the registration between the DWI sequence and the CT image is accurate.
(1-3) Finally register the sequences with b values from 100 to 800 in the DWI images to the T1 or T2 sequence image using the same transformation obtained for b = 0, and check the registration effect.
(1-4) After registration, perform preprocessing such as denoising and pixel-value normalization on each image.
(2) Constructing a model of image fusion:
(2-1) image fusion basic expression:
Y_1 = A_1 X + N_1;   Y_2 = A_2 X + N_2
where Y_1, Y_2 denote the source images to be fused, X denotes the final fused image, A_1, A_2 are linear operators (which can model a transform domain, a convolution, etc.), and N_1, N_2 are additive noise.
(2-2) Image processing inverse problem framework: the image processing problem can be viewed as an inverse problem in the form of maximizing a posterior probability. According to Bayesian theory, the maximum a posteriori (MAP) estimate can be expressed as:
Â = argmax_A P(A|B) = argmax_A P(B|A)P(A)/P(B)
where, in the image processing inverse framework, B represents the observed degraded image and A represents the original image before degradation that needs to be estimated. Since P(B) is a constant, taking logarithms gives:
Â = argmax_A [ log P(B|A) + log P(A) ]
The cost function of the image processing inverse problem can therefore be expressed as:
J(x) = f(x) + λ g(x)
where f(x) is called the data fidelity term, g(x) is called the penalty term, and λ is the regularization coefficient, used to control the weight between the data fidelity term and the penalty term.
(2-3) In the present image fusion model, the likelihood term P(B|A) can be expressed as:
P(B|A) = P({Y_i} | W_i x, H_i, β_i)
where Y_i is a source image to be fused, x is the sparse coefficient vector of the fused image in the wavelet domain, W = {W_i} is the discrete inverse wavelet transform, H = {H_i} is a linear operator that can represent, for example, the point spread function (PSF) of the blurring process, and β = {β_i} are the sensor coefficients. Since the images scanned by the different sensors are independent of each other, the above can be factorized as:
P({Y_i} | W_i x, H_i, β_i) = P(Y_1 | W_1 x, H_1, β_1) P(Y_2 | W_2 x, H_2, β_2) ⋯ P(Y_K | W_K x, H_K, β_K)
k is the number of sensors; from the above derivation, in the present algorithm, the loss function of image fusion is defined as:
because the fusion scene is the image fusion between the CT and the T1 or the T2 sequences, and the loss function comprises two data fidelity terms and a penalty term psi B (x);
(2-4) Penalty term selection
The algorithm adopts sparse regularization, corresponding to the sparse representation of the image in the wavelet domain. Sparse regularization usually adopts L1-norm regularization; the classical basis pursuit denoising method combines sparse signal representation with L1-norm regularization. The L1-norm regularization term is convex and the data fidelity term is also a convex function, so the whole loss function can be shown to be convex and its global optimal solution can be found by convex optimization. The limitation of L1-norm regularization, however, is that it under-estimates the high-frequency content of the signal. The algorithm therefore adopts GMC regularization, whose sparsity-promoting behaviour is better than that of the L1 norm; the expression of the GMC regularizer is as follows:
ψ_B(x) = ‖x‖_1 − S_B(x)
where S_B(x) is the generalized Huber function defined by infimal convolution. The generalized Huber function S_B is a proper, lower semi-continuous convex function, whose infimal-convolution expression is:
S_B(v) = inf_{u ∈ R^N} { ‖u‖_1 + (1/2)‖B(v − u)‖_2^2 }
where B is an M×N real matrix and v is an N-dimensional real vector.
GMC regularization is itself non-convex, but by adjusting the matrix B so that λ B^T B is dominated by the Hessian of the data fidelity terms, i.e.
B^T B ⪯ (1/λ) (β_1 W_1^T H_1^T H_1 W_1 + β_2 W_2^T H_2^T H_2 W_2) (in the sense of positive semidefiniteness),
the overall loss function still maintains its convexity.
(2-5) Sensor selection coefficient β: the sensor selection coefficient is also called the sensor gain; many methods are available for obtaining it, and here PCA (principal component analysis) is used. The specific procedure is as follows:
(2-5-1) each of the i original images is divided into m×n small blocks;
(2-5-2) the pixels taken from the same location of the corresponding blocks of the i original images are rearranged into vectors; these pixels can be regarded as samples of an i-dimensional random variable;
(2-5-3) the correlation matrix Σ_v of these random variables is constructed;
(2-5-4) the principal eigenvector μ = [μ_1, μ_2 … μ_i]^T of the correlation matrix is computed and normalized so that μ^T μ = 1;
(2-5-5) setting the sensor gain β to μ.
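A compact numpy sketch of steps (2-5-1) to (2-5-5) follows; for simplicity it operates on whole flattened images rather than on small blocks, which is a simplification of the block-wise procedure described above, and the function name is illustrative.

```python
import numpy as np

def sensor_gains(images):
    """Estimate sensor gains beta via PCA over co-registered source images.

    images: list of K arrays with identical shape; each image is flattened into
    one row, so the K x K matrix below describes how the sensors co-vary.
    """
    V = np.stack([im.ravel() for im in images])          # K x (M*N) data matrix
    sigma = V @ V.T / V.shape[1]                         # K x K correlation (second-moment) matrix
    eigvals, eigvecs = np.linalg.eigh(sigma)
    mu = eigvecs[:, np.argmax(eigvals)]                  # principal eigenvector
    mu = np.abs(mu)                                      # resolve the sign ambiguity
    return mu / np.linalg.norm(mu)                       # normalize so mu^T mu = 1

# beta = sensor_gains([ct_slice, t2_slice]) would give the two weights beta_1, beta_2.
```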
(3) Convex optimization of the loss function. Since the loss function constructed above retains convexity according to step (2-4), its global optimal solution can be found by convex optimization. The invention uses the forward-backward splitting algorithm (FBS), an iterative algorithm whose basic model is:
x* = argmin_x { f_1(x) + f_2(x) }
where f_1(x) is convex and differentiable with a ρ-Lipschitz continuous gradient, ρ is the Lipschitz constant, and f_2(x) may be a lower semi-continuous convex function. The basic iteration of the FBS algorithm is:
x^(i+1) = prox_{μ f_2}( x^(i) − μ ∇f_1(x^(i)) )
where μ is the step size and i denotes the i-th iteration; after the iterations are completed, the optimal solution x* is obtained.
Applying the FBS algorithm to the loss function established above: in fact, the loss function optimization problem of the present invention can be regarded as a saddle-point problem,
(x*, v*) = arg min_x max_v { (β_1/2)‖Y_1 − H_1W_1x‖_2^2 + (β_2/2)‖Y_2 − H_2W_2x‖_2^2 + λ‖x‖_1 − λ‖v‖_1 − (λ/2)‖B(x − v)‖_2^2 }
where (x*, v*) is the optimal solution in (x, v). Saddle-point problems of this kind belong to the class of monotone inclusion problems, so they can be solved by the FBS algorithm.
The specific solution procedure for the saddle-point problem is as follows:
Let ρ = max(1, γ/(1 − γ)); set the step size μ and initialize x^(0), v^(0).
For i = 0, 1, 2, …, k (k is the maximum number of iterations):
compute the intermediate variables w^(i) and u^(i) by a forward (gradient) step of the smooth part of the saddle-point function at (x^(i), v^(i));
x^(i+1) = soft(w^(i), μλ)
v^(i+1) = soft(u^(i), μλ)
end.
Here soft(·, ·) denotes the soft-thresholding operator, defined as:
soft(x, y) = max(|x| − y, 0) · sign(x)
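A hedged sketch of the complete iteration is given below under simplifying assumptions: H_i is taken as the identity and the wavelet transform is treated as orthonormal, so that B^T B reduces to a scalar multiple of the identity in the coefficient domain. The step size mu = 0.5, the 'db4' wavelet and the initialization are illustrative choices, and the explicit gradient steps are reconstructed from the saddle-point formulation above rather than copied from the patent.

```python
import numpy as np
import pywt

def soft(x, t):
    """Soft-thresholding: max(|x| - t, 0) * sign(x)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fuse_wavelet_gmc(y1, y2, beta, lam=0.005, gamma=0.8, mu=0.5,
                     n_iter=30, wavelet="db4", level=2):
    """Saddle-point FBS iteration for the fusion loss (H_i = identity assumed)."""
    def analyze(img):
        arr, slices = pywt.coeffs_to_array(pywt.wavedec2(img, wavelet, level=level))
        return arr, slices

    def synthesize(arr, slices, shape):
        img = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                            wavelet)
        return img[:shape[0], :shape[1]]                  # crop possible boundary padding

    c1, slices = analyze(y1)                              # wavelet coefficients of source 1
    c2, _ = analyze(y2)                                   # wavelet coefficients of source 2
    btb = gamma / lam * (beta[0] + beta[1])               # B^T B as a scalar (orthonormal W)

    x = (beta[0] * c1 + beta[1] * c2) / (beta[0] + beta[1])   # weighted-average initialization
    v = np.zeros_like(x)
    for _ in range(n_iter):
        grad_x = beta[0] * (x - c1) + beta[1] * (x - c2) - lam * btb * (x - v)
        grad_v = lam * btb * (v - x)
        x = soft(x - mu * grad_x, mu * lam)               # forward step, then soft-threshold
        v = soft(v - mu * grad_v, mu * lam)
    return synthesize(x, slices, y1.shape)

# fused = fuse_wavelet_gmc(ct_slice, t2_slice, beta=sensor_gains([ct_slice, t2_slice]))
```

The last comment assumes a sensor-gain helper such as the PCA sketch shown earlier; in a deblurring setting H_i would instead be a convolution operator and the gradient steps would include its adjoint.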
The optimal solution x* obtained in this way is the sparse coefficient vector of the fused image in the wavelet domain;
(4) The final fused image is obtained by the discrete inverse wavelet transform.
Specific examples and effect evaluation:
As shown in fig. 2, (a), (b) and (c) are the corresponding slices (after the registration and normalization preprocessing stages) of a CT scan, a T2 sequence scan and a DWI sequence scan (b = 800) of a patient's lung; the CT and T2 images and the CT and DWI images are fused. Taking the fusion of the CT and T2 images as an example, the loss function of step (2-3) is constructed:
J(x) = (β_1/2)‖y_1 − H_1 W_1 x‖_2^2 + (β_2/2)‖y_2 − H_2 W_2 x‖_2^2 + λ ψ_B(x)
where y_1, y_2 are the CT and T2 images, respectively; the image size is 384 × 288, the wavelet decomposition uses 2 levels, and the decomposition bookkeeping matrix is L = [96, 72; 96, 72; 192, 144; 384, 288]. The CT and T2 sensor coefficients β_1, β_2 are determined by the principal component analysis method described above. H_1, H_2 are identity matrices; a PSF can also be used to further explore the deblurring effect, and for the estimation of the PSF see (Michailovich, O.V. and A. Tannenbaum, Despeckling of medical ultrasound images. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 2006. 53(1): p. 64-78.). ψ_B(x) is the GMC penalty term, and the regularization coefficient λ is taken as 0.005. The matrix B in ψ_B(x) is set according to the convexity condition of step (2-4), with γ taken as 0.8.
After the loss function has been constructed and the parameters specified, convex optimization is carried out with the forward-backward splitting method described above, using 30 iterations, which yields the wavelet-domain sparse coefficients x* of the fused image. The final CT and T2 fused image, obtained after a 2-level discrete inverse wavelet transform, is shown in FIG. 3(h). Similarly, the CT and DWI fused image is obtained as shown in FIG. 3(i).
In the present invention, the other two image fusion algorithms used for comparison are as follows:
the method comprises the following steps: and directly fusing pixel-level pixel values in the original image space, wherein the fusion strategy is to average the pixel values of the fused image as the pixel values of the source image.
Method two: a 2-level wavelet decomposition is performed on each source image to obtain wavelet sparse coefficients; these coefficients are averaged to obtain the wavelet sparse coefficients of the fused image, and the fused image is then obtained by the inverse wavelet transform.
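The two comparison methods can be sketched as follows (method one: pixel-value averaging; method two: averaging of 2-level wavelet coefficients). The 'db4' wavelet is an illustrative assumption, since the patent does not name the wavelet used.

```python
import numpy as np
import pywt

def fuse_pixel_average(y1, y2):
    """Method one: average corresponding pixel values in the original image space."""
    return 0.5 * (y1 + y2)

def fuse_wavelet_average(y1, y2, wavelet="db4", level=2):
    """Method two: average the 2-level wavelet coefficients, then invert the transform."""
    a1, slices = pywt.coeffs_to_array(pywt.wavedec2(y1, wavelet, level=level))
    a2, _ = pywt.coeffs_to_array(pywt.wavedec2(y2, wavelet, level=level))
    fused = pywt.waverec2(pywt.array_to_coeffs(0.5 * (a1 + a2), slices,
                                               output_format="wavedec2"), wavelet)
    return fused[:y1.shape[0], :y1.shape[1]]      # crop possible boundary padding
```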
The fusion results of method one are shown in fig. 3 (d) and (e), and the fusion results of method two are shown in fig. 3 (f) and (g).
Because no absolutely perfect fused image exists to serve as a reference, and no fusion criterion or evaluation parameter is universally accepted as the best for assessing fusion quality, it is not appropriate to evaluate image fusion results using objective evaluation parameters alone. Since medical image fusion results are ultimately used clinically by doctors, the subjective evaluation of clinicians is very important for assessing the fusion effect; however, because such subjective evaluation also carries a degree of chance, the evaluation strategy adopted here takes the clinicians' subjective evaluation as primary and the objective evaluation parameters as auxiliary. The objective evaluation parameters used for image fusion are the Petrovic and Xydeas metric (Xydeas, C.S. and V. Petrovic, Objective image fusion performance measure. Electronics Letters, 2000. 36(4): p. 308-309.) and the Piella metric (Piella, G. and H. Heijmans, A new quality metric for image fusion. In Proceedings 2003 International Conference on Image Processing (Cat. No. 03CH37429), 2003.).
For the subjective evaluation, the original CT and T2 images were fused with the different methods and three attending radiologists were asked to evaluate the fusion results and give a ranking (see Tables 1 and 2). For the objective evaluation, since neither a perfect original image nor a perfect fusion reference image exists, the evaluation parameters are computed directly between the fused image and an original image; moreover, the fusion algorithm provided by the invention also has a certain image restoration effect, so the original images were further degraded (a certain amount of blur and noise was added). Let the original images to be fused be A and B and their degraded versions be C and D; C and D are fused, and the objective fusion parameters are then computed between the fused image E and the image B in order to assess the fusion performance of the various algorithms. The added blur is Gaussian blur (size = [3, 3], sigma = 1) and the added noise is white Gaussian noise (mean = 0, variance = 0.0002). The degraded CT, T2 and DWI images are shown in fig. 4 (a), (b) and (c); the degraded source images are fused, the results of method one are shown in fig. 4 (d) and (e), the results of method two are shown in fig. 4 (f) and (g), and the results of the algorithm provided by the invention are shown in fig. 4 (h) and (i). The fusion results were again ranked subjectively (see Tables 3 and 4), and the corresponding Petrovic and Xydeas metric and Piella metric were calculated for the objective parameter evaluation (see Table 5). The results show that the fusion algorithm provided by the invention has a clear advantage in both the subjective and the objective evaluation.
Table 1. Evaluation ranking by three attending physicians of the CT/T2 fusion results of Fig. 3 obtained with different fusion algorithms
CT/T2 fusion | Attending physician 1 | Attending physician 2 | Attending physician 3
Method one | 3 | 2 | 3
Method two | 2 | 2 | 2
Algorithm of the invention | 1 | 1 | 1
Table 2. Evaluation ranking by three attending physicians of the CT/DWI fusion results of Fig. 3 obtained with different fusion algorithms
CT/DWI fusion | Attending physician 1 | Attending physician 2 | Attending physician 3
Method one | 3 | 2 | 3
Method two | 2 | 2 | 2
Algorithm of the invention | 1 | 1 | 1
Table 3. Evaluation ranking by three attending physicians of the CT/T2 fusion results of the degraded source images of Fig. 4 obtained with different fusion algorithms
CT/T2 fusion | Attending physician 1 | Attending physician 2 | Attending physician 3
Method one | 3 | 3 | 3
Method two | 2 | 2 | 2
Algorithm of the invention | 1 | 1 | 1
Table 4. Evaluation ranking by three attending physicians of the CT/DWI fusion results of the degraded source images of Fig. 4 obtained with different fusion algorithms
CT/DWI fusion | Attending physician 1 | Attending physician 2 | Attending physician 3
Method one | 3 | 3 | 3
Method two | 2 | 2 | 2
Algorithm of the invention | 1 | 1 | 1
Table 5. Objective parameter evaluation of the fusion results in Fig. 4 against the undegraded source images
CT/T2 fusion | Petrovic | Piella | CT/DWI fusion | Petrovic | Piella
Method one | 0.0711 | 0.0204 | Method one | 0.0774 | 0.0187
Method two | 0.2132 | 0.5051 | Method two | 0.2092 | 0.5250
Algorithm of the invention | 0.4737 | 0.7581 | Algorithm of the invention | 0.4731 | 0.8035

Claims (1)

1. A lung CT and MRI image fusion algorithm, which automatically fuses registered CT, MRI T1, T2 and DWI images under a variational model, specifically comprising the following steps:
(1) Initial dataset registration:
(1-1) taking the T1 or T2 sequence image as the reference and setting reference points, first registering the lung CT image to the T1 or T2 sequence image by the normalized mutual information maximization criterion, and checking the registration effect on the three-dimensional images;
(1-2) registering the b = 0 sequence of the DWI images to the T1 or T2 sequence image by the normalized cross-correlation method, checking the registration effect on the three-dimensional images, and verifying that the registration between the DWI sequence and the CT image is accurate;
(1-3) finally registering the sequences with b values from 100 to 800 in the DWI images to the T1 or T2 sequence image using the same transformation obtained for b = 0, and checking the registration effect;
after registration, performing denoising and pixel-value normalization preprocessing on each image;
(2) Constructing a model of image fusion:
(2-1) image fusion basic expression:
Y_1 = A_1 X + N_1;   Y_2 = A_2 X + N_2
where Y_1, Y_2 denote the source images to be fused, X denotes the final fused image, A_1, A_2 are linear operators that can model a transform domain or a convolution, and N_1, N_2 are additive noise;
(2-2) image processing inverse problem framework: the image processing problem is regarded as an inverse problem in the form of maximizing a posterior probability; according to Bayesian theory, the maximum a posteriori (MAP) estimate is expressed as:
Â = argmax_A P(A|B) = argmax_A P(B|A)P(A)/P(B)
where, in the image processing inverse framework, B represents the observed degraded image and A represents the original image before degradation that needs to be estimated; P(B) is a constant, and taking logarithms, the maximum posterior probability is expressed as:
Â = argmax_A [ log P(B|A) + log P(A) ]
the loss function of the image processing inverse problem is expressed as:
J(x) = f(x) + λ g(x)
where f(x) is called the data fidelity term, g(x) is called the penalty term, and λ is called the regularization coefficient, used to control the weight between the data fidelity term and the penalty term;
(2-3) in the present image fusion model, the likelihood term P(B|A) is expressed as:
P(B|A) = P({Y_i} | W_i x, H_i, β_i)
where Y_i is an original image to be fused, x is the sparse coefficient vector of the fused image in the wavelet domain, W = {W_i} is the discrete inverse wavelet transform, H = {H_i} is a linear operator representing the PSF of the blurring process, and β = {β_i} are the sensor coefficients; since the images scanned by the different sensors are independent of each other, the above is converted into:
P({Y_i} | W_i x, H_i, β_i) = P(Y_1 | W_1 x, H_1, β_1) P(Y_2 | W_2 x, H_2, β_2) ⋯ P(Y_K | W_K x, H_K, β_K)
k is the number of sensors; the loss function of the image fusion is then defined as:
because the fusion scene is the image fusion between the CT image and the MRI sequence image respectively, the loss function comprises two data fidelity terms and a penalty term psi B (x) Lambda is a regularization coefficient in the formula, and the specific gravity between the data fidelity term and the loss term is controlled;
(2-4) penalty term selection
GMC regularization is adopted, corresponding to the sparse representation of the image in the wavelet domain; the expression of the GMC regularizer is as follows:
ψ_B(x) = ‖x‖_1 − S_B(x)
where S_B(x) is the generalized Huber function defined by infimal convolution; the generalized Huber function S_B is a proper, lower semi-continuous convex function, whose infimal-convolution expression is:
S_B(v) = inf_{u ∈ R^N} { ‖u‖_1 + (1/2)‖B(v − u)‖_2^2 }
where B is an M×N real matrix and v is an N-dimensional real vector;
the GMC penalty term is non-convex, but by adjusting the matrix B so that λ B^T B is dominated by the Hessian of the data fidelity terms, i.e.
B^T B ⪯ (1/λ) (β_1 W_1^T H_1^T H_1 W_1 + β_2 W_2^T H_2^T H_2 W_2) (in the sense of positive semidefiniteness),
the overall loss function still maintains its convexity; γ is the parameter controlling the degree of non-convexity of the penalty term ψ_B(x);
(2-5) sensor selection coefficient β: the sensor selection coefficient, also called the sensor gain, is obtained by principal component analysis;
(3) carrying out convex optimization of the loss function: since the constructed loss function retains convexity, its global optimal solution is found by convex optimization; specifically, the forward-backward splitting algorithm (FBS) is used, whose basic model is:
x* = argmin_x { f_1(x) + f_2(x) }
where f_1(x) is convex and differentiable with a ρ-Lipschitz continuous gradient, ρ is the Lipschitz constant, and f_2(x) is a lower semi-continuous convex function; the basic iteration of the FBS algorithm is:
x^(i+1) = prox_{μ f_2}( x^(i) − μ ∇f_1(x^(i)) )
where μ denotes the step size and i denotes the i-th iteration; after the iterations are completed, the optimal solution x* is obtained;
applying the FBS algorithm to the previously established loss function problem: the previously established loss function optimization problem is regarded as a saddle-point problem,
(x*, v*) = arg min_x max_v { (β_1/2)‖Y_1 − H_1W_1x‖_2^2 + (β_2/2)‖Y_2 − H_2W_2x‖_2^2 + λ‖x‖_1 − λ‖v‖_1 − (λ/2)‖B(x − v)‖_2^2 }
where (x*, v*) is the optimal solution in (x, v); saddle-point problems belong to the class of monotone inclusion problems and are solved by the FBS algorithm;
the optimal solution x* obtained in this way is the sparse coefficient vector of the fused image in the wavelet domain;
(4) the final fused image is obtained by the discrete inverse wavelet transform.
CN201811575740.2A 2018-12-22 2018-12-22 Lung CT and MRI image fusion algorithm Active CN109767410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811575740.2A CN109767410B (en) 2018-12-22 2018-12-22 Lung CT and MRI image fusion algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811575740.2A CN109767410B (en) 2018-12-22 2018-12-22 Lung CT and MRI image fusion algorithm

Publications (2)

Publication Number Publication Date
CN109767410A CN109767410A (en) 2019-05-17
CN109767410B true CN109767410B (en) 2024-03-08

Family

ID=66450804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811575740.2A Active CN109767410B (en) 2018-12-22 2018-12-22 Lung CT and MRI image fusion algorithm

Country Status (1)

Country Link
CN (1) CN109767410B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080681B (en) * 2019-12-16 2021-12-10 电子科技大学 3D/2D medical image registration method based on LoG operator
CN112085833B (en) * 2020-08-24 2022-09-06 南昌大学第一附属医院 Analysis method for cone beam CT and image fusion combined in-vivo three-dimensional motion of cervical vertebra

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110142316A1 (en) * 2009-10-29 2011-06-16 Ge Wang Tomography-Based and MRI-Based Imaging Systems
US9734601B2 (en) * 2014-04-04 2017-08-15 The Board Of Trustees Of The University Of Illinois Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809650A (en) * 2016-03-04 2016-07-27 北京航空航天大学 Bidirectional iteration optimization based image integrating method
CN106127719A (en) * 2016-06-20 2016-11-16 中国矿业大学 A kind of novel neutral net Method of Medical Image Fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application and Research of Medical Image Fusion Algorithms; Zhang Bin; Zheng Yongguo; Ma Fang; Dongye Changlei; Journal of Chongqing Medical University; Vol. 36, No. 2; full text *

Also Published As

Publication number Publication date
CN109767410A (en) 2019-05-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant