CN110827232B - Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN - Google Patents

Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN

Info

Publication number
CN110827232B
CN110827232B (application CN201911113248.8A)
Authority
CN
China
Prior art keywords
representative
modality
mode
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911113248.8A
Other languages
Chinese (zh)
Other versions
CN110827232A (en)
Inventor
王艳
李志昂
吴锡
周激流
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201911113248.8A priority Critical patent/CN110827232B/en
Publication of CN110827232A publication Critical patent/CN110827232A/en
Application granted granted Critical
Publication of CN110827232B publication Critical patent/CN110827232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Abstract

The invention discloses a cross-modality MRI (magnetic resonance imaging) synthesis method based on a morphological feature GAN (generative adversarial network), which comprises establishing an MRFE-GAN model comprising a residual network module and a modality representative feature extraction module; passing the source modality through the residual network module to obtain a pseudo target modality; and extracting representative features of the pseudo target modality through the modality representative feature extraction module, combining the representative features with the base information of the source modality, and fusing them to generate a synthetic target modality. The invention obtains a more realistic and effective target modality; it effectively overcomes the information gap between cross-domain modalities and effectively extracts features at different levels; and it effectively reduces the difference between the synthetic modality and the real target modality, making the synthesized image more realistic and reliable.

Description

Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a cross-modality MRI synthesis method based on a morphological feature GAN.
Background
Medical imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) are important components of modern healthcare. MRI, which can capture contrast differences in soft tissue, has become the primary imaging modality for studying neuroanatomy. By applying different pulse sequences and parameters, a wide variety of tissue contrasts can be generated when imaging the same anatomy, yielding images of different contrasts, i.e. MRI modalities. For example, by selecting pulse sequences such as magnetization-prepared rapid gradient echo (MPRAGE) and spoiled gradient recalled (SPGR), T1-weighted (T1) images can be generated that clearly depict gray and white matter tissue. In contrast, a T2-weighted (T2) image, generated by applying a pulse sequence such as dual spin echo (DSE), distinguishes fluid from cortical tissue. Additionally, fluid-attenuated inversion recovery (FLAIR) is a T2-weighted pulse sequence employed to enhance the image contrast of white matter lesions. Different contrast images of the same patient can provide different diagnostic information, and the same anatomy imaged with different tissue contrasts increases the diversity of MRI information. However, obtaining multiple different contrast images (or modalities) of the same subject is time consuming and expensive. In practice, therefore, the number of contrast modalities acquired for the same patient is always limited by factors such as limited scan time and high cost.
Although some existing methods perform modality synthesis through deep networks, the feature distributions of the source and target modalities differ greatly. Even though these methods can learn the mapping between different modalities to some extent, differences may remain between the input and the generated images that they cannot effectively reduce, resulting in a large difference between the synthesized image and the real image.
Disclosure of Invention
In order to solve the above problems, the invention provides a cross-modality MRI synthesis method based on morphological feature GAN, which obtains a more realistic and effective target modality, effectively overcomes the information gap between cross-domain modalities, effectively extracts features at different levels, and effectively reduces the difference between the synthetic modality and the real target modality, making the synthesized image more realistic and reliable.
In order to achieve this purpose, the invention adopts the following technical scheme. A cross-modality MRI synthesis method based on morphological feature GAN comprises the following steps:
establishing an MRFE-GAN model comprising a residual network module and a modality representative feature extraction (MRFE) module;
passing the source modality through the residual network module to obtain a pseudo target modality;
extracting representative features of the pseudo target modality through the modality representative feature extraction module, combining the representative features with the base information of the source modality, and fusing them to generate a synthetic target modality.
Further, the residual network module takes the source modality as input and forms the pseudo target modality by establishing an intermediate modality that simulates the target modality. In an image synthesis task, the feature information of the input and the output differs completely, which means a deeper network is required to improve synthesis performance; however, as the network deepens it may become difficult to optimize, reducing performance. The residual network effectively overcomes this problem.
Further, the residual network module comprises 3 downsampling blocks, 12 residual blocks and 3 deconvolution layers run in sequence. The downsampling blocks increase the number of feature maps from 1 to 256, and each downsampling block comprises convolution, instance normalization and ReLU layers run in sequence; each residual block comprises padding, convolution, instance normalization and ReLU layers; each deconvolution layer comprises deconvolution, instance normalization, a ReLU layer and an activation function. Instance normalization after each convolutional layer effectively maintains the independence of each modality instance. A sketch of these building blocks is given below.
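For illustration, the following is a minimal PyTorch sketch of the two building blocks. It is not the patented implementation; the kernel sizes, strides and the choice of reflection padding are assumptions the description leaves open:

```python
import torch.nn as nn

class DownsampleBlock(nn.Module):
    """Conv -> InstanceNorm -> ReLU, the downsampling block described above.
    Kernel size 3 and stride 2 are assumptions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class ResBlock(nn.Module):
    """Padding -> Conv -> InstanceNorm -> ReLU plus a skip connection;
    the skip is what keeps the deep network easy to optimize."""
    def __init__(self, ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(ch, ch, kernel_size=3),
            nn.InstanceNorm2d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection
```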
Further, the modality representative feature extraction module in the MRFE-GAN model comprises a base encoder, a representative encoder and a decoder. The source modality is input into the base encoder, which extracts base information from it; the pseudo target modality is input into the representative encoder, which extracts representative information from it. The base encoder and the representative encoder are connected in parallel to the decoder, which fuses the base information and the representative information to generate the synthetic target modality.
Further, in the modality representative feature extraction module, a source modality x and a pseudo target modality y are input, with modality distributions P(x) and P(y), respectively. Each modality is decomposed into two different distributions in its own space: P(x1|x) and P(x2|x), and P(y1|y) and P(y2|y). P(x1|x) and P(y1|y) are the base distributions of the respective modality structures, while P(x2|x) and P(y2|y) are the representative distributions; the difference between the two modalities is embodied by P(x2|x) and P(y2|y), which are representative features of each modality itself.
The modality representative feature extraction module obtains y-modality representative features based on the x-modality structure by fusing P(x1|x) and P(y2|y); or it obtains x-modality representative features based on the y-modality structure by fusing P(x2|x) and P(y1|y).
Further, the base encoder comprises 3 downsampling blocks and 4 residual blocks run in sequence, where each downsampling block comprises convolution, instance normalization and ReLU layers run in sequence, and each residual block comprises padding, convolution, instance normalization and ReLU layers run in sequence;
the representative encoder comprises 5 groups of combined convolution and ReLU layers, a global average pooling layer, and 3 groups of linear layers. All instance normalization layers before the ReLU layers of the representative encoder are discarded; global average pooling converts each two-dimensional feature channel into a real number, and the correlation among channels is modeled in the three groups of linear layers to obtain the mean and standard deviation of the representative information. This improves synthesis accuracy while avoiding the deletion, caused by instance normalization, of the original feature mean and standard deviation that carry important representative information;
and the decoder uses the standard deviation and mean obtained by the representative encoder as the scale parameter and shift parameter of its adaptive instance normalization layer, so that the base information and the representative information are fused to generate the synthetic target modality.
Further, the step of the decoder fusing the base information and the representative information to generate the synthetic target modality comprises:
first, the input α is normalized by the adaptive instance normalization layer:
α′ = (α - Ξ(α)) / Δ(α);
wherein α is the input information of the adaptive instance normalization layer, and Δ(α) and Ξ(α) are the standard deviation and mean of α, respectively;
then, the normalized α′ is multiplied by the scale parameter and the shift parameter is added, completing the fusion of the two kinds of information and generating the synthetic target modality by reconstruction:
α″ = Δ′ * α′ + Ξ′;
wherein the scale parameter Δ′ is the standard deviation of the representative information and the shift parameter Ξ′ is the mean of the representative information.
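The two equations above translate directly into code. Below is a minimal sketch of this AdaIN fusion, assuming feature maps of shape (N, C, H, W) and representative statistics broadcast per channel; the epsilon term is an added numerical-stability assumption:

```python
import torch

def adain(feat: torch.Tensor, rep_std: torch.Tensor, rep_mean: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """alpha'' = Delta' * alpha' + Xi', where alpha' = (alpha - Xi(alpha)) / Delta(alpha).
    rep_std and rep_mean come from the representative encoder, shaped (N, C, 1, 1)."""
    mean = feat.mean(dim=(2, 3), keepdim=True)       # Xi(alpha)
    std = feat.std(dim=(2, 3), keepdim=True) + eps   # Delta(alpha)
    normalized = (feat - mean) / std                 # alpha'
    return rep_std * normalized + rep_mean           # alpha''
```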
Further, the model also comprises a first discriminator module and a second discriminator module. The real target modality and the pseudo target modality are input into the first discriminator module for loss calculation; the synthetic target modality and the real target modality are input into the second discriminator module for loss calculation. The loss calculations of the first and second discriminator modules are combined to form the total loss function of the MRFE-GAN model, which is minimized through optimization training; the pseudo target modality is optimized by adjusting parameters, improving the realism of the synthesis result.
Further, a pseudo target modality G(x) is generated in the residual network module from the source modality x, and G(x) serves as an input of the modality representative feature extraction module for extracting representative feature information, comprising the following steps:
the pseudo target modality G(x) is placed in the first discriminator module D1 for the first loss calculation:
L_RESNET(G, D1) = E_y[log D1(y)] + E_x[log(1 - D1(G(x)))];
where y is the real target modality and E denotes the expected value over the inputs and outputs;
a reconstruction loss with the L1 norm helps the network capture the overall appearance and relatively coarse features of the target modality in the synthetic modality:
L_RESNET-L1(G) = E_{x,y}[||y - G(x)||_1];
the modality representative feature extraction module takes x and G(x) as input and outputs the synthetic target modality M(x, G(x)), which is placed in the second discriminator module D2 for loss calculation:
L_MRFE(M, D2) = E_y[log D2(y)] + E_{x,G(x)}[log(1 - D2(M(x, G(x))))];
the L1 penalty is used to calculate the difference between the synthetic target modality and the real target modality:
L_MRFE-L1(M) = E_{x,G(x),y}[||y - M(x, G(x))||_1];
the total loss function of the MRFE-GAN model is:
L_total = λ1·L_RESNET(G, D1) + λ2·L_RESNET-L1(G) + λ3·L_MRFE(M, D2) + λ4·L_MRFE-L1(M);
G and M are trained adversarially against D1 and D2: the generators minimize the loss function while D1 and D2 maximize it, with λ1 = λ3 = 1 and λ2 = λ4 = 100 set during training. This strengthens network performance and produces more realistic outputs.
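As a concrete reading of the total loss, the sketch below computes the four terms from discriminator outputs assumed to be probabilities in (0, 1); all tensor names are illustrative, not from the patent:

```python
import torch

def mrfe_gan_total_loss(d1_real, d1_fake, d2_real, d2_fake, y, g_x, m_x,
                        lambdas=(1.0, 100.0, 1.0, 100.0), eps=1e-8):
    """L_total = l1*L_RESNET + l2*L_RESNET-L1 + l3*L_MRFE + l4*L_MRFE-L1,
    with lambda1 = lambda3 = 1 and lambda2 = lambda4 = 100."""
    l1, l2, l3, l4 = lambdas
    # adversarial terms: D1 judges the pseudo target G(x), D2 the synthesis M(x, G(x))
    l_resnet = (torch.log(d1_real + eps) + torch.log(1 - d1_fake + eps)).mean()
    l_mrfe = (torch.log(d2_real + eps) + torch.log(1 - d2_fake + eps)).mean()
    # L1 reconstruction terms against the real target modality y
    l_resnet_l1 = torch.abs(y - g_x).mean()
    l_mrfe_l1 = torch.abs(y - m_x).mean()
    return l1 * l_resnet + l2 * l_resnet_l1 + l3 * l_mrfe + l4 * l_mrfe_l1
```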
The beneficial effects of the technical scheme are as follows:
the invention provides a novel framework for cross-modal MRI synthesis, a countermeasure network (GAN) is generated based on conditions, and a Modal Representative Feature Extraction (MRFE) strategy is introduced into the network to form the MRFE-GAN model provided by the invention. And extracting representative characteristics of the target mode through an MRFE-GAN model, combining the representative characteristics with basic information of the source mode, and accurately synthesizing the target mode from the source mode. The proposed method is superior to the latest image synthesis methods in both qualitative and quantitative aspects, and achieves better performance.
The present invention divides the modality into two different levels of information, called basic information and representative information, respectively, and extracts them using two different network structures. Two encoders with completely different structures are used to encode the modality into two different pieces of information, which are used to extract the basic information and the representative information, respectively. The target modality is not suitable for the test set, so that before the MRFE module, an RESNET module is provided to generate an intermediate modality as a pseudo target modality; in consideration of the concealment of the target morphology during the test, a residual network is used to synthesize intermediate results as a source of representative information. In addition, in the decoding stage, instead of directly fusing the extracted basic information and the representative information, an AdaIN layer is added to the decoder and the representative information is used as the shifting and scaling parameters thereof to complete the information fusion process, and fuse the information of two different levels. A more realistic and effective target modality is obtained. The information difference between cross-domain modes can be effectively overcome, and data of different levels can be effectively extracted; the difference between the synthetic mode and the real target mode is effectively reduced, so that the synthetic image is more real and reliable.
The MRFE-GAN model in the present invention consists of a residual network and an MRFE network that includes two encoders (i.e., a base encoder and a representative encoder) and a decoder. Compared with the traditional deep neural network, the method has the advantages that: (1) different from the traditional GAN which directly takes a source mode as input to generate a synthetic target mode, the MRFE architecture in the network of the invention simultaneously considers the information of two basic characteristic source modes and the representative characteristic information of the target mode to generate the synthetic target mode; the two encoders respectively extract information of different layers, so that on one hand, a better fusion effect can be obtained during information fusion, on the other hand, image fusion is more flexible, and different fusion images can be obtained through fusion of bottom layer information and representative information between different modules; (2) in order to extract representative information from the target mode, a residual error network (RESNET) is adopted to generate an intermediate mode as a pseudo target mode, so that a representative encoder is facilitated to encode; (3) the two kinds of information are directly fused, the standard deviation and the mean value of the representative information are used as the translation and scaling parameters of the self-adaptive instance normalization layer in the decoder, the characteristic space information fusion process mode is completed, and the fusion speed can be accelerated and the calculation cost is not brought by the conversion of the distribution form in the characteristic space; (4) after each convolutional layer, no Batch Normalization (BN) is used, but instead Instance Normalization (IN) is used to maintain the difference between different MRI mode instances.
Drawings
FIG. 1 is a schematic flow chart of a cross-modality MRI synthesis method based on morphological feature GAN of the present invention;
FIG. 2 is a schematic topological structure diagram of a cross-modality MRI synthesis method based on morphological feature GAN in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the residual network module in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a basic encoder according to an embodiment of the present invention;
FIG. 5 is a block diagram of a representative encoder in an embodiment of the present invention;
FIG. 6 is a comparison of the synthesis results of the present method and prior methods for synthesizing T2 from T1 in an embodiment of the present invention;
FIG. 7 is a comparison of the synthesis results of the present method and prior methods for synthesizing FLAIR from T2 in an embodiment of the present invention;
FIG. 8 is a comparison of the synthesis results of the present method and prior methods for synthesizing T2 from PD in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described below with reference to the accompanying drawings.
In this embodiment, referring to fig. 1 and fig. 2, the present invention proposes a cross-modality MRI synthesis method based on morphological feature GAN, comprising the following steps:
establishing an MRFE-GAN model comprising a residual network module and a modality representative feature extraction module;
passing the source modality through the residual network module to obtain a pseudo target modality;
extracting representative features of the pseudo target modality through the modality representative feature extraction module, combining the representative features with the base information of the source modality, and fusing them to generate a synthetic target modality. A high-level sketch of this pipeline is given below.
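The three steps can be summarized in a short sketch; the module names here (resnet_g, base_enc, rep_enc, decoder) are illustrative stand-ins for the residual network module and the MRFE module's two encoders and decoder, not names from the patent:

```python
import torch

def synthesize(x: torch.Tensor, resnet_g, base_enc, rep_enc, decoder) -> torch.Tensor:
    """MRFE-GAN forward pass: the residual network produces the pseudo target
    modality G(x); the decoder fuses base info of x with representative info of G(x)."""
    g_x = resnet_g(x)                 # pseudo target modality G(x)
    base_feat = base_enc(x)           # base information of the source modality
    rep_mean, rep_std = rep_enc(g_x)  # representative statistics of G(x)
    return decoder(base_feat, rep_std, rep_mean)  # synthetic target modality M(x, G(x))
```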
As an optimization of the above embodiment, the residual network module takes the source modality as input and forms the pseudo target modality by establishing an intermediate modality that simulates the target modality. In an image synthesis task, the feature information of the input and the output differs completely, which means a deeper network is needed to improve synthesis performance; as the network deepens, however, it becomes difficult to optimize and performance drops. Generating the intermediate modality within a residual network module effectively overcomes this problem.
As shown in fig. 3, the residual network module includes 3 downsampling blocks, 12 residual blocks (ResBlock), and 3 deconvolution layers run in sequence. The downsampling blocks increase the number of feature maps from 1 to 256, and each downsampling block comprises sequentially run convolution (Conv), instance normalization (InstanceNorm) and ReLU layers; each residual block comprises sequentially run padding, convolution (Conv), instance normalization (InstanceNorm) and ReLU layers; each deconvolution layer sequentially runs deconvolution (ConvT), instance normalization (InstanceNorm), a ReLU layer and an activation function (Tanh). Instance normalization after each convolutional layer effectively maintains the independence of each modality instance.
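Assembled from the DownsampleBlock and ResBlock sketches given in the disclosure above, a minimal version of this generator might look as follows; the channel widths of 64/128/256 and the placement of Tanh as the final activation are assumptions consistent with, but not fixed by, the description:

```python
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """3 downsampling blocks -> 12 residual blocks -> 3 deconvolution layers (Fig. 3)."""
    def __init__(self):
        super().__init__()
        layers = [DownsampleBlock(1, 64), DownsampleBlock(64, 128), DownsampleBlock(128, 256)]
        layers += [ResBlock(256) for _ in range(12)]
        for in_ch, out_ch in [(256, 128), (128, 64)]:  # first two deconvolution layers
            layers += [
                nn.ConvTranspose2d(in_ch, out_ch, 3, stride=2, padding=1, output_padding=1),
                nn.InstanceNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
        # final deconvolution layer, with Tanh as the activation function
        layers += [
            nn.ConvTranspose2d(64, 1, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(1),
            nn.Tanh(),
        ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)  # pseudo target modality G(x)
```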
As an optimization of the above embodiment, the modality representative feature extraction module in the MRFE-GAN model includes a base encoder, a representative encoder, and a decoder. The source modality is input into the base encoder, which extracts base information from it; the pseudo target modality is input into the representative encoder, which extracts representative information from it. The base encoder and the representative encoder are connected in parallel to the decoder, which fuses the base information and the representative information to generate the synthetic target modality.
In the modality representative feature extraction module, a source modality x and a pseudo target modality y' are input, with modality distributions P(x) and P(y'), respectively. Each modality is decomposed into two different distributions in its own space: P(x1|x) and P(x2|x), and P(y1|y') and P(y2|y'). P(x1|x) and P(y1|y') are the base distributions of the respective modality structures, while P(x2|x) and P(y2|y') are the representative distributions; the difference between the two modalities is embodied by P(x2|x) and P(y2|y'), which are representative features of each modality itself.
The modality representative feature extraction module obtains y'-modality representative features based on the x-modality structure by fusing P(x1|x) and P(y2|y'); or it obtains x-modality representative features based on the y'-modality structure by fusing P(x2|x) and P(y1|y').
As shown in fig. 4, the base encoder includes 3 downsampling blocks and 4 residual blocks run in sequence, each downsampling block comprising sequentially run convolution (Conv), instance normalization (InstanceNorm) and ReLU layers, and each residual block comprising sequentially run padding, convolution (Conv), instance normalization (InstanceNorm) and ReLU layers;
as shown in fig. 5, the representative encoder includes 5 sets of convolutional and ReLU layer combinations, a global average pooling layer (adaptive avgpool), and 3 sets of Linear layers (Linear); discarding all instance normalization layers before each ReLU layer of the representative encoder, converting each two-dimensional feature channel into a real number by utilizing global average pooling, and modeling correlation among the channels in three groups of linear layers to obtain a mean value and a standard deviation of representative information; on the basis of improving the synthesis identification precision, the problem of deleting the original characteristic mean value and standard deviation of important representative information caused by case normalization is solved;
and the decoder combines the standard deviation and the average value obtained by the representative encoder into a proportion parameter and a shift parameter of an adaptive instance normalization layer of the decoder, so that the basic information and the representative information are fused to generate a synthetic target modality.
The step of the decoder fusing the base information and the representative information to generate the synthetic target modality comprises:
first, the input α is normalized by the adaptive instance normalization layer:
α′ = (α - Ξ(α)) / Δ(α);
wherein α is the input information of the adaptive instance normalization layer, and Δ(α) and Ξ(α) are the standard deviation and mean of α, respectively;
then, the normalized α′ is multiplied by the scale parameter and the shift parameter is added, completing the fusion of the two kinds of information and generating the synthetic target modality by reconstruction:
α″ = Δ′ * α′ + Ξ′;
wherein the scale parameter Δ′ is the standard deviation of the representative information and the shift parameter Ξ′ is the mean of the representative information.
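The decoder sketch below reuses the adain function given in the disclosure above; the layer count, widths and the reshaping of the representative statistics are assumptions:

```python
import torch
import torch.nn as nn

class FusionDecoder(nn.Module):
    """Fuses the 256-channel base feature with the representative statistics via
    AdaIN, then upsamples back to a single-channel synthetic target modality."""
    def __init__(self):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1)
        self.up2 = nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1)
        self.out = nn.Sequential(
            nn.ConvTranspose2d(64, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Tanh(),
        )

    def forward(self, base_feat, rep_std, rep_mean):
        # reshape (N, C) statistics to (N, C, 1, 1): Delta' scales, Xi' shifts
        s = rep_std.view(rep_std.size(0), -1, 1, 1)
        m = rep_mean.view(rep_mean.size(0), -1, 1, 1)
        h = adain(base_feat, s, m)   # fuse base and representative information
        h = torch.relu(self.up1(h))
        h = torch.relu(self.up2(h))
        return self.out(h)           # synthetic target modality
```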
As an optimization of the above embodiment, the model further includes a first discriminator module and a second discriminator module. The real target modality and the pseudo target modality are input into the first discriminator module for loss calculation; the synthetic target modality and the real target modality are input into the second discriminator module for loss calculation. The loss calculations of the first and second discriminator modules are combined to form the total loss function of the MRFE-GAN model, which is minimized through optimization training; the pseudo target modality is optimized by adjusting parameters, improving the realism of the synthesis result.
A pseudo target modality G(x) is generated in the residual network module from the source modality x, and G(x) serves as one input of the modality representative feature extraction module for extracting representative feature information. The specific process comprises the following steps:
the pseudo target modality G(x) is placed in the first discriminator module D1 for the first loss calculation:
L_RESNET(G, D1) = E_y[log D1(y)] + E_x[log(1 - D1(G(x)))];
where y is the real target modality and E denotes the expected value over the inputs and outputs;
a reconstruction loss with the L1 norm helps the network capture the overall appearance and relatively coarse features of the target modality in the synthetic modality:
L_RESNET-L1(G) = E_{x,y}[||y - G(x)||_1];
the modality representative feature extraction module takes x and G(x) as input and outputs the synthetic target modality M(x, G(x)), which is placed in the second discriminator module D2 for loss calculation:
L_MRFE(M, D2) = E_y[log D2(y)] + E_{x,G(x)}[log(1 - D2(M(x, G(x))))];
the L1 penalty is used to calculate the difference between the synthetic target modality and the real target modality:
L_MRFE-L1(M) = E_{x,G(x),y}[||y - M(x, G(x))||_1];
the total loss function of the MRFE-GAN model is:
L_total = λ1·L_RESNET(G, D1) + λ2·L_RESNET-L1(G) + λ3·L_MRFE(M, D2) + λ4·L_MRFE-L1(M);
during training, the generators minimize the loss function while D1 and D2 maximize it in an adversarial manner, with λ1 = λ3 = 1 and λ2 = λ4 = 100 set in the loss function. This strengthens network performance and outputs more realistic results.
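One way to realize this adversarial training is sketched below, reusing the mrfe_gan_total_loss helper defined earlier and assuming discriminators whose outputs are probabilities; the optimizer setup is an assumption, not taken from the patent:

```python
import torch

def train_step(x, y, resnet_g, mrfe, d1, d2, opt_g, opt_d, eps=1e-8):
    """One iteration: D1/D2 maximize the adversarial terms, then the
    generators minimize the total loss with lambda = (1, 100, 1, 100)."""
    g_x = resnet_g(x)    # pseudo target modality G(x)
    m_x = mrfe(x, g_x)   # synthetic target modality M(x, G(x))

    # discriminator update: ascend on the adversarial terms (minimize the negative)
    opt_d.zero_grad()
    d_loss = -(torch.log(d1(y) + eps).mean()
               + torch.log(1 - d1(g_x.detach()) + eps).mean()
               + torch.log(d2(y) + eps).mean()
               + torch.log(1 - d2(m_x.detach()) + eps).mean())
    d_loss.backward()
    opt_d.step()

    # generator update: descend on the total loss
    opt_g.zero_grad()
    g_loss = mrfe_gan_total_loss(d1(y), d1(g_x), d2(y), d2(m_x), y, g_x, m_x)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```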
The advantages of the present method were verified through extensive experiments, conducted mainly on the multimodal brain tumor segmentation (BRATS) dataset and the IXI dataset of non-skull-stripped images, to evaluate the proposed model and compare it with two recent methods. In particular, since the main contribution of the proposed model is to introduce representative information from the target modality for synthesis, the advantage of the representative encoder in the MRFE module was studied first. The performance of the model in synthesizing T2 from the T1 modality and FLAIR from the T2 modality was then studied using the BRATS dataset. Finally, synthetic target modalities are shown for images of the IXI dataset on which no skull stripping has been performed.
To verify the advantage of using a representative encoder in the MRFE-GAN model, experiments compared the MRFE-GAN model with a representative encoder against the MRFE-GAN model without one (MRFE-GAN w/o RE). Quantitative comparison results on the BRATS dataset (T1 to T2) and the IXI dataset (PD to T2) are given in Table 1 and Table 2, with standard deviations in parentheses. As shown in Tables 1 and 2, compared with the structure without a representative encoder, combining the representative information increases the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and decreases the normalized root mean square error (NRMSE) on both datasets. This means the representative information provided by the target modality is beneficial for improving the performance of modality synthesis, and extraction of the representative information by the representative encoder is necessary to improve synthesis performance in the single-modality synthesis task.
TABLE 1
[Table 1 is reproduced as an image in the original document: quantitative comparison on the BRATS dataset (T1 to T2).]
TABLE 2
[Table 2 is reproduced as an image in the original document: quantitative comparison on the IXI dataset (PD to T2).]
The performance of the model in synthesizing T2 from T1 was verified by applying the proposed method to the BRATS dataset. Fig. 6 gives examples of the target modality synthesized by different methods; it can be seen that the proposed model (MRFE-GAN) yields higher visual quality in structure and details (as indicated by the arrows) than the existing MILR and P-GAN models, and better preserves the detailed information in the synthesized T2 image. For quantitative comparison, the average results of the synthetic target modality obtained by the different methods are listed in Table 3; compared with the other two methods, all three indices obtained by the method of the invention are significantly improved, especially the PSNR values.
TABLE 3
[Table 3 is reproduced as an image in the original document: quantitative comparison of T1-to-T2 synthesis on the BRATS dataset.]
The performance of the model in synthesizing T2 from PD was verified by applying the proposed method to the IXI dataset. In conventional practice, a dark skull region surrounded by bright skin or fat regions causes intensity inhomogeneity in an MRI image, making it difficult to synthesize images that have not been skull-stripped. In this experiment, T2 was synthesized from PD on the non-skull-stripped images of the IXI dataset. As can be seen in fig. 8, compared with the two recent methods MILR and P-GAN, the model of the invention tends to better preserve the detailed information in the synthesized T2 image, with minimal differences from the real image. Table 5 shows the quantitative comparison of the different methods; all three criteria of the method of the invention are significantly improved compared to MILR and P-GAN.
TABLE 5
[Table 5 is reproduced as an image in the original document: quantitative comparison of PD-to-T2 synthesis on the IXI dataset.]
The foregoing shows and describes the general principles and features of the present invention, together with the advantages thereof. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are given by way of illustration of the principles of the present invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications are within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (3)

1. A cross-modality MRI synthesis method based on morphological feature GAN, characterized by comprising the following steps:
establishing an MRFE-GAN model comprising a residual network module and a modality representative feature extraction module;
passing a source modality through the residual network module to obtain a pseudo target modality;
extracting representative features of the pseudo target modality through the modality representative feature extraction module, combining the representative features with base information of the source modality, and fusing them to generate a target modality;
wherein the residual network module takes the source modality as input and forms the pseudo target modality by establishing an intermediate modality that simulates the target modality;
the residual network module comprises 3 downsampling blocks, 12 residual blocks and 3 deconvolution layers run in sequence; the downsampling blocks increase the number of feature maps from 1 to 256, and each downsampling block comprises convolution, instance normalization and ReLU layers run in sequence; each residual block comprises padding, convolution, instance normalization and ReLU layers run in sequence; each deconvolution layer sequentially runs deconvolution, instance normalization, a ReLU layer and an activation function;
the modality representative feature extraction module in the MRFE-GAN model comprises a base encoder, a representative encoder and a decoder; the source modality is input into the base encoder, which extracts base information from it; the pseudo target modality is input into the representative encoder, which extracts representative information from it; the base encoder and the representative encoder are connected in parallel to the decoder, which fuses the base information and the representative information to generate a synthetic target modality;
in the modality representative feature extraction module, a source modality x and a pseudo target modality y' are input, with modality distributions P(x) and P(y'), respectively; each modality is decomposed into two different distributions in its own space: P(x1|x) and P(x2|x), and P(y1|y') and P(y2|y'); wherein P(x1|x) and P(y1|y') are the base distributions of the respective modality structures, and P(x2|x) and P(y2|y') are the representative distributions; the difference between the two modalities is embodied by P(x2|x) and P(y2|y'), which can be regarded as representative features of the modalities themselves;
y'-modality representative features based on the x-modality structure are obtained by fusing P(x1|x) and P(y2|y') in the modality representative feature extraction module; or x-modality representative features based on the y'-modality structure are obtained by fusing P(x2|x) and P(y1|y');
wherein the base encoder comprises 3 downsampling blocks and 4 residual blocks run in sequence, each downsampling block comprising convolution, instance normalization and ReLU layers run in sequence, and each residual block comprising padding, convolution, instance normalization and ReLU layers run in sequence;
the representative encoder comprises 5 groups of combined convolution and ReLU layers, a global average pooling layer, and 3 groups of linear layers; all instance normalization layers before each ReLU layer of the representative encoder are discarded, global average pooling converts each two-dimensional feature channel into a real number, and the correlation among channels is modeled in the three groups of linear layers to obtain the mean and standard deviation of the representative information;
the decoder uses the standard deviation and mean obtained by the representative encoder as the scale parameter and shift parameter of its adaptive instance normalization layer, so that the base information and the representative information are fused to generate the synthetic target modality;
the step of the decoder fusing the base information and the representative information to generate the synthetic target modality comprises:
first, the input α is normalized by the adaptive instance normalization layer:
α′ = (α - Ξ(α)) / Δ(α);
wherein α is the input information of the adaptive instance normalization layer, and Δ(α) and Ξ(α) are the standard deviation and mean of α, respectively;
then, the normalized α′ is multiplied by the scale parameter and the shift parameter is added, completing the fusion of the two kinds of information and generating the synthetic target modality by reconstruction:
α″ = Δ′ * α′ + Ξ′;
wherein the scale parameter Δ′ is the standard deviation of the representative information extracted by the representative encoder, and the shift parameter Ξ′ is the mean of the representative information.
2. The cross-modality MRI synthesis method based on morphological feature GAN of claim 1, further comprising two discriminator modules; a real target modality and the pseudo target modality are input into a first discriminator module for loss calculation; the synthetic target modality and the real target modality are input into a second discriminator module for loss calculation; the losses of the first discriminator module and the second discriminator module are combined to form the total loss function of the MRFE-GAN model, which is minimized through optimization training, and the synthetic target modality is optimized by adjusting parameters.
3. The cross-modality MRI synthesis method based on morphological feature GAN of claim 2, wherein the residual network module generates a pseudo target modality G(x) from a source modality x, and G(x) serves as an input of the modality representative feature extraction module for extracting representative feature information, comprising the following steps:
the pseudo target modality G(x) is placed in the first discriminator module D1 for the first loss calculation:
L_RESNET(G, D1) = E_y[log D1(y)] + E_x[log(1 - D1(G(x)))];
where y is the real target modality and E denotes the expected value over the inputs and outputs;
a reconstruction loss with the L1 norm helps the network capture the overall appearance and relatively coarse features of the target modality in the synthetic modality:
L_RESNET-L1(G) = E_{x,y}[||y - G(x)||_1];
the modality representative feature extraction module takes x and G(x) as input and outputs the synthetic target modality M(x, G(x)), which is placed in the second discriminator module D2 for loss calculation:
L_MRFE(M, D2) = E_y[log D2(y)] + E_{x,G(x)}[log(1 - D2(M(x, G(x))))];
the L1 penalty is used to calculate the difference between the synthetic target modality and the real target modality:
L_MRFE-L1(M) = E_{x,G(x),y}[||y - M(x, G(x))||_1];
the total loss function of the MRFE-GAN model is:
L_total = λ1·L_RESNET(G, D1) + λ2·L_RESNET-L1(G) + λ3·L_MRFE(M, D2) + λ4·L_MRFE-L1(M);
D1 and D2 are maximized adversarially while the loss function is minimized during training, in which λ1 = λ3 = 1 and λ2 = λ4 = 100.
CN201911113248.8A 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN Active CN110827232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911113248.8A CN110827232B (en) 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911113248.8A CN110827232B (en) 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN

Publications (2)

Publication Number Publication Date
CN110827232A CN110827232A (en) 2020-02-21
CN110827232B true CN110827232B (en) 2022-07-15

Family

ID=69555297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911113248.8A Active CN110827232B (en) 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN)

Country Status (1)

Country Link
CN (1) CN110827232B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3872754A1 (en) * 2020-02-28 2021-09-01 Siemens Healthcare GmbH Method and system for automated processing of images when using a contrast agent in mri
CN111524147B (en) * 2020-04-14 2022-07-12 杭州健培科技有限公司 Brain tumor segmentation method based on generative adversarial network
CN111862261B (en) * 2020-08-03 2022-03-29 北京航空航天大学 FLAIR modal magnetic resonance image generation method and system
CN113012086B (en) * 2021-03-22 2024-04-16 上海应用技术大学 Cross-modal image synthesis method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002076489A1 (en) * 2001-03-09 2002-10-03 Dyax Corp. Serum albumin binding moieties
CN109213876A (en) * 2018-08-02 2019-01-15 宁夏大学 Cross-modal retrieval method based on generative adversarial networks
CN110047056A (en) * 2018-01-16 2019-07-23 西门子保健有限责任公司 Cross-domain image analysis and synthesis with deep image-to-image networks and adversarial networks
CN110210422A (en) * 2019-06-05 2019-09-06 哈尔滨工业大学 Ship ISAR image recognition method aided by optical imagery

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030083652A1 (en) * 2001-10-31 2003-05-01 Oratec Interventions, Inc Method for treating tissue in arthroscopic environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002076489A1 (en) * 2001-03-09 2002-10-03 Dyax Corp. Serum albumin binding moieties
CN110047056A (en) * 2018-01-16 2019-07-23 西门子保健有限责任公司 Cross-domain image analysis and synthesis with deep image-to-image networks and adversarial networks
CN109213876A (en) * 2018-08-02 2019-01-15 宁夏大学 Cross-modal retrieval method based on generative adversarial networks
CN110210422A (en) * 2019-06-05 2019-09-06 哈尔滨工业大学 Ship ISAR image recognition method aided by optical imagery

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Prevalence of spinal pathology in patients presenting for lumbar MRI as referred from general practice; Evelien de Schepper et al.; Family Practice; 2016-02-28; vol. 33, no. 1; pp. 51-56 *
Statistical iterative reconstruction using adaptive fractional order regularization; Yan Wang et al.; Biomedical Optics Express; 2016-03-01; vol. 7, no. 3; pp. 1015-1029 *
Preliminary study of MRI and pathological features of breast cancers of different molecular subtypes; 于洋 et al.; Chinese Journal of Radiology; 2014-03-10; vol. 48, no. 3; pp. 184-188 *
Fully automatic MR image segmentation of nasopharyngeal tumors based on the U-net model; 潘沛克 et al.; Journal of Computer Applications; 2018-11-16; vol. 39, no. 4; pp. 1183-1188 *
Similarity-based dual-search multi-target recognition algorithm; 冷何英 et al.; Infrared and Laser Engineering; 2002-12-25; no. 6; pp. 3-6 *

Also Published As

Publication number Publication date
CN110827232A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110827232B (en) Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN
Liu et al. Multimodal MR image synthesis using gradient prior and adversarial learning
Zhan et al. Multi-modal MRI image synthesis via GAN with multi-scale gate mergence
CN110288609B (en) Multi-modal whole-heart image segmentation method guided by attention mechanism
CN113554669B (en) Unet network brain tumor MRI image segmentation method with improved attention module
CN114266939B (en) Brain extraction method based on ResTLU-Net model
CN112365556B (en) Image extension method based on perception loss and style loss
CN112634265B (en) Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network)
CN113674330A (en) Pseudo CT image generation system based on generation countermeasure network
CN116188452A (en) Medical image interlayer interpolation and three-dimensional reconstruction method
CN114170244A (en) Brain glioma segmentation method based on cascade neural network structure
CN110782427A (en) Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution
CN112785540B (en) Diffusion weighted image generation system and method
CN114140341A (en) Magnetic resonance image non-uniform field correction method based on deep learning
CN117437420A (en) Cross-modal medical image segmentation method and system
CN115496732B (en) Semi-supervised heart semantic segmentation algorithm
CN109741439B (en) Three-dimensional reconstruction method of two-dimensional MRI fetal image
CN115861464A (en) Pseudo CT (computed tomography) synthesis method based on multimode MRI (magnetic resonance imaging) synchronous generation
CN115908451A (en) Heart CT image segmentation method combining multi-view geometry and transfer learning
CN114332271A (en) Dynamic parameter image synthesis method and system based on static PET image
CN113052840A (en) Processing method based on low signal-to-noise ratio PET image
Lu et al. A novel u-net based deep learning method for 3d cardiovascular MRI segmentation
Szűcs et al. Self-supervised segmentation of myocardial perfusion imaging SPECT left ventricles
Larroza et al. Deep learning for MRI-based CT synthesis: A comparison of MRI sequences and neural network architectures
CN117315065B (en) Nuclear magnetic resonance imaging accurate acceleration reconstruction method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant