CN110288609B - Multi-modal whole-heart image segmentation method guided by attention mechanism

Info

Publication number
CN110288609B
Authority
CN
China
Prior art keywords
image
map
segmentation
attention mechanism
modal
Prior art date
Legal status
Active
Application number
CN201910461477.2A
Other languages
Chinese (zh)
Other versions
CN110288609A (en)
Inventor
杨琬琪 (Yang Wanqi)
周子奇 (Zhou Ziqi)
郭心娜 (Guo Xinna)
杨明 (Yang Ming)
Current Assignee
Nanjing Normal University
Original Assignee
Nanjing Normal University
Priority date
Filing date
Publication date
Application filed by Nanjing Normal University filed Critical Nanjing Normal University
Priority to CN201910461477.2A priority Critical patent/CN110288609B/en
Publication of CN110288609A publication Critical patent/CN110288609A/en
Application granted granted Critical
Publication of CN110288609B publication Critical patent/CN110288609B/en

Classifications

    • G06T7/11 Region-based segmentation (image analysis: segmentation; edge detection)
    • G06T2207/10081 Computed x-ray tomography [CT] (image acquisition modality: tomographic images)
    • G06T2207/10088 Magnetic resonance imaging [MRI] (image acquisition modality: tomographic images)
    • G06T2207/30048 Heart; Cardiac (subject of image: biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an attention-mechanism-guided multi-modal whole-heart image segmentation method. Images of the other modality are generated from the current modality via Cycle-GAN to expand the training set, and the original images and their generated counterparts are then segmented by a semi-twin network. The semi-twin network comprises two independent encoders and a shared decoder: the encoders learn modality-private features, an attention mechanism module fuses these features, and the decoder extracts modality-shared features for the final segmentation. The invention makes full use of both modality-shared and modality-private information and improves segmentation accuracy.

Description

Multi-modal whole-heart image segmentation method guided by attention mechanism
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a multi-modal whole-heart image segmentation method.
Background
According to the American Heart Association (AHA) 2019 statistical report on heart disease and stroke, approximately 1,055,000 people in the United States suffered a coronary event in 2019, including 720,000 new and 335,000 recurrent cases. Early diagnosis and treatment therefore play an important role in reducing the mortality and morbidity of cardiovascular disease. During early diagnosis, physicians often collect imaging information from different modalities (e.g., MR and CT) for a comprehensive examination, and one important prerequisite is to accurately segment the cardiac substructures from the images of the different modalities. However, conventional manual segmentation is very time-consuming and laborious, so there is an urgent need for automatic whole-heart segmentation methods.
Although methods based on deep convolutional neural networks have been widely used to segment other organs, their application to the multi-modal whole-heart segmentation task remains limited because: 1) modality inconsistency: images from different modalities have significant appearance differences; 2) complex structure: different cardiac substructures are connected and sometimes even overlap; 3) inter-patient variability: the appearance of the heart also differs from patient to patient.
In recent years, there have been several attempts at multi-modal whole-heart segmentation. For example, Sinoqi et al. propose an unsupervised cross-domain generation framework that uses adversarial learning for cross-modality medical image segmentation. Zhengzheng et al. propose a method for simultaneously learning the transformation and segmentation of 3D medical images, which can learn from unpaired datasets while keeping the anatomical structure unchanged. Regarding image generation, CycleGAN can transfer the style of unpaired images from one domain to another, but it lacks labels to constrain deformation. Ronneberger et al., inspired by the fully convolutional network (FCN), proposed the "U-net" structure, which includes a contracting path to capture contextual information and a symmetric expanding path to obtain accurate local information, and is commonly used for medical image segmentation. However, none of the above methods fully utilizes the information and correlations that can be shared between the two modalities, and none effectively overcomes the above limitations.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides an attention-mechanism-guided multi-modal whole-heart image segmentation method that makes full use of modality-shared and modality-private information and improves segmentation accuracy.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
an attention mechanism guided multi-modal whole heart image segmentation method comprises the following steps:
(1) generating a cross-modal image:
introducing a generative adversarial network, wherein the network comprises 2 generators and 2 discriminators corresponding to CT images and MRI images respectively; the original CT images and original MRI images are input into their corresponding generators to generate images of the other modality respectively;
(2) cross-modal feature learning and image segmentation:
constructing a semi-twin network, wherein the network comprises 2 independent encoders and 1 shared decoder, with an attention mechanism module between the encoders and the decoder; the original CT image together with the MRI image generated from it is input into one encoder, and the original MRI image together with the CT image generated from it is input into the other encoder; each encoder comprises multiple down-sampling layers, and the 2 encoders output 2 modality-private feature maps; the attention mechanism module fuses the 2 modality-private feature maps and sends the result to the shared decoder, which outputs the segmentation result of the image.
Further, in the generative adversarial network, a cycle consistency loss function $L_{cyc}(G_A,G_B)$ is defined as follows:

$L_{cyc}(G_A,G_B)=\mathbb{E}_{x_A\sim p_d(x_A)}\big[\|G_A(G_B(x_A))-x_A\|_1\big]+\mathbb{E}_{x_B\sim p_d(x_B)}\big[\|G_B(G_A(x_B))-x_B\|_1\big]$

In the above formula, $x_A$ and $x_B$ are an original CT image sample and an original MRI image sample respectively, $\mathbb{E}_{x_A\sim p_d(x_A)}[\cdot]$ denotes the expectation over $x_A$ drawn from the distribution $p_d(x_A)$, $\mathbb{E}_{x_B\sim p_d(x_B)}[\cdot]$ denotes the expectation over $x_B$ drawn from $p_d(x_B)$, and $G_A$, $G_B$ are the generators for CT images and MRI images respectively;
in the generative adversarial network, a segmentation loss function $L_{seg}(S_{A/B},G_A,G_B)$ is defined as follows:

$L_{seg}(S_{A/B},G_A,G_B)=-\frac{1}{N}\sum_{i=1}^{N}\Big[y_A^{(i)}\log S_{A/B}\big(G_B(x_A^{(i)})\big)+y_B^{(i)}\log S_{A/B}\big(G_A(x_B^{(i)})\big)\Big]$

In the above formula, $S_{A/B}\colon A\to Y,\ B\to Y$ is the auxiliary segmentation mapping, A denotes the CT modality, B denotes the MRI modality, Y denotes the segmentation labels, $i$ indexes a training sample, $N$ is the total number of training samples, and $y_A$, $y_B$ are the true segmentation results in modality A and modality B respectively.
Further, the cycle consistency loss function and the segmentation loss function are combined to define a total loss function $L(G_A,G_B,D_A,D_B,S_{A/B})$:

$L(G_A,G_B,D_A,D_B,S_{A/B})=L_{GAN}(G_A,D_A)+L_{GAN}(G_B,D_B)+\lambda L_{cyc}(G_A,G_B)+\gamma L_{seg}(S_{A/B},G_A,G_B)$

In the above formula, $L_{GAN}(G_A,D_A)$ and $L_{GAN}(G_B,D_B)$ are the generative adversarial loss functions, $D_A$ and $D_B$ are the discriminators for CT images and MRI images respectively, and $\lambda$ and $\gamma$ are weight coefficients of the loss terms.
Further, in a semi-twin network, the encoder localizes high-resolution features and captures more accurate information; the decoder propagates contextual information to higher resolution layers and learns higher level semantic information.
Further, the flow of the attention mechanism module is as follows:
First, the feature maps output by the 2 encoders are concatenated along the channel dimension to obtain a preliminary fusion map. The preliminary fusion map is matrix-recombined (reshaped) to obtain map 1, and is sequentially matrix-recombined and transposed to obtain map 2. The matrix product of map 1 and map 2 is passed through a softmax function to obtain an attention map, and the product of the attention map and the preliminary fusion map is summed element-wise with the preliminary fusion map to obtain the final feature fusion map.
Advantageous effects of the above technical solution:
The invention adopts an improved CycleGAN to generate cross-modal images to expand the training set, reducing image inconsistency at the modality level; it further provides a novel cross-modal attention-mechanism-guided semi-twin network for learning modality-shared and modality-private features and performing multi-modal whole-heart image segmentation. The method effectively alleviates the shortage of labeled 3D whole-heart CT and MRI images and the insufficient use of cross-modal correlated information in the prior art, and improves segmentation accuracy, so it has high application value.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a network architecture diagram of the present invention;
FIG. 3 is a flow diagram of an attention mechanism module of the present invention;
FIG. 4 is a diagram of the segmentation results of the embodiment, comprising two subfigures (a) and (b).
Detailed Description
The technical solution of the invention is explained in detail below with reference to the accompanying drawings.
The invention designs a multi-modal whole heart image segmentation method guided by an attention mechanism, which comprises the following steps as shown in figures 1-2:
1. generating a cross-modal image:
A generative adversarial network is introduced, comprising 2 generators and 2 discriminators, one generator-discriminator pair for CT images and one for MRI images; the generators and discriminators compete with each other during learning, which drives the generators toward high-quality outputs. The original CT images and MRI images are input to their respective generators to generate images of the other modality.
Because the generators must learn adversarially from unpaired images, the invention forces the reconstructed samples $G_A(G_B(x_A))$ and $G_B(G_A(x_B))$ to remain consistent with the original images, and thus defines a cycle consistency loss function:
$L_{cyc}(G_A,G_B)=\mathbb{E}_{x_A\sim p_d(x_A)}\big[\|G_A(G_B(x_A))-x_A\|_1\big]+\mathbb{E}_{x_B\sim p_d(x_B)}\big[\|G_B(G_A(x_B))-x_B\|_1\big]$

In the above formula, $x_A$ and $x_B$ are an original CT image sample and an original MRI image sample respectively, $\mathbb{E}_{x_A\sim p_d(x_A)}[\cdot]$ denotes the expectation over $x_A$ drawn from the distribution $p_d(x_A)$, $\mathbb{E}_{x_B\sim p_d(x_B)}[\cdot]$ denotes the expectation over $x_B$ drawn from $p_d(x_B)$, and $G_A$, $G_B$ are the generators for CT images and MRI images respectively.
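The following is a minimal PyTorch sketch of this cycle consistency term (illustrative only, not code from the patent; the module names gen_A for $G_A$ and gen_B for $G_B$ are hypothetical):

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(gen_A, gen_B, x_a, x_b):
    """L_cyc: reconstruct each image after a round trip through the
    other modality and penalize the L1 deviation from the original."""
    rec_a = gen_A(gen_B(x_a))  # CT -> generated MRI -> reconstructed CT
    rec_b = gen_B(gen_A(x_b))  # MRI -> generated CT -> reconstructed MRI
    return l1(rec_a, x_a) + l1(rec_b, x_b)
```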
If the transformations learned by the two generators are mutual inverses, the pair can satisfy the cycle consistency constraint without any penalty even while deforming the data. To prevent this deformation problem, another auxiliary mapping $S_{A/B}\colon A\to Y,\ B\to Y$ is defined, where A denotes the CT modality, B denotes the MRI modality, and Y denotes the segmentation labels; the cross entropy between the predicted and true segmentation results is taken as a constraining segmentation loss function:

$L_{seg}(S_{A/B},G_A,G_B)=-\frac{1}{N}\sum_{i=1}^{N}\Big[y_A^{(i)}\log S_{A/B}\big(G_B(x_A^{(i)})\big)+y_B^{(i)}\log S_{A/B}\big(G_A(x_B^{(i)})\big)\Big]$

In the above formula, $i$ indexes a training sample, $N$ is the total number of training samples, and $y_A$, $y_B$ are the true segmentation results in modality A and modality B respectively.
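A sketch of this constraint in the same style, assuming (as the arguments $G_A$, $G_B$ of $L_{seg}$ suggest) that the segmentor is applied to the generated images together with the source-image labels; seg, gen_A, and gen_B are the hypothetical modules from the sketch above:

```python
import torch.nn as nn

ce = nn.CrossEntropyLoss()  # pixel-wise cross entropy over class logits

def segmentation_loss(seg, gen_A, gen_B, x_a, y_a, x_b, y_b):
    """L_seg: the generated image must still segment into the source
    image's labels, which constrains the generators to preserve anatomy.
    y_a, y_b are (B, H, W) integer label maps; seg outputs class logits."""
    loss_a = ce(seg(gen_B(x_a)), y_a)  # generated MRI keeps CT label y_a
    loss_b = ce(seg(gen_A(x_b)), y_b)  # generated CT keeps MRI label y_b
    return loss_a + loss_b
```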
The cycle consistency loss function and the segmentation loss function are combined to define a total loss function:

$L(G_A,G_B,D_A,D_B,S_{A/B})=L_{GAN}(G_A,D_A)+L_{GAN}(G_B,D_B)+\lambda L_{cyc}(G_A,G_B)+\gamma L_{seg}(S_{A/B},G_A,G_B)$

In the above formula, $L_{GAN}(G_A,D_A)$ and $L_{GAN}(G_B,D_B)$ are the generative adversarial loss functions, $D_A$ and $D_B$ are the discriminators for CT images and MRI images respectively, and $\lambda$ and $\gamma$ are weight coefficients of the loss terms.
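Composing the full objective is then a weighted sum; a sketch (the weight values shown are placeholders, not taken from the patent):

```python
def total_loss(l_gan_a, l_gan_b, l_cyc, l_seg, lam=10.0, gamma=1.0):
    """L = L_GAN(G_A,D_A) + L_GAN(G_B,D_B) + lambda*L_cyc + gamma*L_seg."""
    return l_gan_a + l_gan_b + lam * l_cyc + gamma * l_seg
```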
2. Cross-modal feature learning and image segmentation:
A semi-twin network is constructed, comprising 2 independent encoders and 1 shared decoder, with an attention mechanism module between the encoders and the decoder. The original CT image together with the MRI image generated from it is input into one encoder, and the original MRI image together with the CT image generated from it is input into the other encoder. Each encoder comprises multiple down-sampling layers, and the 2 encoders output 2 modality-private feature maps; the attention mechanism module fuses the 2 modality-private feature maps and sends the result to the shared decoder, which outputs the segmentation result of the image.
The role of the encoder is to localize the high resolution features and capture more accurate information; the role of the decoder is to propagate contextual information to higher resolution layers and to learn higher level semantic information.
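To make this layout concrete, a minimal sketch of the semi-twin wiring follows (an illustration, not the patent's code; the sub-modules and the way each original/generated pair is fed to its encoder are assumptions):

```python
import torch.nn as nn

class SemiTwinNet(nn.Module):
    """Two modality-private encoders, an attention fusion module,
    and one shared decoder."""
    def __init__(self, enc_a, enc_b, fuse, decoder):
        super().__init__()
        self.enc_a = enc_a      # private encoder: original CT + generated MRI
        self.enc_b = enc_b      # private encoder: original MRI + generated CT
        self.fuse = fuse        # attention mechanism module
        self.decoder = decoder  # shared decoder

    def forward(self, pair_a, pair_b):
        f_a = self.enc_a(pair_a)  # modality-private features, C1 x H x W
        f_b = self.enc_b(pair_b)  # modality-private features, C2 x H x W
        return self.decoder(self.fuse(f_a, f_b))  # per-pixel logits
```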
The invention designs a channel-wise attention mechanism module between the encoders and the decoder. It takes as input the feature maps carrying the boundary information of each cardiac substructure in the two modalities and outputs a new feature map. The flow of the attention mechanism module is shown in FIG. 3. First, the feature maps output by the 2 encoders (with assumed sizes C1×H×W and C2×H×W) are concatenated along the channel dimension to obtain a preliminary fusion map of size (C1+C2)×H×W. The preliminary fusion map is matrix-recombined (reshaped) to obtain map 1, and is sequentially matrix-recombined and transposed to obtain map 2. The matrix product of map 1 and map 2 is passed through a softmax function to obtain an attention map of size (C1+C2)×(C1+C2). The product of the attention map and the preliminary fusion map is then summed element-wise with the preliminary fusion map to obtain the final feature fusion map of size (C1+C2)×H×W.
Since both modalities describe the same heart organ, it is assumed that they contain many correlated features, which are reflected in the off-diagonal blocks of the attention map of sizes C1×C2 and C2×C1. Each modality also contains some modality-private features, which are reflected in the diagonal blocks of sizes C1×C1 and C2×C2.
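A minimal PyTorch sketch of this fusion step (an interpretation of the described flow, not the patent's code; in particular, reading the final element-wise summation as a residual addition of the preliminary fusion map is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttentionFusion(nn.Module):
    """Channel-wise attention fusion of two modality-private feature maps."""
    def forward(self, feat_a, feat_b):
        fused = torch.cat([feat_a, feat_b], dim=1)       # (B, C1+C2, H, W)
        b, c, h, w = fused.shape
        map1 = fused.view(b, c, h * w)                   # matrix recombination
        map2 = map1.transpose(1, 2)                      # recombination + transpose
        attn = F.softmax(torch.bmm(map1, map2), dim=-1)  # (B, C, C) attention map
        out = torch.bmm(attn, map1).view(b, c, h, w)     # re-weight the channels
        return out + fused                               # element-wise summation
```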
The effectiveness of the present invention is verified by simulation experiments below.
Stochastic gradient descent is performed with the Adam optimizer at a learning rate of 2×10⁻⁴; all other settings follow those of CycleGAN for training the generators and discriminators. To accelerate the training process, $G_{A/B}$ and $D_{A/B}$ are first pre-trained separately, and then the segmentation network is trained end to end.
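A sketch of this training setup (the optimizer and learning rate come from the text; the betas are the CycleGAN defaults the text defers to, and the parameter grouping is an assumption):

```python
import itertools
import torch

def build_optimizers(gen_A, gen_B, seg, dis_A, dis_B):
    """Adam at lr 2e-4 for both the generator/segmentor side and the
    discriminator side, following the CycleGAN settings."""
    opt_g = torch.optim.Adam(
        itertools.chain(gen_A.parameters(), gen_B.parameters(),
                        seg.parameters()),
        lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(
        itertools.chain(dis_A.parameters(), dis_B.parameters()),
        lr=2e-4, betas=(0.5, 0.999))
    return opt_g, opt_d
```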
The method is applied to the public dataset provided by the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge of MICCAI 2017, the flagship conference in the field of medical imaging. The dataset contains 20 unpaired MRI and 20 CT 3D images, whose substructures have all been labeled by radiologists. The goal is to segment 7 substructures of the heart: the left ventricle, left atrium, right ventricle, right atrium, aorta, pulmonary artery, and myocardium. During training, the dataset is divided into a training set (10 samples) and a test set (10 samples), and two-fold cross-validation is performed. Let the CT modality be A and the MRI modality be B.
Because the acquisition orientations differ, all samples are reoriented to the coronal view with the ITK-SNAP software. All sections with valid labels are extracted and sliced into 2534 CT and 2208 MRI 2D slices, and these differently shaped slices are then resized to 128×128.
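The resizing step might look like this (a sketch; bilinear interpolation is an assumption, the 128×128 target comes from the text):

```python
import torch.nn.functional as F

def resize_slice(slice_2d):
    """Resize one 2D slice tensor of shape (H, W) to 128 x 128."""
    x = slice_2d[None, None]  # add batch and channel dims -> (1, 1, H, W)
    x = F.interpolate(x, size=(128, 128), mode="bilinear",
                      align_corners=False)
    return x[0, 0]
```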
To measure the difference between the segmentation result and the ground truth, the Dice coefficient is used as the evaluation index. Dice measures the overlap between the true labels and the segmentation result; a higher Dice indicates higher segmentation accuracy.
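For reference, a sketch of the Dice coefficient for one substructure's binary mask:

```python
import torch

def dice_coefficient(pred, target, eps=1e-6):
    """Dice = 2|P ∩ G| / (|P| + |G|) between a predicted binary mask P
    and the ground-truth mask G; higher means better overlap."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```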
The visual comparison of the heart segmentation results is shown in FIG. 4, where subfigure (a) shows the visualization of heart segmentation from CT to MRI and subfigure (b) from MRI to CT; "Ours" denotes the segmentation results obtained by the method of the invention. The resulting images are very similar to the original images, without any significant distortion or segmentation error.
The segmentation results of the different methods are shown in Table 1. As can be seen from the table, the invention successfully extracts features shared between the modalities and thereby improves the segmentation accuracy of MRI images. The fully convolutional network (FCN) applied to each of the two modalities was first evaluated as a baseline, followed by U-net applied to each modality. To further verify the validity of the proposed method, plain CycleGAN cross-modal image generation was also run on this dataset. The comparison shows that the proposed method improves CT segmentation accuracy only slightly, but the marked improvement in MRI segmentation accuracy demonstrates that the attention mechanism is effective. Moreover, the attention mechanism effectively avoids the risk of "bad" MRI images degrading "good" CT images.
TABLE 1: Dice coefficients of the different methods (higher is better)

Method          Aorta    Left atrium  Left ventricle  Myocardium  Right ventricle  Right atrium  Pulmonary artery  Average
FCN_CT          0.8863   0.8163       0.8838          0.8541      0.7885           0.7940        0.7758            0.8284
FCN_MRI         0.6931   0.6882       0.8006          0.7161      0.6759           0.7593        0.6693            0.7146
Unet_CT         0.8992   0.7704       0.8381          0.8162      0.7643           0.8003        0.7947            0.8119
Unet_MRI        0.7719   0.6942       0.7751          0.6961      0.6983           0.7829        0.7171            0.7336
CycleGAN_CT     0.9407   0.8277       0.8362          0.7942      0.8064           0.8134        0.8103            0.8327
CycleGAN_MRI    0.7686   0.6555       0.7612          0.7038      0.6637           0.7658        0.6973            0.7165
Ours_CT         0.9282   0.8131       0.8497          0.7869      0.8066           0.8255        0.8391            0.8356
Ours_MRI        0.7875   0.6940       0.8031          0.7189      0.6733           0.7967        0.7147            0.7412
The above embodiment only illustrates the technical idea of the present invention and does not limit its scope of protection; any modification made on the basis of the technical solution according to the technical idea of the present invention falls within the scope of protection of the present invention.

Claims (5)

1. An attention mechanism guided multi-modal whole heart image segmentation method is characterized by comprising the following steps:
(1) generating a cross-modal image:
introducing a generative adversarial network, wherein the network comprises 2 generators and 2 discriminators, one generator-discriminator pair corresponding to CT images and the other to MRI images; the original CT images and MRI images are input into their corresponding generators respectively to generate images of the other modality;
(2) cross-modal feature learning and image segmentation:
constructing a semi-twin network, wherein the network comprises 2 independent encoders and 1 shared decoder, with an attention mechanism module between the encoders and the decoder; the original CT image together with the MRI image generated from it is input into one encoder, and the original MRI image together with the CT image generated from it is input into the other encoder; each encoder comprises multiple down-sampling layers, and the 2 encoders output 2 modality-private feature maps; the attention mechanism module fuses the 2 modality-private feature maps and sends the result to the shared decoder, which outputs the segmentation result of the image.
2. The attention-mechanism-guided multi-modal whole-heart image segmentation method according to claim 1, wherein in the generative adversarial network, a cycle consistency loss function $L_{cyc}(G_A,G_B)$ is defined as follows:

$L_{cyc}(G_A,G_B)=\mathbb{E}_{x_A\sim p_d(x_A)}\big[\|G_A(G_B(x_A))-x_A\|_1\big]+\mathbb{E}_{x_B\sim p_d(x_B)}\big[\|G_B(G_A(x_B))-x_B\|_1\big]$

In the above formula, $x_A$ and $x_B$ are an original CT image sample and an original MRI image sample respectively, $\mathbb{E}_{x_A\sim p_d(x_A)}[\cdot]$ denotes the expectation over $x_A$ drawn from the distribution $p_d(x_A)$, $\mathbb{E}_{x_B\sim p_d(x_B)}[\cdot]$ denotes the expectation over $x_B$ drawn from $p_d(x_B)$, and $G_A$, $G_B$ are the generators for CT images and MRI images respectively;

in the generative adversarial network, a segmentation loss function $L_{seg}(S_{A/B},G_A,G_B)$ is defined as follows:

$L_{seg}(S_{A/B},G_A,G_B)=-\frac{1}{N}\sum_{i=1}^{N}\Big[y_A^{(i)}\log S_{A/B}\big(G_B(x_A^{(i)})\big)+y_B^{(i)}\log S_{A/B}\big(G_A(x_B^{(i)})\big)\Big]$

In the above formula, $S_{A/B}\colon A\to Y,\ B\to Y$ is the auxiliary segmentation mapping, A denotes the CT modality, B denotes the MRI modality, Y denotes the segmentation labels, $i$ indexes a training sample, $N$ is the total number of training samples, and $y_A$, $y_B$ are the true segmentation results in modality A and modality B respectively.
3. The attention-mechanism-guided multi-modal whole-heart image segmentation method according to claim 2, wherein the cycle consistency loss function and the segmentation loss function are combined to define a total loss function $L(G_A,G_B,D_A,D_B,S_{A/B})$:

$L(G_A,G_B,D_A,D_B,S_{A/B})=L_{GAN}(G_A,D_A)+L_{GAN}(G_B,D_B)+\lambda L_{cyc}(G_A,G_B)+\gamma L_{seg}(S_{A/B},G_A,G_B)$

In the above formula, $L_{GAN}(G_A,D_A)$ and $L_{GAN}(G_B,D_B)$ are the generative adversarial loss functions, $D_A$ and $D_B$ are the discriminators for CT images and MRI images respectively, and $\lambda$ and $\gamma$ are weight coefficients of the loss terms.
4. The attention mechanism-guided multi-modal whole-heart image segmentation method as claimed in claim 1, wherein in a semi-twin network, the encoder localizes high-resolution features and captures more accurate information; the decoder propagates contextual information to higher resolution layers and learns higher level semantic information.
5. The attention mechanism-guided multi-modal whole-heart image segmentation method according to claim 1, wherein the flow of the attention mechanism module is as follows:
first, the feature maps output by the 2 encoders are concatenated along the channel dimension to obtain a preliminary fusion map; the preliminary fusion map is matrix-recombined to obtain map 1, and is sequentially matrix-recombined and transposed to obtain map 2; the matrix product of map 1 and map 2 is passed through a softmax function to obtain an attention map; and the product of the attention map and the preliminary fusion map is summed element-wise with the preliminary fusion map to obtain the final feature fusion map.
CN201910461477.2A 2019-05-30 2019-05-30 Multi-modal whole-heart image segmentation method guided by attention mechanism Active CN110288609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910461477.2A CN110288609B (en) 2019-05-30 2019-05-30 Multi-modal whole-heart image segmentation method guided by attention mechanism


Publications (2)

Publication Number Publication Date
CN110288609A CN110288609A (en) 2019-09-27
CN110288609B true CN110288609B (en) 2021-06-08






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant