CN114596285A - Multitask medical image enhancement method based on generative adversarial network - Google Patents

Multitask medical image enhancement method based on generative adversarial network Download PDF

Info

Publication number
CN114596285A
CN114596285A
Authority
CN
China
Prior art keywords
network
resolution
loss function
medical image
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210232832.0A
Other languages
Chinese (zh)
Inventor
尹海涛
岳勇赢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202210232832.0A priority Critical patent/CN114596285A/en
Publication of CN114596285A publication Critical patent/CN114596285A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046 Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a multitask medical image enhancement method based on a generative adversarial network (GAN), which realizes super-resolution and fusion of medical images in a single network and comprises the following steps: step 1, constructing a training data set and preprocessing the data set; step 2, building a generative adversarial network model from dynamic convolution and a residual dense network; step 3, setting the hyper-parameters and the loss functions of the network model, and optimizing the loss functions; step 4, inputting the preprocessed training data set into the generative adversarial network model and training the network to obtain a trained model; step 5, inputting a test data set into the network to obtain a fused super-resolution medical image; step 6, evaluating the quality of the fused super-resolution medical image using the mutual-information metric Q_mi and the structural-similarity-based metric Q_yang. The method of the invention realizes the super-resolution and fusion tasks of medical images in one network.

Description

Multitask medical image enhancement method based on generative adversarial network
Technical Field
The invention relates to the technical field of computer vision and image processing, and in particular to a multitask medical image enhancement method based on a generative adversarial network.
Background
With the continuous development of medical imaging technology, medical images of different modalities are widely used in clinical disease diagnosis, surgical assistance, health examination, and related fields. Common medical images include magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). Because their imaging mechanisms differ, medical images of different modalities capture different anatomical information. A single-modality image cannot fully describe lesion information, which hinders diagnosis. To resolve this mismatch between clinical requirements and imaging technology, medical image fusion generates a fused image containing complementary multi-modal information inherited from medical images of different modalities.
Because many medical imaging modalities rely on radiation, a higher radiation dose generally yields a sharper image. To keep the patient's radiation dose low, the resolution of most medical source images is limited, so multi-modal medical fusion images also have low resolution. To improve the resolution of the fused image, an image super-resolution method is usually applied first, after which a high-resolution multi-modal fused image is obtained by one of various fusion methods. Obtaining a high-resolution multi-modal medical fusion image therefore requires multiple steps and multiple network models, which greatly increases labor cost.
Disclosure of Invention
In order to solve the above problems, the present invention provides a multitask medical image enhancement method based on a generative adversarial network.
In order to achieve the purpose, the invention is realized by the following technical scheme:
the invention relates to a multitask medical image enhancement method based on a generation countermeasure network, which comprises the following steps:
step 1, constructing a training data set, and carrying out size unification and data augmentation operation on the training data set;
step 2, forming a generation countermeasure network model by using a dynamic convolution and a residual dense network;
step 3, setting the hyper-parameters and the loss functions of the network model, and optimizing the loss functions;
step 4, inputting the preprocessed training data set to a generated confrontation network model, and training a network to obtain a trained generated confrontation network model;
step 5, inputting a test data set into a network to obtain a fused super-resolution medical image;
step 6, using mutual information QmiAnd Q designed based on structural similarityyangAnd evaluating the quality of the fused super-resolution medical image.
The invention is further improved in that: the generative adversarial network formed in step 2 comprises a generative model G and two discriminator models D_m1 and D_m2. The generative model comprises two encoders and three decoders. Each encoder comprises a dynamic convolution layer, a residual dense network, and an up-sampling module; the decoders and the discriminator models are built from convolutional layers. The dynamic convolution layer applies global pooling to the input features, passes the pooled vector through a fully connected layer with ReLU activation, and obtains the weight of each candidate convolution kernel through a Softmax layer; the final convolution kernel is the linear combination of the candidate kernels under these weights. The residual dense network comprises five convolution layers, each followed by an activation layer and densely linked.
The invention is further improved in that: the hyper-parameters in step 3 comprise the batch size, the initial learning rate, the number of iterations, and a learning-rate decay strategy.
The invention is further improved in that: the loss function in step 3 comprises a super-resolution loss function and a fusion loss function, the super-resolution loss function being given by formula (1) [reproduced only as an image in the source];
in formula (1), α represents the ratio between the two kinds of loss, N represents the number of pixels in the image, I_sr denotes the super-resolution image, I_hr denotes the high-resolution image, and ψ(·) denotes the features obtained by feeding an image into a pre-trained VGG16 network;
the fusion loss function is composed of the adversarial loss and the content loss, as given by formula (2) [reproduced only as an image in the source];
the first and second terms of formula (2) represent the loss functions of modality m_1 and modality m_2, {λ_1, λ_2, β, γ} are balance parameters, L_content^(m1) and L_content^(m2) are content loss functions chosen according to the image modality, and L_adv^(m1) and L_adv^(m2) denote the adversarial losses between the generator G and the discriminators D_m1 and D_m2, defined by a formula that is likewise reproduced only as an image in the source.
the invention is further improved in that: for PET images, the content loss function is expressed as
Figure BDA00035391727600000212
The formula is as follows:
Figure BDA0003539172760000031
wherein IPETAs a PET image, ImIs another modality image.
The invention is further improved in that: for MRI or CT images, the content loss function, denoted L_content^(MRI/CT), constrains the Laplacian features of the source image [the formula is reproduced only as an image in the source], where Laplace(·) denotes the Laplacian operator and I_MRI/CT represents the MRI or CT image.
The invention is further improved in that: the adversarial losses of the discriminators D_m1 and D_m2 adopt the loss function of the WGAN-GP discriminator [the formula is reproduced only as an image in the source], in which the gradient penalty is evaluated at data sampled randomly between the generated data and the real data I_i.
The invention is further improved in that: the specific operations of step 4 are as follows:
step 4.1, feeding the original training set forward through the encoder and the super-resolution decoder, and training these two branches with the super-resolution loss function;
step 4.2, copying the encoder parameters trained in step 4.1 into the encoder of the fusion network, freezing those parameters, and training the network model with the fusion loss function to obtain the trained model.
The invention has the following beneficial effects:
1. The invention adopts a generative adversarial network model and uses dynamic convolution and a residual dense network to maximize information flow, mitigate problems such as vanishing or exploding gradients, and link shallow features with deep features, thereby safeguarding network performance.
2. The invention further optimizes network performance through parameter extraction, parameter freezing, and parameter fine-tuning, thereby obtaining a better super-resolution multi-modal medical fusion image.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is an overall network model.
Fig. 3 is a schematic diagram of an encoder in a network.
Fig. 4 is a schematic diagram of a decoder in a network.
FIG. 5 is a schematic diagram of the fusion results.
Detailed Description
The technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the possible implementations of the present invention; all other embodiments obtained by those of ordinary skill in the art from these embodiments without inventive effort fall within the protection scope of the present invention.
As shown in Figs. 1-5, the present invention is a multitask medical image enhancement method based on a generative adversarial network, comprising the following steps:
step 1, constructing a training data set, and processing the training data set as follows:
selecting training samples from the training data set as the original training set, where every image in the training samples has size 256 × 256, and applying image augmentation to the images; the augmentation operations are: vertical (up-down) flipping, rotation by 90°, 180°, and 270°, and vertical flipping followed by rotation by 90°, 180°, and 270°;
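The seven flip/rotation operations above, together with the identity, are the eight symmetries of a square image. As an illustrative sketch only (not the patent's code), the augmentation of step 1 could be written in NumPy as:

```python
import numpy as np

def augment(img):
    """Return the 8 variants described in step 1: the original image,
    its up-down flip, rotations by 90/180/270 degrees, and the up-down
    flip followed by each rotation (the dihedral group of the square)."""
    variants = []
    for base in (img, np.flipud(img)):      # original and vertically flipped
        for k in range(4):                  # rotate by k * 90 degrees
            variants.append(np.rot90(base, k))
    return variants
```

Applied to every 256 × 256 training image, this multiplies the size of the training set eightfold.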
step 2, building a generative adversarial network model from dynamic convolution and a residual dense network;
the generation countermeasure network comprises a generation model G and two discriminant models
Figure BDA0003539172760000041
And
Figure BDA0003539172760000042
wherein the generative model comprises two encoding modules and three decoding modules; the coding module comprises a dynamic convolution layer, a dense residual error network and an up-sampling module; the decoding module and the discriminant model both include different convolutional layers. The dynamic convolution changes an original fixed convolution kernel into a convolution kernel which can adaptively change attention according to input; and globally pooling input features, inputting the input features into a full connection layer and a ReLU (return link) layer for activation, finally obtaining the weight of each convolution kernel through Softmax layer activation, and linearly weighting the weight and each convolution kernel to obtain the final convolution kernel.
The residual dense network comprises five convolution layers, each followed by an activation layer. To maximize information flow between layers, all layers are densely connected: the input of each layer is formed from the outputs of all preceding layers. On this basis, residual links are introduced to mitigate vanishing or exploding gradients, connecting shallow features with deep features to safeguard network performance.
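The dynamic-convolution kernel aggregation described above (global pooling, a fully connected layer with ReLU, Softmax weights, then a linear combination of candidate kernels) can be sketched as follows. This is a minimal NumPy illustration with assumed shapes and assumed fully connected parameters `w_fc`/`b_fc`, not the patent's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_kernel(features, kernels, w_fc, b_fc):
    """Aggregate K candidate kernels into one, following the description
    of the encoder's dynamic convolution layer: global-average-pool the
    input features, pass them through a fully connected layer with ReLU,
    derive per-kernel weights with a Softmax, and linearly combine the
    candidate kernels with those weights.
    features: (C, H, W); kernels: (K, ...); w_fc: (K, C); b_fc: (K,)."""
    pooled = features.mean(axis=(1, 2))             # global average pooling -> (C,)
    logits = np.maximum(w_fc @ pooled + b_fc, 0.0)  # fully connected layer + ReLU
    attn = softmax(logits)                          # per-kernel weights, sum to 1
    return np.tensordot(attn, kernels, axes=1)      # weighted sum of kernels
```

Because the Softmax weights sum to one, the aggregated kernel is a convex combination of the candidate kernels, so identical candidates are left unchanged.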
Step 3, setting the batch size, the initial learning rate, the number of iterations, the learning-rate decay strategy, and the loss functions of the network model, and optimizing the loss functions, where the loss functions comprise a super-resolution loss function and a fusion loss function;
step 4, training the generative adversarial network model using the AdamW optimizer, with the initial learning rate set to 2 × 10⁻⁴ and training run for 200 epochs in total; the specific training steps are as follows:
step 4.1, feeding the original training set forward through the encoder (Encoder) and the super-resolution decoder (DecoderSR), and training these two branches with the super-resolution loss function;
the super-resolution loss function is formula (1) [reproduced only as an image in the source]; in formula (1), α represents the ratio between the two kinds of loss, N represents the number of pixels in the image, I_sr denotes the super-resolution image, I_hr denotes the high-resolution image, and ψ(·) denotes the features obtained by feeding an image into a pre-trained VGG16 network;
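Formula (1) survives only as an image in this copy, so its exact form is not reproducible here. The sketch below encodes one plausible reading of the description (mean pixel error over the N pixels plus an α-weighted feature error), with `psi` standing in for the pre-trained VGG16 feature extractor; the normalization of the perceptual term is an assumption:

```python
import numpy as np

def sr_loss(I_sr, I_hr, psi, alpha):
    """Hedged sketch of the super-resolution loss: a pixel-wise MSE over
    all N pixels plus an alpha-weighted perceptual term computed on the
    feature maps psi(.) of the two images."""
    n = I_sr.size                                   # N: all pixels of the image
    pixel = np.sum((I_sr - I_hr) ** 2) / n          # pixel-wise loss
    f_sr, f_hr = psi(I_sr), psi(I_hr)
    perceptual = np.sum((f_sr - f_hr) ** 2) / f_sr.size
    return pixel + alpha * perceptual
```

With identical super-resolved and reference images the loss is exactly zero, regardless of the feature extractor.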
step 4.2, copying the encoder parameters trained in step 4.1 into the encoder of the fusion network, freezing those parameters, and training the network model with the fusion loss function to obtain the trained model.
The fusion loss function is composed of the adversarial loss and the content loss, as given by formula (2) [reproduced only as an image in the source]. The first and second terms of formula (2) represent the loss functions of modality m_1 and modality m_2; {λ_1, λ_2, β, γ} are balance parameters; L_adv^(m1) and L_adv^(m2) denote the adversarial losses between the generator G and the discriminators D_m1 and D_m2, defined by a formula that is likewise reproduced only as an image in the source.
L_content^(m1) and L_content^(m2) are content loss functions chosen according to the image modality; the images are divided into PET images, MRI images, and CT images. For PET images, a mean-squared-error loss [reproduced only as an image in the source] is adopted in order to preserve the metabolic information of the PET image, where I_PET is the PET image and I_m is the image of the other modality.
For MRI or CT images, the Laplacian features of the source image are constrained [the formula is reproduced only as an image in the source] in order to preserve the texture information and contours of the source image, where Laplace(·) denotes the Laplacian operator and I_MRI/CT represents the MRI or CT image.
The adversarial losses of the discriminators D_m1 and D_m2 are defined by the loss function of the WGAN-GP discriminator [reproduced only as an image in the source], in which the gradient penalty is evaluated at data sampled randomly between the generated data and the real data I_i.
Step 5, inputting a test data set into the network trained in step 4.2 to obtain a fused super-resolution medical image;
step 6, evaluating the quality of the fused super-resolution medical image using the mutual-information metric Q_mi and the structural-similarity-based metric Q_yang.
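The Q_mi metric of step 6 sums the mutual information between the fused image and each source image. A common histogram-based estimate is sketched below; the bin count and the base-2 logarithm are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def q_mi(src_a, src_b, fused, bins=32):
    """Hedged sketch of the Q_mi fusion metric: the mutual information the
    fused image shares with each of the two source images, summed."""
    return mutual_information(src_a, fused, bins) + mutual_information(src_b, fused, bins)
```

A higher Q_mi indicates that the fused image retains more of the information present in both source modalities.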
The data set used in the experiments was downloaded from http://www.med.harvard.edu/AANLIB/home.html; in the test results, only the network model trained in step 4.2 is used as the test model. As shown in Fig. 5, the three images in the first row are the T1 and T2 modalities of MRI images and the FDG modality of a PET image, and the two images in the second row are the results of fusing the PET image with the MRI images of the two modalities. The Q_mi and Q_yang values of the fusion results are given in Table 1 [reproduced only as an image in the source]:
TABLE 1. Mutual-information Q_mi values and Q_yang values of the fusion results
As verified by Fig. 5 and Table 1, the model of the invention recovers image edge and texture details well, retains the semantic information and edge texture of the two source images more completely, and obtains a better super-resolution multi-modal fusion image.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A multitask medical image enhancement method based on a generative adversarial network, characterized in that the method comprises the following steps:
step 1, constructing a training data set, and performing size unification and data augmentation on the training data set;
step 2, building a generative adversarial network model from dynamic convolution and a residual dense network;
step 3, setting the hyper-parameters and the loss functions of the network model, and optimizing the loss functions;
step 4, inputting the preprocessed training data set into the generative adversarial network model and training the network to obtain a trained model;
step 5, inputting a test data set into the network to obtain a fused super-resolution medical image;
step 6, evaluating the quality of the fused super-resolution medical image using the mutual-information metric Q_mi and the structural-similarity-based metric Q_yang;
the generation countermeasure network formed in step 2 includes a generation model G and two discriminant models
Figure FDA0003539172750000011
And
Figure FDA0003539172750000012
the generation model comprises two encoders and three decoders, the encoders comprise dynamic convolution layers, a dense residual error network and an up-sampling module, the dynamic convolution layers perform global pooling on input features and input a full connection layer and ReLU activation, finally the weights of all convolution kernels are obtained through Softmax layer activation, the weights and all convolution kernels are subjected to linear weighting to obtain final convolution kernels, and the dense residual error network comprises five convolution layer link activation layers.
2. The multitask medical image enhancement method based on a generative adversarial network according to claim 1, characterized in that: the hyper-parameters in step 3 comprise the batch size, the initial learning rate, the number of iterations, and a learning-rate decay strategy.
3. The multitask medical image enhancement method based on a generative adversarial network according to claim 1, characterized in that: the loss function in step 3 comprises a super-resolution loss function and a fusion loss function, the super-resolution loss function being given by formula (1) [reproduced only as an image in the source];
in formula (1), α represents the ratio between the two kinds of loss, N represents the number of pixels in the image, I_sr denotes the super-resolution image output by the network, I_hr denotes the original high-resolution reference image, and ψ(·) denotes the features obtained by feeding an image into a pre-trained VGG16 network;
the fusion loss function is composed of the adversarial loss and the content loss, as given by formula (2) [reproduced only as an image in the source];
the first and second terms of formula (2) represent the loss functions of modality m_1 and modality m_2, {λ_1, λ_2, β, γ} are balance parameters, L_content^(m1) and L_content^(m2) are content loss functions chosen according to the image modality, and L_adv^(m1) and L_adv^(m2) denote the adversarial losses between the generator G and the discriminators D_m1 and D_m2, defined by a formula that is likewise reproduced only as an image in the source.
4. The multitask medical image enhancement method based on a generative adversarial network according to claim 3, characterized in that: for PET images, the content loss function, denoted L_content^(PET), is a mean-squared-error loss [the formula is reproduced only as an image in the source], where I_PET is the PET image and I_m is the image of the other modality.
5. The multitask medical image enhancement method based on a generative adversarial network according to claim 3, characterized in that: for MRI or CT images, the content loss function, denoted L_content^(MRI/CT), constrains the Laplacian features of the source image [the formula is reproduced only as an image in the source], where Laplace(·) denotes the Laplacian operator and I_MRI/CT represents the MRI or CT image.
6. The multitask medical image enhancement method based on a generative adversarial network according to claim 3, characterized in that: the adversarial losses of the discriminators D_m1 and D_m2 adopt the loss function of the WGAN-GP discriminator [the formula is reproduced only as an image in the source], in which ∇ denotes the gradient and the gradient penalty is evaluated at data sampled randomly between the generated data and the real data I_i.
7. The multitask medical image enhancement method based on a generative adversarial network according to claim 3, characterized in that: the specific operations of step 4 are as follows:
step 4.1, feeding the original training set forward through the encoder and the super-resolution decoder, and training these two branches with the super-resolution loss function;
step 4.2, copying the encoder parameters trained in step 4.1 into the encoder of the fusion network, freezing those parameters, and training the network model with the fusion loss function to obtain the trained model.
CN202210232832.0A 2022-03-09 2022-03-09 Multitask medical image enhancement method based on generative adversarial network Pending CN114596285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210232832.0A CN114596285A (en) Multitask medical image enhancement method based on generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210232832.0A CN114596285A (en) Multitask medical image enhancement method based on generative adversarial network

Publications (1)

Publication Number Publication Date
CN114596285A true CN114596285A (en) 2022-06-07

Family

ID=81808761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210232832.0A Pending CN114596285A (en) 2022-03-09 2022-03-09 Multitask medical image enhancement method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN114596285A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974832A (en) * 2024-04-01 2024-05-03 南昌航空大学 Multi-modal liver medical image expansion algorithm based on generation countermeasure network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127447A (en) * 2019-12-26 2020-05-08 河南工业大学 Blood vessel segmentation network and method based on generative confrontation network
CN111178499A (en) * 2019-12-10 2020-05-19 西安交通大学 Medical image super-resolution method based on generation countermeasure network improvement
WO2021056969A1 (en) * 2019-09-29 2021-04-01 中国科学院长春光学精密机械与物理研究所 Super-resolution image reconstruction method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021056969A1 (en) * 2019-09-29 2021-04-01 中国科学院长春光学精密机械与物理研究所 Super-resolution image reconstruction method and device
CN111178499A (en) * 2019-12-10 2020-05-19 西安交通大学 Medical image super-resolution method based on generation countermeasure network improvement
CN111127447A (en) * 2019-12-26 2020-05-08 河南工业大学 Blood vessel segmentation network and method based on generative confrontation network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974832A (en) * 2024-04-01 2024-05-03 南昌航空大学 Multi-modal liver medical image expansion algorithm based on generation countermeasure network
CN117974832B (en) * 2024-04-01 2024-06-07 南昌航空大学 Multi-modal liver medical image expansion algorithm based on generation countermeasure network

Similar Documents

Publication Publication Date Title
CN110827216B (en) Multi-generator generation countermeasure network learning method for image denoising
Gu et al. MedSRGAN: medical images super-resolution using generative adversarial networks
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
CN110009669B (en) 3D/2D medical image registration method based on deep reinforcement learning
CN107492071A (en) Medical image processing method and equipment
CN115953494B (en) Multi-task high-quality CT image reconstruction method based on low dose and super resolution
CN109741254B (en) Dictionary training and image super-resolution reconstruction method, system, equipment and storage medium
Su et al. A deep learning method for eliminating head motion artifacts in computed tomography
He et al. Downsampled imaging geometric modeling for accurate CT reconstruction via deep learning
CN114596285A (en) Multitask medical image enhancement method based on generation countermeasure network
Zhang et al. DREAM-Net: Deep residual error iterative minimization network for sparse-view CT reconstruction
CN117813055A (en) Multi-modality and multi-scale feature aggregation for synthesis of SPECT images from fast SPECT scans and CT images
Liu et al. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
Zhang et al. Hformer: highly efficient vision transformer for low-dose CT denoising
CN112991220B (en) Method for correcting image artifact by convolutional neural network based on multiple constraints
CN111105475A (en) Bone three-dimensional reconstruction method based on orthogonal angle X-ray
CN114358285A (en) PET system attenuation correction method based on flow model
CN112465118B (en) Low-rank generation type countermeasure network construction method for medical image generation
Poonkodi et al. 3D-MedTranCSGAN: 3D medical image transformation using CSGAN
CN117475268A (en) Multimode medical image fusion method based on SGDD GAN
US20210074034A1 (en) Methods and apparatus for neural network based image reconstruction
CN116757982A (en) Multi-mode medical image fusion method based on multi-scale codec
Aldemir et al. Chain code strategy for lossless storage and transfer of segmented binary medical data
Fan et al. Quadratic neural networks for CT metal artifact reduction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination