CN111462264A - Medical image reconstruction method, medical image reconstruction network training method and device


Info

Publication number
CN111462264A
CN111462264A
Authority
CN
China
Prior art keywords
image
network
vector
image reconstruction
coding
Prior art date
Legal status
Granted
Application number
CN202010186019.5A
Other languages
Chinese (zh)
Other versions
CN111462264B (en)
Inventor
胡圣烨
王书强
陈卓
申妍燕
张炽堂
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010186019.5A priority Critical patent/CN111462264B/en
Publication of CN111462264A publication Critical patent/CN111462264A/en
Application granted granted Critical
Publication of CN111462264B publication Critical patent/CN111462264B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a medical image reconstruction method, a medical image reconstruction network training method and a corresponding device. The medical image reconstruction network training method comprises the following steps: performing feature coding extraction on a real image sample to obtain a feature coding vector of the real image sample; through an image reconstruction network, performing image reconstruction based on the feature coding vector to obtain a first image, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image; and performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to the image discrimination result. The method introduces prior knowledge guidance from the real image, stabilizes the training of the image reconstruction network, and makes optimal convergence easy to achieve, thereby solving the problem that generative adversarial networks are difficult to train.

Description

Medical image reconstruction method, medical image reconstruction network training method and device
Technical Field
The embodiment of the application belongs to the technical field of image processing, and particularly relates to a medical image reconstruction method, a medical image reconstruction network training method and a medical image reconstruction network training device.
Background
Functional magnetic resonance imaging (fMRI) is an emerging neuroimaging modality that uses magnetic resonance imaging to measure hemodynamic changes induced by neuronal activity. It is a non-invasive technique that allows accurate localization of specific active cortical areas of the brain and captures the blood-oxygen changes that reflect neuronal activity. However, fMRI image acquisition is expensive, scanning times are long, and some patients cannot be scanned at all (for example, patients with metal objects in the body). In a given application scenario the number of images that can be acquired is therefore often limited, which greatly restricts the application of data-hungry artificial intelligence methods such as deep learning in the field of medical image analysis.
A promising solution is to use limited real image samples and existing artificial intelligence methods to learn to reconstruct corresponding medical images from Gaussian hidden layer vectors, thereby enlarging the sample size and supporting subsequent image analysis tasks. The generative adversarial network (GAN) is currently one of the best-performing generative models; it has gradually become a research hotspot in deep learning and has begun to be applied in the medical imaging field.
A traditional generative adversarial network can generate diverse new images by learning the real data distribution, but its training is difficult and optimal convergence is hard to achieve.
Disclosure of Invention
In order to overcome the problems in the related art, the embodiment of the application provides a medical image reconstruction method, a medical image reconstruction network training method and a medical image reconstruction network training device.
The application is realized by the following technical scheme:
in a first aspect, an embodiment of the present application provides a medical image reconstruction network training method, which includes:
carrying out feature coding extraction on a real image sample to obtain a feature coding vector of the real image sample;
through an image reconstruction network, carrying out image reconstruction based on the feature coding vector to obtain a first image, and carrying out image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
and carrying out image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to an image discrimination result.
In a first possible implementation manner of the first aspect, the performing feature coding extraction on a real image sample to obtain a feature coding vector of the real image sample includes:
carrying out layered feature extraction on the real image sample through a plurality of three-dimensional convolution layers of an image coding network;
and processing the extracted features through a linear function to obtain a feature coding vector of the real image sample.
In a second possible implementation manner of the first aspect, the method further includes:
carrying out vector discrimination on the feature coding vector and the first hidden layer vector through a coding feature discrimination network;
and optimizing the image coding network based on the vector discrimination result.
In a third possible implementation manner of the first aspect, the optimizing the image coding network based on the vector discrimination result includes:
calculating the voxel-by-voxel difference between the second image and the real image sample, and updating the network parameters of the image coding network by a gradient descent method until the voxel-by-voxel difference is less than or equal to a preset threshold value;
wherein the voxel-wise difference is a first loss function of the image coding network, the first loss function being:
[Formula image BDA0002414213770000021 not reproduced in this text.]
where L_C is the first loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C denotes the image coding network, and E denotes the mathematical expectation.
In a fourth possible implementation manner of the first aspect, the optimizing the image reconstruction network according to the image discrimination result includes:
determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity measurement loss function and the perception measurement loss function, updating network parameters of the image reconstruction network through a gradient descent method, and training the image reconstruction network;
wherein the second loss function is:
[Formula images BDA0002414213770000031 through BDA0002414213770000034 not reproduced in this text.]
where L_G is the second loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C denotes the image coding network, D is the image discrimination network, G is the image reconstruction network, E is the mathematical expectation, L_SSIM is the structural similarity metric loss function, L_perceptual is the perceptual metric loss function, X_real denotes the real image, λ1 and λ2 are weight coefficients, φ is a Gram matrix, and L_D is the loss function of the image discrimination network.
In a second aspect, an embodiment of the present application provides a medical image reconstruction method, including:
acquiring a second hidden layer vector of an image to be reconstructed;
and reconstructing the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
In a third aspect, an embodiment of the present application provides a medical image reconstruction network training apparatus, including:
the characteristic coding extraction module is used for extracting characteristic coding of a real image sample to obtain a characteristic coding vector of the real image sample;
the first image reconstruction module is used for reconstructing an image based on the characteristic coding vector through an image reconstruction network to obtain a first image and reconstructing the image based on a first hidden layer vector of the real image sample to obtain a second image;
and the first optimization module is used for performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to the image discrimination result.
In a fourth aspect, an embodiment of the present application provides a medical image reconstruction apparatus, including:
the hidden layer vector acquisition module is used for acquiring a second hidden layer vector of the image to be reconstructed;
and the second image reconstruction module is used for reconstructing the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
In a fifth aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the medical image reconstruction network training method according to the first aspect, or implements the medical image reconstruction method according to the second aspect.
In a sixth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the medical image reconstruction network training method according to the first aspect, or implements the medical image reconstruction method according to the second aspect.
In a seventh aspect, the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the medical image reconstruction network training method according to the first aspect or the medical image reconstruction method according to the second aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
in the embodiment of the application, feature coding extraction is performed on a real image sample to obtain a feature coding vector of the real image sample, image reconstruction is performed on the basis of the feature coding vector through an image reconstruction network to obtain a first image, image reconstruction is performed on the basis of a hidden vector of the real image sample to obtain a second image, image discrimination is performed on the real image sample, the first image and the second image through an image discrimination network, the image reconstruction network is optimized according to an image discrimination result, the optimized image reconstruction network is used for image reconstruction work to introduce priori knowledge guidance from the real image into a countermeasure network for stabilizing training of the image reconstruction network, optimal convergence is easily achieved, and the problem that training of the countermeasure network is difficult is solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a medical image reconstruction network training method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a medical image reconstruction network training method according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a medical image reconstruction network training method according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a medical image reconstruction method according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of medical image reconstruction provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a medical image reconstruction network training apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a medical image reconstruction apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Functional magnetic resonance imaging (fMRI) is an emerging neuroimaging modality that uses magnetic resonance imaging to measure hemodynamic changes induced by neuronal activity. It is a non-invasive technique that allows accurate localization of specific active cortical areas of the brain and captures the blood-oxygen changes that reflect neuronal activity. However, fMRI image acquisition is expensive, scanning times are long, and some patients cannot be scanned at all (for example, patients with metal objects in the body). In a given application scenario the number of images that can be acquired is therefore often limited, which greatly restricts the application of data-hungry artificial intelligence methods such as deep learning in the field of medical image analysis.
Generative adversarial networks have become a research focus of deep learning and are beginning to be applied in many fields. Besides reconstructing an original image from a hidden layer vector, another solution is to synthesize a medical image of one modality from a medical image of another modality, such as synthesizing a corresponding PET image from a CT image of the same patient.
Although in this approach a generative adversarial network can generate diverse new images by learning the real data distribution, its biggest problem is that network training is difficult and optimal convergence is not easy to achieve. The goal of a generative adversarial network is to make the data distribution fitted by the generator approach the real data distribution; in their research, the inventors of the present application found that a generation network introduced without any prior knowledge knows nothing about the real data distribution and can only try again and again according to the true-or-false feedback of the discriminator. The variational autoencoder, another high-performing generative model, does not have this problem: it first extracts the coding feature vector of a real image, performs variational inference through resampling, and decodes and generates from the hidden vector according to the variational result.
Inspired by the working mechanism of the variational autoencoder, this application introduces its coding feature vector into the training of the generative adversarial network as prior knowledge of the real image's features, giving the generation network a more definite optimization direction and thereby alleviating the problems of difficult, time-consuming and crash-prone training. The inventors also found that simply piecing a variational autoencoder and a generative adversarial network together is not feasible, because the objective functions of variational inference and of the generative adversarial network conflict and cannot achieve optimal convergence at the same time. To solve this problem, the application further introduces a separate coding discriminator, so that the optimization process of the variational autoencoder is also incorporated into the "generation-adversarial" system, resolving the optimization conflict between variational inference and the GAN objective function.
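The "generation-adversarial" system described above can be sketched as a data-flow toy. The following Python sketch is illustrative only and is not part of the patent: the four networks (image coding network C, image reconstruction network G, image discrimination network D, and the coding feature discriminator) are stood in for by random linear maps, and all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_X, DIM_Z = 64, 8  # invented toy sizes; the real networks are 3D conv stacks

# Random linear maps standing in for the four networks of the system.
W_enc = rng.normal(size=(DIM_Z, DIM_X)) * 0.1   # image coding network C
W_gen = rng.normal(size=(DIM_X, DIM_Z)) * 0.1   # image reconstruction network G
w_img = rng.normal(size=DIM_X) * 0.1            # image discrimination network D
w_cod = rng.normal(size=DIM_Z) * 0.1            # coding feature discriminator

def encode(x):
    return W_enc @ x            # image -> feature coding vector z_e

def generate(z):
    return W_gen @ z            # vector -> reconstructed image

x_real = rng.normal(size=DIM_X)         # flattened stand-in for a real image sample
z_e = encode(x_real)                    # feature coding vector
z_r = rng.normal(size=DIM_Z)            # first hidden layer vector ~ N(0, I)

x_first = generate(z_e)                 # "first image": reconstructed from z_e
x_second = generate(z_r)                # "second image": reconstructed from z_r

# D scores the real sample and both reconstructions; the coding discriminator
# scores z_e against z_r, incorporating the encoder into the adversarial system.
image_scores = [float(w_img @ x) for x in (x_real, x_first, x_second)]
code_scores = [float(w_cod @ z) for z in (z_e, z_r)]
```

The point of the sketch is the wiring, not the networks: both reconstruction paths feed the same image discriminator, while the encoder output is judged by a separate vector-level discriminator.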
For example, the embodiment of the present application can be applied to the exemplary scenario shown in fig. 1. The terminal 10 and the server 20 form an application scenario of the medical image reconstruction network training method and the medical image reconstruction method.
Specifically, the terminal 10 is configured to acquire a real image sample of the subject and send it to the server 20. The server 20 is configured to perform feature coding extraction on the real image sample to obtain its feature coding vector; to perform image reconstruction through an image reconstruction network based on the feature coding vector to obtain a first image, and based on a hidden layer vector of the real image sample to obtain a second image; to perform image discrimination on the real image sample, the first image and the second image through an image discrimination network; and to optimize the image reconstruction network according to the image discrimination result. The optimized image reconstruction network is then used for image reconstruction. Introducing prior knowledge guidance from the real image into the adversarial network stabilizes the training of the image reconstruction network, makes optimal convergence easy to achieve, and solves the problem that adversarial networks are difficult to train.
The medical image reconstruction network training method of the present application is described in detail below with reference to fig. 1.
Fig. 2 is a schematic flow chart of a medical image reconstruction network training method provided in an embodiment of the present application, and referring to fig. 2, the medical image reconstruction network training method is described in detail as follows:
in step 101, feature coding extraction is performed on a real image sample to obtain a feature coding vector of the real image sample.
In an embodiment, in step 101, feature extraction may be performed on the real image sample through an image coding network, so as to obtain a feature coding vector of the real image sample.
For example, referring to fig. 3, the extracting features of the real image sample through the image coding network to obtain the feature coding vector of the real image sample may specifically include:
in step 1011, the layered feature extraction is performed on the real image sample by the plurality of three-dimensional convolution layers of the image coding network.
In step 1012, the extracted features are processed by a linear function to obtain a feature encoding vector of the real image sample.
In an example scenario, a real image sample may be unfolded into three-dimensional images along the time sequence. The three-dimensional images are input into the image coding network in turn, hierarchical feature extraction is performed on each three-dimensional image by the plurality of three-dimensional convolution layers of the image coding network, and the linear and nonlinear features of the three-dimensional image are combined through a linear function to obtain the feature coding expression vector of the real image sample.
Wherein the linear function is a piecewise linear function. Specifically, linear features and nonlinear features of the three-dimensional image are processed through a piecewise linear function, and a feature coding expression vector of a real image sample is obtained.
Specifically, the linear and nonlinear features of the three-dimensional image are processed through the ReLU function to obtain the feature coding expression vector of the real image sample.
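The unfolding, layered extraction, and piecewise linear (ReLU) processing described above can be illustrated with a toy sketch. This is not the patented network: the 3D convolution layers are stood in for by 2×2×2 mean pooling, and all shapes are invented for illustration.

```python
import numpy as np

# Invented toy fMRI series: 5 time points of 4x4x4 volumes (real scans are far larger).
fmri = np.arange(5 * 64, dtype=float).reshape(5, 4, 4, 4)

# Unfold the series into one three-dimensional image per time point.
volumes = [fmri[t] for t in range(fmri.shape[0])]

def conv_layer_standin(v):
    # Stand-in for one 3D convolution layer: 2x2x2 strided mean pooling.
    d, h, w = (s // 2 for s in v.shape)
    return v.reshape(d, 2, h, 2, w, 2).mean(axis=(1, 3, 5))

def relu(v):
    # The piecewise linear (ReLU) function applied to the extracted features.
    return np.maximum(v, 0.0)

# Layered feature extraction followed by the piecewise linear function.
codes = [relu(conv_layer_standin(conv_layer_standin(v)).ravel()) for v in volumes]
```

Each volume passes through the stacked layers in turn, and the final ReLU pass yields one feature coding expression vector per time point.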
In step 102, an image reconstruction network is used to reconstruct an image based on the feature coding vector to obtain a first image, and an image reconstruction network is used to reconstruct an image based on the first hidden layer vector of the real image sample to obtain a second image.
In one embodiment, the feature coding vector and the first hidden layer vector may be input into the image reconstruction network to obtain the first image and the second image. In the embodiment of the present application, the convolution layers of the image reconstruction network are three-dimensional separable convolution layers with nearest-neighbor upsampling.
For example, the feature coding vector extracted from the real image sample and a first hidden layer vector sampled from a Gaussian distribution of the real image sample may be used as inputs to the image reconstruction network, which progressively reconstructs the first image and the second image from them respectively. In this embodiment, three-dimensional separable convolution layers with nearest-neighbor upsampling replace the deconvolution layers of a conventional image reconstruction network, which reduces the number of learnable parameters and improves the quality of the generated fMRI images: the reconstructed images have fewer artifacts and a clearer structure.
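Nearest-neighbor upsampling, the operation paired with the separable convolutions above, can be shown concretely. A minimal numpy sketch (illustrative only; in the actual network a learnable separable convolution would follow each upsampling step):

```python
import numpy as np

def upsample_nn(v, factor=2):
    # Nearest-neighbour upsampling: duplicate voxels along each spatial axis.
    return v.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

vol = np.arange(8.0).reshape(2, 2, 2)
up = upsample_nn(vol)
# A learnable 3D separable convolution would follow this upsampling step,
# replacing the deconvolution layer of a conventional reconstruction network.
```

Because the upsampling itself has no parameters, all learnable weights live in the separable convolutions, which is what reduces the parameter count relative to deconvolution.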
In step 103, image discrimination is performed on the real image sample, the first image and the second image through an image discrimination network, and the image reconstruction network is optimized according to an image discrimination result.
Specifically, the real image sample, the first image and the second image can be used as inputs to the image discrimination network; the image reconstruction network is optimized according to the discrimination result of the image discrimination network, constructing generation-adversarial training, and the optimized image reconstruction network is used for image reconstruction.
After the image reconstruction network is optimized in step 103, it continues to be used for the image reconstruction of step 102 to obtain a new first image and second image, after which step 103 is executed again; the two steps are executed in this alternating cycle.
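The alternation between steps 102 and 103 can be sketched as a loop. The following is a hypothetical skeleton: the stub functions stand in for the real networks, and only the control flow reflects the cycle described above.

```python
def train(steps, reconstruct, discriminate, optimize):
    # Alternate steps 102 and 103: reconstruct, discriminate, then update.
    history = []
    for _ in range(steps):
        first, second = reconstruct()          # step 102: rebuild both images
        result = discriminate(first, second)   # step 103: judge the reconstructions
        optimize(result)                       # update the reconstruction network
        history.append(result)
    return history

state = {"quality": 0.0}                       # toy "network parameter"

def reconstruct():
    return state["quality"], state["quality"]  # stand-ins for the first/second image

def discriminate(first, second):
    return 1.0 - first                         # toy critic: 0 means "looks real"

def optimize(result):
    state["quality"] += 0.1 * result           # toy gradient step

history = train(5, reconstruct, discriminate, optimize)
```

In this toy the critic score shrinks each cycle, mirroring how the alternating optimization is meant to drive the reconstructions toward the real distribution.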
The medical image reconstruction network training method performs feature coding extraction on a real image sample to obtain its feature coding vector. Through an image reconstruction network, image reconstruction is performed based on the feature coding vector to obtain a first image, and based on the hidden layer vector of the real image sample to obtain a second image. The real image sample, the first image and the second image are discriminated by an image discrimination network, the image reconstruction network is optimized according to the discrimination result, and the optimized network is used for image reconstruction. Introducing prior knowledge guidance from the real image into the generative adversarial network stabilizes the training of the image reconstruction network, makes optimal convergence easy to achieve, and solves the problem that generative adversarial networks are difficult to train.
Fig. 4 is a schematic flowchart of a medical image reconstruction network training method provided in an embodiment of the present application, and referring to fig. 4, based on the embodiment shown in fig. 2, the medical image reconstruction network training method may further include:
in step 104, vector discrimination is performed on the feature encoding vector and the first hidden layer vector by an encoding feature discrimination network.
In step 105, the image coding network is optimized based on the vector discrimination result.
After the feature coding vector is obtained in step 101, the image coding network may be optimized through steps 104 and 105 using the feature coding vector and the first hidden layer vector of the real image sample; the optimized image coding network then serves as the image coding network in step 101 and can be used to execute step 101 again. The image coding network is repeatedly optimized in this way.
In one embodiment, the image coding network may be optimized by performing adversarial training on it based on the vector discrimination result.
Specifically, a coding feature discrimination network with the same structure as the image discrimination network can be constructed, taking as input the feature coding vector obtained by coding the real image sample and the first hidden layer vector sampled from a Gaussian distribution. The coding feature discrimination network and the image coding network thus also form a "generation-adversarial" training relationship, which replaces variational inference and resolves the training conflict between variational inference and the adversarial objective function.
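The "generation-adversarial" relationship on vectors can be illustrated with a toy linear critic trained to separate encoded vectors z_e from Gaussian samples z_r. This is a conceptual sketch only (a linear critic instead of a network, no gradient penalty, and an artificially shifted z_e so the separation is visible):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n = 8, 256

z_e = rng.normal(loc=0.5, size=(n, dim))  # encoded feature vectors (shifted on purpose)
z_r = rng.normal(size=(n, dim))           # first hidden layer vectors ~ N(0, I)

w = np.zeros(dim)  # linear critic standing in for the coding feature discriminator
for _ in range(100):
    # Gradient ascent on E[w.z_r] - E[w.z_e]: push the two sets' scores apart.
    grad = z_r.mean(axis=0) - z_e.mean(axis=0)
    w += 0.1 * grad

# After training, the critic separates the two vector populations on average.
critic_gap = float((z_r @ w).mean() - (z_e @ w).mean())
```

The encoder's adversarial objective would be the opposite: make z_e indistinguishable from z_r, driving this gap back toward zero.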
In an embodiment, performing adversarial training on the image coding network based on the vector discrimination result specifically includes: calculating the voxel-by-voxel difference between the second image and the real image sample, and updating the network parameters of the image coding network by a gradient descent method until the voxel-by-voxel difference is less than or equal to a preset threshold value, thereby training the image coding network; the voxel-by-voxel difference is the first loss function of the image coding network.
Illustratively, for the training optimization of the image coding network, a coding feature discrimination network is introduced to replace the original variational inference process. During training of the image coding network, the voxel-by-voxel difference between the reconstructed fMRI image and the real fMRI image is first calculated, and the network parameters of the image coding network are updated by gradient descent until the voxel-by-voxel difference is less than or equal to a first preset threshold. In addition, the Wasserstein distance is chosen in the first loss function as the measure between the real image distribution and the reconstructed image distribution, and a gradient penalty term is introduced to clip the discriminator's network gradient and further stabilize training.
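The threshold-controlled gradient-descent loop on the voxel-by-voxel difference can be sketched as follows. The volumes, learning rate, and threshold are invented for illustration; the real method operates on fMRI volumes and full network parameters:

```python
import numpy as np

def voxel_diff(recon, real):
    # Mean absolute voxel-by-voxel difference between the two volumes.
    return float(np.abs(recon - real).mean())

real = np.full((4, 4, 4), 2.0)     # invented stand-in for the real fMRI volume
recon = np.zeros((4, 4, 4))        # initial reconstruction
threshold, lr = 0.05, 0.5          # invented preset threshold and step size
steps = 0
while voxel_diff(recon, real) > threshold:
    recon += lr * (real - recon)   # gradient step on the squared-error surrogate
    steps += 1
```

The loop terminates exactly when the voxel-wise difference falls to or below the preset threshold, which is the stopping criterion the text describes.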
Illustratively, the first loss function may be:
L_C = E_{z_e}[C(z_e)] - E_{z_r}[C(z_r)]
wherein L_C is said first loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, and E is a mathematical expectation.
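As an illustration of this Wasserstein-style objective, an empirical estimate of E[C(z_e)] - E[C(z_r)] can be computed from batches of code vectors (a hedged sketch: the linear critic and all names here are toy assumptions, not the networks of the application):

```python
def critic(z, w):
    """Toy linear critic scoring a code vector z with weights w."""
    return sum(wi * zi for wi, zi in zip(w, z))

def coding_loss(codes_e, codes_r, w):
    """Empirical Wasserstein-style objective: the difference between the
    critic's mean score on encoder codes z_e and on Gaussian samples z_r."""
    mean_e = sum(critic(z, w) for z in codes_e) / len(codes_e)
    mean_r = sum(critic(z, w) for z in codes_r) / len(codes_r)
    return mean_e - mean_r

loss = coding_loss([[1.0], [3.0]], [[0.0], [2.0]], [1.0])  # 2.0 - 1.0 = 1.0
```

In a full implementation the critic would be a deep network and a gradient penalty term would be added, as the text describes.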
In an embodiment, the optimizing the image reconstruction network according to the image determination result in step 103 may specifically be: and performing countermeasure training on the image reconstruction network according to the image discrimination result.
The performing countermeasure training on the image reconstruction network according to the image discrimination result may include: and determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity measurement loss function and the perception measurement loss function, updating network parameters of the image reconstruction network through a gradient descent method, and training the image reconstruction network.
Illustratively, the countermeasure training is performed on the image reconstruction network according to the image discrimination result. Specifically, if the discrimination result of the image discrimination network is closer to the real image, the image reconstruction network updates its network parameters by the gradient descent method with a first preset amplitude, or does not update them; if the discrimination result of the image discrimination network is closer to the reconstructed image, the image reconstruction network updates its network parameters with a second preset amplitude, the second preset amplitude being larger than the first preset amplitude. In addition, besides selecting the Wasserstein distance in the second loss function as the measure of the distance between the real image distribution and the reconstructed image distribution, a structural similarity metric loss and a perceptual metric loss are introduced to ensure that the features of the reconstructed image are more consistent with the real image.
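The asymmetric update rule described above can be sketched as follows (a toy illustration; the amplitude values and the boolean `judged_real` signal standing in for the discriminator's verdict are assumptions):

```python
def update_params(params, grads, judged_real, first_amp=0.1, second_amp=0.5):
    """Update the reconstruction network's parameters: a small step (first
    preset amplitude) when the discriminator already judges the output as
    real, and a larger step (second preset amplitude) otherwise."""
    step = first_amp if judged_real else second_amp
    return [p - step * g for p, g in zip(params, grads)]

small = update_params([1.0], [2.0], judged_real=True)   # small correction
large = update_params([1.0], [2.0], judged_real=False)  # large correction
```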
Illustratively, the second loss function may be:
L_G = -E_{z_e}[D(G(z_e))] - E_{z_r}[D(G(z_r))] + λ_1·L_SSIM + λ_2·L_perceptual

L_SSIM = 1 - SSIM(G(z_e), X_real)

L_perceptual = ||φ(G(z_e)) - φ(X_real)||_2^2

L_D = E_{z_e}[D(G(z_e))] + E_{z_r}[D(G(z_r))] - 2·E[D(X_real)]
wherein L_G is said second loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, D is the image discrimination network, G is the image reconstruction network, E is the mathematical expectation, L_SSIM is the structural similarity metric loss function, L_perceptual is the perceptual metric loss function, X_real characterizes the real image, λ_1 and λ_2 are weight coefficients, φ is a Gram matrix, and L_D is the loss function of the image discrimination network.
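The structural similarity term can be illustrated with a global-statistics sketch in pure Python (real SSIM implementations use local sliding windows; the constants c1 and c2 here are assumed defaults, not values from the application):

```python
def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM between two flattened images: compares the means,
    variances, and covariance of the two voxel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def ssim_loss(x, y):
    """L_SSIM-style loss: 0 for identical images, larger when structure differs."""
    return 1.0 - ssim(x, y)
```

An image compared with itself gives SSIM 1 and loss 0, while structurally dissimilar images are penalized.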
In this embodiment, the closeness of the image reconstructed by the image reconstruction network and the real image can be evaluated by an image overlap ratio (SOR) technical index. After the training optimization of the image reconstruction network is completed, a high-quality medical image sample can be reconstructed from the Gaussian hidden layer vector through the trained image reconstruction network, the image sample size is enhanced, and the subsequent analysis work is facilitated.
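The SOR index is not formally defined in this text; as a hedged sketch, one common way to measure how well two volumes overlap is a Jaccard-style ratio over binarized voxels (the threshold and the exact formula are assumptions for illustration):

```python
def overlap_ratio(vol_a, vol_b, threshold=0.5):
    """Binarize two flattened volumes at `threshold` and return the ratio
    of jointly active voxels to the union of active voxels."""
    a = [v > threshold for v in vol_a]
    b = [v > threshold for v in vol_b]
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0
```

A ratio of 1.0 means the reconstructed and real volumes activate exactly the same voxels.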
The medical image reconstruction method of the present application is described in detail below with reference to fig. 1.
Fig. 5 is a schematic flow chart of a medical image reconstruction method provided in an embodiment of the present application, and with reference to fig. 5, the medical image reconstruction method is described in detail as follows:
in step 201, a second hidden layer vector of the image to be reconstructed is obtained.
In step 202, image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
According to the medical image reconstruction method, feature coding extraction is performed on a real image sample to obtain a feature coding vector of the real image sample; through an image reconstruction network, image reconstruction is performed based on the feature coding vector to obtain a first image, and based on the first hidden layer vector of the real image sample to obtain a second image; the real image sample, the first image, and the second image are discriminated through an image discrimination network, and the image reconstruction network is trained and optimized according to the image discrimination result; the image to be reconstructed is then reconstructed, based on the second hidden layer vector, through the trained and optimized image reconstruction network. Because prior knowledge from the real image is introduced into the generation countermeasure network, the training of the image reconstruction network is guided stably and optimal convergence is easily achieved, which solves the problem of the difficulty of generation countermeasure network training and makes the reconstructed image closer to the real image.
Referring to fig. 6, in the present embodiment, the process of medical image reconstruction may include the following steps:
in step 301, feature extraction is performed on the real image sample based on the image coding network, so as to obtain a feature coding vector of the real image sample.
In step 302, an image reconstruction network is used to reconstruct an image based on the feature coding vector to obtain a first image, and an image reconstruction network is used to reconstruct an image based on the first hidden layer vector of the real image sample to obtain a second image.
In step 303, the real image sample, the first image and the second image are subjected to image discrimination by an image discrimination network, and training and optimization are performed on the image reconstruction network according to an image discrimination result. Wherein, the image reconstruction network after the training optimization is used as the image reconstruction network in step 302 to perform the next image reconstruction.
In step 304, the feature coding vector in step 301 and the first hidden layer vector of the real image sample are vector-discriminated by a coding feature discrimination network.
In step 305, based on the vector discrimination result, the image coding network is optimized, and the optimized image coding network is used as the image coding network in step 301 to perform feature extraction on the next real image sample.
In step 306, after the training and optimization of the image reconstruction network through the real image sample are completed, a second hidden layer vector of the image to be reconstructed is obtained.
In step 307, image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
The following examples of the present application will be described with reference to real fMRI images of rat brain regions, but the present invention is not limited thereto.
First, the real fMRI images x_real of the rat brain region are generated as three-dimensional images over a time sequence and are sequentially input into the image coding network; layered feature extraction is performed on the three-dimensional images by a plurality of three-dimensional convolution layers of the image coding network, linear and nonlinear features are synthesized through a ReLU function, and the feature coding vector z_e of the real fMRI image is output.
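A minimal pure-Python sketch of one such layer (single channel, "valid" padding; a real network stacks many such layers with learned multi-channel kernels, so everything here is a toy assumption):

```python
def relu(v):
    """Rectified linear unit: passes positive activations, zeroes the rest."""
    return v if v > 0 else 0.0

def conv3d_relu(vol, kernel):
    """Single-channel 3-D convolution (valid padding) followed by ReLU.
    vol is a D x H x W nested list, kernel is k x k x k."""
    k = len(kernel)
    D, H, W = len(vol), len(vol[0]), len(vol[0][0])
    out = []
    for d in range(D - k + 1):
        plane = []
        for h in range(H - k + 1):
            row = []
            for w in range(W - k + 1):
                s = sum(vol[d + i][h + j][w + l] * kernel[i][j][l]
                        for i in range(k) for j in range(k) for l in range(k))
                row.append(relu(s))
            plane.append(row)
        out.append(plane)
    return out
```

A 3×3×3 volume of ones convolved with a 2×2×2 kernel of ones yields a 2×2×2 output whose entries are all 8; a negative kernel is zeroed out by the ReLU.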
Secondly, the feature coding vector z_e extracted from the real fMRI image and the hidden layer vector z_r sampled from the Gaussian distribution are both used as inputs of the image reconstruction network, which reconstructs, stage by stage, the fMRI images x_rec and x_rand from z_e and z_r respectively. The convolution layers of the image reconstruction network are three-dimensional separable convolution layers with neighbor upsampling; replacing the traditional deconvolution layer with this three-dimensional separable convolution operation reduces the number of learnable parameters and improves the quality of the reconstructed fMRI image, so that the reconstructed image has fewer artifacts, a clearer brain region structure, and the like.
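The parameter saving from separable convolution, and nearest-neighbor ("neighbor") upsampling, can be sketched as follows (illustrative counts only; the channel sizes are assumptions, and biases are omitted):

```python
def full_conv3d_params(c_in, c_out, k):
    """Learnable weights of a standard 3-D convolution layer."""
    return c_in * c_out * k ** 3

def separable_conv3d_params(c_in, c_out, k):
    """Depthwise k^3 filter per input channel plus a 1x1x1 pointwise mix."""
    return c_in * k ** 3 + c_in * c_out

def neighbor_upsample3d(vol, f=2):
    """Nearest-neighbor upsampling: repeat each voxel f times on every axis."""
    out = []
    for plane in vol:
        up_rows = []
        for row in plane:
            wide = [v for v in row for _ in range(f)]
            for _ in range(f):
                up_rows.append(list(wide))
        for _ in range(f):
            out.append([list(r) for r in up_rows])
    return out

saving = full_conv3d_params(16, 32, 3) - separable_conv3d_params(16, 32, 3)
```

With 16 input channels, 32 output channels, and 3×3×3 kernels, the separable form needs 944 weights against 13824 for the full convolution, which is the kind of reduction the text attributes to this layer.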
Thirdly, the real fMRI image x_real, the image x_rec, and the image x_rand are all used as inputs of the image discrimination network, and the image reconstructor is optimized according to the discrimination result of the image discrimination network, so as to construct generation-countermeasure training. Meanwhile, a coding feature discrimination network with the same structure as the image discrimination network is constructed, and the feature coding vector z_e obtained by coding the real fMRI image x_real and the hidden layer vector z_r sampled from the Gaussian distribution are used as its inputs, so that the coding feature discrimination network and the image coding network also form a "generation-countermeasure" training relationship, which replaces variational inference and solves the problem of the training conflict between variational inference and the generation-countermeasure objective function.
Fourthly, an optimal loss function is selected to train and optimize each network. For the training optimization of the image coding network, the coding feature discrimination network is introduced to replace the traditional variational inference process, and only the voxel-by-voxel difference between the reconstructed fMRI image and the real fMRI image needs to be minimized; in addition, the Wasserstein distance is selected in the loss function as the measure of the distance between the real image distribution and the reconstructed image distribution, and a gradient penalty term is introduced to clip the network gradient of the discriminator, which further stabilizes the image coding network training. For the training of the image reconstructor network, besides the Wasserstein distance, a structural similarity metric loss and a perceptual metric loss are introduced to ensure that the features of the reconstructed image in key areas such as the rat ventral tegmental area (VTA) and prefrontal cortex (PFC) are consistent with the real image. The loss function of each network is formulated as follows:
the loss function of an image coding network is:
L_C = E_{z_e}[C(z_e)] - E_{z_r}[C(z_r)]
the loss function of the image discrimination network is:
L_D = E_{z_e}[D(G(z_e))] + E_{z_r}[D(G(z_r))] - 2·E[D(X_real)]
the loss function of the image reconstruction network is:
L_G = -E_{z_e}[D(G(z_e))] - E_{z_r}[D(G(z_r))] + λ_1·L_SSIM + λ_2·L_perceptual
wherein L_SSIM is the structural similarity metric loss function and L_perceptual is the perceptual metric loss function, which are respectively:
L_SSIM = 1 - SSIM(G(z_e), X_real)

L_perceptual = ||φ(G(z_e)) - φ(X_real)||_2^2
finally, the approach degree of the reconstructed image and the real image is estimated through an image overlapping rate (SOR) technical index. After the training optimization of the image reconstruction network is completed, a high-quality medical image sample is reconstructed from the Gaussian hidden layer vector of the image to be reconstructed through the trained image reconstruction network, so that the image sample amount is enhanced, and the subsequent analysis work is facilitated.
Compared with the traditional generation countermeasure network, the medical image reconstruction network training method of the present application, which fuses a variational self-encoder with the generation countermeasure network, introduces prior knowledge guidance from the real image through the fused variational self-encoder, thereby solving the problem that the generation countermeasure network is difficult to train.
The embodiment of the application adds a separate coding discrimination network between the variational self-encoder and the generation countermeasure network, and aims to replace the function of variational reasoning so that the coding feature vector of the variational encoder approaches the original Gaussian hidden layer vector in a countermeasure training mode, thereby solving the conflict between the variational reasoning and the objective function of the generation countermeasure network.
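The role of that separate coding discrimination network can be illustrated with a deliberately simple stand-in: a one-dimensional logistic discriminator, trained with hand-derived binary cross-entropy gradients, that learns to tell prior samples from an encoder whose code distribution is off-target (the distributions, learning rate, and step count are all toy assumptions):

```python
import math
import random

def disc(z, w, b):
    """Logistic discriminator: probability that code z came from the prior."""
    return 1.0 / (1.0 + math.exp(-(w * z + b)))

random.seed(0)
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    z_prior = random.gauss(0.0, 1.0)  # z_r: Gaussian hidden layer vector
    z_code = random.gauss(2.0, 1.0)   # z_e: toy encoder output, off-target
    for z, label in ((z_prior, 1.0), (z_code, 0.0)):
        p = disc(z, w, b)
        w -= lr * (p - label) * z     # gradient of binary cross-entropy
        b -= lr * (p - label)
```

After training, the discriminator scores prior-like codes higher than the encoder's off-target codes; the encoder's adversarial objective is then to push its codes until the discriminator can no longer tell the two apart, which is exactly the job variational inference would otherwise do.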
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the above embodiment applied to the medical image reconstruction network training method, fig. 7 shows a structural block diagram of the medical image reconstruction network training apparatus provided in the embodiment of the present application, and for convenience of explanation, only the parts related to the embodiment of the present application are shown.
Referring to fig. 7, the medical image reconstruction network training apparatus in the embodiment of the present application may include a feature code extraction module 401, a first image reconstruction module 402, and an optimization module 403.
The feature coding extraction module 401 is configured to perform feature coding extraction on a real image sample to obtain a feature coding vector of the real image sample;
a first image reconstruction module 402, configured to perform image reconstruction based on the feature coding vector through an image reconstruction network to obtain a first image, and perform image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
a first optimization module 403, configured to perform image discrimination on the real image sample, the first image, and the second image through an image discrimination network, and optimize the image reconstruction network according to an image discrimination result.
Optionally, the feature encoding extraction module 401 may be configured to: and performing feature extraction on the real image sample based on an image coding network to obtain a feature coding vector of the real image sample.
Optionally, the feature encoding extraction module 401 may be specifically configured to:
performing layered feature extraction on the real image sample through a plurality of three-dimensional convolution layers of the image coding network;
and processing the extracted features through a linear function to obtain a feature coding vector of the real image sample.
Optionally, the linear function is a piecewise linear function.
Optionally, the piecewise linear function is a ReLU function.
Optionally, the medical image reconstruction network training apparatus may further include a second optimization module; the second optimization module is to:
carrying out vector discrimination on the feature coding vector and the first hidden layer vector through a coding feature discrimination network;
and optimizing the image coding network based on the vector discrimination result.
Optionally, the optimizing the image coding network based on the vector discrimination result includes:
and carrying out countermeasure training on the image coding network based on the vector discrimination result.
Optionally, the performing countermeasure training on the image coding network based on the vector discrimination result includes:
calculating the voxel-by-voxel difference between the second image and the real image sample, and updating the network parameters of the image coding network by a gradient descent method until the voxel-by-voxel difference is less than or equal to a preset threshold value;
wherein the voxel-wise difference is a first loss function of the image coding network, the first loss function being:
L_C = E_{z_e}[C(z_e)] - E_{z_r}[C(z_r)]
wherein L_C is said first loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, and E is a mathematical expectation.
Optionally, the first optimization module 403 may be configured to:
and performing countermeasure training on the image reconstruction network according to the image discrimination result.
Optionally, the performing countermeasure training on the image reconstruction network according to the image discrimination result may include:
determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity measurement loss function and the perception measurement loss function, updating network parameters of the image reconstruction network through a gradient descent method, and training the image reconstruction network;
wherein the second loss function is:
L_G = -E_{z_e}[D(G(z_e))] - E_{z_r}[D(G(z_r))] + λ_1·L_SSIM + λ_2·L_perceptual

L_SSIM = 1 - SSIM(G(z_e), X_real)

L_perceptual = ||φ(G(z_e)) - φ(X_real)||_2^2

L_D = E_{z_e}[D(G(z_e))] + E_{z_r}[D(G(z_r))] - 2·E[D(X_real)]
L_G is said second loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, D is the image discrimination network, G is the image reconstruction network, E is the mathematical expectation, L_SSIM is the structural similarity metric loss function, L_perceptual is the perceptual metric loss function, X_real characterizes the real image, λ_1 and λ_2 are weight coefficients, φ is a Gram matrix, and L_D is the loss function of the image discrimination network.
Optionally, the first image reconstruction module 402 may be specifically configured to:
inputting the feature coding vector and the first hidden layer vector into the image reconstruction network to obtain the first image and the second image; wherein the convolution layers of the image reconstruction network are three-dimensional separable convolution layers with neighbor upsampling.
Corresponding to the application of the above embodiments to the image reconstruction method, fig. 8 shows a block diagram of a medical image reconstruction apparatus provided in an embodiment of the present application, and for convenience of explanation, only the relevant parts of the embodiment of the present application are shown.
Referring to fig. 8, the medical image reconstruction apparatus in the embodiment of the present application may include a hidden vector acquisition module 501 and a second image reconstruction module 502.
The hidden layer vector acquiring module 501 is configured to acquire a second hidden layer vector of an image to be reconstructed;
a second image reconstruction module 502, configured to perform image reconstruction on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, and referring to fig. 9, the terminal device 600 may include: at least one processor 610, a memory 620, and a computer program stored in the memory 620 and executable on the at least one processor 610, wherein the processor 610, when executing the computer program, implements the steps of any of the above-mentioned method embodiments, such as the steps 101 to 103 in the embodiment shown in fig. 2, or the steps 201 to 202 in the embodiment shown in fig. 5. Alternatively, the processor 610, when executing the computer program, implements the functions of each module/unit in the above-described device embodiments, such as the functions of the modules 401 to 403 shown in fig. 7 or the functions of the modules 501 to 502 shown in fig. 8.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 620 and executed by the processor 610 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal device 600.
Those skilled in the art will appreciate that fig. 9 is merely an example of a terminal device and is not limiting and may include more or fewer components than shown, or some components may be combined, or different components such as input output devices, network access devices, buses, etc.
The Processor 610 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 620 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 620 is used for storing the computer program and other programs and data required by the terminal device. The memory 620 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when being executed by a processor, the computer program implements the steps in the embodiments of the medical image reconstruction network training method or implements the steps in the embodiments of the medical image reconstruction method.
The embodiment of the present application provides a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the embodiments of the medical image reconstruction network training method or implement the steps in the embodiments of the medical image reconstruction method when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A medical image reconstruction network training method is characterized by comprising the following steps:
carrying out feature coding extraction on a real image sample to obtain a feature coding vector of the real image sample;
through an image reconstruction network, carrying out image reconstruction based on the feature coding vector to obtain a first image, and carrying out image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
and carrying out image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to an image discrimination result.
2. The medical image reconstruction network training method according to claim 1, wherein the performing feature coding extraction on the real image sample to obtain a feature coding vector of the real image sample includes:
carrying out layered feature extraction on the real image sample through a plurality of three-dimensional convolution layers of an image coding network;
and processing the extracted features through a linear function to obtain a feature coding vector of the real image sample.
3. The medical image reconstruction network training method of claim 2, wherein the method further comprises:
carrying out vector discrimination on the feature coding vector and the first hidden layer vector through a coding feature discrimination network;
and optimizing the image coding network based on the vector discrimination result.
4. The medical image reconstruction network training method of claim 3, wherein the optimizing the image coding network based on the vector discrimination result comprises:
calculating the voxel-by-voxel difference between the second image and the real image sample, and updating the network parameters of the image coding network by a gradient descent method until the voxel-by-voxel difference is less than or equal to a preset threshold value;
wherein the voxel-wise difference is a first loss function of the image coding network, the first loss function being:
L_C = E_{z_e}[C(z_e)] - E_{z_r}[C(z_r)]
L_C is the first loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, and E is a mathematical expectation.
5. The method for training the medical image reconstruction network according to claim 1, wherein the optimizing the image reconstruction network according to the image discrimination result comprises:
determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity measurement loss function and the perception measurement loss function, updating network parameters of the image reconstruction network through a gradient descent method, and training the image reconstruction network;
wherein the second loss function is:
L_G = -E_{z_e}[D(G(z_e))] - E_{z_r}[D(G(z_r))] + λ_1·L_SSIM + λ_2·L_perceptual

L_SSIM = 1 - SSIM(G(z_e), X_real)

L_perceptual = ||φ(G(z_e)) - φ(X_real)||_2^2

L_D = E_{z_e}[D(G(z_e))] + E_{z_r}[D(G(z_r))] - 2·E[D(X_real)]
L_G is said second loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, D is the image discrimination network, G is the image reconstruction network, E is the mathematical expectation, L_SSIM is the structural similarity metric loss function, L_perceptual is the perceptual metric loss function, X_real characterizes the real image, λ_1 and λ_2 are weight coefficients, φ is a Gram matrix, and L_D is the loss function of the image discrimination network.
6. A method of medical image reconstruction, comprising:
acquiring a second hidden layer vector of an image to be reconstructed;
and reconstructing the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
7. A medical image reconstruction network training apparatus, comprising:
the feature coding extraction module is used for performing feature coding extraction on a real image sample to obtain a feature coding vector of the real image sample;
the first image reconstruction module is used for performing image reconstruction based on the feature coding vector through an image reconstruction network to obtain a first image, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image;
and the first optimization module is used for performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to the image discrimination result.
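The three modules of the training apparatus in claim 7 can be wired together as one training step. This is a toy sketch: the lambda-simple stand-ins for the coding network C, reconstruction network G, and discrimination network D are invented for illustration and carry none of the patented architectures:

```python
import numpy as np

# Hypothetical stand-ins for the three networks of claim 7.
def encode(x):          # C: feature coding extraction
    return x.mean(axis=1)

def generate(z):        # G: image reconstruction
    return np.outer(z, np.ones(4))

def discriminate(x):    # D: image discrimination (scalar realism score)
    return float(x.mean())

def training_step(x_real, z_r):
    # Feature coding extraction module: z_e = C(x_real).
    z_e = encode(x_real)
    # First image reconstruction module: a first image from the feature
    # coding vector and a second image from the first hidden-layer vector.
    img_from_code = generate(z_e)
    img_from_hidden = generate(z_r)
    # First optimization module: discriminate all three; an optimizer
    # would turn these scores into the losses of claim 5 and gradients.
    return (discriminate(x_real),
            discriminate(img_from_code),
            discriminate(img_from_hidden))
```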
8. A medical image reconstruction apparatus, characterized by comprising:
the hidden layer vector acquisition module is used for acquiring a second hidden layer vector of the image to be reconstructed;
and the second image reconstruction module is used for reconstructing the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
9. A terminal device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 6 when executing the computer readable instructions.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1 to 6.
CN202010186019.5A 2020-03-17 2020-03-17 Medical image reconstruction method, medical image reconstruction network training method and device Active CN111462264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010186019.5A CN111462264B (en) 2020-03-17 2020-03-17 Medical image reconstruction method, medical image reconstruction network training method and device

Publications (2)

Publication Number Publication Date
CN111462264A true CN111462264A (en) 2020-07-28
CN111462264B CN111462264B (en) 2023-06-06

Family

ID=71680771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010186019.5A Active CN111462264B (en) 2020-03-17 2020-03-17 Medical image reconstruction method, medical image reconstruction network training method and device

Country Status (1)

Country Link
CN (1) CN111462264B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180225823A1 (en) * 2017-02-09 2018-08-09 Siemens Healthcare Gmbh Adversarial and Dual Inverse Deep Learning Networks for Medical Image Analysis
CN108776959A (en) * 2018-07-10 2018-11-09 Oppo(重庆)智能科技有限公司 Image processing method, device and terminal device
CN110298898A (en) * 2019-05-30 2019-10-01 北京百度网讯科技有限公司 Change the method and its algorithm structure of automobile image body color

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037299A (en) * 2020-08-20 2020-12-04 上海壁仞智能科技有限公司 Image reconstruction method and device, electronic equipment and storage medium
CN112037299B (en) * 2020-08-20 2024-04-19 上海壁仞智能科技有限公司 Image reconstruction method and device, electronic equipment and storage medium
CN112102194A (en) * 2020-09-15 2020-12-18 北京金山云网络技术有限公司 Face restoration model training method and device
CN112419303A (en) * 2020-12-09 2021-02-26 上海联影医疗科技股份有限公司 Neural network training method, system, readable storage medium and device
CN112419303B (en) * 2020-12-09 2023-08-15 上海联影医疗科技股份有限公司 Neural network training method, system, readable storage medium and device
CN112598790A (en) * 2021-01-08 2021-04-02 中国科学院深圳先进技术研究院 Brain structure three-dimensional reconstruction method and device and terminal equipment
CN112802072A (en) * 2021-02-23 2021-05-14 临沂大学 Medical image registration method and system based on counterstudy
CN113569928A (en) * 2021-07-13 2021-10-29 湖南工业大学 Train running state detection data missing processing model and reconstruction method
CN113569928B (en) * 2021-07-13 2024-01-30 湖南工业大学 Train running state detection data missing processing model and reconstruction method

Also Published As

Publication number Publication date
CN111462264B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
CN111462264B (en) Medical image reconstruction method, medical image reconstruction network training method and device
CN110559009B (en) Method for converting multi-modal low-dose CT into high-dose CT based on GAN
WO2021184195A1 (en) Medical image reconstruction method, and medical image reconstruction network training method and apparatus
CN111784706B (en) Automatic identification method and system for primary tumor image of nasopharyngeal carcinoma
Du et al. Accelerated super-resolution MR image reconstruction via a 3D densely connected deep convolutional neural network
CN109035316B (en) Registration method and equipment for nuclear magnetic resonance image sequence
Zhan et al. LR-cGAN: Latent representation based conditional generative adversarial network for multi-modality MRI synthesis
CN116823625B (en) Cross-contrast magnetic resonance super-resolution method and system based on variational self-encoder
CN110874855B (en) Collaborative imaging method and device, storage medium and collaborative imaging equipment
CN117274599A (en) Brain magnetic resonance segmentation method and system based on combined double-task self-encoder
Zuo et al. HACA3: A unified approach for multi-site MR image harmonization
Zuo et al. Synthesizing realistic brain MR images with noise control
Ho et al. Inter-individual deep image reconstruction via hierarchical neural code conversion
Yu et al. An unsupervised hybrid model based on CNN and ViT for multimodal medical image fusion
CN112529915B (en) Brain tumor image segmentation method and system
Sander et al. Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI
Yang et al. Hierarchical progressive network for multimodal medical image fusion in healthcare systems
CN116664467A (en) Cross neural network and ECA-S-based multi-modal medical image fusion method
CN114463459B (en) Partial volume correction method, device, equipment and medium for PET image
Zhang et al. Multi-scale network with the deeper and wider residual block for MRI motion artifact correction
CN113066145B (en) Deep learning-based rapid whole-body diffusion weighted imaging method and related equipment
Wasserman et al. Functional Brain-to-Brain Transformation with No Shared Data
Ma et al. Image quality transfer with auto-encoding applied to dMRI super-resolution
CN117333571B (en) Reconstruction method, system, equipment and medium of magnetic resonance image
CN116597041B (en) Nuclear magnetic image definition optimization method and system for cerebrovascular diseases and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant