CN112508834B - GAN-based contrast agent-free medical image enhancement method - Google Patents


Info

Publication number
CN112508834B
CN112508834B (application CN202011455213A)
Authority
CN
China
Prior art keywords
contrast agent
image
virtual
data
free
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011455213.5A
Other languages
Chinese (zh)
Other versions
CN112508834A (en)
Inventor
张娜
郑海荣
刘新
李宗阳
胡战利
梁栋
李烨
邹超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011455213.5A priority Critical patent/CN112508834B/en
Publication of CN112508834A publication Critical patent/CN112508834A/en
Application granted granted Critical
Publication of CN112508834B publication Critical patent/CN112508834B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application discloses a GAN-based contrast-agent-free medical image enhancement method. A GAN enhancement algorithm generates a virtual contrast-agent image of a previous patient from that patient's contrast-agent-free image; the virtual image is trained against the real contrast-agent image to obtain the final virtual contrast-agent image of the previous patient, and a target image analysis model is established that generates the final virtual contrast-agent image from the contrast-agent-free image. The contrast-agent-free image data are segmented to obtain contrast-agent-free target data and contrast-agent-free background data; the contrast-agent-free background data obtained from this segmentation are fused with the final virtual contrast-agent target data to obtain a virtual contrast-agent image model, thereby realizing the enhancement of the image. The application can enhance the contrast-agent-free image of a newly added patient to obtain a virtual contrast-agent image of that patient, simplifying the treatment procedure, reducing the patient's treatment cost, and reducing the effect of contrast agent on the patient's body.

Description

GAN-based contrast agent-free medical image enhancement method
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a GAN-based contrast-agent-free medical image enhancement method.
Background
At present, image-based assessment of body parts is a common method in medicine. To obtain an image of a given part of the body, the patient must be injected with an angiographic contrast agent, which is painful; moreover, the agent is difficult to eliminate completely from the body and can cause various allergic reactions, bringing both known and unknown risks to the patient.
Magnetic resonance angiography has high clinical value, but owing to the composition of the contrast agent, some patients suffer allergic reactions of varying severity, and the pathological effect of residues left in the body remains unclear, posing unknown risks. Simply making technical improvements to the contrast agent itself has yielded no significant progress.
Therefore, how to obtain enhanced images of newly added patients based on the images of previous patients is a problem that urgently needs to be solved.
Disclosure of Invention
The invention aims to provide a GAN-based contrast-agent-free medical image enhancement method. Based on previous patients' contrast-agent-free images and real contrast-agent images, a GAN enhancement algorithm creates a virtual contrast-agent image from a previous patient's contrast-agent-free image and verifies it against the real contrast-agent image, thereby establishing a model that produces a virtual contrast-agent image from a contrast-agent-free image of a newly added patient. A newly added patient can thus obtain a virtual contrast-agent image without taking contrast agent, which simplifies the treatment procedure and reduces the patient's treatment cost.
In a first aspect, the above object of the present invention is achieved by the following technical solutions:
A GAN-based contrast-agent-free medical image enhancement method adopts a GAN enhancement algorithm and establishes a target image analysis model from previous patients' contrast-agent-free images and real contrast-agent images; the contrast-agent-free image of a newly added patient is then input into the target image analysis model to obtain a virtual contrast-agent image of that patient.
The invention is further configured as follows: the previous patient's contrast-agent-free image and real contrast-agent image are images of the same body location of that patient.
The invention is further configured as follows: a virtual contrast-agent image of the previous patient is generated from that patient's contrast-agent-free image; the virtual contrast-agent image is trained against the real contrast-agent image to obtain the final virtual contrast-agent image of the previous patient; and a target image analysis model is established for generating the final virtual contrast-agent image from the contrast-agent-free image.
The invention is further configured as follows: the contrast-agent-free image data are segmented to obtain contrast-agent-free target data and contrast-agent-free background data; the real contrast-agent image data are segmented to obtain real contrast-agent target data and real contrast-agent background data.
The invention is further configured as follows: a generator model and a discriminator model are established using the GAN enhancement algorithm. The generator model generates virtual contrast-agent target data from the contrast-agent-free target data; the discriminator model judges whether the analyzed data are virtual contrast-agent target data or real contrast-agent target data; and the generator model corrects the virtual contrast-agent target data it generates next according to the judgment, until the data generated by the generator model are judged true by the discriminator model and recorded as the final virtual target data.
The invention is further configured as follows: a generator model and a discriminator model are established using the GAN enhancement algorithm. The generator model generates virtual contrast-agent image data from the contrast-agent-free image; the discriminator model judges whether the analyzed data are virtual contrast-agent image data or real contrast-agent image data; and the generator model corrects the virtual contrast-agent image data it generates next according to the judgment, until the data generated by the generator model are judged true by the discriminator model and recorded as the final virtual contrast-agent image data.
The invention is further configured as follows: the contrast-agent-free images and the real contrast-agent images are labeled and image features are extracted; the discriminator model judges on the basis of these features, and the generator model generates virtual contrast-agent image data on the basis of these features.
The invention is further configured as follows: the final virtual contrast-agent image data are segmented to obtain final virtual contrast-agent target data and final virtual contrast-agent background data.
The invention is further configured as follows: the contrast-agent-free background data obtained by segmenting the contrast-agent-free image data are fused with the final virtual contrast-agent target data to obtain the virtual contrast-agent image.
In a second aspect, the above object of the present invention is achieved by the following technical solutions:
A GAN-based contrast-agent-free medical image enhancement system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the GAN-based contrast-agent-free medical image enhancement method when executing the computer program.
In a third aspect, the above object of the present invention is achieved by the following technical solutions:
A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the GAN-based contrast-agent-free medical image enhancement method.
Compared with the prior art, the application has the beneficial technical effects that:
1. The application adopts the GAN enhancement algorithm to enhance a previous patient's contrast-agent-free image into a virtual contrast-agent image of that patient, and verifies the virtual image against the patient's real contrast-agent image, thereby obtaining a model and ensuring the accuracy of model analysis;
2. Further, when the image is enhanced, two models are established, used respectively to generate the virtual image and to discriminate it, ensuring the precision of the virtual image;
3. Furthermore, the application fuses the background image taken from the previous patient's contrast-agent-free image with the enhanced virtual contrast-agent target image, avoiding the influence of the background on the target image and improving the accuracy of the model.
Drawings
FIG. 1 is a schematic diagram of a modeling method according to an embodiment of the present application;
FIG. 2 is a schematic architecture diagram of the second modeling mode of an embodiment of the present application;
FIG. 3 is a schematic diagram of a multi-layer convolutional network structure in accordance with one embodiment of the present application.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
First embodiment
The application relates to a GAN-based contrast-agent-free medical image enhancement modeling method, shown in FIG. 1. Based on a previous patient's contrast-agent-free image, a virtual contrast-agent target image of that patient is obtained through a GAN enhancement algorithm. At the same time, target recognition is performed on the previous patient's contrast-agent-free image, separating the target image from the background image; the virtual target image is fused with the contrast-agent-free background image to obtain a virtual contrast-agent image of the previous patient. Based on the contrast-agent-free images of a number of previous patients, a number of virtual contrast-agent images are obtained through training, and an analysis model is established.
The contrast-agent-free image of a newly added patient is input into the analysis model to obtain a virtual contrast-agent image of that patient, avoiding the economic and physical burden that contrast agent places on the patient.
Based on the previous patient's contrast-agent-free image, the virtual contrast-agent target image is obtained through the GAN enhancement algorithm in one of two modes:
In the first mode, a virtual contrast-agent image of the previous patient is obtained from the contrast-agent-free image through the GAN enhancement algorithm, and the virtual contrast-agent target image is then segmented out of it by a target recognition method.
In the second mode, target recognition is performed on the previous patient's contrast-agent-free image, the target image and the background image are segmented from it, and the contrast-agent-free target image is then enhanced with the GAN enhancement algorithm to obtain the virtual contrast-agent target image, as shown in FIG. 2.
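The segment-enhance-fuse flow of the second mode can be sketched in miniature. The snippet below is a toy illustration, not the patent's implementation: thresholding stands in for the target-recognition network, a simple brightening stands in for the trained generator model G, and the function names and threshold value are illustrative assumptions.

```python
def segment(img, thresh=0.5):
    """Toy stand-in for the target-recognition step: a boolean mask
    marking 'target' pixels (the real pipeline runs a detector)."""
    return [[px > thresh for px in row] for row in img]

def enhance(img):
    """Toy stand-in for the GAN generator: brighten pixels, clipped
    to [0, 1] (the real pipeline runs the trained generator G)."""
    return [[min(px * 1.5, 1.0) for px in row] for row in img]

def fuse(img, mask, enhanced):
    """Fuse enhanced target pixels with the untouched background."""
    return [[e if m else p for p, m, e in zip(r1, r2, r3)]
            for r1, r2, r3 in zip(img, mask, enhanced)]

img = [[0.2, 0.8], [0.9, 0.1]]       # tiny 2x2 'contrast-agent-free image'
mask = segment(img)                   # target / background split
virtual = fuse(img, mask, enhance(img))
```

Background pixels pass through unchanged; only the detected target region receives the enhancement.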
The following is a detailed description of method one.
GAN enhancement algorithm:
Two models are created: a generator model G and a discriminator model D. The generator model G generates a virtual contrast-agent image from the contrast-agent-free image. The discriminator model D judges the authenticity of the virtual contrast-agent image, i.e. whether an image is a virtual or a real contrast-agent image. According to the discriminator's judgment, the generator model G corrects the next virtual contrast-agent image it generates, so that the similarity between the virtual and real contrast-agent images grows. Training ends when the discriminator model judges the virtual contrast-agent image to be true, yielding the final virtual contrast-agent image.
Specifically, a generator distribution P_g is learned over the contrast-agent image data q. The generator model G builds a mapping G(r; θ_g) from the noise distribution P_R(r) to the data space, where r denotes a noise sample from which virtual contrast-agent image data are generated and θ_g denotes the parameters of the generator. The discriminator model D outputs a scalar D(q; θ_d), representing the probability that the data q originate from the real contrast-agent data rather than from P_g.
Each image is composed of many pixels, and the pixel size of the images used for the GAN algorithm is fixed. The images form a group with specific characteristics, the values at fixed pixel positions being similar, and the generator distribution P_g is a distribution over pixel values.
The generator model G extracts and learns the features of the contrast-agent-free image data and generates virtual contrast-agent image data. The discriminator model D extracts the features of the real contrast-agent images and judges the virtual contrast-agent image, outputting the probability that it is true. The generator model corrects the virtual contrast-agent image according to the discriminator's judgment, and training continues until the discriminator model D judges the virtual contrast-agent image to be true; the final virtual contrast-agent image thus generated is the desired feature-enhanced image.
During training, the parameters of the generator model G are adjusted to minimize log{1 - D[G(r)]}, and the parameters of the discriminator model D are adjusted to maximize log D(q), so that the virtual contrast-agent image approaches the real contrast-agent image ever more closely.
where G(r) represents the result obtained by putting the noise data r into the generator model G;
D[G(r)] represents the result of then putting G(r) into the discriminator model D for judgment. This result is a value between 0 and 1 representing the probability of being true, so 1 - D[G(r)] represents the probability of being false. Logarithms are taken for convenient scaling.
The specific formula is as follows:
min_G max_D V(D, G) = E_{q∼P_Q(q)}[log D(q)] + E_{r∼P_R(r)}[log(1 − D(G(r)))]  (1)
where q represents real contrast-agent image data, r represents generated (noise) data, Q represents the real data set as a whole, R represents the generated data set as a whole, E represents the expected value, E_{q∼P_Q(q)} the expectation over real contrast-agent image data, E_{r∼P_R(r)} the expectation over generated data, and V a function of the two parameters D and G.
Since the initially generated data generally have few features and differ greatly from the real data, they are called noise. In P_R(r), R also denotes the set of generated noise, r any element of that set, and P_R(r) the probability of occurrence of any element of the generated noise set.
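Formula (1) can be evaluated empirically on toy data. The sketch below is an assumption-laden illustration: `D` is a hand-written logistic score and `G` a one-parameter shift standing in for the real discriminator and generator networks; it merely shows how the two expectations of the value function V(D, G) combine.

```python
import math

def value_fn(D, G, real_batch, noise_batch):
    """Empirical GAN value function, formula (1):
    V(D, G) = E_q[log D(q)] + E_r[log(1 - D(G(r)))]."""
    real_term = sum(math.log(D(q)) for q in real_batch) / len(real_batch)
    fake_term = sum(math.log(1.0 - D(G(r))) for r in noise_batch) / len(noise_batch)
    return real_term + fake_term

# Toy scalar models (illustrative assumptions, not trained networks):
D = lambda x: 1.0 / (1.0 + math.exp(-x))   # logistic 'realness' score
G = lambda r: r - 2.0                      # poor generator: fakes score low

v = value_fn(D, G, real_batch=[2.0, 3.0], noise_batch=[0.0, 1.0])
```

The discriminator tries to drive V up; the generator tries to drive the second term down, which is the adversarial tug-of-war the text describes.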
The target identification method comprises the following steps:
the contrast agent free image is imported into a target detection algorithm.
The target detection algorithm predicts bounding boxes using dimension clusters as anchor boxes and, as references during target detection training, records the coordinates of each bounding box from the extracted features: t_x, t_y, t_w, t_h, where (t_x, t_y) denotes the center coordinates of the anchor box on the image, t_w the anchor box width, and t_h the anchor box height.
For a given target in the image, a series of virtual anchor boxes is generated, whose center-point coordinate error must not be too large; σ(·) denotes the sigmoid function, which squashes a prediction into (0, 1). In this embodiment the top-left pixel is taken as the origin (0, 0), so the whole image lies in the fourth quadrant but with positive y values; C_x, C_y represent the top-left coordinate offset of each prediction, i.e. the offset of the originating cell along the x and y axes. The coordinates (b_x, b_y, b_w, b_h) of the next prediction box are then computed from the previous prediction box as follows:
b_x = σ(t_x) + C_x (2);
b_y = σ(t_y) + C_y (3);
b_w = p_w · e^{t_w} (4);
b_h = p_h · e^{t_h} (5);
Correspondingly, (b_x, b_y) represents the center-point coordinates of the next prediction box, b_w its width, and b_h its height; p_w represents the width of the previous prediction box and p_h its height; σ(t_x) represents the x coordinate of the original virtual center point, and σ(t_y) its y coordinate.
After determining the bounding box, constructing a multi-layer convolution network, and extracting the characteristics.
As shown in FIG. 3, the multi-layer convolutional network includes: a convolutional layer (Convolutional), a residual layer (Residual), a pooling layer (Avgpool), and a fully-connected layer (Connected). The convolutional layer extracts features, obtaining the dominant pixels in a region; the residual layer prevents gradient explosion; the pooling layer compresses the input feature map and extracts its main features; and the fully-connected layer maps the learned features to the sample label space, acting as a classifier.
In the figure, Softmax denotes the loss function used to guide the learning process, serving as the target network function. Type is the class of each layer, Filters the number of feature extractors, and Size their size, in units of pixels.
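Two of the named layers admit compact sketches: the averaging performed by the pooling layer and the Softmax used as the classifier head. This is a minimal plain-Python illustration, not the patent's network:

```python
import math

def softmax(scores):
    """Softmax head: turns raw class scores into probabilities."""
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def avgpool2(fmap):
    """2x2 average pooling, stride 2: compresses the feature map
    while keeping each region's dominant response (toy version)."""
    out = []
    for i in range(0, len(fmap) - 1, 2):
        row = []
        for j in range(0, len(fmap[0]) - 1, 2):
            row.append((fmap[i][j] + fmap[i][j + 1] +
                        fmap[i + 1][j] + fmap[i + 1][j + 1]) / 4.0)
        out.append(row)
    return out

probs = softmax([1.0, 2.0, 3.0])      # probabilities, ordered like scores
pooled = avgpool2([[1, 3], [5, 7]])   # 2x2 map -> 1x1 average
```

The probabilities sum to one and preserve the ordering of the raw scores, which is what lets Softmax act as a classifier.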
The GAN enhancement method is used to generate the final virtual contrast-agent image from the contrast-agent-free image and the real contrast-agent image.
A target detection method performs labeled segmentation of the contrast-agent-free image, obtaining a contrast-agent-free target image group and a contrast-agent-free background image group; the final virtual contrast-agent image is likewise labeled and segmented into a final virtual contrast-agent target image group and a final virtual contrast-agent background image group. The final virtual contrast-agent target image group is fused with the contrast-agent-free background image group according to the label address information, and the enhanced background in the final virtual contrast-agent background image group is discarded.
During fusion, to ensure a natural transition at the image boundary, the gradient at the boundary is kept small. The calculation formula is as follows:
min_f ∬_Ω |∇f − v|²,  with f|_∂Ω = f*|_∂Ω  (6)
Here S is the image definition domain and Ω a closed subset of S with boundary ∂Ω; f* is the known scalar function defined on S minus Ω, and f is the unknown scalar function defined inside Ω; v is the vector field defined over Ω.
∇ is the gradient operator, and ∇f is the result of applying the gradient operator to the function f.
This formula states: subject to the values of f and f* being equal on the boundary, the gradient difference with respect to f is reduced to a minimum over the range of Ω.
In the formula the difference is taken, then the absolute value, then the square, which normalizes the result and avoids large difference values.
The minimum of formula (6) is attained at the solution of:
Δf = div v over Ω,  with f|_∂Ω = f*|_∂Ω  (7)
where div denotes the divergence, which measures the degree to which the vector field diverges at each point in space.
This formula states: subject to f and f* being equal on the boundary, the Laplacian Δf of the minimizing function f equals the divergence div v of v throughout Ω.
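Formula (7) can be illustrated on a one-dimensional toy problem: solve Δf = div v on the interior with the boundary values of f* fixed, here by Jacobi iteration. The discretization and the guidance field are illustrative assumptions; with v = 0 the solution is simply the harmonic (linear) interpolation of the boundary values:

```python
def poisson_1d(f_star_left, f_star_right, v, iters=5000):
    """Solve f'' = div v on a 1-D interior (discrete formula (7))
    with Dirichlet boundary values taken from f*.  With unit
    spacing, div v at interior point i is approximated by
    v[i+1] - v[i], and the discrete equation
        f[i-1] - 2 f[i] + f[i+1] = div v
    is relaxed by Jacobi iteration."""
    n = len(v) - 1                        # number of interior points
    f = [0.0] * n
    div = [v[i + 1] - v[i] for i in range(n)]
    for _ in range(iters):
        new = []
        for i in range(n):
            left = f_star_left if i == 0 else f[i - 1]
            right = f_star_right if i == n - 1 else f[i + 1]
            new.append((left + right - div[i]) / 2.0)
        f = new
    return f

# Zero guidance field, boundaries 0 and 4: interior relaxes to 1, 2, 3.
f = poisson_1d(0.0, 4.0, v=[0.0, 0.0, 0.0, 0.0])
```

The boundary values are never modified, which is exactly the "natural transition at the image boundary" the fusion step requires.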
After fusion, a contrast-agent-free medical image enhancement analysis model is obtained, comprising the final virtual contrast-agent image group and the contrast-agent-free background image group. The contrast-agent-free medical image of a newly added patient is input into the model to obtain a virtual contrast-agent image of that patient for medical analysis.
For the second mode, a target detection method first segments the previous patient's contrast-agent-free image into a contrast-agent-free target image and a contrast-agent-free background image; the GAN enhancement method then enhances the contrast-agent-free target image to obtain a virtual contrast-agent target image.
The virtual contrast-agent target image is fused with the contrast-agent-free background image to obtain the contrast-agent-free medical image enhancement analysis model.
Second embodiment
An embodiment of the present invention provides a GAN-based contrast-agent-free medical image enhancement modeling system comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor; when executed, the computer program implements the modeling method of the first embodiment.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to accomplish the present invention, for example. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the GAN-based contrast-agent-free medical image enhancement modeling system. For example, the computer program may be divided into a plurality of modules, each module having the following specific functions:
1. The image enhancement module is used for enhancing the image;
2. and the image segmentation module is used for segmenting the image.
The GAN-based contrast-agent-free medical image enhancement modeling system can be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The GAN-based contrast agent-free medical image enhancement modeling system may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the above examples are merely examples of systems and do not constitute a limitation of the GAN-based contrast agent-free medical image enhancement modeling system, which may include more or fewer components than shown, or may combine certain components, or different components, e.g., the GAN-based contrast agent-free medical image enhancement modeling system may further include input-output devices, network access devices, buses, etc.
The processor may be a central processing unit (CPU), but may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the GAN-based contrast-agent-free medical image enhancement modeling system, connecting the parts of the entire system through various interfaces and lines.
The memory may be used to store the computer program and/or the modules, and the processor implements the various functions of the GAN-based contrast-agent-free medical image enhancement modeling system by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory as well as non-volatile memory, such as a hard disk, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Third embodiment
The modules/units integrated in the GAN-based contrast-agent-free medical image enhancement modeling system, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the method of the above embodiment by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable medium may be scoped as required by legislation and patent practice in a given jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The embodiments above are preferred embodiments of the present invention and are not intended to limit its scope of protection; accordingly, all equivalent changes made to the structure, shape, and principle of the invention shall be covered by the scope of protection of the invention.

Claims (12)

1. A GAN-based contrast agent-free medical image enhancement method, characterized by comprising the steps of: adopting a GAN enhancement algorithm, and obtaining, through the GAN enhancement algorithm, a virtual contrast agent target image of an existing patient from the contrast agent-free image of that patient; meanwhile, performing target recognition on the contrast agent-free image of the existing patient, segmenting the contrast agent-free image into a target image and a background image, and fusing the virtual contrast agent target image with the contrast agent-free background image to obtain a virtual contrast agent image of the existing patient; obtaining a plurality of virtual contrast agent images through training based on the contrast agent-free images of a plurality of existing patients, and establishing a target image analysis model; and inputting the contrast agent-free image of a newly added patient into the image target analysis model to obtain a virtual contrast agent image of the newly added patient; wherein the GAN enhancement algorithm comprises a generator model and a discriminator model: the generator model generates virtual contrast agent data from contrast agent-free data, and the discriminator model discriminates, by reference to real contrast agent data, whether the analyzed data are virtual contrast agent data or real contrast agent data; the generator model corrects the virtual contrast agent data it generates next according to the discrimination result, until the virtual contrast agent data generated by the generator model are discriminated as real by the discriminator model and recorded as the final virtual data; the data are target data or image data.
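The pipeline of claim 1 — generate a virtual contrast target with a GAN, segment target and background from the contrast agent-free image, then fuse — can be sketched in a few lines. This is a minimal illustration with hypothetical `generator` and `segment` stand-ins; none of these names or thresholds come from the patent.

```python
import numpy as np

def enhance_without_contrast(image, generator, segment):
    """Sketch of the claim-1 pipeline: `generator` maps a contrast-free
    image to a virtual with-contrast-agent image; `segment` returns a
    binary target mask. Both are hypothetical stand-ins for trained models."""
    virtual = generator(image)   # GAN output: virtual contrast agent image
    mask = segment(image)        # 1 inside the target region, 0 in background
    # Fuse: virtual contrast target over the original contrast-free background
    return mask * virtual + (1 - mask) * image

# Toy stand-ins so the sketch runs end to end
toy_generator = lambda img: np.clip(img * 1.5, 0, 1)   # pretend enhancement
toy_segment = lambda img: (img > 0.5).astype(float)    # pretend target mask

img = np.array([[0.2, 0.8], [0.6, 0.1]])
fused = enhance_without_contrast(img, toy_generator, toy_segment)
```

Only the masked (target) pixels receive the generated intensities; the background pixels are passed through unchanged, which is the fusion step claims 4 and 7 elaborate on.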
2. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein: the contrast agent-free image and the real contrast agent image of the existing patient are images of the same body location of that patient.
3. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein: a virtual contrast agent image of the existing patient is generated from the contrast agent-free image of that patient; the virtual contrast agent image and the real contrast agent image are trained against each other to obtain the final virtual contrast agent image of the existing patient; and a target image analysis model is established for generating a final virtual contrast agent image from a contrast agent-free image.
4. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein: the contrast agent-free image data are labeled and segmented by a target detection method to obtain contrast agent-free target data and contrast agent-free background data; the real contrast agent image data are segmented to obtain real contrast agent target data and real contrast agent background data; the final virtual contrast agent image is labeled and segmented to obtain a final virtual contrast agent target image group and a final virtual contrast agent background image group; and, according to the label address information, the final virtual contrast agent target image group is fused with the contrast agent-free background image group, obtaining after fusion a contrast agent-free medical image enhancement analysis model comprising the final virtual contrast agent target image group and the contrast agent-free background image group; the contrast agent-free medical image of a newly added patient is input into the model for training to obtain a virtual contrast agent target image of the newly added patient.
5. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein: the contrast agent-free image and the real contrast agent image are labeled and their image features are extracted; the discriminator model discriminates based on the real contrast agent image features, and the generator model generates virtual contrast agent image data based on the image features.
6. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein: the final virtual contrast agent image data are segmented to obtain final virtual contrast agent target data and final virtual contrast agent background data.
7. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein: the contrast agent-free background data, obtained by segmenting the contrast agent-free image data, are fused with the final virtual contrast agent target data to obtain a virtual contrast agent image.
8. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein: during fusion, to ensure a natural transition at the image boundary, the gradient at the boundary is minimized, calculated as follows:
Let S be the image definition domain and Ω a closed subset of S, with ∂Ω the boundary of this subset; f* is the known scalar function defined over S minus the interior of Ω, and f is the unknown scalar function defined inside Ω; v is the vector field defined over Ω; ∇ is the gradient operator; ∇f is the result of applying the gradient operator to the function f:
min_f ∬_Ω |∇f − v|², with f|_∂Ω = f*|_∂Ω (6);
this formula states that, under the condition that the values of f and f* are equal on the boundary ∂Ω, the deviation of the gradient of f from v is reduced to a minimum over the range of Ω;
the minimum of equation (6) is attained at the solution of:
Δf = div v over Ω, with f|_∂Ω = f*|_∂Ω (7);
wherein div denotes the divergence, used to characterize the degree to which the vector field diverges at each point in space, and Δ denotes the Laplace operator; equation (7) states that, under the condition that f and f* are equal on the boundary, the Laplacian Δf of f equals the divergence div v of v within the range of Ω.
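The boundary-constrained fusion described in claim 8 is the gradient-domain (Poisson) blending formulation. A toy discrete sketch follows, with v taken as the gradient of the source patch and a plain Jacobi iteration as solver; all names are illustrative, not from the patent, and this is not a production implementation.

```python
import numpy as np

def poisson_blend(source, target, mask, iters=200):
    """Solve the discrete Poisson equation Δf = div v inside `mask`,
    with f fixed to `target` on and outside the mask boundary, where
    v = ∇source (gradient-domain cloning). Jacobi iteration on a
    4-neighbour grid; a toy solver for small arrays."""
    f = target.astype(float).copy()
    src = source.astype(float)
    inside = mask.astype(bool)
    for _ in range(iters):
        # Each interior pixel: f = (neighbour sum of f + div-v term) / 4.
        # With v = ∇source, the div-v term is 4*src - neighbour sum of src.
        nb_f = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1))
        nb_s = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
                np.roll(src, 1, 1) + np.roll(src, -1, 1))
        f[inside] = (nb_f[inside] + 4 * src[inside] - nb_s[inside]) / 4.0
    return f

# Toy example: clone a patch of a *constant* bright source into a dark target.
target = np.zeros((5, 5))
source = np.full((5, 5), 10.0)
mask = np.zeros((5, 5), bool)
mask[2, 2] = True
blended = poisson_blend(source, target, mask)
```

Because the constant source carries zero gradient, the blend stays at the target's intensity level: only gradient information from the source survives, which is exactly the "natural transition at the boundary" the claim aims for.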
9. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein the GAN enhancement algorithm comprises: learning a generator distribution P_g on the basis of the contrast agent-free image data q, where P denotes a probability distribution and g denotes the generator model; the generator model G builds a mapping G(r; θ_g) from the input distribution P_R(r) to the data space, where r represents the contrast agent-free image data from which virtual contrast agent image data are generated, and θ_g represents the parameters of the generator; the discriminator model D outputs a scalar D(q; θ_d) representing the probability that the data q originated from real contrast agent data rather than from P_g; during training, the parameters of the generator model G are adjusted to minimize log{1 − D[G(r)]}, and the parameters of the discriminator model D are adjusted to maximize log D(q); G(r) denotes the result of putting r into the generator model G; D[G(r)] denotes putting G(r) into the discriminator model D for discrimination, yielding a value between 0 and 1 that represents the probability of being real, so that 1 − D[G(r)] represents the probability of being fake; as shown in the following formula:
min_G max_D V(D, G) = E_{q∼Q}[log D(q)] + E_{r∼R}[log(1 − D(G(r)))] (1);
where q represents real contrast agent image data, r represents contrast agent-free image data, Q represents the q data set as a whole, R represents the r data set as a whole, E represents the expected value, E_{q∼Q} represents the expectation taken over the real contrast agent image data, E_{r∼R} represents the expectation taken over the contrast agent-free image data, and V represents a function of the two parameters D and G.
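The training objective described in claim 9 is the standard GAN value function, which can be estimated from sample batches. A toy sketch, with illustrative stand-in functions for D and G (none of these are the patent's models):

```python
import numpy as np

def gan_value(D, G, real_batch, input_batch):
    """Monte-Carlo estimate of the GAN value function:
    V(D, G) = E_q[log D(q)] + E_r[log(1 - D(G(r)))].
    D maximizes this quantity; G minimizes the second term."""
    real_term = np.mean(np.log(D(real_batch)))
    fake_term = np.mean(np.log(1.0 - D(G(input_batch))))
    return real_term + fake_term

# Toy models: D is a fixed sigmoid "realness" score, G shifts its input
D = lambda x: 1.0 / (1.0 + np.exp(-x))   # probability the sample is real
G = lambda r: r + 1.0                     # hypothetical generator

real = np.array([2.0, 3.0])
inputs = np.array([-2.0, -3.0])
v = gan_value(D, G, real, inputs)
```

In practice the generator and discriminator parameters are updated alternately against this estimate; the fixed lambdas here merely show how the two expectation terms are assembled.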
10. The GAN-based contrast agent-free medical image enhancement method of claim 1, wherein the target recognition method comprises: using dimension clusters as anchor frames to predict bounding boxes; after the bounding boxes are determined, constructing a multi-layer convolutional network and extracting features; and, during target detection training, labeling the coordinates of each bounding box according to the extracted features as references, comprising: t_x, t_y, t_w, t_h, wherein (t_x, t_y) represents the centre coordinates (x, y) of the anchor frame on the image, t_w represents the anchor frame width on the image, and t_h represents the anchor frame height on the image; the top-left corner of the image is taken as the origin (0, 0), so that the whole image lies in what would be the fourth quadrant but with the y-axis value taken as positive; the offset of each prediction from the top-left corner is denoted (C_x, C_y), i.e. the point originally at the top-left corner is offset along both the x-axis and the y-axis; the coordinates (b_x, b_y, b_w, b_h) of the next prediction frame are then calculated from the previous prediction frame as follows:
b_x = σ(t_x) + C_x (2);
b_y = σ(t_y) + C_y (3);
b_w = p_w · e^(t_w) (4);
b_h = p_h · e^(t_h) (5);
wherein (b_x, b_y) represents the centre coordinates of the next prediction frame, b_w represents the width of the next prediction frame, and b_h represents the height of the next prediction frame; p_w represents the width of the previous prediction frame and p_h represents its height; σ denotes the sigmoid function, so that σ(t_x) represents the x-offset and σ(t_y) the y-offset of the predicted centre point.
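The decoding step of claim 10 — sigmoid-bounded centre offsets added to the cell corner, per equations (2) and (3), plus the standard exponential width/height scaling of the prior frame implied by the symbols b_w, b_h, p_w, p_h — can be sketched as follows (names are illustrative):

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode YOLO-style box predictions: the sigmoid keeps the centre
    inside the current grid cell, and the exponential scales the prior
    frame's width/height by the raw predictions."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = sigmoid(tx) + cx          # centre x: cell corner + bounded offset
    by = sigmoid(ty) + cy          # centre y: cell corner + bounded offset
    bw = pw * math.exp(tw)         # width: prior width scaled exponentially
    bh = ph * math.exp(th)         # height: prior height scaled exponentially
    return bx, by, bw, bh

# An all-zero prediction from cell corner (3, 4) with a 2x2 prior frame:
# sigmoid(0) = 0.5 and exp(0) = 1, so centre = (3.5, 4.5), size = (2, 2).
box = decode_box(0.0, 0.0, 0.0, 0.0, 3.0, 4.0, 2.0, 2.0)
```

A zero raw prediction thus lands the box at the centre of its grid cell with exactly the prior's dimensions, which is why the anchor priors from dimension clustering matter.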
11. A GAN-based contrast agent-free medical image enhancement system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized by: the processor, when executing the computer program, implements the method according to any of claims 1-10.
12. A computer readable storage medium storing a computer program, which when executed by a processor performs the method according to any one of claims 1-10.
CN202011455213.5A 2020-12-10 2020-12-10 GAN-based contrast agent-free medical image enhancement method Active CN112508834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011455213.5A CN112508834B (en) 2020-12-10 2020-12-10 GAN-based contrast agent-free medical image enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011455213.5A CN112508834B (en) 2020-12-10 2020-12-10 GAN-based contrast agent-free medical image enhancement method

Publications (2)

Publication Number Publication Date
CN112508834A CN112508834A (en) 2021-03-16
CN112508834B true CN112508834B (en) 2024-06-25

Family

ID=74973416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011455213.5A Active CN112508834B (en) 2020-12-10 2020-12-10 GAN-based contrast agent-free medical image enhancement method

Country Status (1)

Country Link
CN (1) CN112508834B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10592779B2 (en) * 2017-12-21 2020-03-17 International Business Machines Corporation Generative adversarial network medical image generation for training of a classifier
CN110852993B (en) * 2019-10-12 2024-03-08 拜耳股份有限公司 Imaging method and device under action of contrast agent

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on contrast-enhancement detection methods for hepatic hemangioma and hepatocellular carcinoma based on generative adversarial networks; Zhao Jianfeng; China Master's Theses Full-text Database, Medicine & Health Sciences; pp. 25-45, Figs. 3-2 and 3-6 *

Also Published As

Publication number Publication date
CN112508834A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112508835B (en) GAN-based contrast agent-free medical image enhancement modeling method
CN112581629B (en) Augmented reality display method, device, electronic equipment and storage medium
CN108509915B (en) Method and device for generating face recognition model
CN110310287B (en) Automatic organ-at-risk delineation method, equipment and storage medium based on neural network
CN111161275B (en) Method and device for segmenting target object in medical image and electronic equipment
WO2021196955A1 (en) Image recognition method and related apparatus, and device
US10853409B2 (en) Systems and methods for image search
CN108229301B (en) Eyelid line detection method and device and electronic equipment
CN110276408B (en) 3D image classification method, device, equipment and storage medium
CN110956632B (en) Method and device for automatically detecting pectoralis major region in molybdenum target image
US11276490B2 (en) Method and apparatus for classification of lesion based on learning data applying one or more augmentation methods in lesion information augmented patch of medical image
CN113222038B (en) Breast lesion classification and positioning method and device based on nuclear magnetic image
CN111695462A (en) Face recognition method, face recognition device, storage medium and server
WO2021120961A1 (en) Brain addiction structure map evaluation method and apparatus
CN111815606B (en) Image quality evaluation method, storage medium, and computing device
CN114862861B (en) Lung lobe segmentation method and device based on few-sample learning
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
CN111986137B (en) Biological organ lesion detection method, apparatus, device, and readable storage medium
WO2021097595A1 (en) Method and apparatus for segmenting lesion area in image, and server
Dou et al. Tooth instance segmentation based on capturing dependencies and receptive field adjustment in cone beam computed tomography
CN110021021B (en) Head image analysis device and image analysis method
CN113724185B (en) Model processing method, device and storage medium for image classification
CN111598144B (en) Training method and device for image recognition model
CN112508834B (en) GAN-based contrast agent-free medical image enhancement method
CN111724371A (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant