CN110197716B - Medical image processing method and device and computer readable storage medium - Google Patents


Info

Publication number
CN110197716B
Application CN201910426692.9A; publication CN110197716A; grant CN110197716B
Authority
CN
China
Prior art keywords
image
medical image
medical
deep learning
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910426692.9A
Other languages
Chinese (zh)
Other versions
CN110197716A (en)
Inventor
蔡君
胡梦影
戴青云
赵慧民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Polytechnic Normal University
Original Assignee
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Polytechnic Normal University filed Critical Guangdong Polytechnic Normal University
Priority to CN201910426692.9A priority Critical patent/CN110197716B/en
Publication of CN110197716A publication Critical patent/CN110197716A/en
Application granted granted Critical
Publication of CN110197716B publication Critical patent/CN110197716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Quality & Reliability (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a medical image processing method comprising the steps of: when a medical image is acquired, performing data expansion on the medical image through a generative adversarial network; and acquiring a lesion image from the medical image after data expansion. The invention also discloses a medical image processing apparatus and a computer-readable storage medium, which expand the data set of medical images through the generative adversarial network and segment the lesion according to the expanded data set, thereby achieving accurate segmentation of the lesion region and meeting the requirements of disease diagnosis and medical research.

Description

Medical image processing method and device and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing a medical image, and a computer-readable storage medium.
Background
In the medical industry, medical staff often need to segment lesions in medical images, for example the lesion region in a liver cancer image, to support computer-aided diagnosis and visualization of medical data, and to provide a reliable basis for clinical diagnosis and pathological research.
At present, lesion segmentation in medical images is often automated with artificial intelligence. However, artificial-intelligence models generally need tens of thousands, or even hundreds of thousands, of samples for training, and in practice that many samples cannot be collected. As a result, such models segment lesions with large errors and low precision, and cannot meet medical requirements.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a medical image processing method, a medical image processing apparatus, and a computer-readable storage medium, which expand the data set of a medical image through a generative adversarial network and segment the lesion according to the expanded data set, thereby achieving accurate segmentation of the lesion region and meeting the requirements of disease diagnosis and medical research.
In order to achieve the above object, the present invention provides a method for processing a medical image, the method comprising the steps of:
when a medical image is acquired, performing data expansion on the medical image through a generative adversarial network;
and acquiring a lesion image from the medical image after data expansion.
Optionally, after the step of acquiring a lesion image from the medical image after data expansion, the method further includes:
and performing edge optimization processing on the lesion image.
Optionally, the generative adversarial network includes a generator network and a discriminator network, and the step of performing data expansion on the medical image through the generative adversarial network includes:
inputting the medical image into the generator network to obtain an output image;
discriminating the medical image and the output image with the discriminator network to obtain a discrimination probability;
and when the discrimination probability is within a preset range, taking the output image as an expanded image of the medical image.
Optionally, after the step of discriminating the medical image and the output image with the discriminator network to obtain a discrimination probability, the medical image processing method further includes:
and when the discrimination probability is not within the preset range, updating the generator network with a gradient descent algorithm and updating the discriminator network with a gradient ascent algorithm.
Optionally, the step of acquiring a lesion image from the medical image after data expansion includes:
acquiring a preset deep learning model according to the medical image after data expansion;
and acquiring a lesion image from the medical image after data expansion according to the preset deep learning model.
Optionally, after the step of obtaining the preset deep learning model according to the medical image after data expansion, the method for processing a medical image further includes:
inputting the medical image after data expansion into the preset deep learning model to obtain an output image;
and comparing the output image with a standard image, and updating the preset deep learning model according to a comparison result.
Optionally, the step of comparing the output image with a standard image and updating the preset deep learning model according to a comparison result includes:
acquiring an image error between the image output by the preset deep learning model and the standard image;
and updating the preset deep learning model according to the image error.
Optionally, the step of acquiring a lesion image from the medical image after data expansion according to the preset deep learning model includes:
inputting the medical image after data expansion into the preset deep learning model to obtain an organ image;
and acquiring a lesion image from the organ image.
In order to achieve the above object, the present invention also provides a medical image processing apparatus, including: a memory, a processor, and a medical image processing program stored on the memory and executable on the processor, wherein the medical image processing program, when executed by the processor, implements the steps of the medical image processing method described in any one of the above.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium having a medical image processing program stored thereon, which, when executed by a processor, implements the steps of the medical image processing method described in any one of the above.
According to the medical image processing method, apparatus, and computer-readable storage medium, when a medical image is acquired, data expansion is performed on the medical image through a generative adversarial network, and a lesion image is acquired from the medical image after data expansion. The embodiment of the invention expands the data set of medical images through the generative adversarial network and performs lesion segmentation according to the expanded data set, thereby achieving accurate segmentation of the lesion region and meeting the requirements of disease diagnosis and medical research.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for processing medical images according to an embodiment of the present invention;
FIG. 3 is a detailed flowchart of step S10 in FIG. 2;
FIG. 4 is a detailed flowchart of step S20 in FIG. 2;
fig. 5 is a flowchart illustrating a medical image processing method according to another embodiment of the invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows:
when a medical image is acquired, performing data expansion on the medical image through a generative adversarial network;
and acquiring a lesion image from the medical image after data expansion.
In the prior art, artificial intelligence is often applied to automate lesion segmentation in medical images. However, artificial-intelligence models generally need tens of thousands or even hundreds of thousands of samples for training, and in practice that many samples cannot be collected, so such models segment lesions with large errors and low precision and cannot meet medical requirements.
The invention provides a solution that expands the data set of a medical image through a generative adversarial network and segments the lesion according to the expanded data set, thereby achieving accurate segmentation of the lesion region and meeting the requirements of disease diagnosis and medical research.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be a PC, and can also be a device such as a smart phone, a tablet computer, a portable computer and the like.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. The communication bus 1002 is used to implement connection communication among these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a processing program of medical images.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; the processor 1001 may be configured to call up a processing program of the medical image stored in the memory 1005, and perform the following operations:
when a medical image is acquired, performing data expansion on the medical image through a generative adversarial network;
and acquiring a lesion image from the medical image after data expansion.
Further, the processor 1001 may call the processing program of the medical image stored in the memory 1005, and further perform the following operations:
and performing edge optimization processing on the lesion image.
Further, the processor 1001 may call the processing program of the medical image stored in the memory 1005, and further perform the following operations:
inputting the medical image into the generator network to obtain an output image;
discriminating the medical image and the output image with the discriminator network to obtain a discrimination probability;
and when the discrimination probability is within a preset range, taking the output image as an expanded image of the medical image.
Further, the processor 1001 may call the processing program of the medical image stored in the memory 1005, and further perform the following operations:
and when the discrimination probability is not within the preset range, updating the generator network with a gradient descent algorithm and updating the discriminator network with a gradient ascent algorithm.
Further, the processor 1001 may call the processing program of the medical image stored in the memory 1005, and further perform the following operations:
acquiring a preset deep learning model according to the medical image after data expansion;
and acquiring a lesion image from the medical image after data expansion according to the preset deep learning model.
Further, the processor 1001 may call a processing program of the medical image stored in the memory 1005, and further perform the following operations:
inputting the medical image after data expansion into the preset deep learning model to obtain an output image;
and comparing the output image with a standard image, and updating the preset deep learning model according to a comparison result.
Further, the processor 1001 may call the processing program of the medical image stored in the memory 1005, and further perform the following operations:
acquiring an image error between the image output by the preset deep learning model and the standard image;
and updating the preset deep learning model according to the image error.
Further, the processor 1001 may call the processing program of the medical image stored in the memory 1005, and further perform the following operations:
inputting the medical image after data expansion into the preset deep learning model to obtain an organ image;
and acquiring a lesion image from the organ image.
Referring to fig. 2, in an embodiment, the method for processing medical images includes the following steps:
step S10, when a medical image is acquired, performing data expansion on the medical image through a generative adversarial network;
In this embodiment, because medical images of the same type are scarce and hard to collect, the data set must be expanded from the available medical images when they are acquired, so that, once there is enough sample data, the lesion can be segmented more accurately by the deep learning model. For data expansion, a generative adversarial network can be used to generate virtual medical images of the same type. The generative adversarial network comprises a generator network and a discriminator network; when they are constructed, each can be any deep learning model capable of outputting images, such as a fully connected neural network or a deconvolutional neural network. Several medical images are input into the generator network, and random noise is added to control the generator network's output. Superimposing the medical images and the random noise yields a randomly generated output image; the discriminator network then judges whether the output image belongs to the category of the medical images and outputs a discrimination probability. When the discrimination probability falls within a preset range, the output image meets the requirement and can be used as an expanded image of the medical image, thereby achieving data expansion of the medical image; the preset range is generally around 0.5. When performing data expansion through the generative adversarial network, the data can be further expanded by rotation, deformation, mirroring, and similar transformations.
In addition, before data expansion is performed on the medical image, various preprocessing steps may be applied, including cropping the image to size and adjusting the window width and window level, to reduce the differences between different medical images.
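The window width / window level adjustment described above can be sketched as follows; the clip-and-rescale formula and the [0, 255] output range are common conventions, not taken from the patent text:

```python
import numpy as np

def apply_window(ct_slice, center, width):
    """Clip intensities to a window [center - width/2, center + width/2]
    and rescale the result to [0, 255]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(ct_slice, lo, hi)
    return (clipped - lo) / (hi - lo) * 255.0
```

For example, an abdominal CT slice windowed with center 60 and width 200 (illustrative values) maps everything below -40 to black and everything above 160 to white, emphasizing soft tissue.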
And step S20, acquiring a lesion image from the medical image after data expansion.
In this embodiment, after data expansion, a lesion image can be obtained from the expanded medical images through a deep learning model. An initial preset deep learning model is acquired and then trained, i.e. updated, on the medical images after data expansion. After updating is finished, the expanded medical images are input into the updated preset deep learning model to obtain the segmented lesion image. To make the segmentation more accurate, it can be performed in stages, i.e. the lesion image is obtained through multiple segmentations. Owing to the particularity of medical images, segmentation is usually divided into two steps: first acquire the organ image from the expanded medical image, then acquire the lesion image within the organ image, which improves segmentation accuracy. During updating of the preset deep learning model, the expanded medical image is input into the model to obtain an output image, the output image is compared with a standard image, the image error is calculated, and the model parameters are adjusted according to the error, thereby updating the preset deep learning model. Updating finishes when it has run a preset number of times or the image error falls below a preset value.
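The two-step, coarse-to-fine strategy (organ first, then lesion inside the organ) can be sketched as composing masks. The `organ_model` and `lesion_model` callables stand in for the trained deep learning models and are assumptions for illustration:

```python
import numpy as np

def two_stage_segment(image, organ_model, lesion_model):
    """Coarse-to-fine segmentation: restrict the lesion search to the organ region."""
    organ_mask = organ_model(image)              # stage 1: boolean organ mask
    organ_only = np.where(organ_mask, image, 0)  # suppress everything outside the organ
    lesion_mask = lesion_model(organ_only)       # stage 2: boolean lesion mask
    return lesion_mask & organ_mask              # a lesion can only lie inside the organ
```

With real models, each stage would be a trained segmentation network; the composition step is what enforces that lesion pixels fall inside the organ.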
In addition, the initial preset deep learning model can be generated by Transfer Learning from existing image segmentation models or models trained by other technicians. Transfer learning is a weight-sharing technique; it reduces the computation and time needed to update the preset deep learning model, simplifying the steps while also improving generalization.
After the lesion image is obtained from the expanded medical image, edge optimization can further be performed on it. Specifically, a fully connected Conditional Random Field (CRF) may be used. Each pixel i in the lesion image is assigned a classification label x_i, where x_i is a random variable, and a CRF model is built. In the CRF model, the Gibbs energy of a labeling x is

E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j)

where ψ_u(x_i) is the unary energy term, the cost of classifying pixel i as label x_i, which involves the gray level of pixel i; and ψ_p(x_i, x_j) is the pairwise energy term for classifying pixels i and j as labels x_i and x_j simultaneously, representing the relationship between pixel i and all other pixels. By minimizing the Gibbs energy E(x), the most probable classification label of each pixel can be computed, and the edge of the lesion image is then segmented according to the different labels, thereby optimizing the edge.
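A minimal sketch of evaluating such a Gibbs energy for a candidate labeling, assuming a simple Potts pairwise term over 4-neighbours (the fully connected CRF uses a much denser pairwise term; this is an illustrative simplification):

```python
import numpy as np

def gibbs_energy(labels, unary, pairwise_weight=1.0):
    """E(x) = sum_i psi_u(x_i) + sum_{i<j} psi_p(x_i, x_j).

    labels: (H, W) integer label map; unary: (H, W, L) cost of each label per pixel.
    The pairwise term is a Potts model on 4-neighbours (an assumption):
    neighbouring pixels with different labels pay a fixed penalty.
    """
    h, w = labels.shape
    # unary term: pick the cost of the chosen label at every pixel
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # pairwise term over horizontal and vertical neighbours
    e += pairwise_weight * (labels[:, 1:] != labels[:, :-1]).sum()
    e += pairwise_weight * (labels[1:, :] != labels[:-1, :]).sum()
    return float(e)
```

Minimizing this energy over all labelings (in practice by approximate inference, e.g. mean-field) yields the edge-refined segmentation described above.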
In the technical scheme disclosed in this embodiment, the data set of medical images is expanded by the generative adversarial network, and the lesion is segmented according to the medical images after data expansion, so that the lesion region is accurately segmented and the requirements of disease diagnosis and medical research are met.
In another embodiment, as shown in fig. 3, on the basis of the embodiment shown in fig. 2, the step S10 includes:
step S11, inputting the medical image into the generator network to obtain an output image;
step S12, discriminating the medical image and the output image with the discriminator network to obtain a discrimination probability;
In this embodiment, a Generative Adversarial Network (GAN) is a deep learning framework formed by at least two networks: a generator network and a discriminator network, whose mutual game-playing during learning produces high-quality output. When expanding the image data, a noise image is randomly generated, several medical images are input into the generator network, and the noise image is superimposed with the latent features of the medical images to obtain an output image. The discriminator network judges whether the output image belongs to the category of the medical images and outputs a one-dimensional value between 0 and 1. When the output image does not belong to the category of the medical images, the discriminator network judges it a "fake" sample and outputs 0, i.e. the discrimination probability is 0; when it does belong, the discriminator judges it a "real" sample and outputs 1, i.e. the discrimination probability is 1.
And step S13, when the discrimination probability is within a preset range, taking the output image as an expanded image of the medical image.
In the present embodiment, the preset range is generally around 0.5. When the discrimination probability is 0.5, the image output by the generator network meets the requirement and can be used as an expanded image of the medical image. When the discrimination probability is not 0.5, the generator network and the discriminator network are updated, and the process is repeated until the discrimination probability falls within the preset range. The main objective when updating the generator network is to push the discriminator's probability on generated images toward 1, i.e. to make the images it generates look as "real" as possible; the main objective when updating the discriminator network is to push that probability toward 0, so that it better distinguishes the generator's images from real medical images. The generator and discriminator thus form a dynamic game until an equilibrium is reached at a discrimination probability of 0.5, after which the data can be expanded with the generator's output. This process is expressed as

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))]   (1)

where x denotes the medical image, z denotes the noise input of the generator network, G(z) denotes the output image of the generator network, D(x) denotes the probability that the medical image is a real image (since the medical image itself is real, the closer D(x) is to 1 the better), and D(G(z)) denotes the discrimination probability that the generator's output image is real. From the generator's point of view, D(G(z)) should be as large as possible, which makes V(D, G) smaller, so expression (1) becomes

G* = arg min_G V(D, G)

From the discriminator's point of view, it must distinguish the generator's images from the medical images, so D(x) should be as large as possible and D(G(z)) as small as possible, which makes V(D, G) larger, so expression (1) becomes

D* = arg max_D V(D, G)

A stochastic gradient algorithm may be employed to update the generator network and the discriminator network: since a larger V(D, G) is better for the discriminator, it is updated by gradient ascent; since a smaller V(D, G) is better for the generator, it is updated by gradient descent. The details of the gradient algorithms are not repeated here.
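The ascent/descent asymmetry can be illustrated numerically. With a toy scalar logistic discriminator D(x) = sigmoid(a·x + b) (an assumption purely for illustration), one numerical gradient-ascent step on the discriminator's parameters increases V(D, G):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def value(a, b, real, fake):
    """V(D, G) with a scalar logistic discriminator D(x) = sigmoid(a*x + b)."""
    return (np.mean(np.log(sigmoid(a * real + b)))
            + np.mean(np.log(1.0 - sigmoid(a * fake + b))))

def discriminator_ascent_step(a, b, real, fake, lr=0.1, eps=1e-5):
    """One gradient-ASCENT step on (a, b): the discriminator maximizes V."""
    ga = (value(a + eps, b, real, fake) - value(a - eps, b, real, fake)) / (2 * eps)
    gb = (value(a, b + eps, real, fake) - value(a, b - eps, real, fake)) / (2 * eps)
    return a + lr * ga, b + lr * gb
```

A generator step would move its own parameters in the opposite (descent) direction on the same value function, which is exactly the game described by expression (1).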
In the technical scheme disclosed in this embodiment, the medical image is input into the generator network, the medical image and the output image are discriminated by the discriminator network to obtain a discrimination probability, and when the discrimination probability is within the preset range, the output image is used as an expanded image of the medical image. This achieves data expansion of the medical image with a generative adversarial network and improves the accuracy of lesion segmentation.
In yet another embodiment, as shown in fig. 4, on the basis of the embodiment shown in any one of fig. 2 to 3, the step S20 includes:
step S21, acquiring a preset deep learning model according to the medical image after data expansion;
in the embodiment, the deep learning model can adopt a U-net model, and the U-net model is a convolution neural network for biomedical image segmentation and is suitable for segmentation of medical images. Because the medical image is a gray image, the target features are very close to the surrounding tissues, and the data set is small, the semantic information of the image can be efficiently extracted through the U-net model. The U-net model adopts a full convolution neural network comprising a convolution layer, a shear layer, a maximum pooling layer and a deconvolution layer, so that pictures with any size can be input when the U-net model is used, and the output is also pictures, which is an end-to-end model. When the pictures are input into the U-net model, firstly, the pictures are subjected to layer-by-layer down-sampling through the convolution layer and the pooling layer to obtain abstract characteristics of the pictures, then, the pictures are subjected to layer-by-layer up-sampling through the reverse convolution layer and the convolution layer to complement information of some pictures, but the information is not completely supplemented, so that the information needs to be integrated with information of high-resolution pictures transmitted from the shearing layer, the characteristics extracted according to the pictures are more effective and more abstract, and finally, the output pictures are subjected to dimension reduction through the convolution layer with convolution kernel of 1 to obtain the output pictures. And inputting the medical image after data expansion into the preset deep learning model to obtain an output image. And comparing the output image with the standard image, and updating the preset deep learning model according to the comparison result. 
During the comparison, the image error between the output image and the standard image is calculated, and the model parameters of the preset deep learning model are adjusted according to this error, realizing the update process.
Because deep learning is computationally expensive and consumes substantial computing resources, transfer learning can be added to the U-net model to reduce the computation. In transfer learning, the model is derived from image segmentation models of the same type that have already been well trained by the same or other technicians. Such a model may be the residual network Resnet-50, which consists of 50 units, each including an identity mapping that passes the output of the current layer directly to the next layer, avoiding extra parameters. During backpropagation, the identity mapping passes the gradient of the current layer directly to the previous layer, mitigating the vanishing-gradient problem in deep models, so features in the image are better extracted. In the specific operation process, a pre-trained Resnet-50 segmentation model is obtained, part of the model parameters and structure of the initial preset deep learning model are replaced with those of the Resnet-50 model, and the replaced preset deep learning model is updated on the medical images after data expansion; during updating, the replaced parameters are not updated, or are only fine-tuned, which reduces computation and improves generalization.
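The freeze-or-fine-tune behaviour during updating can be sketched as a gradient step that skips the transferred parameters; the parameter names below are hypothetical:

```python
import numpy as np

def transfer_update(params, grads, frozen, lr=0.01):
    """Gradient step that leaves parameters copied from a pre-trained model untouched."""
    return {name: (p if name in frozen else p - lr * grads[name])
            for name, p in params.items()}

# Hypothetical names: "encoder.w" is imagined as copied from a pre-trained Resnet-50,
# while "head.w" is the newly added segmentation head that is trained from scratch.
params = {"encoder.w": np.ones(3), "head.w": np.ones(3)}
grads = {"encoder.w": np.ones(3), "head.w": np.ones(3)}
new = transfer_update(params, grads, frozen={"encoder.w"})
```

Fine-tuning instead of freezing would correspond to applying the step to the transferred parameters with a much smaller learning rate.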
And step S22, acquiring a lesion image from the medical image after data expansion according to the preset deep learning model.
In this embodiment, after the preset deep learning model is obtained, the medical image after data expansion can be input into it to obtain the output image, i.e. the segmented lesion image. For more accurate segmentation, the process can be performed in stages, i.e. the lesion image is obtained through multiple segmentations. Owing to the particularity of medical images, segmentation is generally performed in two steps: first obtain the organ image from the expanded medical image, then obtain the lesion image within the organ image, improving segmentation accuracy. After the lesion image is obtained, the segmentation accuracy can be evaluated with the DICE coefficient, volumetric overlap error, and similar metrics.
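The DICE coefficient mentioned for evaluation has the standard definition 2|A ∩ B| / (|A| + |B|) for binary masks, which can be computed directly:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap of two binary masks; 1.0 means perfect agreement."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # convention: two empty masks count as a perfect match
    return 2.0 * intersection / denom if denom else 1.0
```

A Dice score near 1 indicates that the predicted lesion mask almost coincides with the expert-annotated mask.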
In addition, since a medical image cannot be input into the preset deep learning model directly, and medical images are generally grey-scale pictures, the grey level of each pixel in the medical image can be taken as the value of that pixel to obtain the data matrix corresponding to the image. The data matrix is then input into the preset deep learning model, and at output time the output data matrix is converted back into a concrete output image. Since grey levels range from 0 to 255, the data in the data matrix also lie in the range 0 to 255.
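The grey-level-to-matrix conversion described above amounts to treating the picture as a numeric array; a minimal sketch (NumPy; in practice a DICOM or image reader would supply the raw pixels, and the [0, 1] scaling is a common convention assumed here, not stated in the patent):

```python
import numpy as np

# A toy 2x2 grey-scale medical image: each pixel value is its grey level, 0-255.
image_matrix = np.array([[0, 128], [64, 255]], dtype=np.uint8)

# Networks usually expect floats; scaling to [0, 1] preserves the 0-255 range.
network_input = image_matrix.astype(np.float32) / 255.0

# The model's output matrix is mapped back to grey levels to form the output image.
output_image = np.clip(np.round(network_input * 255.0), 0, 255).astype(np.uint8)
```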
In the technical scheme disclosed in this embodiment, a preset deep learning model is obtained according to the medical image after data expansion, and a focus image in the medical image after data expansion is obtained according to the preset deep learning model, so that the purpose of obtaining the focus image in the medical image according to the deep learning model is achieved.
In another embodiment, as shown in fig. 5, on the basis of the embodiment shown in any one of fig. 2 to 4, after step S21, the method further includes:
step S01, inputting the medical image after data expansion into the preset deep learning model to obtain an output image;
step S02, comparing the output image with a standard image, and updating the preset deep learning model according to the comparison result.
In this embodiment, the medical image after data expansion is input into the preset deep learning model to obtain an output image of the model; the output image is compared with a standard image, and the preset deep learning model is updated according to the comparison result. The standard image is a standard focus image segmented from the medical image by a person skilled in the art, i.e., the gold standard for image segmentation. The comparison result, i.e., the image error between the output image and the standard image, can be calculated by a weighted loss function:

[weighted loss function, given in the original as image BDA0002065767060000111]

where loss is the output value of the weighted loss function, y_true denotes the standard image, y_pred denotes the output image of the preset deep learning model, and the comparison operator (given in the original as image BDA0002065767060000112) indicates that the two images are compared pixel by pixel.
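The exact weighted-loss expression appears only as an image in the patent; a commonly used loss of the kind the text describes, a pixel-wise weighted binary cross-entropy between the standard and output images, can be sketched as follows (the weight `w` and the toy images are assumptions for illustration, not the patent's formula):

```python
import numpy as np

def weighted_loss(y_true, y_pred, w=2.0, eps=1e-7):
    """Pixel-by-pixel weighted cross-entropy between standard and output images."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)        # avoid log(0)
    per_pixel = -(w * y_true * np.log(y_pred)       # weight the focus pixels
                  + (1.0 - y_true) * np.log(1.0 - y_pred))
    return per_pixel.mean()

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])         # gold-standard focus mask
good   = np.array([[0.9, 0.1], [0.1, 0.9]])         # close to the standard
bad    = np.array([[0.1, 0.9], [0.9, 0.1]])         # far from the standard
```

Weighting the focus pixels more heavily than the background is a standard remedy when, as in medical images, the focus occupies only a small fraction of the picture.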
After the image error is calculated, the model parameters of the preset deep learning model can be updated by backward propagation according to the image error, thereby updating the preset deep learning model. The updating is finished when a preset number of updates has been reached or the image error falls below a preset value.
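The stopping rule described above (a preset number of updates, or the image error dropping below a preset value) can be sketched as a simple training loop (pure Python; the quadratic "error" and single toy parameter stand in for the real model and image error):

```python
def train(update_fn, error_fn, max_updates=100, error_threshold=1e-3):
    """Repeat backward-propagation updates until either stop condition is met."""
    for step in range(1, max_updates + 1):
        update_fn()
        if error_fn() < error_threshold:
            return step  # image error small enough: updating is finished
    return max_updates   # preset number of updates reached

# Toy model: a single parameter pulled toward 3.0 by gradient descent.
state = {"w": 0.0}
def update_fn():             # one backward-propagation step (illustrative)
    state["w"] -= 0.25 * 2.0 * (state["w"] - 3.0)
def error_fn():              # squared distance stands in for the image error
    return (state["w"] - 3.0) ** 2

steps = train(update_fn, error_fn)
```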
If the organ image in the medical image after data expansion is obtained by a first segmentation and the focus image within the organ image by a second segmentation, the corresponding preset deep learning models can be two different models; likewise, when the preset deep learning models are updated, two different standard images can be used, corresponding respectively to the organ image and the focus image.
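The two-model cascade described above can be sketched as a simple pipeline (the stub "models" below are threshold functions standing in for the two trained networks, an illustrative assumption):

```python
import numpy as np

def organ_model(image):
    """Stage 1 stub: segment the organ region (here, any non-background pixel)."""
    return image > 0.1

def focus_model(image):
    """Stage 2 stub: segment the focus within the organ (here, bright pixels)."""
    return image > 0.8

def cascade_segment(image):
    organ_mask = organ_model(image)               # first segmentation: organ image
    organ_image = np.where(organ_mask, image, 0)  # restrict to the organ region
    return organ_mask, focus_model(organ_image)   # second segmentation: focus image

image = np.array([[0.0, 0.5], [0.9, 0.2]])
organ_mask, focus_mask = cascade_segment(image)
```

Because the second model only ever sees pixels inside the organ, each model solves a narrower problem than one network segmenting the focus from the whole image, which is the accuracy argument the paragraph makes.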
In the technical solution disclosed in this embodiment, the medical image after data expansion is input into the preset deep learning model to obtain an output image, the output image is compared with a standard image, and the preset deep learning model is updated according to the comparison result, thereby realizing an updating method for the preset deep learning model.
In addition, an embodiment of the present invention further provides a medical image processing apparatus, the medical image processing apparatus comprising: a memory, a processor, and a medical image processing program stored in the memory and executable on the processor, wherein the medical image processing program, when executed by the processor, implements the steps of the medical image processing method described in the above embodiments.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a medical image processing program is stored on the computer-readable storage medium, and when the medical image processing program is executed by a processor, the steps of the medical image processing method according to the above embodiment are implemented.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A method for processing medical images, the method comprising:
when a medical image is acquired, preprocessing the medical image, wherein the preprocessing comprises cutting the size of the medical image and adjusting the window width and the window level of the medical image, and performing data expansion on the preprocessed medical image through a generative adversarial network;
acquiring a focus image in the medical image after data expansion;
wherein the acquiring the focus image in the medical image after data expansion comprises:
acquiring a preset deep learning model according to the medical image after data expansion;
inputting the medical image after data expansion into the preset deep learning model to obtain an organ image;
updating the preset deep learning model to obtain a deep learning model corresponding to the organ image;
and inputting the organ image into a deep learning model corresponding to the organ image to obtain a focus image in the organ image.
2. The method for processing medical images according to claim 1, wherein the step of acquiring the lesion image in the medical image after data expansion further comprises:
and performing edge optimization processing on the focus image.
3. The medical image processing method according to claim 1, wherein the generative adversarial network comprises a generator network and a discriminator network, and the step of performing data expansion on the preprocessed medical image through the generative adversarial network comprises:
inputting the medical image into the generator network to obtain an output image;
performing discrimination processing on the medical image and the output image through the discriminator network to obtain a discrimination probability;
and when the discrimination probability is in a preset range, taking the output image as an extended image of the preprocessed medical image.
4. The method for processing medical images according to claim 3, wherein after the step of performing discrimination processing on the medical image and the output image through the discriminator network to obtain the discrimination probability, the method for processing medical images further comprises:
and when the discrimination probability is not in a preset range, updating the generator network according to a gradient descent algorithm, and updating the discriminator network according to a gradient ascent algorithm.
5. The method for processing medical images according to claim 1, wherein after the step of obtaining the predetermined deep learning model from the medical images after data expansion, the method for processing medical images further comprises:
inputting the medical image after data expansion into the preset deep learning model to obtain an output image;
and comparing the output image with a standard image, and updating the preset deep learning model according to a comparison result.
6. The method of claim 5, wherein the step of comparing the output image with a standard image and updating the predetermined deep learning model according to the comparison result comprises:
acquiring an image error between the image output by the preset deep learning model and the standard image;
and updating the preset deep learning model according to the image error.
7. A medical image processing apparatus, comprising: a memory, a processor and a processing program of medical images stored on the memory and executable on the processor, the processing program of medical images implementing the steps of the processing method of medical images according to any one of claims 1 to 6 when executed by the processor.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a processing program of a medical image, which when executed by a processor implements the steps of the processing method of a medical image according to any one of claims 1 to 6.
CN201910426692.9A 2019-05-20 2019-05-20 Medical image processing method and device and computer readable storage medium Active CN110197716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910426692.9A CN110197716B (en) 2019-05-20 2019-05-20 Medical image processing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910426692.9A CN110197716B (en) 2019-05-20 2019-05-20 Medical image processing method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110197716A CN110197716A (en) 2019-09-03
CN110197716B true CN110197716B (en) 2022-05-20

Family

ID=67753058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910426692.9A Active CN110197716B (en) 2019-05-20 2019-05-20 Medical image processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110197716B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648318A (en) * 2019-09-19 2020-01-03 泰康保险集团股份有限公司 Auxiliary analysis method and device for skin diseases, electronic equipment and storage medium
CN111128349A (en) * 2019-11-14 2020-05-08 清华大学 GAN-based medical image focus detection marking data enhancement method and device
CN110837572B (en) * 2019-11-15 2020-10-13 北京推想科技有限公司 Image retrieval method and device, readable storage medium and electronic equipment
CN111009309B (en) * 2019-12-06 2023-06-20 广州柏视医疗科技有限公司 Visual display method, device and storage medium for head and neck lymph nodes
CN111388000B (en) * 2020-03-27 2023-08-25 上海杏脉信息科技有限公司 Virtual lung air retention image prediction method and system, storage medium and terminal
CN112950569B (en) * 2021-02-25 2023-07-25 平安科技(深圳)有限公司 Melanoma image recognition method, device, computer equipment and storage medium
CN116030158B (en) * 2023-03-27 2023-07-07 广州思德医疗科技有限公司 Focus image generation method and device based on style generation countermeasure network model

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105718952A (en) * 2016-01-22 2016-06-29 武汉科恩斯医疗科技有限公司 Method for focus classification of sectional medical images by employing deep learning network
WO2018015080A1 (en) * 2016-07-19 2018-01-25 Siemens Healthcare Gmbh Medical image segmentation with a multi-task neural network system
CN109522973A (en) * 2019-01-17 2019-03-26 云南大学 Medical big data classification method and system based on production confrontation network and semi-supervised learning

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US10636141B2 (en) * 2017-02-09 2020-04-28 Siemens Healthcare Gmbh Adversarial and dual inverse deep learning networks for medical image analysis
US10600185B2 (en) * 2017-03-08 2020-03-24 Siemens Healthcare Gmbh Automatic liver segmentation using adversarial image-to-image network
CN108171266A (en) * 2017-12-25 2018-06-15 中国矿业大学 A kind of learning method of multiple target depth convolution production confrontation network model
CN108961174A (en) * 2018-05-24 2018-12-07 北京飞搜科技有限公司 A kind of image repair method, device and electronic equipment
CN109308477A (en) * 2018-09-21 2019-02-05 北京连心医疗科技有限公司 A kind of medical image automatic division method, equipment and storage medium based on rough sort
CN109685102B (en) * 2018-11-13 2024-07-09 平安科技(深圳)有限公司 Chest focus image classification method, device, computer equipment and storage medium
CN109727253A (en) * 2018-11-14 2019-05-07 西安大数据与人工智能研究院 Divide the aided detection method of Lung neoplasm automatically based on depth convolutional neural networks
CN109635850A (en) * 2018-11-23 2019-04-16 杭州健培科技有限公司 A method of network optimization Medical Images Classification performance is fought based on generating
CN109754403A (en) * 2018-11-29 2019-05-14 中国科学院深圳先进技术研究院 Tumour automatic division method and system in a kind of CT image
CN109670510B (en) * 2018-12-21 2023-05-26 万达信息股份有限公司 Deep learning-based gastroscope biopsy pathological data screening system


Also Published As

Publication number Publication date
CN110197716A (en) 2019-09-03

Similar Documents

Publication Publication Date Title
CN110197716B (en) Medical image processing method and device and computer readable storage medium
EP3933693B1 (en) Object recognition method and device
WO2022083536A1 (en) Neural network construction method and apparatus
CN111310808B (en) Training method and device for picture recognition model, computer system and storage medium
EP3779774A1 (en) Training method for image semantic segmentation model and server
EP4145353A1 (en) Neural network construction method and apparatus
JP2022505775A (en) Image classification model training methods, image processing methods and their equipment, and computer programs
US20180025249A1 (en) Object Detection System and Object Detection Method
EP4163831A1 (en) Neural network distillation method and device
EP4322056A1 (en) Model training method and apparatus
CN110827236B (en) Brain tissue layering method, device and computer equipment based on neural network
JP7054278B1 (en) Edge identification method based on deep learning
US11615292B2 (en) Projecting images to a generative model based on gradient-free latent vector determination
CN111461213A (en) Training method of target detection model and target rapid detection method
CN115063875A (en) Model training method, image processing method, device and electronic equipment
CN113408570A (en) Image category identification method and device based on model distillation, storage medium and terminal
CN116645592B (en) Crack detection method based on image processing and storage medium
WO2021027152A1 (en) Image synthesis method based on conditional generative adversarial network, and related device
CN113902010A (en) Training method of classification model, image classification method, device, equipment and medium
CN111694954B (en) Image classification method and device and electronic equipment
WO2021036397A1 (en) Method and apparatus for generating target neural network model
CN113469091B (en) Face recognition method, training method, electronic device and storage medium
CN116580174A (en) Real-time virtual scene construction method
CN117010480A (en) Model training method, device, equipment, storage medium and program product
CN112749702A (en) Image identification method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant