CN111179277A - Unsupervised self-adaptive mammary gland lesion segmentation method - Google Patents
- Publication number
- CN111179277A CN111179277A CN201911264888.9A CN201911264888A CN111179277A CN 111179277 A CN111179277 A CN 111179277A CN 201911264888 A CN201911264888 A CN 201911264888A CN 111179277 A CN111179277 A CN 111179277A
- Authority
- CN
- China
- Prior art keywords: image, network, domain, target domain, domain image
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11—Region-based segmentation
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30068—Mammography; Breast
- G06T2207/30096—Tumor; Lesion
Abstract
The invention provides an unsupervised domain-adaptive image segmentation method. The target domain image is converted so that its semantic information is preserved while its shallow image information is reconstructed with the source domain images as the reference; a segmentation model built on the source domain images then discriminates the converted and reconstructed target domain image, achieving migration of the model between data sets of different domains without any new data annotation.
Description
Technical Field
The invention relates to the field of image processing, in particular to an unsupervised adaptive breast lesion segmentation method.
Background
Breast cancer is the cancer with the highest incidence among women, and early diagnosis and treatment can effectively improve the long-term survival rate of patients. Magnetic resonance imaging (MRI) is a multi-parameter, multi-contrast imaging technique that reflects tissue characteristics such as T1, T2, and proton density. With its high resolution and sensitivity, it has become one of the important tools for early breast cancer screening, and breast MRI is increasingly used in clinical practice, particularly for that purpose.
In MRI screening for breast cancer, computer-aided image analysis is both a development trend and a core technical problem in the field. Early medical image segmentation relied on edge detection, texture features, morphological filtering, and similar techniques, but these required extensive manual labeling and case-specific analysis, and their ability to resolve deep structure and adapt to new data was limited. In recent years, machine learning algorithms represented by deep learning have made breakthrough progress in image recognition, image segmentation, and other prediction tasks; deep neural networks (DNNs) and convolutional neural networks (CNNs) in particular have developed rapidly, and applying them to the segmentation and analysis of medical images has become a trend in the field.
In early screening of breast cancer, segmenting the lesion region from other regions is a precondition for subsequent in-depth analysis. Most existing image segmentation techniques adopt supervised deep learning: lesion and healthy regions are labeled in a training set, a model or network is trained on the known samples, and the trained network then discriminates target images. However, even for the same type of image data, such as magnetic resonance images, if the acquisition systems or parameter settings of two data sets are inconsistent, the difference in data distribution makes it difficult for a deep segmentation network trained on one data set to obtain good segmentation results on the other.
This is particularly true in breast cancer MRI screening, where the magnetic resonance scanners or imaging sequences used at different centers may differ, producing differences in the distribution of the acquired data. Because of these differences, a trained MRI segmentation model cannot guarantee stable discrimination on data from other systems or parameter settings.
One solution is to manually label the imaging sequences obtained from each different magnetic resonance scanner or parameter setting, i.e. to retrain with supervised learning on each new data set to guarantee performance on it. Its drawbacks are that annotating images for segmentation is very time-consuming, that annotating medical images demands strong professional knowledge and experience, that low-cost batch manual annotation is infeasible, and that annotation standards are hard to control and unify.
Another solution is to fine-tune the parameters of an already trained segmentation network on each new target data set, but this requires the participation of algorithm designers, and the fine-tuning still needs the cooperation of medical expertise; unsupervised application of the trained model to other data sets therefore remains out of reach.
Disclosure of Invention
To overcome the poor cross-domain generalization of the segmentation models described in the background art, the invention provides an unsupervised domain-adaptive image segmentation method. The target domain image is converted so that its semantic information is preserved while its shallow image information is reconstructed toward the characteristics of the source domain; the segmentation model built on the source domain images then discriminates the converted and reconstructed target domain image, achieving migration of the model between data sets of different domains without any new data annotation.
According to a first aspect of the invention, a method is provided for establishing an image-style conversion network based on image reconstruction and image domain discrimination.
S101: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image has or does not have a region to be segmented similar to a characteristic region in the source domain image set;
establishing an image domain discrimination network Ncl for discriminating the domain to which the image belongs by taking the source domain image and the target domain image as input through a training function;
s102: the target domain image is used as an input, and image reconstruction is performed on the target domain image to obtain a learned reconstruction network Nre.
S103: the image data obtained by taking the target domain image as input and passing through the reconstruction network Nre is judged through the image domain judging network Ncl, and the parameters of the reconstruction network Nre are optimized and adjusted according to the loss data of the image domain judging network Ncl.
S104: repeat step S103, continuing to optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
In S104, the set condition is that the loss of the image domain discrimination network Ncl is smaller than a preset value.
With the conversion network Ntr proposed by the invention, an image P in the target domain can be converted into an image P' that retains its image information but has the source domain style.
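As an illustration of the domain discriminator established in S101, the following toy sketch trains a logistic stand-in for Ncl with the cross-entropy gradient on simulated data. Everything here is an assumption for the example (the image statistics, the flattened 8x8 size, the plain gradient-descent loop); it is not the patent's actual residual network.

```python
import numpy as np

# Toy image domain discriminator: label images as source domain (1) or
# target domain (0) using a logistic model trained by gradient descent.
rng = np.random.default_rng(0)

# Simulated 8x8 images, flattened: the source domain is brighter overall.
source = rng.normal(0.7, 0.1, size=(100, 64))   # label 1
target = rng.normal(0.3, 0.1, size=(100, 64))   # label 0
X = np.vstack([source, target])
y = np.concatenate([np.ones(100), np.zeros(100)])

w, b = np.zeros(64), 0.0
for _ in range(500):                              # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid probabilities
    grad = p - y                                  # d(cross-entropy)/d(logit)
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(((p > 0.5) == y).mean())
print(f"domain discrimination accuracy: {acc:.2f}")   # near 1.0 on this toy data
```

In the patent's method the discriminator is not an end in itself: its loss on converted target images is what drives the later adjustment of the reconstruction network.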
According to a second aspect of the present invention, a method for establishing an inter-domain image distribution adaptive model based on shallow semantic features is provided:
s201: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image has or does not have a region to be segmented similar to a characteristic region in the source domain image set;
establishing an image domain discrimination network Ncl for discriminating the domain to which the image belongs by taking the source domain image and the target domain image as input through a training function;
s202: the target domain image is used as an input, and image reconstruction is performed on the target domain image to obtain a learned reconstruction network Nre. The reconstructed image information includes shallow information M1 corresponding to the image and deep semantic information M2, and the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information.
S203: and image data obtained by taking a target domain image as input and passing through a reconstruction network Nre is judged through the image domain judgment network Ncl, parameters of the shallow information module nm1 are optimized and adjusted according to loss data of the image domain judgment network Ncl, and parameters of the semantic information module nm2 are kept unchanged.
S204: repeat step S203, continuing to optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
Preferably, in S201, a cross-entropy loss function is used as the training loss, and the image domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S202 is an L2 loss;
preferably, the reconstruction network in S202 may adopt an encoder-decoder structure;
preferably, in S203, the loss of the image domain discrimination network Ncl is a cross-entropy loss;
preferably, the set condition in S204 is that the loss of the image domain discrimination network Ncl is smaller than a preset value.
According to the method for an inter-domain image distribution adaptive model based on shallow and semantic features provided by the invention, an image P in the target domain can be converted into an image P' that retains the deep semantic information of the image while its shallow features take on the source domain style.
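A minimal numpy sketch of the partial optimization in S203, with every component assumed for illustration: an affine brightness map stands in for the shallow module nm1, the identity for the frozen semantic module nm2, and a mean-intensity distance for the discriminator loss. Only nm1 is adjusted against the discriminator signal; nm2 never changes.

```python
import numpy as np

# Target-domain images are darker (mean 0.3) than source-domain ones (0.7).
rng = np.random.default_rng(1)
source_mean, target_mean = 0.7, 0.3
target_imgs = rng.normal(target_mean, 0.05, size=(50, 64))

def ncl_loss(imgs):
    """Stand-in discriminator loss: distance of each image's mean intensity
    from the source-domain statistic (a real Ncl would be a trained network)."""
    return float(np.mean((imgs.mean(axis=1) - source_mean) ** 2))

def nm2(x):                    # frozen semantic module: identity here
    return x

gain, bias = 1.0, 0.0          # nm1 parameters (shallow "style" map)
for _ in range(200):
    out = nm2(target_imgs) * gain + bias
    eps = 1e-4                 # numerical gradient w.r.t. bias only
    g = (ncl_loss(target_imgs * gain + bias + eps) - ncl_loss(out)) / eps
    bias -= 0.5 * g            # optimize nm1; nm2 stays untouched

converted = nm2(target_imgs) * gain + bias
print(f"converted mean intensity: {converted.mean():.2f}")   # drifts toward 0.7
```

The point of the sketch is the asymmetry: the discriminator signal reaches only the shallow, style-level parameters, so the content passed through nm2 is untouched.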
According to a third aspect of the invention, an unsupervised adaptive image segmentation method is provided.
S301: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image has or does not have a region to be segmented similar to a characteristic region in the source domain image set;
establishing an image domain discrimination network Ncl for discriminating the domain to which the image belongs by taking the source domain image and the target domain image as input through a training function;
s302: the target domain image is used as an input, and image reconstruction is performed on the target domain image to obtain a learned reconstruction network Nre. The reconstructed image information includes shallow information M1 corresponding to the image and deep semantic information M2, and the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information.
S303: and image data obtained by taking a target domain image as input and passing through a reconstruction network Nre is judged through the image domain judgment network Ncl, parameters of the shallow information module nm1 are optimized and adjusted according to loss data of the image domain judgment network Ncl, and parameters of the semantic information module nm2 are kept unchanged.
S304: repeat step S303, continuing to optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
S305: based on the source domain image set and its labeled feature regions, an image segmentation network Nse for the feature regions and non-feature regions is trained by machine learning.
S306: and converting the image P to be analyzed in the target domain image set into a converted image P' which has a source domain style and retains semantic information through a conversion network Ntr.
S307: the image segmentation network Nse is used to perform image segmentation on the converted image P'.
Preferably, in S301, a cross-entropy loss function is used as the training loss, and the image domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S302 is an L2 loss;
preferably, the reconstruction network in S302 may adopt an encoder-decoder structure;
preferably, in S303, the loss of the image domain discrimination network Ncl is a cross-entropy loss;
preferably, the set condition in S304 is that the loss of the image domain discrimination network Ncl is smaller than a preset value;
preferably, in S305, the image segmentation network is trained with the UNet algorithm;
preferably, in S305, the image segmentation network is trained with the UNet algorithm combined with an attention mechanism and/or multi-scale feature expression.
According to the image segmentation method provided by the invention, unsupervised adaptation of the segmentation method from the labeled source domain images to the unlabeled target domain images is achieved, completing the task of unsupervised target domain image segmentation.
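The overall flow S301-S307 can be sketched end to end with trivial stand-ins for each network (mean thresholds and intensity shifts instead of trained deep models; every function below is a hypothetical placeholder, not the patent's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
source_imgs = rng.normal(0.7, 0.05, (10, 64))   # labeled source domain
target_imgs = rng.normal(0.3, 0.05, (10, 64))   # unlabeled target domain

def train_ncl(src, tgt):                        # S301: domain discriminator
    thresh = (src.mean() + tgt.mean()) / 2
    return lambda img: img.mean() > thresh      # True -> "source domain"

def build_ntr(tgt, src):                        # S302-S304: conversion network
    shift = src.mean() - tgt.mean()             # shallow-style correction only
    return lambda img: img + shift              # semantics (shape) preserved

def train_nse(src):                             # S305: segmentation network
    cut = np.percentile(src, 90)
    return lambda img: img > cut                # mask of the "feature region"

ncl = train_ncl(source_imgs, target_imgs)
ntr = build_ntr(target_imgs, source_imgs)
nse = train_nse(source_imgs)

p = target_imgs[0]
p_converted = ntr(p)                            # S306: target -> source style
mask = nse(p_converted)                         # S307: segment converted image
print(bool(ncl(p_converted)), mask.shape)       # conversion looks source-like
```

In the real method each lambda would be a trained deep network, but the call order mirrors steps S301-S307.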
According to a fourth aspect of the present invention, there is provided an adaptive image segmentation method for breast cancer screening.
S401: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region, the source domain image set is a marked breast MRI image, and the characteristic region is a marked tumor or cancer tissue region;
obtaining a target domain image in a target domain image set, wherein the target domain image is an unlabeled breast MRI image, and the target domain image may contain an image part corresponding to a tumor or a cancer tissue region;
establishing an image domain discrimination network Ncl for discriminating the domain to which the image belongs by taking the source domain image and the target domain image as input through a training function;
s402: the target domain image is used as an input, and image reconstruction is performed on the target domain image to obtain a learned reconstruction network Nre. The reconstructed image information includes shallow information M1 corresponding to the image and deep semantic information M2, and the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information.
S403: and image data obtained by taking a target domain image as input and passing through a reconstruction network Nre is judged through the image domain judgment network Ncl, parameters of the shallow information module nm1 are optimized and adjusted according to loss data of the image domain judgment network Ncl, and parameters of the semantic information module nm2 are kept unchanged.
S404: repeat step S403, continuing to optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
S405: based on the source domain image set and its labeled feature regions, an image segmentation network Nse for the feature regions and non-feature regions is trained by machine learning.
S406: the image P to be analyzed in the target domain image set is converted, through the conversion network Ntr, into a converted image P' that has the source domain style and retains semantic information.
S407: the image segmentation network Nse performs image segmentation on the converted image P'; the feature region obtained from the segmentation is the image region of suspected tumor or cancerous tissue identified by the breast cancer screening.
Preferably, in S401, a cross-entropy loss function is used as the training loss, and the image domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S402 is an L2 loss;
preferably, the reconstruction network in S402 may adopt an encoder-decoder structure;
preferably, in S403, the loss of the image domain discrimination network Ncl is a cross-entropy loss;
preferably, the set condition in S404 is that the loss of the image domain discrimination network Ncl is smaller than a preset value;
preferably, in S405, the image segmentation network is trained with the UNet algorithm;
preferably, in S405, the image segmentation network is trained with the UNet algorithm combined with an attention mechanism and/or multi-scale feature expression.
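To illustrate the multi-scale feature expression mentioned in the preference above, here is a simplified numpy sketch of an image pyramid, a stand-in for the contracting path of a UNet-style network (the pooling scheme and image sizes are assumptions for illustration, not the patent's architecture):

```python
import numpy as np

def avg_pool2(img):
    """Halve each spatial dimension by 2x2 average pooling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Pool the same image down to several resolutions, so a segmentation
# network could see both fine detail and coarse context.
img = np.arange(64, dtype=float).reshape(8, 8)
pyramid = [img]
while pyramid[-1].shape[0] > 2:
    pyramid.append(avg_pool2(pyramid[-1]))

print([p.shape for p in pyramid])   # [(8, 8), (4, 4), (2, 2)]
```

A UNet then works back up this pyramid, concatenating each scale's features into the expanding path; an attention mechanism would additionally weight those features before concatenation.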
According to the adaptive image segmentation method for breast cancer screening provided by the invention, unsupervised adaptation of the breast lesion segmentation method from labeled source domain images to unlabeled target domain images is achieved, completing the task of unsupervised target domain image segmentation.
According to a fifth aspect of the present invention, there is provided a breast cancer screening apparatus based on adaptive image segmentation, comprising:
the device comprises an acquisition unit, a comparison unit and a processing unit, wherein the acquisition unit is used for acquiring a source domain image in a source image set, the image in the source domain image set comprises a marked characteristic region, the source domain image set is a marked breast MRI image, and the characteristic region is a marked tumor or cancer tissue region;
the system is also used for acquiring a target domain image in a target domain image set, wherein the target domain image is an unlabeled breast MRI image, and an image part of a corresponding tumor or cancer tissue region is possibly contained in the target domain image;
an image domain discrimination unit, configured to establish, through a training function, an image domain discrimination network Ncl that discriminates the domain to which an image belongs, taking the source domain and target domain images acquired by the acquisition unit as input;
an image reconstruction unit, which takes the target domain image as input and performs image reconstruction on it to obtain a learned reconstruction network Nre, wherein the reconstructed image information includes shallow information and deep semantic information of the image, and the reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow information and a semantic information module nm2 corresponding to the deep semantic information;
an image conversion network optimization unit, which takes the target domain image as input, discriminates the image data produced by the reconstruction network Nre with the image domain discrimination network Ncl, optimizes and adjusts the parameters of the shallow information module nm1 according to the loss of Ncl while keeping the parameters of the semantic information module nm2 unchanged, and repeats this optimization until a set condition is met, the optimized reconstruction network being the conversion network Ntr;
a source domain image segmentation network training unit, configured to train, by machine learning on the source domain image set and its labeled feature regions, an image segmentation network Nse for feature and non-feature regions;
a target domain image segmentation unit, which converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' having the source domain style and retaining semantic information, and performs image segmentation on P' with the image segmentation network Nse; the feature region obtained from the segmentation is the image region of suspected tumor or cancerous tissue identified by the breast cancer screening.
Preferably, the image domain discrimination unit trains with a cross-entropy loss function, and the image domain discrimination network Ncl is a residual network.
Preferably, in the image conversion network optimization unit, the set condition is that the loss of the image domain discrimination network Ncl is smaller than a preset value.
therefore, the invention provides an unsupervised domain self-adaptive breast lesion segmentation method, which forces new data to be close to the existing data set distribution by performing data domain conversion on the new data, thereby realizing unsupervised domain self-adaptive migration of a segmentation network. Based on the method, even if the new data set is different from the marked data set, the image in the new data set does not need to be marked, and the mammary lesion segmentation network trained on the marked data set can be directly self-adapted to the new data set, so that a good segmentation effect is obtained.
Therefore, the invention solves the defects that in the prior art, for each set of mammary gland magnetic resonance image data acquired by using specific experimental parameters, a doctor needs to perform complete or partial data annotation to obtain a segmentation model suitable for a data set to be processed, and the whole process is time-consuming and labor-consuming and has high cost. The method can realize unsupervised segmentation of a new data set with the aid of a labeled data set, reduces the economic cost of image labeling, and can save time cost by directly optimizing and applying the model.
Drawings
FIG. 1 is a diagram illustrating an adaptive image conversion method between different domains;
FIG. 2 shows a schematic diagram of an adaptive image segmentation method;
FIG. 3 illustrates a typical structure of the image segmentation model UNet;
FIG. 4 shows a schematic diagram of a breast cancer screening apparatus based on adaptive image segmentation;
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced otherwise than as specifically described herein, and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
Example one
As shown in FIG. 1, with the inter-domain adaptive image conversion method provided by the invention, even if a new data set differs from the annotated data set, the images in the new data set need not be annotated; instead, adaptive learning is performed across the two data sets through image conversion. The unlabeled data set thus retains its high-order semantic information through adaptive conversion, while shallow representations such as image style, texture, and brightness are converted toward the characteristics of the annotated data set, so that the network model trained on the annotated data set can be applied directly to the new data set.
The adaptive image conversion method according to one embodiment of the invention comprises the following steps.
First, an annotated data set is needed as the source domain images; this data set can be regarded as an atlas that later data sets use as a template. The target domain images, for example a set of unlabeled images to be analyzed, are generally the image data set on which image analysis, such as classification or image segmentation, is to be performed.
It should be noted that the source domain and target domain images should contain similar deep-level features, for example images of the same type of object or of similar scenes; it is this similarity of deep features that makes the adaptive image conversion method practical. The source domain and target domain images may nevertheless exhibit different superficial image characteristics, such as light and shade, noise level, texture, or other non-semantic features.
For example, in computer-aided medical image analysis, the source domain images may be a series of labeled X-ray, CT, or MRI image data, while the target domain images may be corresponding X-ray, CT, or MRI data that were not acquired with the same instrument or under the same conditions. The source domain images include labeled feature regions, which may be tumor or cancerous regions identified and marked by a medical professional.
First, an image domain discrimination network is established (S301): taking the source domain images and target domain images as training sample inputs, a discrimination network is trained to classify any new test sample image as either a source domain image or a target domain image. For this classification, a variance, residual, or other loss is computed so that it can be used for adjustment in subsequent steps.
For training the image domain discrimination network, various classical classification and discrimination methods from deep neural networks can be used, for example a residual network. A residual network is a convolutional recognition network characterized by easy optimization; its accuracy can be improved by increasing depth. Its internal residual blocks use skip connections, which alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
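The residual block described above can be sketched in a few lines of PyTorch. This is a minimal illustrative block, not the patent's exact discriminator architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: the skip connection adds the input to the
    convolved output, easing gradient flow in deep networks."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection

block = ResidualBlock(8)
x = torch.randn(1, 8, 32, 32)
y = block(x)
print(y.shape)  # torch.Size([1, 8, 32, 32])
```

Because the block maps its input to an output of the same shape, many such blocks can be stacked to deepen the discriminator without changing tensor sizes.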
During training, the discrimination error can be calculated with a cross-entropy loss function. Cross-entropy is particularly suited to training binary classification models, and gives the resulting convex optimization problem good convergence. Each image is given a classification label, for example 1 for source domain images and 0 for target domain images. In the binary case the model predicts one of only two outcomes; with true label y and predicted probability ŷ for the positive class, the cross-entropy loss is: L = -[y·log(ŷ) + (1-y)·log(1-ŷ)].
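A minimal numeric sketch of this binary cross-entropy loss (the function name and sample probabilities are illustrative):

```python
import math

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """Cross-entropy for one binary label:
    L = -[y*log(p) + (1-y)*log(1-p)]."""
    p = min(max(y_prob, eps), 1.0 - eps)  # clamp for numerical stability
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A source-domain image (label 1) classified with probability 0.9:
loss_good = binary_cross_entropy(1, 0.9)
# The same image classified with probability 0.1 (badly wrong):
loss_bad = binary_cross_entropy(1, 0.1)
print(round(loss_good, 4), round(loss_bad, 4))  # 0.1054 2.3026
```

Confident correct predictions incur a small loss, while confident wrong predictions are penalized heavily, which is what drives the discriminator's training.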
in the second step, learning of image reconstruction of the target domain image is performed (S302). That is, the image of the target area itself is input, and the target area image is output. By combining with the continuous self-learning and training process, the shallow characterization feature information and the deep semantic features of the medical image can be separated, wherein the deep semantic features, such as the region and edge features of a tumor in the medical image, are reserved, and the shallow features, such as the image style brightness, texture, noise level and the like, can be gradually converted into the style of the source domain image through the optimization of the discrimination network.
In a typical application, the image reconstruction network Nre may use the encoder-decoder structure SegNet. The encoder and decoder structures of SegNet correspond one-to-one: each decoder stage has the same spatial size and channel count as its corresponding encoder stage. A basic SegNet has 13 convolutional layers and is much smaller than the corresponding FCN classical image segmentation model, benefiting from the way SegNet balances computation: direct deconvolution is replaced by upsampling that reuses the position information recorded during pooling.
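The pooling-index trick that SegNet uses in place of learned deconvolution can be demonstrated directly with PyTorch's pooling modules (a minimal illustration, not the full 13-layer network):

```python
import torch
import torch.nn as nn

# SegNet records the argmax positions of each max-pooling step in the
# encoder, then reuses them for non-learned upsampling in the decoder,
# instead of a learned deconvolution.
pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

x = torch.randn(1, 1, 4, 4)
pooled, indices = pool(x)           # downsample, remember max positions
restored = unpool(pooled, indices)  # place maxima back at those positions
print(pooled.shape, restored.shape)
```

The unpooled tensor is sparse (the maxima sit at their original positions, zeros elsewhere), so the subsequent decoder convolutions only need to densify it, which is cheaper than learning a full deconvolution.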
The image reconstruction loss can be an L2 loss, a loss function commonly used with CNNs, because it converges faster than an L1 loss.
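A tiny numeric sketch of the L2 (mean squared error) reconstruction loss; the arrays stand in for a target image and its reconstruction:

```python
import numpy as np

def l2_loss(original, reconstructed):
    """Mean squared error between an image and its reconstruction."""
    return float(np.mean((original - reconstructed) ** 2))

a = np.array([[1.0, 2.0], [3.0, 4.0]])   # original (illustrative)
b = np.array([[1.0, 2.0], [3.0, 6.0]])   # reconstruction with one error
print(l2_loss(a, b))  # 1.0
```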
In the third step, the reconstruction network from the second step (S302) is optimized to obtain a conversion network (S303). The purpose is to convert the target domain image into the style of the source domain image, bringing the distributions of the two domains closer so that operations such as segmentation can be performed on the target domain image.
For the reconstruction network of the second step (S302), taking the encoder-decoder structure as an example, the reconstructed image information includes shallow information M1 and deep semantic information M2; correspondingly, the image reconstruction network Nre includes a shallow information module nm1 corresponding to M1 and a semantic information module nm2 corresponding to M2. As noted earlier, shallow image information covers style, brightness, texture, noise level, and the like, while deep information covers, for example, the region and edge features of a tumor in a medical image.
The target domain image is transformed by the reconstruction network Nre obtained in the second step (S302) to produce a generated source domain image. This generated image is classified by the image domain discrimination network from the first step (S301), which judges whether the converted image can be classified as a source domain image. The cross-entropy produced by this classification serves as the loss function for correcting and optimizing the parameters of the shallow information module nm1 in Nre, while the parameters of the deep semantic module nm2 are kept unchanged. In the encoder-decoder embodiment, the parameters of the first two or first three encoding modules of the reconstruction network's encoder are continuously updated while the remaining parts are kept fixed.
The output of the reconstruction network Nre' with corrected parameters is fed back into the image domain discrimination network for further correction, and this process is repeated, continuously optimizing the reconstruction network: the reconstructed shallow information of the target domain image approaches the source domain more and more closely, while the deep semantic information remains unchanged (S304).
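The freeze-and-update scheme of S303/S304 (update the shallow module, keep the deep semantic module fixed) can be sketched as follows; the two toy modules and the stand-in loss are purely illustrative, assuming a PyTorch implementation:

```python
import torch
import torch.nn as nn

# Hypothetical two-part reconstruction network: an early "shallow" stage
# (style/texture) and a later "deep semantic" stage; names are illustrative.
shallow = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
deep = nn.Sequential(nn.Conv2d(8, 1, 3, padding=1))

for p in deep.parameters():        # deep semantic module stays fixed
    p.requires_grad = False

# The optimizer only sees the shallow module's parameters.
opt = torch.optim.Adam(shallow.parameters(), lr=1e-3)

x = torch.randn(2, 1, 16, 16)
out = deep(shallow(x))
# Stand-in for the discriminator's cross-entropy loss.
loss = out.mean() ** 2
opt.zero_grad()
loss.backward()
opt.step()
print(all(p.grad is None for p in deep.parameters()))  # True: deep untouched
```

Gradients still flow *through* the frozen deep module to reach the shallow one, but the frozen parameters themselves accumulate no gradient, so only the shallow style representation is adjusted.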
When certain conditions are met (a typical example is when the generated images are classified as source domain images and the cross-entropy loss has fallen below a set threshold), the parameters of the reconstruction network Nre can be considered corrected and optimized to an acceptable degree. The network obtained at this point is the image conversion network Ntr.
That is, after an unlabeled target domain image is input into the image conversion network Ntr, the representational style of the output image approaches that of the source domain training set, while the deep semantic information of the image is still preserved. Network models trained on the labeled data set can therefore be applied directly to the new data set.
Example two
Another embodiment of the present invention is an unsupervised domain-adaptive image segmentation method, as shown in fig. 2, which can be applied to computer-aided identification of breast lesions in MRI images. The method mainly comprises the following steps: an image domain discrimination network is established (S401), image reconstruction of the target domain image is learned (S402), a conversion network is obtained by optimizing the reconstruction network (S403, S404), an image segmentation network is trained on the annotated source domain images (S405), the target domain image is converted through the conversion network (S406), and the converted target image is then segmented with the image segmentation network (S407).
In the above steps, the steps of S401, S402, S403, and S404 are the same as the corresponding steps in the first embodiment, and are not described again.
Image segmentation network training (S405) is a typical supervised segmentation problem based on annotated images: a supervised segmentation network is trained, which may be a widely used medical image segmentation network such as UNet.
Since its introduction, UNet has been one of the most widely used models in image segmentation; its encoder (downsampling)-decoder (upsampling) structure and skip connections are classical design elements. Many newer convolutional neural network designs exist, but most continue the core idea of UNet, adding new modules or blending in other design concepts.
UNet is structured as shown in fig. 3: the left side can be regarded as the encoder and the right side as the decoder. The encoder has four sub-modules, each containing two convolutional layers and followed by a downsampling layer implemented by max pooling. With an input resolution of 572x572, the resolutions of the 1st to 5th modules are 572x572, 284x284, 140x140, 68x68, and 32x32, respectively. Since the convolutions use 'valid' mode, the resolution of each sub-module equals (resolution of the previous sub-module - 4) / 2. The decoder contains four sub-modules whose resolution is successively increased by upsampling until it matches that of the input (the actual output is smaller than the input because of the valid-mode convolutions). The network also uses skip connections: the upsampled result is combined with the output of the encoder sub-module of the same resolution to form the input of the next decoder sub-module.
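The resolution arithmetic above can be checked with a few lines of Python (the function name is illustrative):

```python
def unet_encoder_resolutions(input_size, n_blocks=5):
    """Spatial sizes down the UNet encoder: each block applies two 3x3
    'valid' convolutions (shrinking by 4 in total), then max-pools by 2
    before the next block."""
    sizes = [input_size]
    r = input_size
    for _ in range(n_blocks - 1):
        r = (r - 4) // 2
        sizes.append(r)
    return sizes

print(unet_encoder_resolutions(572))  # [572, 284, 140, 68, 32]
```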
The UNet structure is particularly suitable for medical image segmentation. Medical images have fuzzy boundaries and complex gradients, so more high-resolution information is needed; UNet provides this through its downsampling/upsampling path and skip connections. Meanwhile, the targets to be segmented tend to have regular, similar shapes (for example, roughly circular) and their distribution falls within a certain range. Since organs have relatively fixed structure and their semantic information is not especially rich, both high-level semantic information and low-level features are important, and UNet's skip connections and U-shaped structure capture both.
In addition, effective new modules can be introduced into the segmentation process, such as UNet with an attention mechanism or multi-scale feature expression.
Image conversion (S406) is performed on the target image to be analyzed: the image P to be analyzed in the target domain image set is converted by the conversion network Ntr into a converted image P' that has the source domain style while retaining semantic information. This conversion network is the one trained through S402, S403, and S404: the representational style of its output approaches the source domain training set while the deep semantic information is preserved.
Image segmentation and recognition (S407): the image segmentation network established in S405 is applied to the converted image P'. The feature regions obtained from the segmentation are the image regions of tumor or cancerous tissue flagged as suspected breast cancer in breast cancer MRI screening.
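Steps S406 and S407 together form a small inference pipeline, sketched below; the one-layer placeholder networks merely stand in for the trained conversion network Ntr and segmentation network Nse:

```python
import torch
import torch.nn as nn

# Placeholders for the trained networks; in practice Ntr is the optimized
# conversion network and Nse the UNet-style segmentation network.
Ntr = nn.Conv2d(1, 1, 3, padding=1)   # conversion network (stand-in)
Nse = nn.Conv2d(1, 2, 1)              # segmentation head: 2 classes

def segment_target_image(p):
    """S406/S407: convert a target-domain image, then segment it."""
    with torch.no_grad():
        p_converted = Ntr(p)           # target style -> source style
        logits = Nse(p_converted)      # per-pixel class scores
        mask = logits.argmax(dim=1)    # class 1 = suspected lesion region
    return mask

p = torch.randn(1, 1, 64, 64)          # unlabeled target-domain image
mask = segment_target_image(p)
print(mask.shape)  # torch.Size([1, 64, 64])
```

The key point of the design is that only the conversion step touches the domain gap; the segmentation network itself never sees raw target-domain style and needs no retraining.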
The method provided by the above steps realizes unsupervised domain-adaptive lesion segmentation of breast magnetic resonance images.
Example three
Referring to fig. 4, the present embodiment provides a breast cancer screening apparatus based on adaptive image segmentation, including:
an acquisition unit, configured to acquire a source domain image in a source domain image set, wherein the images in the source domain image set comprise marked feature regions, the source domain image set consists of labeled breast MRI images, and the feature regions are marked tumor or cancer tissue regions;
the acquisition unit is also configured to acquire a target domain image in a target domain image set, wherein the target domain images are unlabeled breast MRI images that may contain image portions of corresponding tumor or cancer tissue regions;
an image domain judging unit, configured to establish an image domain judging network Ncl for judging a domain to which an image belongs through a training function, with the source domain image and the target domain image acquired by the acquiring unit as input;
an image reconstruction unit, which takes the target domain image as input and performs image reconstruction on it to obtain a learned reconstruction network Nre; the reconstructed image information comprises shallow information and deep semantic information of the image, and the image reconstruction network Nre comprises a shallow information module nm1 corresponding to the shallow information and a semantic information module nm2 corresponding to the deep semantic information;
an image conversion network optimization unit, which takes the target domain image as input; the image data obtained after the target domain image passes through the reconstruction network Nre is judged by the image domain discrimination network Ncl, and according to the loss data of Ncl the parameters of the shallow information module nm1 are optimized and adjusted while the parameters of the semantic information module nm2 are kept unchanged; this optimization and adjustment process is repeated until set conditions are met, and the optimized reconstruction network serves as the conversion network Ntr;
a source domain image segmentation network training unit, configured to train an image segmentation network Nse for a feature region and a non-feature region through machine learning in a source domain image set and the labeled feature region thereof;
a target domain image segmentation unit, which converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains semantic information, and performs image segmentation on P' with the image segmentation network Nse; the feature regions obtained from the segmentation are the image regions of tumor or cancer tissue flagged as suspected breast cancer.
The units in the above apparatus may be combined, individually or together, into one or several other units, or one or more of them may be further split into functionally smaller units; either arrangement implements the same operations without affecting the technical effect of the embodiments of the present invention. The units are divided based on logical function; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units by one unit. In other embodiments of the present invention, the apparatus may also include other units, and these functions may likewise be realized with the assistance of, or through the cooperation of, multiple units.
According to another embodiment of the present invention, the model training apparatus shown in fig. 4 may be constructed, and the model training method of the embodiments of the present invention implemented, by running a computer program (including program code) capable of executing the steps of the corresponding method in the second embodiment on a general-purpose computing device, such as a computer comprising a Central Processing Unit (CPU), random access memory (RAM), read-only memory (ROM), and other storage elements. The computer program may be recorded on a computer-readable recording medium and loaded into and executed by the above computing device via that medium.
Example four
An embodiment four of the present invention provides a computer storage medium storing one or more first instructions adapted to be loaded by a processor and to perform the adaptive image segmentation method in the foregoing embodiments.
The steps in the method of each embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of each embodiment of the invention can be merged, divided and deleted according to actual needs.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing associated hardware, the program being stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical storage, magnetic disk, magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.
The technical solutions of the present invention have been described in detail with reference to the accompanying drawings. The above description covers only preferred embodiments of the present invention and is not intended to limit it; it will be apparent to those skilled in the art that various modifications and variations can be made. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in its protection scope.
Claims (8)
1. An inter-domain image style conversion method based on image reconstruction and image discrimination, characterized by comprising the following steps:
s101: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image may or may not contain a region to be segmented similar to the feature regions in the source domain image set;
establishing an image domain discrimination network Ncl for discriminating the domain to which the image belongs by taking the source domain image and the target domain image as input through a training function;
s102: taking the target domain image as input, carrying out image reconstruction on the target domain image to obtain a learned reconstruction network Nre;
s103: taking a target domain image as input, judging by the image domain judging network Ncl according to image data obtained by the reconstructed network Nre, and optimizing and adjusting parameters of the reconstructed network Nre according to loss data of the image domain judging network Ncl;
s104: repeating step S103 to continuously optimize the reconstruction network Nre until a set condition is met, the optimized reconstruction network being the conversion network Ntr;
the conversion network Ntr may convert the image in the target domain into a converted image that retains image information but has a source domain style.
2. A method for establishing an inter-domain image distribution adaptive model based on shallow semantic features, characterized by comprising the following steps:
s201: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image may or may not contain a region to be segmented similar to the feature regions in the source domain image set;
establishing an image domain discrimination network Ncl for discriminating the domain to which the image belongs by taking the source domain image and the target domain image as input through a training function;
s202: taking the target domain image as input, carrying out image reconstruction on the target domain image to obtain a learned reconstruction network Nre; the reconstructed image information comprises shallow information M1 corresponding to the image and deep semantic information M2, and the image reconstruction network Nre comprises a shallow information module nm1 corresponding to the shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information;
s203: taking a target domain image as input, judging image data obtained after the target domain image passes through a reconstruction network Nre through an image domain judging network Ncl, optimizing and adjusting parameters of the shallow information module nm1 according to loss data of the image domain judging network Ncl, and keeping the parameters of the semantic information module nm2 unchanged;
s204: repeating step S203 to continuously optimize the reconstruction network Nre until a set condition is met, the optimized reconstruction network being the conversion network Ntr, through which the inter-domain image distribution adaptive model can be established.
3. A method of establishing an inter-domain image distribution adaptive model according to claim 2, characterized by: in S201, a cross entropy loss function is used as a loss function for training.
4. A method of establishing an inter-domain image distribution adaptive model according to claim 2, characterized by: the reconstruction network in S202 may adopt a codec structure.
5. A method of establishing an inter-domain image distribution adaptive model according to claim 2, characterized by: in S204, the setting conditions are: and the loss data of the image domain judging network Ncl is smaller than a preset value.
6. An unsupervised adaptive image segmentation method, characterized by comprising the following steps:
s301: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image may or may not contain a region to be segmented similar to the feature regions in the source domain image set;
establishing an image domain discrimination network Ncl for discriminating the domain to which the image belongs by taking the source domain image and the target domain image as input through a training function;
s302: taking the target domain image as input, carrying out image reconstruction on the target domain image to obtain a learned reconstruction network Nre; the reconstructed image information comprises shallow information M1 corresponding to the image and deep semantic information M2, and the image reconstruction network Nre comprises a shallow information module nm1 corresponding to the shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information;
s303: taking a target domain image as input, judging image data obtained after the target domain image passes through a reconstruction network Nre through an image domain judging network Ncl, optimizing and adjusting parameters of the shallow information module nm1 according to loss data of the image domain judging network Ncl, and keeping the parameters of the semantic information module nm2 unchanged;
s304: repeating step S303 to continuously optimize the reconstruction network Nre until a set condition is met, the optimized reconstruction network being the conversion network Ntr;
s305: training an image segmentation network Nse aiming at the characteristic region and the non-characteristic region through machine learning based on the source domain image set and the labeled characteristic region;
s306: converting an image P to be analyzed in the target domain image set into a converted image P' which has a source domain style and retains semantic information through a conversion network Ntr;
s307: the image segmentation network Nse is used to perform image segmentation on the converted image P'.
7. An unsupervised adaptive image segmentation method as defined in claim 6, wherein: the source domain image set is a labeled breast MRI image, the characteristic region is a labeled tumor or cancer tissue region, and the target domain image is an unlabeled breast MRI image.
8. A breast cancer screening device based on adaptive image segmentation, characterized in that the device comprises:
an acquisition unit, configured to acquire a source domain image in a source domain image set, wherein the images in the source domain image set comprise marked feature regions, the source domain image set consists of labeled breast MRI images, and the feature regions are marked tumor or cancer tissue regions;
the acquisition unit is also configured to acquire a target domain image in a target domain image set, wherein the target domain images are unlabeled breast MRI images that may contain image portions of corresponding tumor or cancer tissue regions;
the image domain judging unit is used for establishing an image domain judging network for judging the domain to which the image belongs through a training function by taking the source domain image and the target domain image acquired by the acquiring unit as input;
an image reconstruction unit which takes the target domain image as input and carries out image reconstruction on the target domain image to obtain a learned reconstruction network Nre; the reconstructed image information comprises shallow information and deep semantic information corresponding to the image, and the image reconstruction network Nre comprises a shallow information module nm1 corresponding to the shallow information of the image and a semantic information module nm2 corresponding to the deep semantic information;
an image conversion network optimization unit, which takes the target domain image as input; the image data obtained after the target domain image passes through the reconstruction network Nre is judged by the image domain discrimination network Ncl, and according to the loss data of Ncl the parameters of the shallow information module nm1 are optimized and adjusted while the parameters of the semantic information module nm2 are kept unchanged; this optimization and adjustment process is repeated until set conditions are met, and the optimized reconstruction network serves as the conversion network Ntr;
a source domain image segmentation network training unit, configured to train an image segmentation network Nse for a feature region and a non-feature region through machine learning in a source domain image set and the labeled feature region thereof;
a target domain image segmentation unit, which converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains semantic information, and performs image segmentation on P' with the image segmentation network Nse; the feature regions obtained from the segmentation are the image regions of tumor or cancer tissue flagged as suspected breast cancer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911264888.9A CN111179277B (en) | 2019-12-11 | 2019-12-11 | Unsupervised self-adaptive breast lesion segmentation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911264888.9A CN111179277B (en) | 2019-12-11 | 2019-12-11 | Unsupervised self-adaptive breast lesion segmentation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111179277A true CN111179277A (en) | 2020-05-19 |
CN111179277B CN111179277B (en) | 2023-05-02 |
Family
ID=70657198
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911264888.9A Active CN111179277B (en) | 2019-12-11 | 2019-12-11 | Unsupervised self-adaptive breast lesion segmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111179277B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112001398A (en) * | 2020-08-26 | 2020-11-27 | 科大讯飞股份有限公司 | Domain adaptation method, domain adaptation device, domain adaptation apparatus, image processing method, and storage medium |
CN112686906A (en) * | 2020-12-25 | 2021-04-20 | 山东大学 | Image segmentation method and system based on uniform distribution migration guidance |
CN112784879A (en) * | 2020-12-31 | 2021-05-11 | 前线智能科技(南京)有限公司 | Medical image segmentation or classification method based on small sample domain self-adaption |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108062753A (en) * | 2017-12-29 | 2018-05-22 | 重庆理工大学 | The adaptive brain tumor semantic segmentation method in unsupervised domain based on depth confrontation study |
CN110322446A (en) * | 2019-07-01 | 2019-10-11 | 华中科技大学 | A kind of domain adaptive semantic dividing method based on similarity space alignment |
CN110533044A (en) * | 2019-05-29 | 2019-12-03 | 广东工业大学 | A kind of domain adaptation image, semantic dividing method based on GAN |
- 2019-12-11: application CN201911264888.9A filed; granted as patent CN111179277B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108062753A (en) * | 2017-12-29 | 2018-05-22 | 重庆理工大学 | The adaptive brain tumor semantic segmentation method in unsupervised domain based on depth confrontation study |
CN110533044A (en) * | 2019-05-29 | 2019-12-03 | 广东工业大学 | A kind of domain adaptation image, semantic dividing method based on GAN |
CN110322446A (en) * | 2019-07-01 | 2019-10-11 | 华中科技大学 | A kind of domain adaptive semantic dividing method based on similarity space alignment |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112001398A (en) * | 2020-08-26 | 2020-11-27 | 科大讯飞股份有限公司 | Domain adaptation method, domain adaptation device, domain adaptation apparatus, image processing method, and storage medium |
CN112001398B (en) * | 2020-08-26 | 2024-04-12 | 科大讯飞股份有限公司 | Domain adaptation method, device, apparatus, image processing method, and storage medium |
CN112686906A (en) * | 2020-12-25 | 2021-04-20 | 山东大学 | Image segmentation method and system based on uniform distribution migration guidance |
CN112784879A (en) * | 2020-12-31 | 2021-05-11 | 前线智能科技(南京)有限公司 | Medical image segmentation or classification method based on small sample domain self-adaption |
Also Published As
Publication number | Publication date |
---|---|
CN111179277B (en) | 2023-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021114130A1 (en) | Unsupervised self-adaptive mammary gland lesion segmentation method | |
CN111784671B (en) | Pathological image focus region detection method based on multi-scale deep learning | |
Zuo et al. | R2AU-Net: attention recurrent residual convolutional neural network for multimodal medical image segmentation | |
CN109801272B (en) | Liver tumor automatic segmentation positioning method, system and storage medium | |
CN112102266B (en) | Attention mechanism-based cerebral infarction medical image classification model training method | |
CN111179277B (en) | Unsupervised self-adaptive breast lesion segmentation method | |
CN112365980B (en) | Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system | |
CN111275686B (en) | Method and device for generating medical image data for artificial neural network training | |
CN114022718B (en) | Digestive system pathological image recognition method, system and computer storage medium | |
CN112862805B (en) | Automatic auditory neuroma image segmentation method and system | |
Popescu et al. | Retinal blood vessel segmentation using pix2pix GAN | |
CN115375711A (en) | Image segmentation method of global context attention network based on multi-scale fusion | |
CN112132815A (en) | Pulmonary nodule detection model training method, detection method and device | |
Zhou et al. | Evolutionary neural architecture search for automatic esophageal lesion identification and segmentation | |
CN113539402B (en) | Multi-mode image automatic sketching model migration method | |
CN114693671A (en) | Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning | |
Pal et al. | A fully connected reproducible SE-UResNet for multiorgan chest radiographs segmentation | |
CN115965785A (en) | Image segmentation method, device, equipment, program product and medium | |
CN113409273B (en) | Image analysis method, device, equipment and medium | |
CN115187512A (en) | Hepatocellular carcinoma great vessel invasion risk prediction method, system, device and medium | |
Zhang et al. | Research on brain glioma segmentation algorithm | |
CN114463320A (en) | Magnetic resonance imaging brain glioma IDH gene prediction method and system | |
CN113379770A (en) | Nasopharyngeal carcinoma MR image segmentation network construction method, image segmentation method and device | |
CN114612373A (en) | Image identification method and server | |
CN111815569A (en) | Image segmentation method, device and equipment based on deep learning and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||