CN114463332B - Unsupervised domain adaptation method and system for cross-data source medical image segmentation - Google Patents

Unsupervised domain adaptation method and system for cross-data source medical image segmentation

Info

Publication number
CN114463332B
CN114463332B
Authority
CN
China
Prior art keywords
image
domain
current iteration
under
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210381144.0A
Other languages
Chinese (zh)
Other versions
CN114463332A (en)
Inventor
刘涛
丁少东
程健
刘子阳
刘旺开
徐红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202210381144.0A priority Critical patent/CN114463332B/en
Publication of CN114463332A publication Critical patent/CN114463332A/en
Application granted granted Critical
Publication of CN114463332B publication Critical patent/CN114463332B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/002Image coding using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an unsupervised domain adaptation method and system for cross-data source medical image segmentation, in the technical field of medical image segmentation. The method comprises the following steps: inputting a source image and a target image to be processed into an image translation network to obtain translated class source domain and class target domain images; training the encoder, the image translation decoder module and the discriminator module under the previous iteration number based on the discriminator module under the previous iteration number and the translated class target domain and class source domain images; and inputting the source image and target image to be processed, together with the translated class target domain and class source domain images, respectively into the image segmentation model under the current iteration number to obtain segmentation results, and training the image segmentation model according to the label of the source image to be processed and the segmentation results. The method can improve the segmentation accuracy of a medical image segmentation model when segmenting a target domain data source.

Description

Unsupervised domain adaptation method and system for cross-data source medical image segmentation
Technical Field
The invention relates to the technical field of medical image segmentation, in particular to an unsupervised domain adaptation method and system for cross-data source medical image segmentation.
Background
In the field of medical imaging, medical image data scanned with the same imaging machine, the same imaging parameters and the same imaging sequence are generally regarded as a single domain; the training set is called the source domain and the test set is called the target domain. Under the influence of imaging machines of different brands, different imaging parameters and different imaging sequences, brightness, contrast and structural differences exist among medical images from different data sources in a clinical environment; these differences are called domain shift. In an actual clinical environment, a deep learning segmentation model pre-trained on the source domain is highly susceptible to domain shift, and its segmentation accuracy on target domain medical images drops sharply. In addition, training a deep learning segmentation model requires pixel-level labeling of medical images, which is expensive; in a clinical environment, labels for data sources (target domains) other than the original training data source (source domain) of the segmentation model are difficult to obtain, so the target domain data source often contains a large amount of unlabeled data. As a result, the segmentation model cannot be trained directly on the target domain to obtain a deep learning model with good segmentation accuracy.
The existing technique for improving the cross-data-source segmentation capability of a deep learning segmentation model is to expand the existing labeled data set, either by collecting diversified medical image data or by data augmentation. For example, the trainers of a deep learning segmentation model usually collect medical images from multiple data sources to expand the source domain training data set and enrich the knowledge learned by the model as much as possible, or simulate target domain magnetic resonance images for the purpose of augmentation, i.e. augment the existing source domain training data set with random rotation, random scaling, random translation, random shearing and random contrast enhancement. However, these methods have the drawback that only source domain labeled data is used: the abundant unlabeled data of the target domain is not exploited, and the deep learning segmentation model cannot be trained on the target domain data source. They are therefore only suitable for cross-data-source medical image segmentation problems with a small degree of domain shift; when segmenting target domain data with a large degree of domain shift, segmentation accuracy still degrades severely.
Disclosure of Invention
The invention aims to provide an unsupervised domain adaptation method and an unsupervised domain adaptation system for cross-data source medical image segmentation, which can improve the segmentation precision of a medical image segmentation model when segmenting a target domain data source.
In order to achieve the purpose, the invention provides the following scheme:
an unsupervised domain adaptation method for medical image segmentation across data sources, comprising:
constructing a generation countermeasure network and acquiring a training data set, wherein the generation countermeasure network comprises an encoder, an image translation decoder module, an image segmentation decoder and a discriminator module; the encoder, the image translation decoder module and the discriminator module are connected in series, the encoder is also connected with the image segmentation decoder, and the encoder and the image translation decoder module form an image translation network; the encoder and the image segmentation decoder constitute an image segmentation model; the training data set comprises a source domain data set and a target domain data set, each of which is composed of a plurality of medical images;
under the current iteration times, one medical image is selected from the source domain data set and one from the target domain data set to serve, respectively, as the source image to be processed and the target image to be processed under the current iteration times, and the source image to be processed and the target image to be processed under the current iteration times are input into the image translation network under the previous iteration times to obtain a translated class source domain image and a translated class target domain image;
respectively training an encoder, an image translation decoder module and a discriminator module under the last iteration number based on the translated class target domain image, the translated class source domain image and the discriminator module under the last iteration number to obtain the encoder, the image translation decoder module and the discriminator module which are trained under the current iteration number;
inputting the source image to be processed under the current iteration times, the translated class target domain image, the target image to be processed under the current iteration times and the translated class source domain image into the image segmentation model under the current iteration times respectively to obtain a first segmentation result, a second segmentation result, a third segmentation result and a fourth segmentation result in sequence;
training the image segmentation model under the current iteration number by adopting a gradient descent method to obtain an optimized image segmentation model under the current iteration number, wherein the result obtained by calculation according to the label of the source image to be processed under the current iteration number, the first segmentation result and the second segmentation result is used as a domain loss;
taking a result obtained by calculation according to the third segmentation result and the fourth segmentation result as a target domain consistency loss, training the image segmentation model optimized under the current iteration times by adopting a gradient descent method to obtain an optimal image segmentation model under the current iteration times, and judging whether the optimal image segmentation model under the current iteration times reaches an iteration stop condition or not to obtain a first judgment result;
if the first judgment result is negative, obtaining an image translation network under the current iteration times according to an encoder in the optimal image segmentation model under the current iteration times and an image translation decoder module trained under the current iteration times, and performing next iteration;
and if the first judgment result is yes, processing the target domain data set by using the optimal image segmentation model.
An unsupervised domain adaptation system for medical image segmentation across data sources, comprising:
the system comprises a construction module, a training data set and a database module, wherein the construction module is used for constructing a generation countermeasure network and acquiring the training data set, and the generation countermeasure network comprises an encoder, an image translation decoder module, an image segmentation decoder and a discriminator module; the encoder, the image translation decoder module and the discriminator module are connected in series, the encoder is also connected with the image segmentation decoder, and the encoder and the image translation decoder module form an image translation network; the encoder and the image segmentation decoder constitute an image segmentation model; the training data set comprises a source domain data set and a target domain data set, each of which is composed of a plurality of medical images;
the translation module is used for, under the current iteration number, selecting one medical image from the source domain data set and one from the target domain data set as, respectively, the source image to be processed and the target image to be processed under the current iteration number, and inputting the source image to be processed and the target image to be processed under the current iteration number into the image translation network under the previous iteration number to obtain a translated class source domain image and a translated class target domain image;
the encoding and decoding discriminator updating module is used for respectively training the encoder, the image translation decoder module and the discriminator module under the previous iteration times based on the translated class target domain image, the translated class source domain image and the discriminator module under the previous iteration times to obtain the encoder, the image translation decoder module and the discriminator module which are trained under the current iteration times;
the image segmentation module is used for respectively inputting the source image to be processed under the current iteration times, the translated class target domain image, the target image to be processed under the current iteration times and the translated class source domain image into the image segmentation model under the current iteration times to sequentially obtain a first segmentation result, a second segmentation result, a third segmentation result and a fourth segmentation result;
the segmentation model first optimization module is used for training the image segmentation model under the current iteration times by adopting a gradient descent method to obtain an optimized image segmentation model under the current iteration times, wherein a result obtained by calculation according to the label of the source image to be processed under the current iteration times, the first segmentation result and the second segmentation result is used as a domain loss;
a segmentation model final optimization module, configured to train the image segmentation model optimized for the current iteration number by using a result obtained by calculation according to the third segmentation result and the fourth segmentation result as a target domain consistency loss, obtain an optimal image segmentation model for the current iteration number by using a gradient descent method, and determine whether the optimal image segmentation model for the current iteration number reaches an iteration stop condition, so as to obtain a first determination result;
the iteration loop module is used for obtaining an image translation network under the current iteration times and carrying out the next iteration according to an encoder in the optimal image segmentation model under the current iteration times and the image translation decoder module trained under the current iteration times if the first judgment result is negative;
and the data processing module is used for processing the target domain data set by using the optimal image segmentation model if the first judgment result is positive.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects. Under the current iteration times, one medical image is selected from the source domain data set and one from the target domain data set as, respectively, the source image to be processed and the target image to be processed under the current iteration times, and the source image to be processed and the target image to be processed under the current iteration times are input into the image translation network under the previous iteration times to obtain a translated class source domain image and a translated class target domain image. Based on the translated class target domain image, the translated class source domain image and the discriminator module under the previous iteration times, the encoder, the image translation decoder module and the discriminator module under the previous iteration times are trained respectively to obtain the encoder, the image translation decoder module and the discriminator module trained under the current iteration times. The source image to be processed under the current iteration times, the translated class target domain image, the target image to be processed under the current iteration times and the translated class source domain image are respectively input into the image segmentation model under the current iteration times to sequentially obtain a first segmentation result, a second segmentation result, a third segmentation result and a fourth segmentation result. Taking the result calculated from the label of the source image to be processed under the current iteration times, the first segmentation result and the second segmentation result as the domain loss, the image segmentation model under the current iteration times is trained by a gradient descent method to obtain the optimized image segmentation model under the current iteration times. Taking the result calculated from the third segmentation result and the fourth segmentation result as the target domain consistency loss, the optimized image segmentation model under the current iteration times is trained by a gradient descent method to obtain the optimal image segmentation model under the current iteration times, and whether the optimal image segmentation model under the current iteration times reaches the iteration stop condition is judged to obtain a first judgment result. If the first judgment result is negative, the image translation network under the current iteration times is obtained from the encoder in the optimal image segmentation model under the current iteration times and the image translation decoder module trained under the current iteration times, and the next iteration is performed. If the first judgment result is positive, the optimal image segmentation model is used to process the target domain data set. In this way the segmentation model is trained and adjusted using the source domain data, its labels and the unlabeled data of the target domain; a segmentation model with high segmentation accuracy on the target domain data source is obtained directly, providing a solution for the application of deep learning models in a clinical environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a block diagram of a generation countermeasure network provided by an embodiment of the present invention;
FIG. 2 is a network structure diagram of the discriminator provided in an embodiment of the present invention;
FIG. 3 is a network structure diagram of an image segmentation model according to an embodiment of the present invention;
FIG. 4 is a flowchart of an unsupervised domain adaptation method for cross-data source medical image segmentation provided by an embodiment of the present invention;
FIG. 5 is a network structure diagram of a source domain-to-target domain image translation network and a target domain-to-source domain image translation network according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Domain shift reduces the segmentation accuracy, on a test data source (target domain), of a deep learning segmentation model pre-trained on a certain data source (source domain). If the degree of domain shift is too large, the pre-trained deep learning model cannot complete the segmentation task on the target domain at all. In statistics, domain shift can generally be described as a problem of inconsistent data distributions between the source domain and the target domain, so the present invention proposes a domain adaptation method to align the data distributions of the source domain and the target domain. In addition, labels for the medical image data of the target domain are often unavailable, so the invention provides an unsupervised domain adaptation method, which trains and adjusts the segmentation model using the source domain data, its labels and the unlabeled data of the target domain, directly obtains a segmentation model with higher segmentation accuracy on the target domain data source, and provides a solution for the application of deep learning models in a clinical environment. The unsupervised domain adaptation method for cross-data source medical image segmentation comprises the following steps:
constructing a generation countermeasure network and acquiring a training data set, wherein the generation countermeasure network comprises an encoder, an image translation decoder module, an image segmentation decoder and a discriminator module; the encoder, the image translation decoder module and the discriminator module are connected in series, the encoder is also connected with the image segmentation decoder, and the encoder and the image translation decoder module form an image translation network; the encoder and the image segmentation decoder constitute an image segmentation model; the training dataset comprises a source domain dataset and a target domain dataset each consisting of a plurality of medical images.
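The wiring just described — one shared encoder feeding two image translation decoders and one segmentation decoder — can be sketched as follows. This is a framework-free illustration of the topology only; every function name and every stand-in computation is an assumption of this sketch, not the patent's implementation (a real implementation would use convolutional networks for each component):

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(image):
    """Stand-in for the shared encoder: return four 'levels' of features."""
    return [image * 0.5 ** k for k in range(4)]

def decoder(features):
    """Stand-in for any decoder: collapse multi-level features to image space."""
    return sum(features)

# One encoder is shared by all three decoders, mirroring the network construction.
def translate_s2t(image):
    """Source domain-to-target domain translation (shared encoder + first decoder)."""
    return decoder(encoder(image))

def translate_t2s(image):
    """Target domain-to-source domain translation (shared encoder + second decoder)."""
    return decoder(encoder(image))

def segment(image):
    """Image segmentation model (shared encoder + segmentation decoder)."""
    logits = decoder(encoder(image))
    return (logits > logits.mean()).astype(np.float32)  # binary mask stand-in

source_img = rng.random((64, 64))               # source image to be processed
class_target_img = translate_s2t(source_img)    # translated class target domain image
recon_source = translate_t2s(class_target_img)  # reconstructed source domain image
mask = segment(source_img)                      # segmentation result
```

Because all three paths reuse `encoder`, updating the encoder for segmentation also changes both translation networks; this shared-weights coupling is what allows the encoder of the optimal segmentation model to be passed back into the image translation network at each iteration.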
And under the current iteration times, selecting one medical image from the source domain data set and one from the target domain data set as, respectively, the source image to be processed and the target image to be processed under the current iteration times, and inputting the source image to be processed and the target image to be processed under the current iteration times into the image translation network under the previous iteration times to obtain a translated class source domain image and a translated class target domain image.
And training the encoder, the image translation decoder module and the discriminator module under the previous iteration number respectively based on the translated class target domain image, the translated class source domain image and the discriminator module under the previous iteration number to obtain the encoder, the image translation decoder module and the discriminator module which are trained under the current iteration number.
And inputting the source image to be processed under the current iteration times, the translated class target domain image, the target image to be processed under the current iteration times and the translated class source domain image respectively into the image segmentation model under the current iteration times to sequentially obtain a first segmentation result, a second segmentation result, a third segmentation result and a fourth segmentation result.
And training the image segmentation model under the current iteration number by adopting a gradient descent method to obtain an optimized image segmentation model under the current iteration number, wherein the result obtained by calculation according to the label of the source image to be processed under the current iteration number, the first segmentation result and the second segmentation result is used as a domain loss.
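The description does not fix a closed form for this domain loss; one common instantiation in medical image segmentation is a soft Dice loss applied to both the first segmentation result (source image) and the second (translated class target domain image) against the shared source-domain label. The sketch below assumes that Dice form, and its function names are inventions of the sketch:

```python
import numpy as np

def dice_loss(pred, label, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary label."""
    inter = (pred * label).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + label.sum() + eps)

def domain_loss(seg_source, seg_class_target, label):
    # Both predictions are supervised by the same source-domain label, since
    # image translation is meant to preserve the anatomical structure.
    return dice_loss(seg_source, label) + dice_loss(seg_class_target, label)

label = np.zeros((8, 8))
label[2:6, 2:6] = 1.0  # toy binary ground-truth mask
```

A gradient descent step would then lower `domain_loss` with respect to the encoder and segmentation decoder parameters.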
And taking a result obtained by calculation according to the third segmentation result and the fourth segmentation result as a target domain consistency loss, training the image segmentation model optimized under the current iteration times by adopting a gradient descent method to obtain an optimal image segmentation model under the current iteration times, and judging whether the optimal image segmentation model under the current iteration times reaches an iteration stop condition to obtain a first judgment result.
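The target domain consistency loss compares the third segmentation result (target image to be processed) with the fourth (translated class source domain image), which depict the same anatomy. The mean-squared form below is one simple choice and is an assumption of this sketch rather than the patent's stated formula:

```python
import numpy as np

def target_consistency_loss(seg_target, seg_class_source):
    """Mean squared difference between the two target-side segmentation maps;
    the target image and its class source domain translation show the same
    anatomy, so their segmentations should agree."""
    return float(np.mean((seg_target - seg_class_source) ** 2))

a = np.full((8, 8), 0.7)  # toy probability map
```

Minimizing this term encourages the segmentation model to give the same answer before and after translation of target-domain data, without needing target-domain labels.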
And if the first judgment result is negative, obtaining an image translation network under the current iteration times according to an encoder in the optimal image segmentation model under the current iteration times and the image translation decoder module trained under the current iteration times, and performing the next iteration.
And if the first judgment result is yes, processing the target domain data set by using the optimal image segmentation model.
In practical applications, the training of the encoder, the image translation decoder module, and the discriminator module under the previous iteration number based on the translated class target domain image, the translated class source domain image, and the discriminator module under the previous iteration number is respectively performed to obtain the encoder, the image translation decoder module, and the discriminator module trained under the current iteration number, which specifically include:
and training the image translation network under the previous iteration number by adopting a gradient descent method to obtain the optimized image translation network under the current iteration number by taking the result obtained by inputting the translated class target domain image and the translated class source domain image into the discriminator module under the previous iteration number as the calculation countermeasure loss.
And inputting the translated type target domain image and the translated type source domain image into the optimized image translation network under the current iteration times to obtain a reconstructed target domain image and a reconstructed source domain image.
And taking the results calculated from the reconstructed source domain image, the reconstructed target domain image, the source image to be processed under the current iteration times and the target image to be processed under the current iteration times as the cycle consistency loss, and respectively training the encoder and the image translation decoder module in the optimized image translation network under the current iteration times by adopting a gradient descent method to obtain the encoder and the image translation decoder module trained under the current iteration times.
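The cycle consistency loss compares each reconstruction with its original image. The L1 form below follows common CycleGAN practice and is assumed rather than taken from the patent:

```python
import numpy as np

def cycle_consistency_loss(recon_source, source_img, recon_target, target_img):
    """L1 cycle-consistency: translating to the other domain and back should
    reproduce the original source and target images."""
    return float(np.mean(np.abs(recon_source - source_img))
                 + np.mean(np.abs(recon_target - target_img)))

x = np.random.default_rng(1).random((16, 16))  # toy image
```

Perfect reconstructions drive this loss to zero, which constrains the two translation networks to be approximate inverses of each other.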
And taking the result calculated from the target image to be processed under the current iteration number, the source image to be processed under the current iteration number, the translated class target domain image and the translated class source domain image as the discriminator countermeasure loss, and training the discriminator module under the previous iteration number by adopting a gradient descent method to obtain the discriminator module under the current iteration number.
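The matching discriminator-side objective pushes images from the true domain toward the "real" score and translated images toward the "fake" score; again the least-squares form is an assumption of this sketch:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Least-squares discriminator loss: score images from the true domain as 1
    and translated (class-domain) images as 0."""
    return 0.5 * float(np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))
```

Alternating this objective with the translation network's adversarial loss implements the usual generator/discriminator minimax training of the generation countermeasure network.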
In practical application, the invention generates an antagonistic network (cycleGAN) thought based on cycle consistencyIt is particularly proposed that the generation countermeasure network shown in fig. 1, wherein the image translation decoding module includes a first image translation decoder and a second image translation decoder; the discriminator module comprises a first discriminator and a second discriminator, and the encoder
Figure DEST_PATH_IMAGE001
Respectively with the first image translation decoder
Figure 977352DEST_PATH_IMAGE002
Image segmentation decoder
Figure DEST_PATH_IMAGE003
And a second image translation decoder
Figure 550285DEST_PATH_IMAGE004
Is connected to the input terminal of the first image translation decoder, and the output terminal of the first image translation decoder is connected to the first discriminator
Figure DEST_PATH_IMAGE005
Is connected to the input terminal of the first image translation decoder, and the output terminal of the second image translation decoder is connected to the second discriminator
Figure 923497DEST_PATH_IMAGE006
Is connected to the input terminal of the controller. The image translation network comprises a source domain-to-target domain image translation network and a target domain-to-source domain image translation network; the encoder and the first image translation decoder comprise the source domain to target domain image translation network; the encoder and the second image translation decoder form the target domain-to-source domain image translation network, and the following gives detailed descriptions of the parts in the countermeasure network:
1) The encoder E: see the encoder on the left side of fig. 1. Its input contains four terms in total, namely the source domain medical image x_s, the translated class target domain image x_s→t, the target domain medical image x_t and the translated class source domain image x_t→s, and it extracts features at four different levels from each of the four inputs (the four feature levels are described in the countermeasure multi-level feature bidirectional alignment module).
2) The image translation decoders U_t and U_s and the image segmentation decoder S:

a) U_t: referring to the second row of the top part of fig. 1, this decoder and the encoder together form the source domain-to-target domain image translation network G_s→t; it is responsible for translating and reconstructing, respectively, the multi-level features of the source domain image and the multi-level features of the class target domain image extracted by the encoder E.

b) U_s: referring to the second column of the lowest part of fig. 1, this decoder and the encoder together form the target domain-to-source domain image translation network G_t→s; it is responsible for performing image translation and reconstruction, respectively, on the multi-level features of the target domain image and the multi-level features of the class source domain image extracted by the encoder E.

c) S: in the middle of the second column of fig. 1, this decoder and the encoder together form the image segmentation model; it is responsible for performing image segmentation on the multi-level image features of all input images extracted by the encoder E.
3) The discriminators D_t and D_s:

a) D_t: responsible for judging the authenticity between the class target domain image output by the source domain-to-target domain image translation network and the real target domain image, so as to guide the translation network to generate more realistic class target domain images.

b) D_s: responsible for judging the authenticity between the class source domain image output by the target domain-to-source domain image translation network and the real source domain image, so as to guide the translation network to generate more realistic class source domain images.
In practical application, the inputting a source graph to be processed and a target graph to be processed under a current iteration number into an image translation network under a previous iteration number to obtain a translated class source domain image and a translated class target domain image specifically includes:
and inputting the source graph to be processed under the current iteration times into a source domain-to-target domain image translation network under the previous iteration times to obtain a translated class target domain image.
And inputting the target graph to be processed under the current iteration times into a target domain-to-source domain image translation network under the previous iteration times to obtain a translated similar source domain image.
In practical applications, training the image translation network under the previous iteration number by a gradient descent method, with the result obtained by inputting the translated class target domain image and the translated class source domain image into the discriminator module under the previous iteration number taken as the calculation countermeasure loss, to obtain the optimized image translation network under the current iteration number, specifically includes:

Inputting the translated class target domain image into the first discriminator under the previous iteration number to obtain the target domain calculation countermeasure loss, and training the source domain-to-target domain image translation network under the previous iteration number by a gradient descent method according to this loss to obtain the optimized source domain-to-target domain image translation network under the current iteration number; inputting the translated class source domain image into the second discriminator under the previous iteration number to obtain the source domain calculation countermeasure loss, and training the target domain-to-source domain image translation network under the previous iteration number by a gradient descent method according to this loss to obtain the optimized target domain-to-source domain image translation network under the current iteration number.
In practical application, the inputting the translated class target domain image and the translated class source domain image into the optimized image translation network under the current iteration number to obtain a reconstructed target domain image and a reconstructed source domain image specifically includes:
inputting the translated class target domain image into an optimized target domain-to-source domain image translation network under the current iteration times to obtain a reconstructed source domain image.
And inputting the translated similar source domain image into an optimized source domain-to-target domain image translation network under the current iteration times to obtain a reconstructed target domain image.
In practical application, training the encoder and the image translation decoder module in the optimized image translation network under the current iteration number by a gradient descent method, with the result calculated from the reconstructed source domain image, the reconstructed target domain image, the source map to be processed under the current iteration number and the target map to be processed under the current iteration number taken as the cycle consistency loss, to obtain the encoder and the image translation decoder module trained under the current iteration number, specifically includes:

And obtaining the source domain gradient-truncated cycle consistency loss according to the reconstructed source domain image and the source map to be processed under the current iteration number.

And obtaining the target domain gradient-truncated cycle consistency loss according to the reconstructed target domain image and the target map to be processed under the current iteration number.

And, according to the source domain gradient-truncated cycle consistency loss, training the encoder and the second image translation decoder in the optimized target domain-to-source domain image translation network under the current iteration number respectively by a gradient descent method; and, according to the target domain gradient-truncated cycle consistency loss, training the encoder and the first image translation decoder in the optimized source domain-to-target domain image translation network under the current iteration number respectively by a gradient descent method, so as to obtain the encoder, the first image translation decoder and the second image translation decoder trained under the current iteration number.
In practical application, training the discriminator module under the previous iteration number by a gradient descent method, with the result calculated from the target graph to be processed under the current iteration number, the source graph to be processed under the current iteration number, the translated class target domain image and the translated class source domain image taken as the discriminator countermeasure loss, to obtain the discriminator module under the current iteration number, specifically includes:

And calculating the first discriminator countermeasure loss according to the target graph to be processed under the current iteration number and the translated class target domain image.

And training the first discriminator under the previous iteration number by a gradient descent method according to the first discriminator countermeasure loss to obtain the first discriminator under the current iteration number.

And calculating the second discriminator countermeasure loss according to the source graph to be processed under the current iteration number and the translated class source domain image.

And training the second discriminator under the previous iteration number by a gradient descent method according to the second discriminator countermeasure loss to obtain the second discriminator under the current iteration number.
In practical application, training the image segmentation model under the current iteration number by a gradient descent method, with the result calculated from the label of the source graph to be processed under the current iteration number, the first segmentation result and the second segmentation result taken as the domain loss, to obtain the optimized image segmentation model under the current iteration number, specifically includes:

And training the image segmentation model under the current iteration number by a gradient descent method to obtain the optimized image segmentation model under the current iteration number, wherein the result calculated from the label of the source graph to be processed under the current iteration number and the first segmentation result is taken as the source domain loss, and the result calculated from the label of the source graph to be processed under the current iteration number and the second segmentation result is taken as the class target domain loss.
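As a toy illustration of this step, the sketch below computes a per-pixel cross-entropy for the first segmentation result (source image) and for the second segmentation result (translated class target domain image) against the same source label, then sums them into the domain loss. The exact loss form is not fixed by the text, so cross-entropy here is an assumption, as are all names and numbers:

```python
import math

def cross_entropy(pred_probs, labels):
    """Pixel-wise cross-entropy: pred_probs is a flat list of per-pixel
    class-probability vectors, labels a list of integer class labels.
    A toy stand-in for the segmentation loss (form assumed)."""
    eps = 1e-12
    return -sum(math.log(p[y] + eps) for p, y in zip(pred_probs, labels)) / len(labels)

# y_s: label of the source image; p1/p2: first and second segmentation
# results (for x_s and for the translated class target image). Toy values.
y_s = [0, 1, 1, 0]
p1 = [[0.9, 0.1], [0.2, 0.8], [0.3, 0.7], [0.8, 0.2]]
p2 = [[0.7, 0.3], [0.4, 0.6], [0.1, 0.9], [0.6, 0.4]]

loss_source = cross_entropy(p1, y_s)        # source domain loss
loss_class_target = cross_entropy(p2, y_s)  # class target domain loss
domain_loss = loss_source + loss_class_target
```

The key point mirrored here is that both segmentation results are supervised with the same source-domain label, since the translated class target image keeps the source image's anatomy.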
The embodiment of the present invention further provides an unsupervised domain adaptation system for cross-data source medical image segmentation, including:
the system comprises a construction module, a training data set and a database module, wherein the construction module is used for constructing a generation countermeasure network and acquiring the training data set, and the generation countermeasure network comprises an encoder, an image translation decoder module, an image segmentation decoder and a discriminator module; the encoder, the image translation decoder module and the discriminator module are connected in series, the encoder is also connected with the image segmentation decoder, and the encoder and the image translation decoder module form an image translation network; the encoder and the image segmentation decoder constitute an image segmentation model; the training dataset comprises a source domain dataset and a target domain dataset each consisting of a plurality of medical images.
And the translation module is used for selecting one medical image in the source domain data set and the target domain data set respectively as a source image to be processed and a target image to be processed under the current iteration times, and inputting the source image to be processed and the target image to be processed under the current iteration times into the image translation network under the previous iteration times to obtain a translated similar source domain image and a translated similar target domain image.
And the encoding and decoding discriminator updating module is used for respectively training the encoder, the image translation decoder module and the discriminator module under the previous iteration times based on the translated class target domain image, the translated class source domain image and the discriminator module under the previous iteration times to obtain the encoder, the image translation decoder module and the discriminator module which are trained under the current iteration times.
And the image segmentation module is used for respectively inputting the source image to be processed under the current iteration times, the translated class target domain image, the target image to be processed under the current iteration times and the translated class source domain image into the image segmentation model under the current iteration times to sequentially obtain a first segmentation result, a second segmentation result, a third segmentation result and a fourth segmentation result.
And the segmentation model first optimization module is used for training the image segmentation model under the current iteration times by a gradient descent method, with the result calculated from the label of the source image to be processed under the current iteration times, the first segmentation result and the second segmentation result taken as the domain loss, to obtain the optimized image segmentation model under the current iteration times.

And the segmentation model final optimization module is used for training the image segmentation model optimized under the current iteration times by a gradient descent method, with the result calculated from the third segmentation result and the fourth segmentation result taken as the target domain consistency loss, to obtain the optimal image segmentation model under the current iteration times, and for judging whether the optimal image segmentation model under the current iteration times reaches the iteration stop condition to obtain a first judgment result.
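The text states only that the target domain consistency loss is calculated from the third and fourth segmentation results (for the target image and the translated class source image); the mean-squared-difference form sketched below is an illustrative assumption, not the patent's stated formula:

```python
def consistency_loss(p_a, p_b):
    """Mean squared difference between two per-pixel foreground-probability
    maps. The squared-error form is an assumption; a KL divergence between
    softmax outputs would be an equally plausible choice."""
    return sum((a - b) ** 2 for a, b in zip(p_a, p_b)) / len(p_a)

p3 = [0.9, 0.2, 0.7, 0.4]   # third segmentation result (target image), toy values
p4 = [0.8, 0.3, 0.7, 0.5]   # fourth segmentation result (class source image)
loss_cons = consistency_loss(p3, p4)
```

Minimizing this loss pushes the segmentation model to predict the same mask for a target image and its translated counterpart, which is how the unlabeled target domain contributes a training signal.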
And if the first judgment result is negative, the iteration loop module is used for obtaining an image translation network under the current iteration times according to the encoder in the optimal image segmentation model under the current iteration times and the image translation decoder module trained under the current iteration times and performing the next iteration.
And the data processing module is used for processing the target domain data set by using the optimal image segmentation model if the first judgment result is yes.
In practical applications, the encoding and decoding discriminator updating module specifically includes:
And the translation network optimization submodule is used for training the image translation network under the previous iteration times by a gradient descent method, with the result obtained by inputting the translated class target domain image and the translated class source domain image into the discriminator module under the previous iteration times taken as the calculation countermeasure loss, to obtain the optimized image translation network under the current iteration times.

And the reconstructed map determining submodule is used for inputting the translated class target domain image and the translated class source domain image into the optimized image translation network under the current iteration times to obtain the reconstructed target domain image and the reconstructed source domain image.

And the coding, decoding and updating submodule is used for training the encoder and the image translation decoder module in the optimized image translation network under the current iteration times by a gradient descent method, with the result calculated from the reconstructed source domain image, the reconstructed target domain image, the source image to be processed under the current iteration times and the target image to be processed under the current iteration times taken as the cycle consistency loss, to obtain the encoder and the image translation decoder module trained under the current iteration times.

And the discriminator updating submodule is used for training the discriminator module under the previous iteration times by a gradient descent method, with the result calculated from the target graph to be processed under the current iteration times, the source graph to be processed under the current iteration times, the translated class target domain image and the translated class source domain image taken as the discriminator countermeasure loss, to obtain the discriminator module under the current iteration times.
In practical application, the image translation decoding module comprises a first image translation decoder and a second image translation decoder; the discriminator module comprises a first discriminator and a second discriminator.
The output end of the encoder is respectively connected with the input ends of the first image translation decoder, the image segmentation decoder and the second image translation decoder, the output end of the first image translation decoder is connected with the input end of the first discriminator, and the output end of the second image translation decoder is connected with the input end of the second discriminator.
The image translation network comprises a source domain-to-target domain image translation network and a target domain-to-source domain image translation network; the encoder and the first image translation decoder form the source domain-to-target domain image translation network, and the encoder and the second image translation decoder form the target domain-to-source domain image translation network.
In practical applications, the image segmentation model has the structure shown in fig. 3 and includes an encoder E and an image segmentation decoder S connected in sequence, where E includes, connected in sequence, a first convolution block network C1, a first downsampling layer, a second residual convolution block network R2, a second downsampling layer, a third residual convolution block network R3, a third downsampling layer, a fourth residual convolution block network R4, a fifth residual convolution block network R5 repeated four times, a ninth residual convolution block network R9 and a tenth convolution block network C10. S comprises, connected in sequence, a fourth upsampling layer, an eleventh convolution block network C11, a third upsampling layer, a twelfth convolution block network C12, a second upsampling layer, a thirteenth convolution block network C13 and a fourteenth single-layer convolution network C14. In addition, the dotted arrows in the middle of fig. 3 represent the connection relationship between E and S: since the skip connections of UNet are adopted, S utilizes the output feature maps of the corresponding hierarchical convolution networks of E, which further improves the segmentation performance of the segmentation network.
In practical applications, the source domain-to-target domain image translation network and the target domain-to-source domain image translation network have the same structure, shown in fig. 5: E and U connected in sequence, where E is the same as the E in fig. 3, and U includes, connected in sequence, an eleventh residual convolution block network R11 repeated twice, a seventh upsampling layer, a thirteenth residual convolution block network R13, a sixth upsampling layer, a fourteenth residual convolution block network R14, a fifth upsampling layer, a fifteenth residual convolution block network R15, a sixteenth convolution block network C16 and a seventeenth single-layer convolution network C17. Similar to the skip connections of the image segmentation network in fig. 3, U in fig. 5 is also connected to E by dashed arrows in order to utilize the feature maps output by each level of the encoder E. Taking a 256 × 256 input image as an example, the first-level feature map size is 256 × 256, the second-level 128 × 128, the third-level 64 × 64, and the last, fourth-level 32 × 32; the higher the level, the smaller the resolution. The skip connections between S and E, and between U and E, are made at the corresponding levels.
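The level resolutions quoted above follow directly from halving the input size at each of the three downsamplings; a small helper makes this explicit (the function name is illustrative):

```python
def feature_map_sizes(input_size, n_downsamples=3):
    """Spatial sizes of the encoder's feature levels: the first level keeps
    the input resolution and each of the three downsamplings halves it,
    matching the 256 -> 128 -> 64 -> 32 example in the text."""
    sizes = [input_size]
    for _ in range(n_downsamples):
        sizes.append(sizes[-1] // 2)
    return sizes

levels = feature_map_sizes(256)   # [256, 128, 64, 32]
```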
As shown in fig. 4, an embodiment of the present invention provides a more specific unsupervised domain adaptation method for cross-data source medical image segmentation, and the main workflow is as follows:
s1: and preprocessing the original image to construct a source domain data set and a target domain data set. Preprocessing raw source domain images and target domain images from different data sources (source domain and target domain) to construct a source domain data set
X_s and a target domain data set X_t. The preprocessing operations include image resampling and image data normalization. Specifically, image resampling includes: performing a resampling operation on the input medical image to resample the voxel size of the data to 1 × 1 × 1 mm. Image data normalization includes: calculating the mean and standard deviation of the voxels in the region of interest (ROI) and applying Gaussian normalization to the ROI voxels; finally, the three-dimensional medical image is stored as two-dimensional slices. Preprocessing the original input image improves the segmentation accuracy of the deep learning model on the input image and speeds up processing and analysis, so the method has high efficiency and usability.
S2: an overall network architecture diagram is constructed. The overall network architecture is shown in FIG. 1, including the encoderETwo image translation decoders
Figure 154420DEST_PATH_IMAGE012
And
Figure 755166DEST_PATH_IMAGE013
an image segmentation decoderSAnd two image discriminators
Figure 811983DEST_PATH_IMAGE021
And
Figure 421956DEST_PATH_IMAGE022
s3: the source domain data and the target domain data are randomly chosen from the source domain and target domain data sets. From a source domain data set
X_s and a target domain data set X_t, a source domain medical image x_s and a target domain medical image x_t are randomly and respectively selected as the network inputs.
S4: extraction of multi-level image features using a shared encoder followed by an image decoderAnd (4) image translation, namely updating the related network by utilizing a gradient descent algorithm after calculating the countermeasure loss. Using encodersEExtraction of
Figure 798580DEST_PATH_IMAGE008
By the multi-level features of
Figure 69024DEST_PATH_IMAGE012
Translating the multi-level feature to obtain a translated class target domain image
Figure 772538DEST_PATH_IMAGE009
Then using a discriminator
Figure 724313DEST_PATH_IMAGE021
Will be provided with
Figure 524779DEST_PATH_IMAGE009
Discriminating calculation of countermeasure loss for target domain
Figure DEST_PATH_IMAGE025
And optimizing the image translation network from the source domain to the target domain by combining a gradient descent method
Figure 434966DEST_PATH_IMAGE015
Aligning the multi-level features of the source domain image from the source domain to the target domain; at the same time, using encodersEExtraction of
Figure 688093DEST_PATH_IMAGE010
By the multi-level features of
Figure 177980DEST_PATH_IMAGE013
Translating the multi-level features into a source-like domain image
Figure 770636DEST_PATH_IMAGE011
Then using a discriminator
Figure 382883DEST_PATH_IMAGE022
Will be provided with
Figure 60989DEST_PATH_IMAGE011
Discrimination of calculating countermeasure loss for source domain
Figure 416884DEST_PATH_IMAGE026
And optimizing the target domain-to-source domain image translation network by combining a gradient descent method
Figure 129625DEST_PATH_IMAGE016
To align the multi-level features of the target domain image from the target domain to the source domain. I.e., the bi-directional multilevel features are aligned.
Multi-level image features: the invention focuses on using the encoder E to extract multi-level features for bidirectional alignment. The encoder E first applies a convolution block operation with kernel size 5 to the input picture, outputting a high-resolution feature map of size 64 × 256 × 256; after one downsampling, it applies a residual convolution block operation with kernel size 3 to obtain a medium-high-resolution feature map of size 96 × 128 × 128; after downsampling once more, it applies a residual convolution block operation with kernel size 3 to obtain a medium-resolution feature map of size 128 × 64 × 64; and after one further downsampling, it applies a residual convolution block operation with kernel size 3 and an ordinary convolution block operation to obtain a low-resolution feature map of size 512 × 32 × 32. Using the encoder E to extract these four levels of features of the input image allows the features of the source domain image and the target domain image to be aligned bidirectionally in four feature spaces of different levels, and also helps to generate high-quality translated images.
The encoder extracts features at four different levels of the input image; the image translation decoder then performs two residual-connection convolution operations on the low-resolution feature map, upsamples and convolves it together with the medium-resolution feature map, upsamples and convolves again together with the medium-high-resolution feature map, and, after upsampling and convolving together with the high-resolution feature map, outputs the final translated image. Thus, at each upsampling, the image features extracted by the corresponding level of the encoder are introduced as input, which serves two purposes:
1) by utilizing the image characteristics of the multi-level encoder, a more refined and high-quality translation image is obtained.
2) The discriminator can direct the bi-directional alignment of the source and target domain image features in the multi-level feature space.
The image discriminator is used to judge whether the input image comes from the source domain or the target domain; its network structure is shown in fig. 2. As can be seen, it maps a 1 × 256 × 256 input image, after several stride-2 convolutions, to a 1 × 13 × 13 logit map, where a value of 1 in the logit map indicates that the input comes from the source domain and 0 indicates that it comes from the target domain.
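The spatial size of each convolution output follows the standard formula out = ⌊(in + 2·padding − kernel) / stride⌋ + 1. The exact kernel and padding values that produce the 1 × 13 × 13 logit map of fig. 2 are not given in the text, so the stride-2 settings below (kernel 4, padding 1) are hypothetical and only illustrate how repeated stride-2 convolutions shrink a 256 × 256 input:

```python
def conv_out(size, kernel, stride, padding):
    """Output spatial size of a convolution layer:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

size = 256
for _ in range(4):                     # four hypothetical stride-2 convolutions
    size = conv_out(size, kernel=4, stride=2, padding=1)
# size is now 16; other kernel/padding choices would yield the 13 of fig. 2
```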
The countermeasure losses L_adv^t and L_adv^s are calculated by the following expressions:

L_adv^t = E_{x_t∼X_t}[log D_t(x_t)] + E_{x_s∼X_s}[log(1 − D_t(G_s→t(x_s)))]

L_adv^s = E_{x_s∼X_s}[log D_s(x_s)] + E_{x_t∼X_t}[log(1 − D_s(G_t→s(x_t)))]

where D_t denotes the first discriminator, D_s denotes the second discriminator, x_t∼X_t indicates that the target domain data sample x_t is sampled from the target domain data set X_t and, in the same way, x_s∼X_s indicates that the source domain data sample x_s is sampled from the source domain data set X_s; E denotes the mathematical expectation, G_s→t denotes the source domain-to-target domain image translation network, and G_t→s denotes the target domain-to-source domain image translation network.
S5: and performing retranslation on the class source domain image and the class target domain image by using an image translation network to generate a reconstructed image, calculating gradient truncation type cyclic consistency loss, and updating parameters of a related network by using a gradient descent algorithm. Will be provided with
Figure 209804DEST_PATH_IMAGE009
Input target domain to source domain image translation network
Figure 445613DEST_PATH_IMAGE016
Obtaining a reconstructed source domain image
Figure DEST_PATH_IMAGE039
(here too obtain
Figure 253032DEST_PATH_IMAGE009
Multi-level features of) followed by
Figure 548884DEST_PATH_IMAGE008
Computing source domain gradient truncation-type cyclic consistency loss
Figure 910595DEST_PATH_IMAGE040
Using the loss in combination with the gradient descent method to respectively align the encodersEAnd
Figure 950095DEST_PATH_IMAGE013
updating is carried out; at the same time, will
Figure 346442DEST_PATH_IMAGE011
Input source domain to target domain image translation network
Figure 750878DEST_PATH_IMAGE015
Obtaining a reconstructed target domain image
Figure DEST_PATH_IMAGE041
(here too obtain
Figure 193361DEST_PATH_IMAGE011
Multi-level features of) followed by
Figure 708656DEST_PATH_IMAGE010
Calculating target domain gradient truncation type cyclic consistency loss
Figure 756246DEST_PATH_IMAGE042
Using the loss in combination with the gradient descent method to respectively align the encodersEAnd
Figure 597163DEST_PATH_IMAGE012
and (6) updating.
Gradient-truncated cycle consistency loss calculation: L_cyc^s and L_cyc^t are improvements on the cycle consistency loss of the original CycleGAN. On the premise of guaranteeing that the content is consistent before and after image translation, they improve the image generation capability of the image translation networks, i.e. they further enhance the realism of the class target domain images and class source domain images generated by the two image translation networks, and in this scheme they play the role of strengthening the data augmentation capability.

The original cycle consistency loss treats the image translation network as a whole, which imposes a strong constraint on the image translation network and limits its image translation capability; its formula is as follows:

L_cyc = E_{x_s∼X_s}[‖G_t→s(G_s→t(x_s)) − x_s‖_1] + E_{x_t∼X_t}[‖G_s→t(G_t→s(x_t)) − x_t‖_1]

The present invention improves the original cycle consistency loss. When the image translation decoders are updated (i.e. u = dec), the formula is as follows:

L_cyc^dec = E_{x_s∼X_s}[‖U_s(sg[E(x_s→t)]) − x_s‖_1] + E_{x_t∼X_t}[‖U_t(sg[E(x_t→s)]) − x_t‖_1]

where sg[·] denotes gradient truncation (stop-gradient), E(x_s→t) denotes the multi-level features extracted by the encoder from x_s→t and, likewise, E(x_t→s) denotes the multi-level features extracted by the encoder from x_t→s.

When the encoder is optimized (i.e. u = enc), the parameters of the three decoders are fixed, and the optimization is performed by using the following formula:

L_cyc^enc = E_{x_s∼X_s}[‖U_s(E(x_s→t)) − x_s‖_1] + E_{x_t∼X_t}[‖U_t(E(x_t→s)) − x_t‖_1]
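The gradient-truncation idea can be sketched numerically: the same L1 reconstruction error is used in both update phases, but when the decoders are updated the encoder features are treated as constants (stop-gradient), and when the encoder is updated the decoder parameters are frozen. With plain Python numbers there are no gradients, so `stop_gradient` below is only an identity marker; all names are illustrative assumptions, not the patent's implementation:

```python
def l1(a, b):
    """Mean absolute (L1) difference between two flat value lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def stop_gradient(features):
    """Stand-in for tensor.detach() / tf.stop_gradient: numerically an
    identity, but in a real framework it blocks gradients to the encoder."""
    return list(features)

def cycle_loss_dec(decoder, enc_feats, original):
    # decoder-update phase (u = dec): encoder features are frozen
    return l1(decoder(stop_gradient(enc_feats)), original)

def cycle_loss_enc(decoder, enc_feats, original):
    # encoder-update phase (u = enc): decoder parameters are frozen
    return l1(decoder(enc_feats), original)

identity_decoder = lambda f: f        # toy decoder
x_s = [0.1, 0.5, 0.9]                 # original source image (toy values)
feats = [0.2, 0.5, 0.8]               # encoder features of the translated image
loss_dec = cycle_loss_dec(identity_decoder, feats, x_s)
loss_enc = cycle_loss_enc(identity_decoder, feats, x_s)
```

Numerically the two losses coincide; the difference lies entirely in which parameters receive gradients, which is why the scheme alternates the two update phases instead of backpropagating through the whole cycle at once.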
s6: and calculating the countermeasure loss at the relevant discriminators by using the source domain image, the class target domain image, the target domain image and the class source domain image, and updating the two discriminators by using a gradient descent algorithm. Will be provided with
Figure 852936DEST_PATH_IMAGE010
Is discriminated as the target domain
Figure 35656DEST_PATH_IMAGE009
The first discriminator confrontation loss is calculated for the source domain by discrimination, and then the discriminator is trained by combining a gradient descent method
Figure 408868DEST_PATH_IMAGE021
(ii) a At the same time, will
Figure 335236DEST_PATH_IMAGE008
Is discriminated as a source domain
Figure 149608DEST_PATH_IMAGE011
Computing second discriminator confrontation loss for the target domain by discrimination, and then training the discriminator by combining a gradient descent method
Figure 565546DEST_PATH_IMAGE022
Training a discriminator, i.e., updating its network parameters, involves three parts: the input, the loss and the optimization algorithm. The input is an original image and a translated image, and the optimization algorithm is the gradient descent method. For the discriminator D_s, the inputs are the source domain image x_s and the class source domain image x_t→s, and the loss is the second discriminator adversarial loss. The discriminator D_s maps x_s to a logic map of value 1 (i.e., it discriminates the input as belonging to the source domain) and maps the class source domain image to a logic map of value 0 (i.e., it discriminates the input as coming from the target domain); the second discriminator adversarial loss L_adv^{D_s} is computed from these logic maps, and the parameters of the discriminator are then updated with the gradient descent method. In the same way, when the discriminator D_t is trained, the input images are the target domain image x_t and the class target domain image x_s→t, and the loss is the first discriminator adversarial loss L_adv^{D_t}.
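A discriminator update of the kind described above (real images pushed toward a logic map of value 1, translated images toward value 0) can be sketched with a least-squares objective. The least-squares form, the function name `lsgan_d_loss` and the toy logit values are assumptions; the patent's exact loss formula is not reproduced here.

```python
# Illustrative least-squares discriminator loss over a flattened logit map:
# mean((D(real) - 1)^2) + mean(D(fake)^2).

def lsgan_d_loss(real_logits, fake_logits):
    """Discriminator loss: real logits attracted to 1, fake logits to 0."""
    real_term = sum((r - 1.0) ** 2 for r in real_logits) / len(real_logits)
    fake_term = sum(f ** 2 for f in fake_logits) / len(fake_logits)
    return real_term + fake_term

# D_s should map the source image x_s to 1 and the translated
# class-source image x_{t->s} to 0.
d_out_on_source = [0.9, 1.1, 0.8]         # near 1 -> small real term
d_out_on_class_source = [0.1, -0.2, 0.0]  # near 0 -> small fake term
loss_ds = lsgan_d_loss(d_out_on_source, d_out_on_class_source)
```

A perfect discriminator (outputs exactly 1 on real and 0 on fake) would reach a loss of zero under this form.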
S7: The segmentation results of the source domain image, the class target domain image, the target domain image and the class source domain image are output by the image segmentation decoder. The image segmentation decoder Dec_seg segments the multi-level features of the source domain image x_s, of the class target domain image x_s→t, of the target domain image x_t and of the class source domain image x_t→s, obtaining the segmentation results p_s, p_s→t, p_t and p_t→s respectively.
Image segmentation network: the image segmentation network is composed of the aforementioned encoder E and the segmentation decoder Dec_seg, as shown in figure 3.

Corresponding to the three downsampling operations of the encoder, the segmentation decoder Dec_seg performs three upsampling and convolution operations. Specifically, at each upsampling operation the segmentation decoder Dec_seg takes as input the image features extracted by the encoder E at the corresponding level, so as to obtain a more refined image segmentation result.
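The pairing of the encoder's three downsampling levels with the decoder's three upsampling stages can be sketched as simple spatial-size bookkeeping; the input size of 64 and all function names are assumptions.

```python
# Track only spatial sizes: each decoder upsampling stage consumes the
# encoder feature map of the matching resolution (a skip connection).

def encoder_sizes(input_size, levels=3):
    """Spatial size of the feature map after each 2x downsampling."""
    sizes = [input_size]
    for _ in range(levels):
        sizes.append(sizes[-1] // 2)
    return sizes  # e.g. [64, 32, 16, 8]

def decoder_sizes(enc_sizes):
    """Pair each 2x upsampling output with the same-size encoder feature."""
    size = enc_sizes[-1]  # bottleneck resolution
    pairs = []
    for skip in reversed(enc_sizes[:-1]):
        size *= 2
        pairs.append((size, skip))  # (upsampled size, skip-connection size)
    return pairs

enc = encoder_sizes(64)
skips = decoder_sizes(enc)  # every upsampled map matches its skip feature
```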
S8: The segmentation results of the source domain image and of the class target domain image are corrected with the source domain label, the segmentation losses are calculated, and the relevant parameters of the image segmentation network are updated with the gradient descent algorithm. Using the annotation y_s of the source domain image and the segmentation results p_s and p_s→t, the losses L_seg^S and L_seg^{S→T} are calculated, and the image segmentation network Seg is then optimized with the gradient descent method so as to guarantee that the segmentation results p_s and p_s→t remain consistent with the source domain image label. (When calculating the losses L_seg^S and L_seg^{S→T}, the source domain data label y_s supervises both, so as to ensure that the segmentation result of the source domain image is consistent with the segmentation result of the class target domain image; the two losses are then combined and the network parameters of the image segmentation network are updated with the gradient descent method. The dotted line indicates a path whose gradient does not affect the updating of the network parameters during back-propagation.)
In order to keep the source domain image and the translated class target domain image consistent in their annotation information, and to ensure that the translated image does not deform excessively, the source domain annotation y_s is used to keep the segmentation result p_s of the original source domain image and the segmentation result p_s→t of the translated class target domain image consistent. The segmentation losses L_seg^S and L_seg^{S→T} are respectively:

L_seg^S = L_wce^S(p_s, y_s) + L_dice^S(p_s, y_s)
L_seg^{S→T} = L_wce^{S→T}(p_s→t, y_s) + L_dice^{S→T}(p_s→t, y_s)
where L_wce and L_dice are respectively the weighted cross-entropy loss and the Dice loss, the superscript S denoting a loss on the source domain and S→T a loss on the translated source domain, i.e., the class target domain. Their expressions are:

L_wce = −Σ_{i,j} Σ_c λ_c · y_{i,j}^c · log(p_{i,j}^c)
L_dice = 1 − (2·Σ_{i,j} p_{i,j}·y_{i,j} + s) / (Σ_{i,j} p_{i,j} + Σ_{i,j} y_{i,j} + s)

where λ_c is the trade-off factor of the classification loss for class c; y_{i,j}^c is the label value at the (i, j)-th position of the image, equal to 1 when that pixel belongs to class c and 0 otherwise; and p_{i,j}^c is the value predicted by the segmentation network for class c (c abbreviating Class) at that pixel, which emphasizes the region to be segmented at position c in the image. In the Dice loss, p_{i,j}·y_{i,j} is the product between the predicted value and the true annotated value, p_{i,j} being the probability value predicted by the segmentation result at that pixel and y_{i,j} the value annotated at that pixel; the numerator is twice the total number of overlapping pixels between the segmentation result map and the true segmentation map plus a smoothing factor s, and the denominator is the sum of the probabilities of the segmentation result map and of the true values plus the smoothing factor s.
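The weighted cross-entropy and smoothed Dice terms described above can be illustrated numerically for a single foreground class. The flattened toy probability map, the unit class weight and the smoothing factor s = 1 are assumptions.

```python
import math

def weighted_ce(pred, label, weight):
    """-sum(lambda_c * y * log(p)) over pixels, single foreground class."""
    eps = 1e-7  # numerical guard for log
    return -sum(weight * y * math.log(p + eps) for p, y in zip(pred, label))

def dice_loss(pred, label, smooth=1.0):
    """1 - (2*sum(p*y) + s) / (sum(p) + sum(y) + s)."""
    inter = sum(p * y for p, y in zip(pred, label))
    return 1.0 - (2.0 * inter + smooth) / (sum(pred) + sum(label) + smooth)

pred  = [0.9, 0.8, 0.1, 0.2]  # predicted foreground probabilities (flattened)
label = [1,   1,   0,   0  ]  # ground-truth annotation y_s

# The segmentation loss combines both terms, as in the expressions above.
seg_loss = weighted_ce(pred, label, weight=1.0) + dice_loss(pred, label)
```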
S9: The class source domain image segmentation result is corrected using the target domain image segmentation result; the segmentation consistency loss is calculated and the relevant parameters of the image segmentation network are updated with the gradient descent algorithm. The target domain image segmentation result p_t is used directly as the target domain image pseudo label ŷ_t, and the consistency loss L_con is calculated jointly with the class source domain image segmentation result p_t→s; the image segmentation network Seg is then optimized with the gradient descent method, i.e., the network parameters of the image segmentation network are updated by combining the loss with the gradient descent method.
Target domain consistency segmentation loss L_con: because the annotation information of the target domain data source is unavailable, the segmentation network cannot be trained with the target domain data source directly. The target domain consistency loss L_con takes the reliable segmentation result p_t of the segmentation network on the original target domain data source as the pseudo label ŷ_t of the translated class source domain image segmentation result p_t→s, so that target domain annotation information can be obtained indirectly to train the segmentation network:

L_con = L_wce(p_t→s, ŷ_t) + L_dice(p_t→s, ŷ_t)

where ŷ_t is obtained from p_t, mainly by applying a confidence constraint to p_t with a threshold of 0.5: a value higher than 0.5 is considered a high-confidence segmentation result and a value lower than 0.5 a low-confidence result, i.e.:

ŷ_t(i, j) = 1 if p_t(i, j) > 0.5, and ŷ_t(i, j) = 0 otherwise
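The confidence constraint and the consistency supervision it enables can be sketched as follows. The toy probability values, and the use of a cross-entropy-plus-Dice form for the consistency loss, are assumptions rather than the patent's exact formulation.

```python
import math

def pseudo_label(p_t, threshold=0.5):
    """Binarise the target-domain prediction: 1 above the threshold, else 0."""
    return [1 if p > threshold else 0 for p in p_t]

def consistency_loss(p_ts, y_hat, smooth=1.0):
    """Cross-entropy plus Dice between p_{t->s} and the pseudo label."""
    eps = 1e-7
    ce = -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
              for p, y in zip(p_ts, y_hat)) / len(y_hat)
    inter = sum(p * y for p, y in zip(p_ts, y_hat))
    dice = 1.0 - (2.0 * inter + smooth) / (sum(p_ts) + sum(y_hat) + smooth)
    return ce + dice

p_t  = [0.95, 0.7, 0.4, 0.05]  # segmentation of the target image (confidences)
p_ts = [0.9,  0.6, 0.2, 0.1]   # segmentation of the class-source image

y_hat_t = pseudo_label(p_t)               # high-confidence pixels become labels
l_con = consistency_loss(p_ts, y_hat_t)   # supervises the class-source branch
```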
S10: Whether a high-precision target domain segmentation model has been obtained is judged; if not, steps S3-S9 are repeated to iteratively train the overall network. If so, only the parameters of the image segmentation network Seg, which generalizes well and segments the target domain images with high precision, are saved, and the other network parts are finally discarded. The image segmentation network Seg serves as the deep learning segmentation model with high segmentation precision on the target domain and is used to segment the target domain images. The annotation information of the target domain is introduced to evaluate the segmentation results of the model on the target domain images; the evaluation index is the Dice similarity, and the greater the Dice similarity, the better the generalization performance of the model on the target domain images and the higher the segmentation precision. Although introducing the target domain annotation information appears to break the unsupervised condition of the method, the target domain annotation information is only used to evaluate the model, not to train it, so the unsupervised domain adaptation method remains unsupervised.
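The Dice similarity used above as the evaluation index can be sketched for binary masks; the toy masks are assumptions.

```python
# Dice similarity coefficient between two binary masks:
# 2 * |A intersect B| / (|A| + |B|); 1.0 means a perfect match.

def dice_similarity(pred_mask, gt_mask):
    """Overlap-based evaluation index for binary segmentation masks."""
    inter = sum(p and g for p, g in zip(pred_mask, gt_mask))
    total = sum(pred_mask) + sum(gt_mask)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect

pred = [1, 1, 0, 0, 1, 0]  # predicted foreground mask (flattened)
gt   = [1, 1, 1, 0, 0, 0]  # target-domain annotation
score = dice_similarity(pred, gt)  # 2*2 overlapping pixels / (3 + 3)
```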
S11: After the other target domain images are preprocessed, the saved image segmentation network is used to segment the preprocessed target domain images, obtaining the target domain image segmentation results. The saved image segmentation network Seg segments the original target domain images: an original target domain image is preprocessed to obtain the input x_t, and the saved image segmentation network Seg then segments the preprocessed target domain image x_t, yielding the final target domain image segmentation result p_t.
The invention has the following technical effects:
1. The unsupervised domain adaptation method provided by the invention trains the segmentation network using the source domain data with its labels and the unlabeled data of the target domain, without using any target domain annotation information; it models and aligns the distributions of the source domain data and the target domain data in hidden spaces at different levels by a deep learning method, and generates pseudo labels for the unlabeled target domain images to train the segmentation model, so that a deep learning model with good segmentation precision on the target domain data source can be obtained even when the degree of domain shift is large. Experimental results show that the segmentation index can be improved by about 60% in the cross-device white matter hyperintensity segmentation scenario and by about 56% in the cross-modality cardiac segmentation scenario.
2. The method uses a deep learning approach that exploits the labeled data of the source domain and the unlabeled data of the target domain simultaneously, and uses the image discriminators to bidirectionally align the data distributions of the source domain and target domain images in a multi-level feature space, thereby eliminating the domain shift between the source domain and the target domain in several feature spaces.
3. To guarantee consistency before and after image translation, the invention improves on the cycle consistency loss applied by the original CycleGAN, proposing a gradient-truncated cycle consistency loss that relaxes the constraint of the original cycle consistency loss and improves the image quality of the class target domain and class source domain images.
4. The invention can supervise the segmentation results of the source domain image and of the class target domain image with the source domain annotation information, and can prompt the image translation network to perform data augmentation oriented toward the target domain data source. The segmentation result of the target domain image can supervise the segmentation result of the class source domain image, introducing reliable annotation information for the target domain data source.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. An unsupervised domain adaptation method for medical image segmentation across data sources, comprising:
constructing a generation countermeasure network and acquiring a training data set, wherein the generation countermeasure network comprises an encoder, an image translation decoder module, an image segmentation decoder and a discriminator module; the encoder, the image translation decoder module and the discriminator module are connected in series, the encoder is also connected with the image segmentation decoder, and the encoder and the image translation decoder module form an image translation network; the encoder and the image segmentation decoder constitute an image segmentation model; the training data set comprises a source domain data set and a target domain data set, each of which is composed of a plurality of medical images;
under the current iteration times, one medical image is selected from each of the source domain data set and the target domain data set to serve respectively as a source image to be processed and a target image to be processed under the current iteration times, and the source image to be processed and the target image to be processed under the current iteration times are input into an image translation network under the last iteration times to obtain a translated class source domain image and a translated class target domain image;
respectively training an encoder, an image translation decoder module and a discriminator module under the previous iteration number based on the translated class target domain image, the translated class source domain image and the discriminator module under the previous iteration number to obtain the encoder, the image translation decoder module and the discriminator module which are trained under the current iteration number;
inputting the source graph to be processed under the current iteration number, the translated class target domain image, the target graph to be processed under the current iteration number and the translated class source domain image into an image segmentation model under the current iteration number respectively to obtain a first segmentation result, a second segmentation result, a third segmentation result and a fourth segmentation result in sequence;
training the image segmentation model under the current iteration number by adopting a gradient descent method to obtain an optimized image segmentation model under the current iteration number, wherein the result obtained by calculation according to the label of the source image to be processed under the current iteration number, the first segmentation result and the second segmentation result is used as a domain loss;
taking a result obtained by calculation according to the third segmentation result and the fourth segmentation result as a target domain consistency loss, training the image segmentation model optimized under the current iteration times by adopting a gradient descent method to obtain an optimal image segmentation model under the current iteration times, and judging whether the optimal image segmentation model under the current iteration times reaches an iteration stop condition or not to obtain a first judgment result;
if the first judgment result is negative, obtaining an image translation network under the current iteration times according to an encoder in the optimal image segmentation model under the current iteration times and an image translation decoder module trained under the current iteration times, and performing next iteration;
and if the first judgment result is yes, processing the target domain data set by using the optimal image segmentation model.
2. The unsupervised domain adaptation method for cross-data-source medical image segmentation according to claim 1, wherein the training of the encoder, the image translation decoder module, and the discriminator module at the previous iteration number based on the translated target-like domain image, the translated source-like domain image, and the discriminator module at the previous iteration number respectively obtains the encoder, the image translation decoder module, and the discriminator module trained at the current iteration number, and specifically includes:
taking the result obtained by inputting the translated class target domain image and the translated class source domain image into the discriminator module under the last iteration number as the computed adversarial loss, and training the image translation network under the last iteration number by a gradient descent method to obtain the optimized image translation network under the current iteration number;
inputting the translated class target domain image and the translated class source domain image into the optimized image translation network under the current iteration times to obtain a reconstructed target domain image and a reconstructed source domain image;
respectively training an encoder and an image translation decoder module in the optimized image translation network under the current iteration times by adopting a gradient descent method to obtain the encoder and the image translation decoder module trained under the current iteration times, wherein results obtained by calculation according to the reconstructed source domain image, the reconstructed target domain image, the source graph to be processed under the current iteration times and the target graph to be processed under the current iteration times are used as cycle consistency loss;
and training the discriminator module under the previous iteration number by adopting a gradient descent method to obtain the discriminator module under the current iteration number by taking a result obtained by calculation according to the target graph to be processed under the current iteration number, the source graph to be processed under the current iteration number, the translated class target domain image and the translated class source domain image as the discriminator countermeasure loss.
3. The unsupervised domain adaptation method for medical image segmentation across data sources of claim 2, wherein the image translation decoding module comprises a first image translation decoder and a second image translation decoder; the discriminator module comprises a first discriminator and a second discriminator;
the output end of the encoder is respectively connected with the input ends of the first image translation decoder, the image segmentation decoder and the second image translation decoder, the output end of the first image translation decoder is connected with the input end of the first discriminator, and the output end of the second image translation decoder is connected with the input end of the second discriminator;
the image translation network comprises a source domain-to-target domain image translation network and a target domain-to-source domain image translation network; the encoder and the first image translation decoder comprise the source domain to target domain image translation network; the encoder and the second image translation decoder comprise the target domain-to-source domain image translation network.
4. The unsupervised domain adaptation method for medical image segmentation across data sources as claimed in claim 3, wherein the inputting the source image to be processed and the target image to be processed at a current iteration number into the image translation network at a previous iteration number to obtain the translated source domain image and the translated target domain image specifically comprises:
inputting the source graph to be processed under the current iteration times into a source domain-to-target domain image translation network under the previous iteration times to obtain a translated class target domain image;
and inputting the target graph to be processed under the current iteration times into a target domain-to-source domain image translation network under the previous iteration times to obtain a translated class source domain image.
5. The unsupervised domain adaptation method for medical image segmentation across data sources of claim 3, wherein the training of the image translation network at the previous iteration number by using a gradient descent method to obtain the optimized image translation network at the current iteration number, based on the result obtained by inputting the translated class target domain image and the translated class source domain image into the discriminator module at the previous iteration number, comprises:
inputting the translated class target domain image into the first discriminator under the previous iteration number to obtain the target domain adversarial loss; training the source domain-to-target domain image translation network under the previous iteration number by a gradient descent method according to the target domain adversarial loss to obtain an optimized source domain-to-target domain image translation network under the current iteration number; meanwhile, inputting the translated class source domain image into the second discriminator under the previous iteration number to obtain the source domain adversarial loss; and training the target domain-to-source domain image translation network under the previous iteration number by a gradient descent method according to the source domain adversarial loss to obtain an optimized target domain-to-source domain image translation network under the current iteration number.
6. The unsupervised domain adaptation method for medical image segmentation across data sources of claim 3, wherein the inputting the translated class target domain image and the translated class source domain image into the optimized image translation network at the current iteration number to obtain a reconstructed target domain image and a reconstructed source domain image comprises:
inputting the translated class target domain image into an optimized target domain-to-source domain image translation network under the current iteration times to obtain a reconstructed source domain image;
and inputting the translated similar source domain image into an optimized source domain-to-target domain image translation network under the current iteration times to obtain a reconstructed target domain image.
7. The unsupervised domain adaptation method for cross-data-source medical image segmentation according to claim 3, wherein the step of training the encoder and the image translation decoder module in the optimized image translation network for the current iteration number by using a gradient descent method to obtain the encoder and the image translation decoder module trained for the current iteration number, with a result calculated according to the reconstructed source domain image, the reconstructed target domain image, the source map to be processed for the current iteration number, and the target map to be processed for the current iteration number being a loop consistency loss, specifically includes:
obtaining a source domain gradient cut-off type cycle consistency loss according to the reconstructed source domain image and a source image to be processed under the current iteration times;
obtaining a target domain gradient cut-off type cycle consistency loss according to the reconstructed target domain image and the target image to be processed under the current iteration times;
and training the encoder and the second image translation decoder in the image translation network from the optimized target domain to the source domain under the current iteration number respectively by adopting a gradient descent method according to the gradient truncation type cycle consistency loss of the source domain, and training the encoder and the first image translation decoder in the image translation network from the optimized source domain to the target domain under the current iteration number respectively by adopting a gradient descent method according to the gradient truncation type cycle consistency loss of the target domain to obtain the encoder, the first image translation decoder and the second image translation decoder which are trained under the current iteration number.
8. The unsupervised domain adaptation method for data source-crossing medical image segmentation according to claim 3, wherein the result obtained by calculation according to the target image to be processed in the current iteration number, the source image to be processed in the current iteration number, the translated class target domain image, and the translated class source domain image is a discriminator countermeasure loss, and a discriminator module in the previous iteration number is trained by using a gradient descent method to obtain a discriminator module in the current iteration number, specifically comprising:
calculating a first discriminator countermeasure loss according to the target graph to be processed under the current iteration times and the translated class target domain image;
training the first discriminator under the last iteration number by adopting a gradient descent method according to the first discriminator confrontation loss to obtain the first discriminator under the current iteration number;
calculating a second discriminator countermeasure loss according to the source image to be processed under the current iteration times and the translated source-like domain image;
and training the second discriminator under the previous iteration number by adopting a gradient descent method according to the confrontation loss of the second discriminator to obtain the second discriminator under the current iteration number.
9. The unsupervised domain adaptation method for cross-data source medical image segmentation according to claim 1, wherein the training of the image segmentation model in the current iteration number by using a gradient descent method to obtain the optimized image segmentation model in the current iteration number, with a result obtained by calculation according to the label of the source image to be processed in the current iteration number, the first segmentation result, and the second segmentation result being a domain loss, specifically comprises:
and training the image segmentation model under the current iteration number by adopting a gradient descent method to obtain the optimized image segmentation model under the current iteration number, wherein the result obtained by calculation according to the label of the source image to be processed under the current iteration number and the first segmentation result is used as a source domain loss, and the result obtained by calculation according to the label of the source image to be processed under the current iteration number and the second segmentation result is used as a class target domain loss.
10. An unsupervised domain adaptation system for medical image segmentation across data sources, comprising:
the system comprises a construction module, a training data set and a database module, wherein the construction module is used for constructing a generation countermeasure network and acquiring the training data set, and the generation countermeasure network comprises an encoder, an image translation decoder module, an image segmentation decoder and a discriminator module; the encoder, the image translation decoder module and the discriminator module are connected in series, the encoder is also connected with the image segmentation decoder, and the encoder and the image translation decoder module form an image translation network; the encoder and the image segmentation decoder constitute an image segmentation model; the training data set comprises a source domain data set and a target domain data set, each of which is composed of a plurality of medical images;
the translation module is used for, under the current iteration number, selecting one medical image from each of the source domain data set and the target domain data set to serve respectively as a source image to be processed and a target image to be processed under the current iteration number, and inputting the source image to be processed and the target image to be processed under the current iteration number into an image translation network under the previous iteration number to obtain a translated class source domain image and a translated class target domain image;
the encoding and decoding discriminator updating module is used for respectively training the encoder, the image translation decoder module and the discriminator module under the previous iteration times based on the translated class target domain image, the translated class source domain image and the discriminator module under the previous iteration times to obtain the encoder, the image translation decoder module and the discriminator module which are trained under the current iteration times;
the image segmentation module is used for respectively inputting the source image to be processed under the current iteration times, the translated class target domain image, the target image to be processed under the current iteration times and the translated class source domain image into the image segmentation model under the current iteration times to sequentially obtain a first segmentation result, a second segmentation result, a third segmentation result and a fourth segmentation result;
the segmentation model first optimization module is used for training the image segmentation model under the current iteration times by adopting a gradient descent method to obtain an optimized image segmentation model under the current iteration times, wherein a result obtained by calculation according to the label of the source image to be processed under the current iteration times, the first segmentation result and the second segmentation result is used as a domain loss;
a segmentation model final optimization module, configured to train the image segmentation model optimized for the current iteration number by using a result obtained by calculation according to the third segmentation result and the fourth segmentation result as a target domain consistency loss, obtain an optimal image segmentation model for the current iteration number by using a gradient descent method, and determine whether the optimal image segmentation model for the current iteration number reaches an iteration stop condition, so as to obtain a first determination result;
the iteration loop module is used for obtaining an image translation network under the current iteration times and carrying out the next iteration according to an encoder in the optimal image segmentation model under the current iteration times and the image translation decoder module trained under the current iteration times if the first judgment result is negative;
and the data processing module is used for processing the target domain data set by using the optimal image segmentation model if the first judgment result is positive.
CN202210381144.0A 2022-04-13 2022-04-13 Unsupervised domain adaptation method and system for cross-data source medical image segmentation Active CN114463332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210381144.0A CN114463332B (en) 2022-04-13 2022-04-13 Unsupervised domain adaptation method and system for cross-data source medical image segmentation

Publications (2)

Publication Number Publication Date
CN114463332A CN114463332A (en) 2022-05-10
CN114463332B 2022-06-10

Family

ID=81418627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210381144.0A Active CN114463332B (en) 2022-04-13 2022-04-13 Unsupervised domain adaptation method and system for cross-data source medical image segmentation

Country Status (1)

Country Link
CN (1) CN114463332B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109637634A * 2018-12-11 2019-04-16 厦门大学 A medical image synthesis method based on a generative adversarial network
CN110322446A * 2019-07-01 2019-10-11 华中科技大学 A domain-adaptive semantic segmentation method based on similarity-space alignment
CN110738663A * 2019-09-06 2020-01-31 上海衡道医学病理诊断中心有限公司 Dual-domain adaptive module pyramid network and unsupervised domain-adaptive image segmentation method
CN112308862A * 2020-06-04 2021-02-02 北京京东尚科信息技术有限公司 Image semantic segmentation model training and segmentation methods, devices, and storage medium
CN112734764A * 2021-03-31 2021-04-30 电子科技大学 An unsupervised medical image segmentation method based on an adversarial network
CN113344944A * 2021-05-28 2021-09-03 山东师范大学 A domain-adaptation-based medical image segmentation method and system
CN113744233A * 2021-08-30 2021-12-03 河南工业大学 Robust medical image segmentation based on a time-adaptive neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11488021B2 (en) * 2020-06-18 2022-11-01 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image segmentation

Similar Documents

Publication Publication Date Title
Zhao et al. Learning to forecast and refine residual motion for image-to-video generation
Poudel et al. Fast-scnn: Fast semantic segmentation network
CN110992252B (en) Image multi-grid conversion method based on latent variable feature generation
CN111581405A Cross-modal generalized zero-shot retrieval method based on dual-learning generative adversarial networks
CN110689599B 3D visual saliency prediction method based on a non-local enhanced generative adversarial network
CN111369565A (en) Digital pathological image segmentation and classification method based on graph convolution network
Qing et al. Mar: Masked autoencoders for efficient action recognition
WO2019196718A1 (en) Element image generation method, device and system
CN111325660A (en) Remote sensing image style conversion method based on text data
Ma et al. SD-GAN: Saliency-discriminated GAN for remote sensing image superresolution
Zhou et al. Attention transfer network for nature image matting
CN115526801A Automatic color homogenization method and device for remote sensing images based on a conditional adversarial neural network
CN114723950A (en) Cross-modal medical image segmentation method based on symmetric adaptive network
CN116229106A (en) Video significance prediction method based on double-U structure
CN115661165A (en) Glioma fusion segmentation system and method based on attention enhancement coding and decoding network
Tan et al. Nope-sac: Neural one-plane ransac for sparse-view planar 3d reconstruction
CN112541566B (en) Image translation method based on reconstruction loss
Yu et al. Multiprior learning via neural architecture search for blind face restoration
CN114463332B (en) Unsupervised domain adaptation method and system for cross-data source medical image segmentation
CN117315244A (en) Multi-scale feature fused medical image segmentation method, device and storage medium
CN115496134A (en) Traffic scene video description generation method and device based on multi-modal feature fusion
CN113658285B (en) Method for generating face photo to artistic sketch
CN114049939A (en) Pneumonia CT image generation method based on UNet-GAN network
CN113205521A (en) Image segmentation method of medical image data
CN117974693B (en) Image segmentation method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant