CN115393361B - Skin disease image segmentation method, device, equipment and medium with low annotation cost - Google Patents

Skin disease image segmentation method, device, equipment and medium with low annotation cost Download PDF

Info

Publication number
CN115393361B
CN115393361B CN202211332281.1A CN202211332281A CN115393361B CN 115393361 B CN115393361 B CN 115393361B CN 202211332281 A CN202211332281 A CN 202211332281A CN 115393361 B CN115393361 B CN 115393361B
Authority
CN
China
Prior art keywords
skin disease
labeling
disease image
pixels
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211332281.1A
Other languages
Chinese (zh)
Other versions
CN115393361A (en
Inventor
梁桥康
秦海
肖海华
邹坤霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202211332281.1A priority Critical patent/CN115393361B/en
Publication of CN115393361A publication Critical patent/CN115393361A/en
Application granted granted Critical
Publication of CN115393361B publication Critical patent/CN115393361B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method, a device, equipment and a medium for segmenting a skin disease image with low annotation cost, wherein the method comprises the following steps: constructing a skin disease image data set, including unmarked and marked skin disease images; designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module; training and labeling each prediction model in batches by using a skin disease image data set, wherein an active learning method of a multi-uncertainty strategy and a semi-supervised learning method based on a shared query value strategy are adopted, and the prediction model obtained by current training is combined with expert labeling to label the current unmarked skin disease images of any batch; repeating iterative training on each prediction model by using the marked skin disease image; and carrying out segmentation and labeling on the skin disease image to be segmented by using the trained skin disease image segmentation network. The invention can still obtain good segmentation effect under the condition of less labeled samples.

Description

Skin disease image segmentation method, device, equipment and medium with low annotation cost
Technical Field
The invention relates to the field of image processing, in particular to a method, a device, equipment and a medium for segmenting a skin disease image with low labeling cost.
Background
Malignant melanoma is one of the fastest growing cancers in the world, with high morbidity and mortality. If it can be found early, a cure rate of 95% can be achieved. At present, the clinical diagnosis is mainly carried out by a skin mirror image. In computer-aided medicine, if the lesion in the dermatoscope image is effectively segmented, the accuracy of skin disease detection can be obviously improved, and great convenience is brought to a dermatologist for judging whether the melanoma is generated.
With the continuous development of artificial intelligence technology, deep learning is widely applied in the field of computer vision. Currently, many excellent segmentation models based on deep learning are used for medical image segmentation tasks, such as FCN, UNet, segNet, etc. However, these models can achieve better segmentation only when there are a large number of labeled training samples. However, a real problem is that often a large number of unmarked raw images are readily available, and it is not possible to spend a lot of time marking these images due to the limited energy of the physician. In the current common skin disease segmentation task, a single segmentation network is used, and the problem of poor robustness exists.
Disclosure of Invention
Aiming at the problems that a large number of unmarked images are easy to obtain at present and experts cannot spend a large amount of time and energy to mark each image, the method, the device, the equipment and the medium for segmenting the skin disease image with low marking cost are provided, and a good segmenting effect can be obtained under the condition of less marked samples.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a low-annotation-cost skin disease image segmentation method comprises the following steps:
constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
training and labeling each prediction model in batches by using a skin disease image dataset: firstly, dividing unmarked skin disease images into a plurality of batches; then, training a prediction model by using the currently labeled skin disease image; then, an active learning method of various uncertain strategies and a semi-supervised learning method based on a shared query value strategy are adopted, a prediction model obtained by current training is combined with expert labeling, and the current unmarked skin disease images of any batch are labeled; then, fusing the output labels of the prediction models by using a multi-model fusion module; until all skin disease images are marked;
repeatedly training each prediction model by using the marked skin disease image until each prediction model converges;
and (3) respectively segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and then fusing the output labels of the prediction models by using a multi-model fusion module to complete segmentation and labeling of the skin disease image to be segmented.
In a further skin disease image segmentation method, the active learning method adopting multiple uncertain strategies and the semi-supervised learning method based on the shared query value strategy are combined with expert labeling by using a prediction model obtained by current training, and the current unmarked skin disease images of any batch are labeled, and the method specifically comprises the following steps:
two active learning uncertainty strategies are employed
Figure DEST_PATH_IMAGE001
And
Figure DEST_PATH_IMAGE002
separately for pixels in the unmarked skin disease image
Figure DEST_PATH_IMAGE003
Performing pre-classification, and recording the pre-classification as
Figure DEST_PATH_IMAGE004
And
Figure DEST_PATH_IMAGE005
introducing random query factors
Figure DEST_PATH_IMAGE006
For random query factor
Figure 878918DEST_PATH_IMAGE006
And pre-classification
Figure 592796DEST_PATH_IMAGE004
Figure 106954DEST_PATH_IMAGE005
Weighting to obtain pixels
Figure DEST_PATH_IMAGE007
Confidence of classification of (2)
Figure DEST_PATH_IMAGE008
By sharing classification confidence
Figure 580792DEST_PATH_IMAGE008
And distributing a pseudo label to the pixel with the classification confidence coefficient reaching a preset value to finish labeling, and handing the rest pixels serving as uncertain pixels to an expert to finish labeling.
In a further skin disease image segmentation method, two active learning uncertainty strategies are adopted
Figure 395164DEST_PATH_IMAGE001
And
Figure 952047DEST_PATH_IMAGE002
the method for pre-classifying the pixels comprises the following steps:
Figure DEST_PATH_IMAGE009
in the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE010
is shown as
Figure DEST_PATH_IMAGE011
The prediction model is used for predicting the prediction model,
Figure DEST_PATH_IMAGE012
a class of pixels is represented by a number of pixels,
Figure DEST_PATH_IMAGE013
the number of classes representing the pixel,
Figure DEST_PATH_IMAGE014
representing a pixel
Figure 812556DEST_PATH_IMAGE003
The label of (a) to (b),
Figure DEST_PATH_IMAGE015
representing a predictive model
Figure DEST_PATH_IMAGE016
For the pixel
Figure 287488DEST_PATH_IMAGE003
Output tag of (2)
Figure DEST_PATH_IMAGE017
The probability of (c).
In a further method of skin disease image segmentation, classification confidence is shared
Figure 487525DEST_PATH_IMAGE008
The method for distributing the pseudo labels for the pixels with the classification confidence reaching the preset value comprises the following steps:
Figure DEST_PATH_IMAGE018
in the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE019
is an assigned pseudo label.
In a further skin disease image segmentation method, the multi-model fusion module classifies pixels according to the magnitude of voting entropy, and the voting entropy
Figure DEST_PATH_IMAGE020
The calculating method comprises the following steps:
Figure DEST_PATH_IMAGE021
in the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE022
denotes the first
Figure 90675DEST_PATH_IMAGE011
A prediction model to the pixel
Figure 845005DEST_PATH_IMAGE003
The output tag of (1) or an expert-tagged tag.
A low annotation cost dermatologic image segmentation apparatus comprising:
a dataset construction module to: constructing a skin disease image data set which comprises unmarked skin disease images and marked skin disease images, wherein the number of the unmarked skin disease images is more than that of the marked skin disease images;
a split network design module to: designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
an image annotation module to: training and labeling each prediction model in batches using the skin disease image dataset: firstly, dividing unmarked skin disease images into a plurality of batches; then, training a prediction model by using the currently labeled skin disease image; then, an active learning method of various uncertain strategies and a semi-supervised learning method based on a shared query value strategy are adopted, a prediction model obtained by current training is combined with expert labeling, and the current unmarked skin disease images of any batch are labeled; then, fusing the output labels of the prediction models by using a multi-model fusion module; until all skin disease images are marked;
a model training module to: repeating iterative training on each prediction model by using the marked skin disease image until each prediction model converges;
an image segmentation module to: and (3) respectively segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and then fusing the output labels of the prediction models by using a multi-model fusion module to complete segmentation and labeling of the skin disease image to be segmented.
In a further skin disease image segmentation apparatus, a specific process of labeling a skin disease image by the image labeling module includes:
two active learning uncertainty strategies are employed
Figure 316437DEST_PATH_IMAGE001
And
Figure 636560DEST_PATH_IMAGE002
separately for pixels in the unmarked skin disease image
Figure 800825DEST_PATH_IMAGE003
Performing pre-classification, and recording the pre-classification as
Figure 839189DEST_PATH_IMAGE004
And
Figure 114312DEST_PATH_IMAGE005
introducing random query factors
Figure 492204DEST_PATH_IMAGE006
For random query factor
Figure 624108DEST_PATH_IMAGE006
And pre-classification
Figure 87450DEST_PATH_IMAGE004
Figure 713735DEST_PATH_IMAGE005
Weighting to obtain pixels
Figure 211712DEST_PATH_IMAGE007
Confidence of classification of
Figure 717780DEST_PATH_IMAGE008
By sharing classification confidence
Figure 465156DEST_PATH_IMAGE008
Distributing a pseudo label to the pixels with the classification confidence reaching the preset value to finish labeling, and taking the rest pixels as uncertain pixelsThe pixels are submitted to an expert to finish labeling;
and (5) continuing to train the prediction model by using the currently labeled skin disease image.
In a further skin disease image segmentation device, two active learning uncertainty strategies are used
Figure 82082DEST_PATH_IMAGE001
And
Figure 434566DEST_PATH_IMAGE002
the method for pre-classifying the pixels comprises the following steps:
Figure DEST_PATH_IMAGE023
in the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE024
is shown as
Figure 705011DEST_PATH_IMAGE011
The prediction model is used for predicting the prediction model,
Figure 877366DEST_PATH_IMAGE012
a class of pixels is represented by a number of pixels,
Figure 845453DEST_PATH_IMAGE013
the number of classes representing the pixel,
Figure 318023DEST_PATH_IMAGE014
representing a pixel
Figure 165893DEST_PATH_IMAGE003
The label of (a) is used,
Figure 887861DEST_PATH_IMAGE015
representing a predictive model
Figure 846590DEST_PATH_IMAGE016
For the pixel
Figure 908087DEST_PATH_IMAGE003
Output tag of
Figure 989175DEST_PATH_IMAGE017
The probability of (c).
An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to implement the skin disease image segmentation method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements a dermatological image segmentation method as defined in one of the above.
Advantageous effects
The invention can effectively reduce the label amount of the skin mirror image, reduce the cost of the segmentation network with the labeled data set and improve the robustness of the segmentation network. The scheme has high segmentation precision and reduced cost, and can assist dermatologists in diagnosing clinical tasks of melanoma. Compared with the existing skin disease image segmentation technology, the method has the following advantages:
(1) The random query method is introduced into two different active learning uncertainty strategy queries, different weights are respectively given to the two different active learning uncertainty strategy queries, the problem of consistent deviation of query pixels can be solved, therefore, difficult pixels with high uncertainty in a skin mirror image can be queried more accurately and are labeled by experts, and the defect of consistent deviation of uncertainty strategies can be effectively overcome.
(2) The multi-model fusion segmentation method provided by the invention can effectively improve the segmentation performance of a single prediction model and improve the robustness of an integrated segmentation network.
(3) The method has strong practicability, can not only mark image data sets with less number, but also keep higher segmentation effect, and effectively improves the performance of the model.
Drawings
Fig. 1 is a schematic diagram of a skin disease image segmentation method with low annotation cost according to an embodiment of the invention.
Fig. 2 is a schematic diagram of various active learning uncertainty query methods with random methods introduced in the embodiment of the present invention.
FIG. 3 is a diagram illustrating a multi-model fusion segmentation method according to an embodiment of the present invention.
FIG. 4 is a graph of the average pixel label amount per image according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the results of the dermoscopic image segmentation test according to the embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail, which are developed based on the technical solutions of the present invention, and give detailed implementation manners and specific operation procedures to further explain the technical solutions of the present invention.
The embodiment provides a skin disease image segmentation method with low annotation cost, which can be used for experiments by using a Python programming language, and can also be used for engineering applications by using a C/C + + programming language. Referring to fig. 1, the method comprises the following steps:
step 1, a skin disease image data set is constructed, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is larger than that of the marked skin disease images.
In this embodiment, since the accuracy of the segmented network and the generalization accuracy of the test segmented network are to be verified subsequently, the constructed skin disease image data set includes a training set, a verification set and a test set of the model, the training set is used for training the prediction model, the verification set is used for verifying the accuracy of the training model, and the test set is used for testing the generalization accuracy of the model. The training set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is larger than that of the marked skin disease images.
Embodiments employ the international collaborative skin imaging organization (ISIC) to provide digital skin lesion image datasets and expert annotations from all over the world for the diagnosis of melanoma and other cancers, including two sets of ISIC 2016 and ISIC 2017 datasets. The ISIC 2016 dataset contained 900 images (727 non-melanoma and 173 melanoma) for training and 379 images (304 non-melanoma and 75 melanoma) for testing. The pixel size of the image varied from 566 x 679-2848 x 4228. The ISIC 2017 dataset contained 2000 images (1626 non-melanoma and 374 melanoma) for training and 600 images (483 non-melanoma and 117 melanoma) for testing, with the pixel sizes of the images varying from 453 x 679-4499 x 6748.
And 2, designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module.
Training and labeling each prediction model in batches by using a skin disease image dataset: firstly, dividing unmarked skin disease images into a plurality of batches; then, training a prediction model by using the currently labeled skin disease image; then, an active learning method of various uncertain strategies and a semi-supervised learning method based on a shared query value strategy are adopted, a prediction model obtained by current training is combined with expert labeling, and the current unmarked skin disease images of any batch are labeled; then, fusing the output labels of the prediction models by using a multi-model fusion module; until all skin disease images are marked;
step 3, training and labeling each prediction model in batches by using the skin disease image data set:
step 3.1, dividing the unmarked skin disease images into a plurality of batches;
step 3.2, training a prediction model by using the currently labeled skin disease image;
step 3.3, marking the unmarked skin disease images of any current batch by adopting an active learning method of various uncertain strategies and a semi-supervised learning method based on a shared query value strategy and combining a prediction model obtained by current training with expert marking; specifically, the method comprises the following steps:
(1) Two active learning uncertainty strategies are employed
Figure 136123DEST_PATH_IMAGE001
And
Figure 960860DEST_PATH_IMAGE002
separately for pixels in the unmarked skin disease image
Figure 876863DEST_PATH_IMAGE003
Performing pre-classification, and recording the pre-classification as
Figure 66536DEST_PATH_IMAGE004
And
Figure 513829DEST_PATH_IMAGE005
Figure 814360DEST_PATH_IMAGE023
in the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE025
is shown as
Figure 647187DEST_PATH_IMAGE011
A prediction model for the prediction of the target,
Figure 804499DEST_PATH_IMAGE012
a class of pixels is represented by a number of pixels,
Figure 191618DEST_PATH_IMAGE013
the number of categories representing the pixels is,
Figure 295840DEST_PATH_IMAGE014
representing a pixel
Figure 983173DEST_PATH_IMAGE003
The label of (a) is used,
Figure 514649DEST_PATH_IMAGE015
representing a predictive model
Figure 670955DEST_PATH_IMAGE025
For the pixel
Figure 844447DEST_PATH_IMAGE003
Output tag of
Figure 589549DEST_PATH_IMAGE017
The probability of (c).
Computing
Figure 354243DEST_PATH_IMAGE002
The value represents a measure of uncertainty by using the probability of predicting samples for all classes.
Figure 184796DEST_PATH_IMAGE002
The higher the value, the greater the uncertainty.
(2) Introducing a random query factor
Figure 693137DEST_PATH_IMAGE006
For random query factor
Figure 558325DEST_PATH_IMAGE006
And pre-classification
Figure 431603DEST_PATH_IMAGE004
Figure 811769DEST_PATH_IMAGE005
Weighting to obtain pixels
Figure 795906DEST_PATH_IMAGE007
Confidence of classification of (2)
Figure DEST_PATH_IMAGE026
Figure DEST_PATH_IMAGE027
In the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE028
representing the weights of three different query strategies,
Figure DEST_PATH_IMAGE029
representing a pixel
Figure 594228DEST_PATH_IMAGE003
The random query factor of (2).
(3) By sharing classification confidence
Figure 903987DEST_PATH_IMAGE008
To achieve the classification confidence value
Figure DEST_PATH_IMAGE030
The pixel of (2) is assigned with a pseudo label to finish labeling, and the rest pixels are taken as uncertain pixels to be handed to an expert to finish labeling:
Figure DEST_PATH_IMAGE031
in the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE032
the value is marked for the expert and,
Figure 37028DEST_PATH_IMAGE019
is an assigned pseudo label.
And 3.4, fusing the pixel labels by using a multi-model fusion module according to the magnitude of the voting entropy, wherein the voting entropy
Figure DEST_PATH_IMAGE033
The calculation method comprises the following steps:
Figure 637905DEST_PATH_IMAGE021
and 3.5, repeating the steps 3.2 to 3.4 until the labeling of the skin disease images of all batches is completed.
The uncertainty strategy-based active learning module in the embodiment adopts two active learning uncertainty strategies, simultaneously introduces a random query method, weights the three strategies to form a comprehensive querier, can more accurately query the difficult pixels of the dermatoscope image to be labeled by experts, and can effectively solve the defect of consistent deviation of the uncertainty strategies, and the reference is made in fig. 2.
And 4, repeatedly training each prediction model by using the marked skin disease image until each prediction model is converged.
And 5, segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and fusing output labels of the prediction models by using a multi-model fusion module to complete segmentation and labeling of the skin disease image to be segmented.
In the embodiment, the test set is used as the skin disease image to be segmented, and each pixel in the image is classified according to the voting entropy, so that the segmentation of the focus of the skin disease image is completed, as shown in fig. 3.
In step 3 and step 4 of this embodiment, because the sizes of the skin disease images input into the prediction model are not the same, batch loading of data into the network model for training cannot be realized, and the limitation of the memory of the GPU of the hardware device is considered. To this end, the embodiment uniformly scales and crops the input image into a size of 192 × 256.
During the training of the model, the Ubuntu 16.04LTS system is needed, and the system environment needs Pythroch 1.6 and Python 3.6. The hardware platform needs four RTX2080Ti graphics cards as a main computing platform, meanwhile, the CPU memory is not lower than 16G, and the solid state disk is not lower than 256G. In model training, a total of 100 epochs are trained, 16 images are loaded into the system for batch training each time, and the initial learning rate is set to be 0.01.
The embodiment of the invention realizes the segmentation task of the skin disease image under the condition of less marked samples based on the active learning multi-model fusion segmentation network, can effectively reduce the label amount of the skin mirror image, reduce the cost of the segmentation network for a data set with a label, and improve the robustness of the segmentation network, thereby improving the performance of the segmentation network.
Table 1 shows the structural composition of the multi-model fusion network and four networks Net1, net2, net3, and Net4, which respectively include one to five different mainstream skin mirror image segmentation networks in the table to form an integrated segmentation model.
TABLE 1 Multi-model fusion method
Figure DEST_PATH_IMAGE034
Tables 2 and 3 are a comparison of the present invention with the best current dermatoscope image segmentation model. According to the invention, DIC and JAI values of 2016 ISIC can be respectively improved to 94.45% and 89.07% from 88.64% and 81.37%, and the DIC and JAI values can be respectively improved to 87.51% and 80.22% from 79.39% and 72.04% in 2017 ISIC.
Table 2 network architecture performance comparison in ISIC 2016 dataset (%)
Figure DEST_PATH_IMAGE035
Table 3 network architecture ISIC 2017 data set performance comparison (%)
Figure DEST_PATH_IMAGE036
FIG. 4 is the average pixel label amount per image for the present invention. According to the image labeling method based on the uncertainty strategy active learning method and the high-confidence-degree strategy semi-supervised learning method, the average pixel annotation of each image is not more than 15%, so that the most uncertain image pixels can be inquired and annotated, and most of the rest pixels are endowed with pseudo labels.
In order to further verify the effectiveness of the method of the invention, comparison with the method which is mainstream internationally is carried out. Tables 4 and 5 are experimental comparisons between ISIC 2016 and ISIC 2016 data sets in accordance with the present invention, and it can be seen that the methods provided by the present invention exhibit better performance. The invention only uses 80% of the training data set to achieve the segmentation performance which is equivalent to that of other methods using all training data sets.
TABLE 4 comparison of Performance (%)
Figure DEST_PATH_IMAGE037
TABLE 5 comparison of Performance (%)
Figure DEST_PATH_IMAGE038
Fig. 5 shows image labels obtained by different query strategies. It can be seen that on some images, the lesion segmented by the method provided by the invention has more accurate contour than the original label.
The above embodiments are preferred embodiments of the present application, and those skilled in the art can make various changes or modifications without departing from the general concept of the present application, and such changes or modifications should fall within the scope of the claims of the present application.

Claims (5)

1. A low-annotation-cost skin disease image segmentation method is characterized by comprising the following steps:
constructing a skin disease image data set which comprises unmarked skin disease images and marked skin disease images, wherein the number of the unmarked skin disease images is more than that of the marked skin disease images;
designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
training and labeling each prediction model in batches by using a skin disease image dataset: firstly, dividing unmarked skin disease images into a plurality of batches; then, training a prediction model by using the currently labeled skin disease image; then, an active learning method of various uncertain strategies and a semi-supervised learning method based on a shared query value strategy are adopted, a prediction model obtained by current training is combined with expert labeling, and the current unmarked skin disease images of any batch are labeled; then, fusing the output labels of the prediction models by using a multi-model fusion module; until all skin disease images are marked;
the active learning method adopting various uncertain strategies and the semi-supervised learning method based on the shared query value strategy are used for marking the unmarked skin disease images of any current batch by combining a prediction model obtained by current training with expert marking, and specifically comprise the following steps:
(1) Two active learning uncertainty strategies are employed
Figure 445489DEST_PATH_IMAGE001
And
Figure 532394DEST_PATH_IMAGE002
separately for pixels in the unmarked skin disease image
Figure 209363DEST_PATH_IMAGE003
Performing pre-classification, and recording the pre-classification as
Figure 381718DEST_PATH_IMAGE004
And
Figure 802336DEST_PATH_IMAGE005
Figure 9326DEST_PATH_IMAGE006
in the formula (I), the compound is shown in the specification,
Figure 857196DEST_PATH_IMAGE007
denotes the first
Figure 516848DEST_PATH_IMAGE008
The prediction model is used for predicting the prediction model,
Figure 475576DEST_PATH_IMAGE009
a class of pixels is represented by a number of pixels,
Figure 802653DEST_PATH_IMAGE010
the number of categories representing the pixels is,
Figure 821424DEST_PATH_IMAGE011
representing a pixel
Figure 233951DEST_PATH_IMAGE003
The label of (a) is used,
Figure 980059DEST_PATH_IMAGE012
representing a predictive model
Figure 896062DEST_PATH_IMAGE013
For the pixel
Figure 351314DEST_PATH_IMAGE003
Output tag of (2)
Figure 985558DEST_PATH_IMAGE014
The probability of (d);
(2) Introducing a random query factor
Figure 286089DEST_PATH_IMAGE015
For random query factor
Figure 322179DEST_PATH_IMAGE015
And pre-classification
Figure 682753DEST_PATH_IMAGE004
Figure 804293DEST_PATH_IMAGE005
Weighting to obtain pixels
Figure 908515DEST_PATH_IMAGE016
Confidence of classification of
Figure 799110DEST_PATH_IMAGE017
(3) By sharing classification confidence
Figure 596165DEST_PATH_IMAGE017
Distributing a pseudo label to the pixel with the classification confidence coefficient reaching a preset value to finish labeling, and handing the rest pixels as uncertain pixels to an expert to finish labeling;
Figure 939422DEST_PATH_IMAGE018
in the formula (I), the compound is shown in the specification,
Figure 112914DEST_PATH_IMAGE019
the value is marked for the expert and,
Figure 107284DEST_PATH_IMAGE020
is an assigned pseudo label;
repeating iterative training on each prediction model by using the marked skin disease image until each prediction model converges;
and (3) respectively segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and then fusing the output labels of the prediction models by using a multi-model fusion module to complete segmentation and labeling of the skin disease image to be segmented.
2. The method of claim 1, wherein the multi-model fusion module classifies pixels according to the magnitude of the entropy of the vote, the entropy of the vote being
Figure 809661DEST_PATH_IMAGE021
The calculation method comprises the following steps:
Figure 905793DEST_PATH_IMAGE022
in the formula (I), the compound is shown in the specification,
Figure 86238DEST_PATH_IMAGE023
is shown as
Figure 217005DEST_PATH_IMAGE008
A prediction model to the pixel
Figure 90283DEST_PATH_IMAGE003
The output tag of (1) or an expert-tagged tag.
3. A low labeling cost dermatologic image segmentation apparatus, comprising:
a dataset construction module to: constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
a split network design module to: designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
an image annotation module to: training and labeling each prediction model in batches using the skin disease image dataset: firstly, dividing unmarked skin disease images into a plurality of batches; then, training a prediction model by using the currently labeled skin disease image; then, an active learning method of various uncertain strategies and a semi-supervised learning method based on a shared query value strategy are adopted, a prediction model obtained by current training is combined with expert labeling, and the current unmarked skin disease images of any batch are labeled; then, fusing the output labels of the prediction models by using a multi-model fusion module; until all skin disease images are marked;
the specific process of labeling the skin disease image by the image labeling module comprises the following steps:
(1) Two active learning uncertainty strategies are employed
Figure 408132DEST_PATH_IMAGE001
And
Figure 657848DEST_PATH_IMAGE002
separately for pixels in the unmarked skin disease image
Figure 377542DEST_PATH_IMAGE003
Performing pre-classification, and recording the pre-classification as
Figure 421722DEST_PATH_IMAGE004
And
Figure 492446DEST_PATH_IMAGE005
Figure 280273DEST_PATH_IMAGE024
in the formula (I), the compound is shown in the specification,
Figure 120053DEST_PATH_IMAGE025
is shown as
Figure 584401DEST_PATH_IMAGE008
The prediction model is used for predicting the prediction model,
Figure 611263DEST_PATH_IMAGE009
a class of pixels is represented by a number of pixels,
Figure 468361DEST_PATH_IMAGE010
the number of categories representing the pixels is,
Figure 897068DEST_PATH_IMAGE011
representing a pixel
Figure 548629DEST_PATH_IMAGE003
The label of (a) is used,
Figure 328367DEST_PATH_IMAGE012
representing a predictive model
Figure 723576DEST_PATH_IMAGE025
For the pixel
Figure 6790DEST_PATH_IMAGE003
Output tag of
Figure 829252DEST_PATH_IMAGE014
The probability of (d);
(2) Introducing a random query factor
Figure 96285DEST_PATH_IMAGE015
For random query factor
Figure 764027DEST_PATH_IMAGE015
And pre-classification
Figure 432906DEST_PATH_IMAGE004
Figure 426270DEST_PATH_IMAGE005
Weighting to obtain pixels
Figure 915020DEST_PATH_IMAGE016
Confidence of classification of (2)
Figure 904229DEST_PATH_IMAGE017
(3) By sharing classification confidence
Figure 162035DEST_PATH_IMAGE017
Distributing a pseudo label to the pixel with the classification confidence coefficient reaching a preset value to finish labeling, and handing the rest pixels as uncertain pixels to an expert to finish labeling;
Figure 326300DEST_PATH_IMAGE018
in the formula (I), the compound is shown in the specification,
Figure 302346DEST_PATH_IMAGE019
the value is marked for the expert and,
Figure 577470DEST_PATH_IMAGE020
is an assigned pseudo label;
a model training module to: repeatedly training each prediction model by using the marked skin disease image until each prediction model converges;
an image segmentation module to: and (3) respectively segmenting and labeling the skin disease image to be segmented by using the trained prediction models, and then fusing the output labels of the prediction models by using a multi-model fusion module to complete segmentation and labeling of the skin disease image to be segmented.
4. An electronic device comprising a memory and a processor, the memory having stored therein a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the method according to any one of claims 1-2.
5. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-2.
CN202211332281.1A 2022-10-28 2022-10-28 Skin disease image segmentation method, device, equipment and medium with low annotation cost Active CN115393361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211332281.1A CN115393361B (en) 2022-10-28 2022-10-28 Skin disease image segmentation method, device, equipment and medium with low annotation cost

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211332281.1A CN115393361B (en) 2022-10-28 2022-10-28 Skin disease image segmentation method, device, equipment and medium with low annotation cost

Publications (2)

Publication Number Publication Date
CN115393361A CN115393361A (en) 2022-11-25
CN115393361B true CN115393361B (en) 2023-02-03

Family

ID=84115191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211332281.1A Active CN115393361B (en) 2022-10-28 2022-10-28 Skin disease image segmentation method, device, equipment and medium with low annotation cost

Country Status (1)

Country Link
CN (1) CN115393361B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109564A (en) * 2022-12-01 2023-05-12 脉得智能科技(无锡)有限公司 Method, device, equipment and medium for rapidly screening multiple types of skin disease appearance images
CN116763259B (en) * 2023-08-17 2023-12-08 普希斯(广州)科技股份有限公司 Multi-dimensional control method and device for beauty equipment and beauty equipment
CN116935388B (en) * 2023-09-18 2023-11-21 四川大学 Skin acne image auxiliary labeling method and system, and grading method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348972A (en) * 2020-09-22 2021-02-09 陕西土豆数据科技有限公司 Fine semantic annotation method based on large-scale scene three-dimensional model
CN112163634B (en) * 2020-10-14 2023-09-05 平安科技(深圳)有限公司 Sample screening method and device for instance segmentation model, computer equipment and medium
CN113838058B (en) * 2021-10-11 2024-03-19 重庆邮电大学 Automatic medical image labeling method and system based on small sample segmentation
CN114612702A (en) * 2022-01-24 2022-06-10 珠高智能科技(深圳)有限公司 Image data annotation system and method based on deep learning

Also Published As

Publication number Publication date
CN115393361A (en) 2022-11-25

Similar Documents

Publication Publication Date Title
Madani et al. Fast and accurate view classification of echocardiograms using deep learning
CN115393361B (en) Skin disease image segmentation method, device, equipment and medium with low annotation cost
Zhuang et al. An Effective WSSENet-Based Similarity Retrieval Method of Large Lung CT Image Databases.
Mahmood et al. Deep adversarial training for multi-organ nuclei segmentation in histopathology images
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
CN108335303B (en) Multi-scale palm skeleton segmentation method applied to palm X-ray film
WO2021081257A1 (en) Artificial intelligence for personalized oncology
Zhang et al. A survey on deep learning of small sample in biomedical image analysis
Huang et al. Medical image segmentation using deep learning with feature enhancement
Yan et al. Symmetric convolutional neural network for mandible segmentation
Dai et al. Samaug: Point prompt augmentation for segment anything model
WO2022178997A1 (en) Medical image registration method and apparatus, computer device, and storage medium
Wang et al. Medical matting: a new perspective on medical segmentation with uncertainty
Shen et al. Cross-modal fine-tuning: Align then refine
Wang et al. Superpixel inpainting for self-supervised skin lesion segmentation from dermoscopic images
Wu et al. Deep adversarial data augmentation with attribute guided for person re-identification
Shen et al. Dilated transformer: residual axial attention for breast ultrasound image segmentation
Zhao et al. Deeply supervised active learning for finger bones segmentation
Xue et al. Oriented localization of surgical tools by location encoding
Song et al. Classifying tongue images using deep transfer learning
Jiang et al. Automatic classification of heterogeneous slit-illumination images using an ensemble of cost-sensitive convolutional neural networks
Li et al. Automatic bone age assessment of adolescents based on weakly-supervised deep convolutional neural networks
Wang et al. Explainable multitask Shapley explanation networks for real-time polyp diagnosis in videos
Al-Ani et al. A review on detecting brain tumors using deep learning and magnetic resonance images.
Boochoon et al. Deep learning for the assessment of facial nerve palsy: opportunities and challenges

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant