CN115393361B - Skin disease image segmentation method, device, equipment and medium with low annotation cost - Google Patents
- Publication number: CN115393361B (application CN202211332281.1A)
- Authority
- CN
- China
- Prior art keywords
- skin disease
- labeling
- disease image
- pixels
- prediction model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T 7/0012: Biomedical image inspection (G: Physics; G06: Computing; G06T: Image data processing or generation; G06T 7/00: Image analysis; G06T 7/0002: Inspection of images, e.g. flaw detection)
- G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds (G06V: Image or video recognition or understanding; G06V 10/20: Image preprocessing; G06V 10/26: Segmentation of patterns; cutting or merging of image elements, e.g. clustering-based techniques; detection of occlusion)
- G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting (G06V 10/70: Recognition using pattern recognition or machine learning; G06V 10/77: Processing image or video features in feature spaces, e.g. PCA, ICA or SOM; blind source separation)
- G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations (G06V 20/00: Scenes; scene-specific elements)
- G06T 2207/30088: Skin; dermal (G06T 2207/00: Indexing scheme for image analysis or enhancement; G06T 2207/30: Subject of image; G06T 2207/30004: Biomedical image processing)
Abstract
The invention discloses a method, device, equipment and medium for segmenting skin disease images at low annotation cost. The method comprises the following steps: constructing a skin disease image data set containing unmarked and marked skin disease images; designing a low-annotation-cost skin disease image segmentation network comprising N prediction models and a multi-model fusion module; training and labeling each prediction model in batches with the data set, wherein an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy are adopted, and the currently trained prediction models, combined with expert annotation, label each batch of currently unmarked skin disease images; iteratively retraining each prediction model with the labeled skin disease images; and segmenting and labeling the skin disease image to be segmented with the trained segmentation network. The invention achieves a good segmentation effect even with few labeled samples.
Description
Technical Field
The invention relates to the field of image processing, in particular to a method, a device, equipment and a medium for segmenting a skin disease image with low labeling cost.
Background
Malignant melanoma is one of the fastest-growing cancers in the world, with high morbidity and mortality. If detected early, a cure rate of 95% can be achieved. At present, clinical diagnosis relies mainly on dermoscopy images. In computer-aided medicine, effectively segmenting the lesion in a dermoscopy image can markedly improve the accuracy of skin disease detection and greatly helps dermatologists judge whether melanoma is present.
With the continuous development of artificial intelligence technology, deep learning has been widely applied in computer vision, and many excellent deep-learning segmentation models, such as FCN, UNet and SegNet, are now used for medical image segmentation tasks. However, these models achieve good segmentation only with a large number of labeled training samples, while in reality large numbers of unlabeled raw images are readily available and physicians' limited time makes it impossible to annotate them all. Moreover, the common skin disease segmentation task typically uses a single segmentation network, which suffers from poor robustness.
Disclosure of Invention
Aiming at the problems that large numbers of unmarked images are readily obtained while experts cannot spend the time and energy to annotate each one, the invention provides a method, device, equipment and medium for segmenting skin disease images at low annotation cost, which achieve a good segmentation effect even with few labeled samples.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a low-annotation-cost skin disease image segmentation method comprises the following steps:
constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
training and labeling each prediction model in batches using the skin disease image data set: first, the unmarked skin disease images are divided into several batches; then, the prediction models are trained with the currently labeled skin disease images; next, an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy are adopted, and the currently trained prediction models, combined with expert annotation, label the current batch of unmarked skin disease images; then, the multi-model fusion module fuses the output labels of the prediction models; this repeats until all skin disease images are labeled;
repeatedly training each prediction model with the labeled skin disease images until each prediction model converges;
and segmenting and labeling the skin disease image to be segmented with each trained prediction model respectively, then fusing the output labels of the prediction models with the multi-model fusion module to complete the segmentation and labeling of the skin disease image to be segmented.
In a further skin disease image segmentation method, the labeling of any current batch of unmarked skin disease images by the active learning method with multiple uncertainty strategies and the semi-supervised learning method based on a shared query value strategy, using the currently trained prediction model combined with expert annotation, specifically comprises the following steps:
two active learning uncertainty strategies S1 and S2 are employed to separately pre-classify each pixel x in the unmarked skin disease image, and the pre-classifications are recorded as S1(x) and S2(x);
a random query factor r(x) is introduced, and the random query factor r(x) and the pre-classifications S1(x), S2(x) are weighted to obtain the classification confidence c(x) of pixel x;
through the shared classification confidence c(x), pixels whose classification confidence reaches a preset value are assigned a pseudo label to complete their labeling, and the remaining pixels are handed to an expert as uncertain pixels for labeling.
In a further skin disease image segmentation method, the two active learning uncertainty strategies S1 and S2 pre-classify each pixel x as follows:

S1(x) = 1 - max_y P_m(y|x)

S2(x) = -Σ_{y=1}^{C} P_m(y|x) · log P_m(y|x)

where m denotes the m-th prediction model, y denotes a pixel class, C denotes the number of pixel classes, x denotes a pixel, and P_m(y|x) denotes the probability that prediction model m outputs label y for pixel x.
In a further method of skin disease image segmentation, the shared classification confidence c(x) assigns pseudo labels to the pixels whose classification confidence reaches the preset value τ:

y(x) = ŷ(x), if c(x) ≥ τ; y(x) = y_e(x), otherwise

where y_e(x) is the expert-annotated value and ŷ(x) is the assigned pseudo label.
In a further skin disease image segmentation method, the multi-model fusion module classifies pixels according to the magnitude of the voting entropy VE(x), calculated as:

VE(x) = -Σ_{y=1}^{C} (V_y(x)/N) · log(V_y(x)/N)

where V_y(x) denotes the number of prediction models whose output label for pixel x (or whose expert-annotated label) equals y, and N is the number of prediction models.
A low annotation cost dermatologic image segmentation apparatus comprising:
a dataset construction module to: constructing a skin disease image data set which comprises unmarked skin disease images and marked skin disease images, wherein the number of the unmarked skin disease images is more than that of the marked skin disease images;
a split network design module to: designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
an image annotation module to: train and label each prediction model in batches using the skin disease image data set: first, the unmarked skin disease images are divided into several batches; then, the prediction models are trained with the currently labeled skin disease images; next, an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy are adopted, and the currently trained prediction models, combined with expert annotation, label the current batch of unmarked skin disease images; then, the multi-model fusion module fuses the output labels of the prediction models; this repeats until all skin disease images are labeled;
a model training module to: repeating iterative training on each prediction model by using the marked skin disease image until each prediction model converges;
an image segmentation module to: segment and label the skin disease image to be segmented with each trained prediction model respectively, and then fuse the output labels of the prediction models with the multi-model fusion module to complete the segmentation and labeling of the skin disease image to be segmented.
In a further skin disease image segmentation apparatus, a specific process of labeling a skin disease image by the image labeling module includes:
two active learning uncertainty strategies S1 and S2 are employed to separately pre-classify each pixel x in the unmarked skin disease image, and the pre-classifications are recorded as S1(x) and S2(x);
a random query factor r(x) is introduced, and the random query factor r(x) and the pre-classifications S1(x), S2(x) are weighted to obtain the classification confidence c(x) of pixel x;
through the shared classification confidence c(x), pixels whose classification confidence reaches a preset value are assigned a pseudo label to complete their labeling, and the remaining pixels are handed to an expert as uncertain pixels for labeling;
the prediction models then continue to be trained with the currently labeled skin disease images.
In a further skin disease image segmentation device, the two active learning uncertainty strategies S1 and S2 pre-classify each pixel x as follows:

S1(x) = 1 - max_y P_m(y|x)

S2(x) = -Σ_{y=1}^{C} P_m(y|x) · log P_m(y|x)

where m denotes the m-th prediction model, y denotes a pixel class, C denotes the number of pixel classes, x denotes a pixel, and P_m(y|x) denotes the probability that prediction model m outputs label y for pixel x.
An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to implement the skin disease image segmentation method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements a dermatological image segmentation method as defined in one of the above.
Advantageous effects
The invention effectively reduces the label amount of dermoscopy images, lowers the cost of building a labeled data set for the segmentation network, and improves the robustness of the segmentation network. The scheme offers high segmentation precision at reduced cost and can assist dermatologists in the clinical diagnosis of melanoma. Compared with existing skin disease image segmentation technology, it has the following advantages:
(1) A random query method is introduced alongside two different active learning uncertainty strategy queries, and the three are given different weights. This resolves the consistent-deviation problem of query pixels, so difficult, highly uncertain pixels in the dermoscopy image can be queried more accurately and handed to experts for labeling, effectively overcoming the consistent-deviation defect of uncertainty strategies.
(2) The multi-model fusion segmentation method provided by the invention can effectively improve the segmentation performance of a single prediction model and improve the robustness of an integrated segmentation network.
(3) The method is highly practical: it labels only a small number of images while maintaining a high segmentation effect, effectively improving model performance.
Drawings
Fig. 1 is a schematic diagram of a skin disease image segmentation method with low annotation cost according to an embodiment of the invention.
Fig. 2 is a schematic diagram of various active learning uncertainty query methods with random methods introduced in the embodiment of the present invention.
FIG. 3 is a diagram illustrating a multi-model fusion segmentation method according to an embodiment of the present invention.
FIG. 4 is a graph of the average pixel label amount per image according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the results of the dermoscopic image segmentation test according to the embodiment of the present invention.
Detailed Description
The following describes embodiments of the invention in detail. They are developed from the technical solutions of the invention and give detailed implementation manners and specific operation procedures to further explain those solutions.
The embodiment provides a skin disease image segmentation method with low annotation cost, which can be implemented in the Python programming language for experiments or in C/C++ for engineering applications. Referring to Fig. 1, the method comprises the following steps:
step 1, a skin disease image data set is constructed, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is larger than that of the marked skin disease images.
In this embodiment, since the accuracy and the generalization of the segmentation network are to be verified subsequently, the constructed skin disease image data set comprises a training set, a validation set and a test set: the training set trains the prediction models, the validation set verifies the accuracy of the trained models, and the test set tests their generalization. The training set contains unmarked and marked skin disease images, with more unmarked images than marked ones.
The embodiments use the International Skin Imaging Collaboration (ISIC) datasets, which provide digital skin lesion images and expert annotations from around the world for the diagnosis of melanoma and other cancers, specifically the ISIC 2016 and ISIC 2017 datasets. The ISIC 2016 dataset contains 900 training images (727 non-melanoma, 173 melanoma) and 379 test images (304 non-melanoma, 75 melanoma); image sizes vary from 566 × 679 to 2848 × 4228 pixels. The ISIC 2017 dataset contains 2000 training images (1626 non-melanoma, 374 melanoma) and 600 test images (483 non-melanoma, 117 melanoma); image sizes vary from 453 × 679 to 4499 × 6748 pixels.
Step 2: design a skin disease image segmentation network with low annotation cost, comprising N prediction models and a multi-model fusion module.
step 3, training and labeling each prediction model in batches by using the skin disease image data set:
step 3.1, dividing the unmarked skin disease images into a plurality of batches;
step 3.2, training a prediction model by using the currently labeled skin disease image;
step 3.3, labeling any current batch of unmarked skin disease images by adopting the active learning method with multiple uncertainty strategies and the semi-supervised learning method based on a shared query value strategy, with the currently trained prediction model combined with expert annotation; specifically:
(1) Two active learning uncertainty strategies S1 and S2 are employed to separately pre-classify each pixel x in the unmarked skin disease image, and the pre-classifications are recorded as S1(x) and S2(x):

S1(x) = 1 - max_y P_m(y|x)

S2(x) = -Σ_{y=1}^{C} P_m(y|x) · log P_m(y|x)

where m denotes the m-th prediction model, y denotes a pixel class, C denotes the number of pixel classes, and P_m(y|x) denotes the probability that prediction model m outputs label y for pixel x.
Computing S2(x) measures uncertainty through the predicted probabilities over all classes: the higher the value, the greater the uncertainty.
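As an illustrative sketch of the two pre-classifications (assuming S1 is a least-confidence measure and S2 the predictive entropy described above; the function name and the stabilising epsilon are illustrative, not taken from the patent):

```python
import math

def uncertainty_maps(probs):
    """Per-pixel uncertainty from one prediction model's softmax output.

    probs: list of per-pixel class-probability vectors, e.g. [[0.9, 0.1], ...].
    Returns (s1, s2): a least-confidence value and a predictive-entropy value
    per pixel. Higher values mean the model is less certain about that pixel.
    """
    s1 = [1.0 - max(p) for p in probs]                              # least confidence
    s2 = [-sum(q * math.log(q + 1e-12) for q in p) for p in probs]  # entropy
    return s1, s2
```

A pixel predicted with near-uniform probabilities scores close to 1 - 1/C on s1 and log C on s2, while a confidently classified pixel scores near zero on both, matching the observation that a higher value means greater uncertainty.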
(2) A random query factor r(x) is introduced, and r(x) and the pre-classifications S1(x), S2(x) are weighted to obtain the classification confidence c(x) of pixel x:

c(x) = w1 · S1(x) + w2 · S2(x) + w3 · r(x)

where w1, w2, w3 denote the weights of the three different query strategies and r(x) denotes the random query factor of pixel x.
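Steps (2) and (3) can be sketched together as follows. The weights (0.4, 0.4, 0.2) and threshold 0.3 are illustrative placeholders, not values disclosed by the patent, and in this sketch a low combined query value stands for high classification confidence:

```python
import random

def split_pixels(s1, s2, weights=(0.4, 0.4, 0.2), threshold=0.3, seed=0):
    """Combine two per-pixel uncertainty values with a random query factor,
    then decide which pixels get pseudo labels and which go to the expert.

    Returns a list of booleans: True means the pixel's shared query value is
    low enough (confidence high enough) to assign a pseudo label; False means
    the pixel is handed to the expert as an uncertain pixel.
    """
    rng = random.Random(seed)
    w1, w2, w3 = weights
    confident = []
    for a, b in zip(s1, s2):
        r = rng.random()                  # random query factor for this pixel
        query = w1 * a + w2 * b + w3 * r  # weighted shared query value
        confident.append(query < threshold)
    return confident
```

With these placeholder weights, a pixel both strategies consider certain (values near 0) always receives a pseudo label, while a pixel both consider uncertain (values near 1) is always queried from the expert; the random factor only perturbs borderline pixels.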
(3) Through the shared classification confidence c(x), pixels whose classification confidence reaches the preset value τ are assigned a pseudo label to complete their labeling, and the remaining pixels are handed to an expert as uncertain pixels for labeling:

y(x) = ŷ(x), if c(x) ≥ τ; y(x) = y_e(x), otherwise

where y_e(x) is the expert-annotated value and ŷ(x) is the assigned pseudo label.

Step 3.4: the multi-model fusion module fuses the pixel labels according to the magnitude of the voting entropy VE(x), calculated as:

VE(x) = -Σ_{y=1}^{C} (V_y(x)/N) · log(V_y(x)/N)

where V_y(x) is the number of prediction models whose output label for pixel x (or whose expert-annotated label) equals y, and N is the number of prediction models.

Step 3.5: repeat steps 3.2 to 3.4 until the labeling of all batches of skin disease images is completed.
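The voting-entropy fusion of step 3.4 can be sketched per pixel as follows (an illustration only; the actual module operates on whole label maps):

```python
import math
from collections import Counter

def vote_entropy(votes):
    """Fuse the labels that N models (or the expert) output for one pixel.

    votes: list of N integer class labels, one per prediction model.
    Returns (fused, entropy): the majority-vote label and the voting
    entropy, which is 0 when all models agree and grows with disagreement.
    """
    n = len(votes)
    counts = Counter(votes)
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    fused = counts.most_common(1)[0][0]
    return fused, entropy
```

Pixels with low voting entropy are those the ensemble agrees on; high-entropy pixels are exactly where multi-model fusion adds robustness over any single segmentation network.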
The uncertainty-strategy-based active learning module in this embodiment adopts two active learning uncertainty strategies and additionally introduces a random query method; the three strategies are weighted to form a comprehensive querier that queries the difficult pixels of the dermoscopy image for expert labeling more accurately and effectively overcomes the consistent-deviation defect of uncertainty strategies, as shown in Fig. 2.
Step 4: repeatedly train each prediction model with the labeled skin disease images until every prediction model converges.
Step 5: segment and label the skin disease image to be segmented with each trained prediction model, then fuse the output labels of the prediction models with the multi-model fusion module to complete the segmentation and labeling of the skin disease image to be segmented.
In the embodiment, the test set serves as the skin disease images to be segmented, and each pixel is classified according to the voting entropy, completing the segmentation of the skin disease lesion, as shown in Fig. 3.
In steps 3 and 4 of this embodiment, the skin disease images input to the prediction models differ in size, so the data cannot be loaded into the network model in batches for training, and the GPU memory of the hardware is limited. The embodiment therefore uniformly scales and crops each input image to a size of 192 × 256.
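A dependency-free nearest-neighbour sketch of the uniform rescaling (a real pipeline would more likely use torchvision or OpenCV resizing plus cropping; only the 192 × 256 target size comes from this embodiment):

```python
def resize_nearest(img, out_h=192, out_w=256):
    """Nearest-neighbour rescale of an image given as a list of rows,
    so variably sized dermoscopy images can be batched together."""
    h, w = len(img), len(img[0])
    return [
        [img[i * h // out_h][j * w // out_w] for j in range(out_w)]
        for i in range(out_h)
    ]
```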
Model training uses the Ubuntu 16.04 LTS system with PyTorch 1.6 and Python 3.6. The hardware platform uses four RTX 2080 Ti graphics cards as the main computing platform, with at least 16 GB of CPU memory and at least a 256 GB solid-state drive. Training runs for 100 epochs in total, 16 images are loaded per batch, and the initial learning rate is set to 0.01.
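The hyperparameters above can be grouped into a small configuration sketch (the values come from this embodiment; the dictionary layout and helper function are illustrative):

```python
# Training setup reported in this embodiment; the dict layout is illustrative.
TRAIN_CONFIG = {
    "epochs": 100,             # total training epochs
    "batch_size": 16,          # images loaded per batch
    "initial_lr": 0.01,        # initial learning rate
    "input_size": (192, 256),  # uniform input size after scaling and cropping
}

def steps_per_epoch(num_images: int, batch_size: int) -> int:
    """Optimisation steps per epoch with drop-last batching."""
    return num_images // batch_size
```

For the ISIC 2017 training set of 2000 images this gives 125 optimisation steps per epoch at batch size 16.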
Based on the active-learning multi-model fusion segmentation network, the embodiment accomplishes the skin disease image segmentation task with few labeled samples; it effectively reduces the label amount for dermoscopy images, lowers the cost of building a labeled data set for the segmentation network, and improves the robustness and thus the performance of the segmentation network.
Table 1 shows the structural composition of the multi-model fusion network and of the four networks Net1, Net2, Net3 and Net4; each combines one to five different mainstream dermoscopy image segmentation networks into an integrated segmentation model.
TABLE 1 Multi-model fusion method
Tables 2 and 3 compare the invention with the best current dermoscopy image segmentation models. On the ISIC 2016 dataset, the invention improves the DIC and JAI values from 88.64% and 81.37% to 94.45% and 89.07%, respectively; on ISIC 2017, from 79.39% and 72.04% to 87.51% and 80.22%.
Table 2 network architecture performance comparison in ISIC 2016 dataset (%)
Table 3 network architecture ISIC 2017 data set performance comparison (%)
FIG. 4 shows the average pixel label amount per image for the invention. With the uncertainty-strategy active learning method and the high-confidence-strategy semi-supervised learning method, on average no more than 15% of each image's pixels require expert annotation: the most uncertain pixels are queried and annotated, while most of the remaining pixels receive pseudo labels.
To further verify its effectiveness, the method is compared with internationally mainstream methods. Tables 4 and 5 give experimental comparisons on the ISIC 2016 and ISIC 2017 data sets; the methods provided by the invention show better performance, achieving segmentation performance comparable to other methods that use the full training set while using only 80% of it.
TABLE 4 comparison of Performance (%)
TABLE 5 comparison of Performance (%)
Fig. 5 shows image labels obtained by different query strategies. It can be seen that on some images, the lesion segmented by the method provided by the invention has more accurate contour than the original label.
The above embodiments are preferred embodiments of the present application, and those skilled in the art can make various changes or modifications without departing from the general concept of the present application, and such changes or modifications should fall within the scope of the claims of the present application.
Claims (5)
1. A low-annotation-cost skin disease image segmentation method is characterized by comprising the following steps:
constructing a skin disease image data set which comprises unmarked skin disease images and marked skin disease images, wherein the number of the unmarked skin disease images is more than that of the marked skin disease images;
designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
training and labeling each prediction model in batches using the skin disease image data set: first, the unmarked skin disease images are divided into several batches; then, the prediction models are trained with the currently labeled skin disease images; next, an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy are adopted, and the currently trained prediction models, combined with expert annotation, label the current batch of unmarked skin disease images; then, the multi-model fusion module fuses the output labels of the prediction models; this repeats until all skin disease images are labeled;
the active learning method adopting various uncertain strategies and the semi-supervised learning method based on the shared query value strategy are used for marking the unmarked skin disease images of any current batch by combining a prediction model obtained by current training with expert marking, and specifically comprise the following steps:
(1) Two active learning uncertainty strategies S1 and S2 are employed to separately pre-classify each pixel x in the unmarked skin disease image, and the pre-classifications are recorded as S1(x) and S2(x):

S1(x) = 1 - max_y P_m(y|x)

S2(x) = -Σ_{y=1}^{C} P_m(y|x) · log P_m(y|x)

where m denotes the m-th prediction model, y denotes a pixel class, C denotes the number of pixel classes, and P_m(y|x) denotes the probability that prediction model m outputs label y for pixel x;
(2) A random query factor r(x) is introduced, and r(x) and the pre-classifications S1(x), S2(x) are weighted to obtain the classification confidence c(x) of pixel x:

c(x) = w1 · S1(x) + w2 · S2(x) + w3 · r(x)

where w1, w2, w3 denote the weights of the three query strategies;
(3) Through the shared classification confidence c(x), pixels whose classification confidence reaches the preset value τ are assigned a pseudo label to complete their labeling, and the remaining pixels are handed to an expert as uncertain pixels for labeling:

y(x) = ŷ(x), if c(x) ≥ τ; y(x) = y_e(x), otherwise

where y_e(x) is the expert-annotated value and ŷ(x) is the assigned pseudo label;
repeating iterative training of each prediction model with the labeled skin disease images until each prediction model converges;
and segmenting and labeling the skin disease image to be segmented with each trained prediction model respectively, then fusing the output labels of the prediction models with the multi-model fusion module to complete the segmentation and labeling of the skin disease image to be segmented.
2. The method of claim 1, wherein the multi-model fusion module classifies pixels according to the magnitude of the voting entropy VE(x), calculated as:

VE(x) = -Σ_{y=1}^{C} (V_y(x)/N) · log(V_y(x)/N)

where V_y(x) denotes the number of prediction models whose output label for pixel x (or whose expert-annotated label) equals y, and N is the number of prediction models.
3. A low labeling cost dermatologic image segmentation apparatus, comprising:
a dataset construction module to: constructing a skin disease image data set, wherein the skin disease image data set comprises unmarked skin disease images and marked skin disease images, and the number of the unmarked skin disease images is greater than that of the marked skin disease images;
a split network design module to: designing a skin disease image segmentation network with low annotation cost, wherein the skin disease image segmentation network comprises N prediction models and a multi-model fusion module;
an image annotation module to: train and label each prediction model in batches using the skin disease image data set: first, the unmarked skin disease images are divided into several batches; then, the prediction models are trained with the currently labeled skin disease images; next, an active learning method with multiple uncertainty strategies and a semi-supervised learning method based on a shared query value strategy are adopted, and the currently trained prediction models, combined with expert annotation, label the current batch of unmarked skin disease images; then, the multi-model fusion module fuses the output labels of the prediction models; this repeats until all skin disease images are labeled;
the specific process of labeling the skin disease image by the image labeling module comprises the following steps:
(1) Two active learning uncertainty strategies S1 and S2 are employed to separately pre-classify each pixel x in the unmarked skin disease image, and the pre-classifications are recorded as S1(x) and S2(x):

S1(x) = 1 - max_y P_m(y|x)

S2(x) = -Σ_{y=1}^{C} P_m(y|x) · log P_m(y|x)

where m denotes the m-th prediction model, y denotes a pixel class, C denotes the number of pixel classes, and P_m(y|x) denotes the probability that prediction model m outputs label y for pixel x;
(2) A random query factor λ is introduced; the random query factor λ and the pre-classifications u1(x) and u2(x) are weighted together to obtain the classification confidence φ(x) of pixel x;
(3) Through the shared classification confidence φ(x), a pseudo-label is assigned to every pixel whose classification confidence reaches a preset value, completing its annotation, and the remaining pixels are handed to an expert as uncertain pixels for annotation;
where y* denotes the value annotated by the expert and ŷ denotes the assigned pseudo-label;
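Steps (1)–(3) can be illustrated with a minimal sketch. The patent's exact uncertainty formulas appear as images in the original publication and are not reproduced in this text, so two common strategies, least confidence and normalized entropy, are assumed here, along with an assumed equal weighting against the random query factor; the threshold and weights are illustrative only.

```python
import numpy as np

def label_pixels(probs, tau=0.9, lam_scale=0.1, rng=None):
    """Assign pseudo-labels to confident pixels; flag the rest for an expert.

    probs: (num_pixels, num_classes) softmax output of one prediction model.
    tau:   preset confidence value above which a pseudo-label is assigned.
    """
    rng = np.random.default_rng() if rng is None else rng
    num_pixels, num_classes = probs.shape

    # Two illustrative uncertainty strategies (assumptions, not the
    # patent's formulas): least confidence and normalized entropy.
    u1 = 1.0 - probs.max(axis=1)
    u2 = -(probs * np.log(probs + 1e-12)).sum(axis=1) / np.log(num_classes)

    # Random query factor, weighted together with the two pre-classifications
    # (equal weights assumed) to obtain the per-pixel classification confidence.
    lam = rng.random(num_pixels) * lam_scale
    confidence = 1.0 - (0.5 * u1 + 0.5 * u2 + lam)

    labels = probs.argmax(axis=1)      # candidate pseudo-labels
    confident = confidence >= tau      # pixels labeled automatically
    uncertain = ~confident             # pixels handed to the expert
    return labels, confident, uncertain
```

In use, the pixels flagged `uncertain` would be queued for expert annotation while the `confident` pixels keep their pseudo-labels.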
a model training module to: repeatedly train each prediction model using the labeled skin disease images until each prediction model converges;
an image segmentation module to: segment and label the skin disease image to be segmented with each trained prediction model, and then fuse the output labels of the prediction models with the multi-model fusion module, completing the segmentation and labeling of the skin disease image to be segmented.
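The fusion of the N prediction models' output labels can be illustrated as pixel-wise majority voting; the claim does not specify the fusion rule, so majority voting is an assumption made for this sketch.

```python
import numpy as np

def fuse_labels(label_maps):
    """Fuse per-pixel output labels from N prediction models.

    Majority voting is assumed as the fusion rule (the patent text does
    not disclose it). label_maps: list of N integer arrays of shape (H, W).
    """
    stacked = np.stack(label_maps, axis=0)   # (N, H, W)
    num_classes = int(stacked.max()) + 1
    # Count votes per class at every pixel, then take the winning class.
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(num_classes)])
    return votes.argmax(axis=0)              # (H, W) fused label map
```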
4. An electronic device comprising a memory and a processor, the memory having stored therein a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the method according to any one of claims 1-2.
5. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211332281.1A CN115393361B (en) | 2022-10-28 | 2022-10-28 | Skin disease image segmentation method, device, equipment and medium with low annotation cost |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115393361A CN115393361A (en) | 2022-11-25 |
CN115393361B true CN115393361B (en) | 2023-02-03 |
Family
ID=84115191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211332281.1A Active CN115393361B (en) | 2022-10-28 | 2022-10-28 | Skin disease image segmentation method, device, equipment and medium with low annotation cost |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115393361B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116109564A (en) * | 2022-12-01 | 2023-05-12 | 脉得智能科技(无锡)有限公司 | Method, device, equipment and medium for rapidly screening multiple types of skin disease appearance images |
CN116763259B (en) * | 2023-08-17 | 2023-12-08 | 普希斯(广州)科技股份有限公司 | Multi-dimensional control method and device for beauty equipment and beauty equipment |
CN116935388B (en) * | 2023-09-18 | 2023-11-21 | 四川大学 | Skin acne image auxiliary labeling method and system, and grading method and system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348972A (en) * | 2020-09-22 | 2021-02-09 | 陕西土豆数据科技有限公司 | Fine semantic annotation method based on large-scale scene three-dimensional model |
CN112163634B (en) * | 2020-10-14 | 2023-09-05 | 平安科技(深圳)有限公司 | Sample screening method and device for instance segmentation model, computer equipment and medium |
CN113838058B (en) * | 2021-10-11 | 2024-03-19 | 重庆邮电大学 | Automatic medical image labeling method and system based on small sample segmentation |
CN114612702A (en) * | 2022-01-24 | 2022-06-10 | 珠高智能科技(深圳)有限公司 | Image data annotation system and method based on deep learning |
2022-10-28: CN application CN202211332281.1A filed; granted as patent CN115393361B, status Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |