CN113256641B - Skin lesion image segmentation method based on deep learning - Google Patents

Skin lesion image segmentation method based on deep learning

Info

Publication number
CN113256641B
CN113256641B (application CN202110769832.XA)
Authority
CN
China
Prior art keywords
skin lesion
module
lesion image
channel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110769832.XA
Other languages
Chinese (zh)
Other versions
CN113256641A (en)
Inventor
周剑 (Zhou Jian)
刘业鑫 (Liu Yexin)
段辉高 (Duan Huigao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202110769832.XA priority Critical patent/CN113256641B/en
Publication of CN113256641A publication Critical patent/CN113256641A/en
Application granted granted Critical
Publication of CN113256641B publication Critical patent/CN113256641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30088 Skin; Dermal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a skin lesion image segmentation method based on deep learning, which comprises the following steps: acquiring dermoscopic skin images, resizing them to a uniform size, and preprocessing them; dividing the dataset into a training set, a validation set and a test set; building an FCP-Net network model, computing the loss between the network output and the skin lesion mask, and updating the model parameters according to the loss value; traversing all training samples and continuing training until the loss value does not change over 10 consecutive traversals, thereby obtaining the skin lesion image segmentation model; and, for actual segmentation, inputting an image into the trained FCP-Net network and outputting the segmented image. The segmentation method provided by the invention significantly improves image segmentation accuracy.

Description

Skin lesion image segmentation method based on deep learning
Technical Field
The invention belongs to the technical field of medical image processing and computer vision, and particularly relates to a skin lesion image segmentation method based on deep learning.
Background
Malignant melanoma is one of the fastest-growing cancers worldwide. Despite advanced treatment techniques such as radiotherapy and immunotherapy, increasingly combined with clinical surgery, the five-year survival rate of advanced melanoma is as low as 15%, whereas the five-year survival rate of early melanoma exceeds 95%. This difference highlights the importance of timely diagnosis and treatment of melanoma for patient survival.
A dermatoscope is a popular non-invasive in vivo imaging tool that uses polarized light to help dermatologists examine pigmented skin lesions according to a set of morphological features. Although dermoscopy has been shown to improve diagnostic accuracy compared with traditional analysis using the ABCD rule, correctly interpreting dermoscopic images is often time-consuming and complex. Computerized analysis methods have therefore been developed to help dermatologists improve the efficiency and objectivity of the visual interpretation of dermoscopic images. Automatic segmentation of pigmented lesions from skin images is an important step in the computer analysis of dermoscopic images. This task is not straightforward, however, because melanomas vary greatly in size, shape and color and appear on different skin types and textures, and some lesions have irregular, fuzzy boundaries. Moreover, the quality of the automatic segmentation method directly determines whether the final result is clear and intuitive, and segmentation methods in the prior art perform unsatisfactorily.
Disclosure of Invention
The invention aims to provide a skin lesion image segmentation method based on deep learning, which realizes automatic segmentation of a skin lesion image and achieves a good segmentation effect.
In order to solve the problems, the technical scheme of the invention is as follows:
a skin lesion image segmentation method based on deep learning comprises the following steps:
step one, acquiring an original skin lesion image data set;
secondly, preprocessing the acquired skin lesion image data set;
thirdly, carrying out data set division on the preprocessed skin lesion image data set, and dividing the data set into a training set, a verification set and a test set;
Step four: inputting the preprocessed training set and validation set image data into an FCP-Net network model for training to obtain a segmentation model of the skin lesion image, and passing each image of the test set image data into the trained FCP-Net network model for prediction to obtain a segmentation result; wherein the encoding structure of the FCP-Net network model comprises an embedded feature set (EFE) module, an extended spatial mapping and channel attention (DSMCA) module and a branch layer fusion (BLF) module, and the decoding structure of the FCP-Net network model comprises an extended spatial mapping and channel attention (DSMCA) module and a multi-scale feature fusion (MSFF) module;
the embedded feature set (EFE) module extracts features using depthwise separable convolution, then feeds the features into a lightweight channel attention module for channel feature extraction, adds the extracted channel feature information, and finally extracts features with a 1 × 1 convolution, wherein the lightweight channel attention module is computed as follows:
[formula reproduced as an image in the original publication]
wherein sep denotes the result of the depthwise separable convolution, H and W are the height and width of the feature map, W1, W2, W3 and W4 denote convolutional layer parameters, delta denotes the Swish activation function, sigma denotes the Sigmoid activation function, s_i is the scale factor of the i-th channel, U denotes the output of the embedded feature set (EFE) module, and Dropout denotes the result of a Dropout layer;
the extended spatial mapping and channel attention module combines channel attention and spatial attention to capture the channel and spatial dependencies of the embedded feature set (EFE) module and extracts the aggregated information, wherein the extended spatial mapping and channel attention module is computed as follows:
[formula reproduced as an image in the original publication]
wherein W5, W6, W7, W8, W9 and W10 are convolutional layer parameters, delta denotes the Swish activation function, sigma denotes the Sigmoid activation function, dilated (atrous) convolution is used with dilation rate r, X_in is the input to the extended spatial mapping and channel attention module, S is the output of the lightweight channel attention, DSM_B1 and DSM_B2 denote the outputs of spatial attention block 1 and spatial attention block 2, respectively, and Dropout denotes the result of the Dropout layer;
the branch layer fusion module fuses branches output from the embedded feature set matching module using 'arithmetic addition' operations and 'stitching' operations.
As a preferred refinement of the present invention, in step one, the original skin lesion image dataset comprises skin lesion image data obtained from the ISBI 2016 and ISBI 2017 challenges and the PH2 dataset.
As a preferred refinement of the present invention, in step two, the preprocessing comprises the following steps:
adjusting the images in the skin lesion image dataset to a uniform size; flipping each skin lesion image sample in the ISIC 2016 and ISIC 2017 datasets, first horizontally with a probability of 0.5 and then vertically with the same probability; applying a rotation operation to rotate the image by an angle theta, where theta is randomly sampled from a Gaussian distribution restricted to the range from -40 to 40 degrees; and then rescaling the images by a Gaussian random factor in the range [0.7, 1.3].
As a preferred refinement of the invention, in step three, the skin lesion image dataset is divided into a training set, a validation set and a test set in a ratio of 7:1:2.
As a preferred refinement of the invention, in step four, the Xception structure is selected for the downsampling stage of the FCP-Net network model encoding structure.
As a preferred refinement of the invention, in step four, the branch layer fusion (BLF) module fuses the branches output by the embedded feature set (EFE) module using an element-wise addition operation and a concatenation operation.
As a preferred refinement of the present invention, the multi-scale feature fusion (MSFF) module implements multi-scale feature fusion using the extended spatial mapping and channel attention (DSMCA) module and a concatenation operation.
Compared with the prior art, the invention has the following beneficial effects:
a new feature compression pyramid network (FCP-Net) is established based on a deep learning skin lesion image segmentation method, three modules are combined to capture and integrate global/multi-scale context information, the model uses end-to-end training, post-processing or priori knowledge is not needed, and accuracy of segmentation of the pigment tumor is improved;
in the encoding stage, three new block embedded characteristic set combination modules, an extended space mapping and channel attention module and a branch layer fusion module are constructed, so that the efficiency of extracting space information is higher, the identification of space correlation among the characteristics is more effective, and the multi-sensitive wild characteristics from different branches are integrated;
in the decoding stage, a plurality of jump connections are established by using a DSMCA module and a multi-scale feature fusion module, and multi-scale information is fused.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without inventive effort, wherein:
FIG. 1 is an exemplary diagram of the present invention for diagnosing complex skin lesions;
FIG. 2 is a block diagram of the FCP-Net network model of the present invention;
FIG. 3 is a block diagram of an embedded feature set (EFE) module of the present invention;
fig. 4 is a block diagram of an extended spatial mapping and channel attention (DSMCA) module of the present invention.
Detailed Description of the Embodiments
the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The invention provides a skin lesion image segmentation method based on deep learning, which specifically comprises the following steps:
step one, acquiring an original skin lesion image data set;
specifically, the raw skin lesion image dataset includes image data of a skin lesion image data and a PH2 dataset obtained from an ISBI2016 and an ISBI 2017 challenge race.
Image data are acquired from the ISIC 2016 dataset, including skin lesion images captured under a variety of acquisition conditions in jpg format and segmented melanoma label images in png format. The dataset comprises a training set of 900 labeled dermoscopy images (173 melanomas) and a test set of 379 images (75 melanomas). The ISIC 2016 skin lesion samples are 8-bit RGB images ranging in size from 542 × 718 to 2848 × 4288 pixels. ISIC 2017 includes 2000 training images, 150 validation images and 600 test images, all 8-bit RGB, with sizes varying from 540 × 722 to 4499 × 6748 pixels. PH2 includes 200 dermoscopic images of a fixed size (560 pixels), comprising 80 common moles, 80 atypical moles and 40 melanomas. Example images are shown in Fig. 1: (a) hair artifacts, (b) irregular fuzzy boundaries, (c) blood vessels, (d) color illumination, (e) low contrast, (f) ruler markings, (g) air bubbles and (h) frame artifacts.
Secondly, preprocessing the acquired skin lesion image data set;
the size of the images in the skin lesion image dataset is adjusted to be 256 × 256, each skin lesion image sample in the ISIC 2016 and ISIC2017 dataset is turned over, the horizontal turning is firstly performed, the probability of turning over the sample is 0.5, and then the vertical turning is performed with the same probability. The rotation operation is applied to rotate the image through an angle that is randomly sampled from a gaussian distribution ranging from-40 degrees to 40 degrees. The images were then recalibrated with a gaussian random factor in the range of [0.7,1.3 ].
Thirdly, carrying out data set division on the preprocessed skin lesion image data set, and dividing the data set into a training set, a verification set and a test set;
according to the following steps: 1: the scale of 2 divides the skin lesion image dataset into a training set, a validation set, and a test set.
Step four: the preprocessed training set and validation set image data are input into the FCP-Net network model for training to obtain a segmentation model of the skin lesion image, and each image of the test set image data is passed into the trained FCP-Net network model for prediction to obtain the segmentation result. The structure of the FCP-Net network model is shown in Fig. 2, where 'EFE-module rate = k' denotes an embedded feature set (EFE) module whose dilated convolution has dilation rate k; DSMCA denotes the extended spatial mapping and channel attention module; BLF denotes the branch layer fusion module; MSFF denotes the multi-scale feature fusion module; add denotes addition of corresponding elements; and sep-conv denotes depthwise separable convolution.
The encoding structure of the FCP-Net network model comprises an embedded feature set (EFE) module, an extended spatial mapping and channel attention (DSMCA) module and a branch layer fusion (BLF) module, and the decoding structure comprises an extended spatial mapping and channel attention (DSMCA) module and a multi-scale feature fusion (MSFF) module. An Xception structure is selected for downsampling in the FCP-Net encoding stage. The output of Xception is passed to convolution branches with different receptive fields, and the features of each branch are input into an embedded feature set (EFE) module (shown in Fig. 3). The EFE module extracts features using depthwise separable convolution, feeds them into a lightweight channel attention module for channel feature extraction, adds the extracted channel feature information, and finally extracts features with a 1 × 1 convolution. The lightweight channel attention module is computed as follows:
[formula reproduced as an image in the original publication]
wherein sep denotes the result of the depthwise separable convolution, H and W are the height and width of the feature map, W1, W2, W3 and W4 denote convolutional layer parameters, delta denotes the Swish activation function, sigma denotes the Sigmoid activation function, s_c is the scale factor of the c-th channel, U denotes the output of the embedded feature set (EFE) module, and Dropout denotes the result of the Dropout layer.
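Because the lightweight channel attention formula is reproduced only as an image above, the following PyTorch sketch assumes a squeeze-and-excitation style realization that is consistent with the symbols just defined (global average pooling over H × W, two pointwise convolutions with Swish and Sigmoid activations producing per-channel scale factors, whose re-weighted features are added back to the depthwise separable output before the final 1 × 1 convolution); the class layout, channel counts and dropout rate are illustrative assumptions, not the patented implementation:

import torch
import torch.nn as nn

class EFEModule(nn.Module):
    # Sketch of the embedded feature set (EFE) module: depthwise separable convolution,
    # followed by lightweight (SE-style) channel attention and a final 1x1 convolution.
    def __init__(self, in_ch, out_ch, dilation=1, reduction=4):
        super().__init__()
        mid = max(out_ch // reduction, 1)
        # Depthwise separable convolution with the branch-specific dilation rate.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        # Lightweight channel attention (assumed squeeze-and-excitation form):
        # global average pool -> conv + Swish -> conv + Sigmoid -> per-channel scales.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(out_ch, mid, 1)
        self.fc2 = nn.Conv2d(mid, out_ch, 1)
        self.swish = nn.SiLU()
        self.dropout = nn.Dropout2d(0.1)
        self.final = nn.Conv2d(out_ch, out_ch, 1)  # final 1x1 feature extraction

    def forward(self, x):
        sep = self.pointwise(self.depthwise(x))                  # depthwise separable conv
        s = torch.sigmoid(self.fc2(self.swish(self.fc1(self.pool(sep)))))
        u = self.dropout(sep * s + sep)                          # add channel-weighted features back
        return self.final(u)

# Example: one branch with dilation rate 2 on a 64-channel feature map.
# y = EFEModule(64, 128, dilation=2)(torch.randn(1, 64, 32, 32))  # -> shape (1, 128, 32, 32)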
The features output by the EFE modules are then fed into the extended spatial mapping and channel attention (DSMCA) module, shown in Fig. 4, where (a) represents DSM block 1, (b) represents DSM block 2 and (c) represents the entire DSMCA module. The module combines channel attention and spatial attention to capture the channel and spatial dependencies of the EFE module output and to extract aggregated information. The DSMCA module is computed as follows:
[formula reproduced as an image in the original publication]
wherein W5, W6, W7, W8, W9 and W10 are convolutional layer parameters, delta denotes the Swish activation function, sigma denotes the Sigmoid activation function, dilated (atrous) convolution is used with dilation rate r, X_in is the input to the DSMCA module, S is the output of the lightweight channel attention, DSM_B1 and DSM_B2 denote the outputs of spatial attention block 1 and spatial attention block 2, respectively, and Dropout denotes the result of the Dropout layer.
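The DSMCA formula is likewise available only as an image, so the PyTorch sketch below shows one way such a module can combine channel attention with two dilated-convolution spatial attention blocks (DSM block 1 and DSM block 2) whose maps re-weight the input, with the result added back residually. The kernel sizes, the relation between the two dilation rates and the way the two spatial maps are merged are assumptions, not the patented design:

import torch
import torch.nn as nn

class DSMCAModule(nn.Module):
    # Sketch of the extended spatial mapping and channel attention (DSMCA) module.
    def __init__(self, channels, rate=2, reduction=4):
        super().__init__()
        mid = max(channels // reduction, 1)
        # DSM block 1 and DSM block 2: dilated convolutions producing spatial attention maps.
        self.dsm_b1 = nn.Sequential(
            nn.Conv2d(channels, mid, 3, padding=rate, dilation=rate), nn.SiLU(),
            nn.Conv2d(mid, 1, 1), nn.Sigmoid())
        self.dsm_b2 = nn.Sequential(
            nn.Conv2d(channels, mid, 3, padding=2 * rate, dilation=2 * rate), nn.SiLU(),
            nn.Conv2d(mid, 1, 1), nn.Sigmoid())
        # Channel attention branch (squeeze-and-excitation style, as in the EFE sketch).
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, mid, 1)
        self.fc2 = nn.Conv2d(mid, channels, 1)
        self.swish = nn.SiLU()
        self.dropout = nn.Dropout2d(0.1)

    def forward(self, x_in):
        s = torch.sigmoid(self.fc2(self.swish(self.fc1(self.pool(x_in)))))  # channel weights
        spatial = self.dsm_b1(x_in) * self.dsm_b2(x_in)                     # merged spatial map
        return self.dropout(x_in * s * spatial + x_in)                      # residual re-weighting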
The feature maps of each branch, after processing by the EFE module and the DSMCA module, are then input into the branch layer fusion (BLF) module, which fuses the branches using an element-wise ('arithmetic addition') operation and a concatenation ('splicing') operation, as sketched below.
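In code this fusion amounts to an element-wise sum plus a channel-wise concatenation of the branch outputs; whether the summed map is concatenated alongside the individual branches, as done here, is an assumption:

import torch

def branch_layer_fusion(branches):
    # branches: list of tensors of identical shape, one per EFE/DSMCA branch
    added = torch.stack(branches, dim=0).sum(dim=0)   # element-wise ('arithmetic') addition
    return torch.cat(branches + [added], dim=1)       # channel-wise concatenation ('splicing')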
Further, the output of the BLF module and the feature maps from the 2nd, 3rd and 4th downsampling stages of the encoder are input into the multi-scale feature fusion (MSFF) module of the decoding structure, which uses the extended spatial mapping and channel attention (DSMCA) module together with a concatenation operation to realize multi-scale feature fusion.
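A sketch of this multi-scale feature fusion step in the same PyTorch style; applying one DSMCA module per skip connection and bilinearly resizing all maps to a common resolution before concatenation are assumptions:

import torch
import torch.nn.functional as F

def multi_scale_feature_fusion(blf_out, skip_features, dsmca_modules):
    # blf_out: output of the branch layer fusion module
    # skip_features: encoder feature maps from downsampling stages 2, 3 and 4
    # dsmca_modules: one DSMCA module per skip connection
    target_size = blf_out.shape[-2:]
    fused = [blf_out]
    for skip, dsmca in zip(skip_features, dsmca_modules):
        refined = dsmca(skip)                               # refine each skip connection
        fused.append(F.interpolate(refined, size=target_size,
                                   mode="bilinear", align_corners=False))
    return torch.cat(fused, dim=1)                          # multi-scale concatenation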
The output of the MSFF module then passes through two depthwise separable convolutions and one ordinary 1 × 1 convolution with a Sigmoid activation function, and the resulting feature map is upsampled to a size of 256 × 256.
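This output stage can be sketched as follows; the channel count of 128 is an illustrative assumption:

import torch.nn as nn
import torch.nn.functional as F

channels = 128  # assumed number of channels produced by the MSFF module

segmentation_head = nn.Sequential(
    # two depthwise separable convolutions
    nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
    nn.Conv2d(channels, channels, 1), nn.SiLU(),
    nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
    nn.Conv2d(channels, channels, 1), nn.SiLU(),
    # ordinary 1x1 convolution to a single-channel lesion mask, with Sigmoid activation
    nn.Conv2d(channels, 1, 1),
    nn.Sigmoid())

def predict_mask(msff_features):
    # upsample the predicted map to the 256 x 256 output resolution
    return F.interpolate(segmentation_head(msff_features), size=(256, 256),
                         mode="bilinear", align_corners=False)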
During model training, the loss function is the cross-entropy loss, the optimizer is the Adam optimizer, and the label smoothing training technique is adopted. The data are fed into the FCP-Net network for training with a batch size of 8 and an initial learning rate of 0.0001; the model that performs best on the test set is saved as the final model, and training is then finished.
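A minimal training loop consistent with these settings might look as follows; FCPNet, train_loader, val_loader, evaluate and the epoch budget are hypothetical placeholders (the full network assembly is not reproduced in this text), and softening the binary masks is one common way to realize label smoothing:

import torch

model = FCPNet()                                   # hypothetical assembled FCP-Net model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.BCELoss()                     # binary cross-entropy on the mask output
smoothing = 0.1                                    # label smoothing factor (assumed value)

best_loss = float("inf")
for epoch in range(100):                           # epoch budget assumed
    model.train()
    for images, masks in train_loader:             # batches of size 8
        targets = masks * (1.0 - smoothing) + 0.5 * smoothing   # label smoothing
        loss = criterion(model(images), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    val_loss = evaluate(model, val_loader)         # hypothetical evaluation routine
    if val_loss < best_loss:                       # keep the best-performing checkpoint
        best_loss = val_loss
        torch.save(model.state_dict(), "fcp_net_best.pt")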
The trained model is tested on the test sets of the ISIC 2016, ISIC 2017 and PH2 datasets. The evaluation indices used in the tests are: accuracy (ACC), sensitivity (SE), specificity (SP), precision (PC), Dice coefficient (F1-score), Jaccard index (JA) and Jaccard similarity (JS). Accuracy evaluates the overall segmentation performance on the lesion image. Sensitivity describes the proportion of lesion pixels that are correctly segmented. Precision represents the percentage of truly correct samples among all samples judged positive. The Dice coefficient indicates the overlap between the predicted result and the ground truth and provides a degree of similarity. Jaccard similarity evaluates the intersection between the segmentation result and the ground truth mask. These indices are defined mathematically as follows:
ACC = (TP + TN) / (TP + TN + FP + FN)
SE = TP / (TP + FN)
SP = TN / (TN + FP)
PC = TP / (TP + FP)
F1 = 2TP / (2TP + FP + FN)
JA = JS = TP / (TP + FP + FN)
wherein TP, TN, FP, FN and GT denote true positives, true negatives, false positives, false negatives and the ground truth mask, respectively.
The deep-learning-based skin lesion image segmentation method provided by the invention was compared with existing mainstream image segmentation methods; the test results of the different networks on the ISIC 2016 dataset are shown in Table 1.
[Table 1 is reproduced as an image in the original publication]
Table 1 compares the results of the different networks on the ISIC 2016 test set. As shown in Table 1, the deep-learning-based medical image segmentation method provided by the invention achieves the best results. EXB, Mahmudur and CUMED were the top three entries on the ISBI 2016 challenge test set. Notably, FCP-Net outperforms the winner of the ISBI 2016 challenge, with a 1.3% improvement in accuracy, 2.9% in F1-score, 10.6% in JA, 2.1% in sensitivity and 1.4% in specificity. The table makes clear that the proposed network achieves excellent results, improving some indices by more than 10%.
The test results of different networks on the ISIC 2017 dataset are compared in Table 2.
[Table 2 is reproduced as an image in the original publication]
Table 2 compares the results of the different networks on the ISIC 2017 test set. As shown in Table 2, the deep-learning-based medical image segmentation method provided by the invention achieves the best results. Comparing the evaluation indices of the proposed FCP-Net with the conventional Deeplabv3 and Att-Deeplabv3 algorithms shows that the proposed modules improve network performance. Compared with Deeplabv3, Jaccard similarity and accuracy improve by 1.24%, sensitivity by 8.96% and F1-score by 3.71%, while specificity decreases by 0.6%. Att-Deeplabv3 adds a channel attention mechanism on top of Deeplabv3, whereas FCP-Net adds both channel and spatial attention and integrates multi-scale information.
The test results of different networks on the PH2 dataset are compared in Table 3.
[Table 3 is reproduced as an image in the original publication]
Table 3 lists the quantitative results obtained on the PH2 dataset by several conventional algorithms, some state-of-the-art models and the proposed network. Compared with Deeplabv3, Jaccard similarity and accuracy increase by 1.98%, sensitivity by 6.35% and F1-score by 4.19%, while specificity decreases by 0.11%. Clearly, FCP-Net is superior to the other methods.
The invention provides a skin lesion image segmentation method based on deep learning that can automatically segment pigmented lesions. A new strategy is adopted in the encoding stage, comprising the embedded feature set module, the extended spatial mapping and channel attention module and the branch layer fusion module. These modules effectively extract spatial information, capture channel and spatial correlations among features, and integrate multi-scale information from different feature branches, thereby improving segmentation performance. The DSMCA module and the multi-scale feature fusion module are applied in the decoding stage, and multiple skip connections are established.
The above description of the present invention is intended to be illustrative. Various modifications, additions and substitutions for the specific embodiments described may be made by those skilled in the art without departing from the scope of the invention as defined in the accompanying claims.

Claims (6)

1. A skin lesion image segmentation method based on deep learning is characterized by comprising the following steps:
step one, acquiring an original skin lesion image data set;
secondly, preprocessing the acquired skin lesion image data set;
thirdly, carrying out data set division on the preprocessed skin lesion image data set, and dividing the data set into a training set, a verification set and a test set;
inputting the preprocessed training set and validation set image data into an FCP-Net network model for training to obtain a segmentation model of the skin lesion image, and passing each image of the test set image data into the trained FCP-Net network model for prediction to obtain a segmentation result; wherein the FCP-Net network model comprises an encoding structure, a decoding structure and a feature layer fusion module, the encoding structure comprising an embedded feature set module, an extended spatial mapping and channel attention module and a branch layer fusion module;
the embedded feature set module extracts features using depthwise separable convolution, then feeds the features into a lightweight channel attention module for channel feature extraction, adds the extracted channel feature information, and finally extracts features with a 1 × 1 convolution, wherein the lightweight channel attention module is computed as follows:
[formula reproduced as an image in the original publication]
wherein sep denotes the result of the depthwise separable convolution, H and W are the height and width of the feature map, W1, W2, W3 and W4 denote convolutional layer parameters, delta denotes the Swish activation function, sigma denotes the Sigmoid activation function, s_i is the scale factor of the i-th channel, U denotes the output of the embedded feature set module, and Dropout denotes the result of a Dropout layer;
the extended spatial mapping and channel attention module combines channel attention and spatial attention to capture the channel and spatial dependencies of the embedded feature set module and extracts the aggregated information, wherein the extended spatial mapping and channel attention module is computed as follows:
[formula reproduced as an image in the original publication]
wherein W5, W6, W7, W8, W9 and W10 are convolutional layer parameters, delta denotes the Swish activation function, sigma denotes the Sigmoid activation function, dilated (atrous) convolution is used with dilation rate r, X_in is the input to the extended spatial mapping and channel attention module, S is the output of the lightweight channel attention, DSM_B1 and DSM_B2 denote the outputs of spatial attention block 1 and spatial attention block 2, respectively, and Dropout denotes the result of the Dropout layer;
the branch layer fusion module fuses the branches output by the embedded feature set module using an element-wise ('arithmetic addition') operation and a concatenation ('stitching') operation.
2. The method for skin lesion image segmentation based on deep learning of claim 1, wherein in step one, the original skin lesion image dataset comprises skin lesion image data obtained from the ISBI 2016 and ISBI 2017 challenges and the PH2 dataset.
3. The method for skin lesion image segmentation based on deep learning of claim 2, wherein in the second step, the preprocessing comprises the following steps:
adjusting the images in the skin lesion image dataset to a uniform size; flipping each skin lesion image sample in the ISIC 2016 and ISIC 2017 datasets, first horizontally with a probability of 0.5 and then vertically with the same probability; applying a rotation operation to rotate the image by an angle theta, where theta is randomly sampled from a Gaussian distribution restricted to the range from -40 to 40 degrees; and then rescaling the images by a Gaussian random factor in the range [0.7, 1.3].
4. The method for skin lesion image segmentation based on deep learning of claim 1, wherein in step three, the skin lesion image dataset is divided into a training set, a validation set and a test set in a ratio of 7:1:2.
5. The method for skin lesion image segmentation based on deep learning of claim 1, wherein in step four, the Xception structure is selected in the downsampling stage of the FCP-Net network model coding structure.
6. The method of claim 1, wherein in step four, the multi-scale fusion module implements multi-scale feature fusion using an extended spatial mapping and channel attention module and a concatenation operation.
CN202110769832.XA 2021-07-08 2021-07-08 Skin lesion image segmentation method based on deep learning Active CN113256641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110769832.XA CN113256641B (en) 2021-07-08 2021-07-08 Skin lesion image segmentation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110769832.XA CN113256641B (en) 2021-07-08 2021-07-08 Skin lesion image segmentation method based on deep learning

Publications (2)

Publication Number Publication Date
CN113256641A CN113256641A (en) 2021-08-13
CN113256641B true CN113256641B (en) 2021-10-01

Family

ID=77190855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110769832.XA Active CN113256641B (en) 2021-07-08 2021-07-08 Skin lesion image segmentation method based on deep learning

Country Status (1)

Country Link
CN (1) CN113256641B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688931B (en) * 2021-09-01 2024-03-29 什维新智医疗科技(上海)有限公司 Deep learning-based ultrasonic image screening method and device
CN114419060B (en) * 2021-12-01 2024-05-31 华南理工大学 Method and system for dividing skin mirror image
CN114255350B (en) * 2021-12-23 2023-08-04 四川大学 Method and system for measuring thickness of soft and hard tissues of palate
CN114693703A (en) * 2022-03-31 2022-07-01 卡奥斯工业智能研究院(青岛)有限公司 Skin mirror image segmentation model training and skin mirror image recognition method and device
CN115311230A (en) * 2022-08-08 2022-11-08 吉林建筑大学 Skin lesion image segmentation method based on deep learning and feature fusion
CN116894820B (en) * 2023-07-13 2024-04-19 国药(武汉)精准医疗科技有限公司 Pigment skin disease classification detection method, device, equipment and storage medium
CN117078697B (en) * 2023-08-21 2024-04-09 南京航空航天大学 Fundus disease seed detection method based on cascade model fusion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN112102321A (en) * 2020-08-07 2020-12-18 深圳大学 Focal image segmentation method and system based on deep convolutional neural network
CN112446890A (en) * 2020-10-14 2021-03-05 浙江工业大学 Melanoma segmentation method based on void convolution and multi-scale fusion
CN112927243A (en) * 2021-03-31 2021-06-08 上海大学 Micro-hemorrhage focus segmentation method based on convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190313963A1 (en) * 2018-04-17 2019-10-17 VideaHealth, Inc. Dental Image Feature Detection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN112102321A (en) * 2020-08-07 2020-12-18 深圳大学 Focal image segmentation method and system based on deep convolutional neural network
CN112446890A (en) * 2020-10-14 2021-03-05 浙江工业大学 Melanoma segmentation method based on void convolution and multi-scale fusion
CN112927243A (en) * 2021-03-31 2021-06-08 上海大学 Micro-hemorrhage focus segmentation method based on convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automated skin lesion segmentation using attention-based deep convolutional neural network; Ridhi Arora et al.; Biomedical Signal Processing and Control; 2021-03-31; 1-10 *
基于深度可分离卷积网络的皮肤镜图像病灶分割方法 (Lesion segmentation method for dermoscopy images based on depthwise separable convolutional networks); 崔文成 (Cui Wencheng) et al.; 智能科学与技术学报 (Chinese Journal of Intelligent Science and Technology); 2020-12-31; 385-393 *

Also Published As

Publication number Publication date
CN113256641A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN113256641B (en) Skin lesion image segmentation method based on deep learning
Wang et al. Identification of melanoma from hyperspectral pathology image using 3D convolutional networks
CN110889853B (en) Tumor segmentation method based on residual error-attention deep neural network
CN112150428B (en) Medical image segmentation method based on deep learning
Chan et al. Texture-map-based branch-collaborative network for oral cancer detection
CN110889852A (en) Liver segmentation method based on residual error-attention deep neural network
CN112070772A (en) Blood leukocyte image segmentation method based on UNet + + and ResNet
CN112446891B (en) Medical image segmentation method based on U-Net network brain glioma
CN110717907A (en) Intelligent hand tumor detection method based on deep learning
CN105657580B (en) A kind of capsule endoscope video abstraction generating method
CN113674253A (en) Rectal cancer CT image automatic segmentation method based on U-transducer
CN109492668B (en) MRI (magnetic resonance imaging) different-phase multimode image characterization method based on multi-channel convolutional neural network
CN113034505B (en) Glandular cell image segmentation method and glandular cell image segmentation device based on edge perception network
CN112862805B (en) Automatic auditory neuroma image segmentation method and system
CN112884788B (en) Cup optic disk segmentation method and imaging method based on rich context network
CN112381846A (en) Ultrasonic thyroid nodule segmentation method based on asymmetric network
CN116579982A (en) Pneumonia CT image segmentation method, device and equipment
CN113781488A (en) Tongue picture image segmentation method, apparatus and medium
CN115035127A (en) Retinal vessel segmentation method based on generative confrontation network
CN116912253B (en) Lung cancer pathological image classification method based on multi-scale mixed neural network
CN111986216B (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
Dai et al. A weakly supervised deep generative model for complex image restoration and style transformation
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN114926486B (en) Thyroid ultrasound image intelligent segmentation method based on multi-level improvement
CN114882282A (en) Neural network prediction method for colorectal cancer treatment effect based on MRI and CT images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant