CN113674288A - Automatic segmentation method for non-small cell lung cancer digital pathological image tissues - Google Patents
- Publication number: CN113674288A (application CN202110754856.8A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11 — Region-based segmentation
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Combinations of networks
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T2207/30061 — Biomedical image processing: Lung
- G06T2207/30096 — Biomedical image processing: Tumor; Lesion
Abstract
The invention discloses a method for automatically segmenting tissues in digital pathological images of non-small cell lung cancer, which comprises the following steps: dividing the digital pathological image of non-small cell lung cancer into a number of image blocks and normalizing their pixel values; labeling each image block with an image-level tissue label vector; training a multi-label classification CNN under this supervision to generate virtual masks; constructing a CAMD module that, in each training iteration of the multi-label classification CNN, either adds attention to the feature map with a set probability or zeroes the high-response regions of the feature map with a set probability; inputting the image blocks into the trained multi-label classification CNN to generate several groups of virtual masks; training a fully supervised segmentation network on these groups of virtual masks and inputting the image blocks into the trained network to obtain segmentation results; and stitching the segmentation results of the individual image blocks to obtain the segmentation result of the whole image. The method achieves high accuracy in segmenting digital pathological images of non-small cell lung cancer.
Description
Technical Field
The invention relates to the technical field of pathological image processing, and in particular to an automatic tissue segmentation method for digital pathological images of non-small cell lung cancer.
Background
Current image segmentation methods lack research targeted at the processing of digital pathological images of non-small cell lung cancer. For other diseases such as colorectal cancer, existing tissue segmentation methods use a deep learning model to perform single-label multi-class classification on pathological image patches and stitch the classified patches with a sliding window to obtain a tissue segmentation result, so that all pixels in one patch belong to the same class. The result of this approach is not pixel-level but only patch-level, and is therefore relatively coarse. Other existing methods train on image-level labeling data to produce pixel-level tissue segmentation, but they have been applied to digital pathological sections of healthy subjects, their applicability to digital pathological images of non-small cell lung cancer is unclear, and they do not effectively address the problem in weakly supervised semantic segmentation that the salient regions of class activation maps are overly concentrated.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a method for automatically segmenting tissues of a non-small cell lung cancer digital pathological image.
The second objective of the invention is to provide an automatic segmentation system for non-small cell lung cancer digital pathological image tissues.
A third object of the present invention is to provide a storage medium.
It is a fourth object of the invention to provide a computing device.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a method for automatically segmenting tissues of a non-small cell lung cancer digital pathological image, which comprises the following steps:
dividing the digital pathological image of non-small cell lung cancer into a plurality of image blocks and performing normalization preprocessing on each image block, normalizing the pixel values;
labeling the image blocks in an image-level manner, wherein each image block is assigned a tissue label vector;
constructing a multi-label classification CNN network by taking Resnet38 as a network framework, and training the multi-label classification CNN network to generate a virtual mask based on image block supervision of image-level labels;
constructing a CAMD module, wherein the CAMD module adds attention to the feature map according to a set probability in each iteration process of the multi-label classification CNN network, or zeros a region with a high response value of the feature map according to the set probability;
inputting the image blocks into a trained multi-label classification CNN network to generate a plurality of groups of virtual masks;
constructing a full-supervision segmentation network model, training the full-supervision segmentation network based on a plurality of groups of virtual masks, and inputting image blocks into the trained full-supervision segmentation network to obtain a segmentation result;
and processing the image with a sliding window, stitching the segmentation results of the individual image blocks to obtain the segmentation result of the whole image.
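The stitching step above can be sketched as follows. This is a minimal NumPy illustration (not part of the original disclosure) that pastes non-overlapping per-patch class maps back onto a whole-slide canvas; the patent does not specify how overlapping windows are resolved, so none are assumed here.

```python
import numpy as np

def stitch_patches(patch_masks, coords, slide_shape):
    """Paste per-patch segmentation masks back onto a whole-slide canvas.

    patch_masks: list of (h, w) integer class maps, one per image block
    coords:      list of (row, col) top-left positions of each block
    slide_shape: (H, W) of the full slide-level segmentation map
    """
    full_mask = np.zeros(slide_shape, dtype=np.int64)
    for mask, (r, c) in zip(patch_masks, coords):
        h, w = mask.shape
        full_mask[r:r + h, c:c + w] = mask
    return full_mask
```

With overlapping windows one would instead accumulate per-class scores and take an argmax, but the non-overlapping case suffices to show the bookkeeping.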
As a preferred technical solution, the tissue label vector is a four-element vector whose entries correspond to tumor tissue, necrotic tissue, lymphoid tissue, and interstitial tissue respectively; a value of 0 in the tissue label vector indicates that the corresponding tissue is absent from the image block, and a value of 1 indicates that it is present.
As a preferred technical solution, during training of the multi-label classification CNN, the class activation maps of the next iteration are generated from the network weights of the previous iteration and superimposed pixel-wise to obtain an attention map, and the partial region with the largest pixel values in the attention map is taken as the salient region;
the attention map is input into a sigmoid function, mapping pixel values to between 0 and 1, to obtain the important map; setting the salient region of the attention map to 0 and the non-salient region to 1 yields the drop mask;
the important map and the drop mask are applied to the output of the ReLU activation by element-wise multiplication with the feature map, and the resulting feature map continues to propagate forward.
As a preferred technical solution, the specific steps of generating the class activation map are as follows:
given an input image, let $f_k(x, y)$ denote the activation value at position $(x, y)$ of the $k$-th channel of the feature map output by the last convolutional layer;
global average pooling is performed on channel $k$, giving $F_k = \sum_{x,y} f_k(x, y)$;
for a given class $C$, the input to the Softmax layer is $S_C = \sum_k w_k^C F_k$;
the Softmax output for class $C$ is $P_C = \exp(S_C) / \sum_{C'} \exp(S_{C'})$;
where $w_k^C$ is the weight of class $C$ corresponding to channel $k$.
As a preferred technical solution, the fully supervised segmentation network is trained on multiple groups of virtual masks generated from feature maps at different levels of the CNN, specifically the b4_3, b5_2, and b7 layers;
the virtual masks generated from these feature maps are used as multiple supervision signals for the fully supervised segmentation network; the segmentation loss is computed against each group of masks, giving target losses loss1, loss2, and loss3, and the total target loss is the weighted sum of loss1, loss2, and loss3.
As a preferred technical solution, the image blocks are input into the trained fully supervised segmentation network to obtain the segmentation result; the specific steps are as follows:
the class activation map $M_k$ is up-sampled to the original size and normalized, after which the value of each pixel in $M_k$ represents the probability that the pixel belongs to tissue $k$;
a densely connected conditional random field is used to post-process $M_k$, optimizing $M_k$ with the features of the image itself; the class activation maps are finally fused so that the classification result of each pixel is the tissue $k$ with the maximum probability, and when that maximum probability is less than a set threshold $T_b$ the pixel is classified as background;
the image block is converted to a gray-scale map, and the mask of the blank area is $M_{blank}(x, y) = \mathbb{1}[\,\mathrm{gray}(x, y) > 200\,]$;
if the maximum of the activation values in the CAM is still less than the set threshold $\theta_{other}$, the mask of the other regions not belonging to tumor, necrosis, lymph, or fibrotic stroma is $M_{other}(x, y) = \mathbb{1}[\,\max_k M_k(x, y) < \theta_{other}\,]$;
where $M_{tumor}$, $M_{necrosis}$, $M_{lymph}$, $M_{stroma}$ are the masks of the tumor, necrosis, lymph, and interstitial regions respectively;
where $M_{tumor}(x,y)$, $M_{necrosis}(x,y)$, $M_{lymph}(x,y)$, $M_{stroma}(x,y)$, $M_{other}(x,y)$ are the five activation values at position $(x, y)$, and the function Argmax() returns the index corresponding to the maximum element of its input.
As a preferred technical solution, the class activation map $M_k$ is normalized with the max-min method: $\hat{M}_k(x, y) = \dfrac{M_k(x, y) - \min M_k}{\max M_k - \min M_k}$.
In order to achieve the second object, the invention adopts the following technical scheme:
a non-small cell lung cancer digital pathological image tissue automatic segmentation system comprises:
the system comprises an image dividing module, a preprocessing module, a labeling module, a multi-label classification CNN network construction module, a multi-label classification CNN network training module, a CAMD construction module, a class activation graph generation module, a full supervision segmentation network model construction module, a network training segmentation module and a splicing module;
the image dividing module is used for dividing the non-small cell lung cancer digital pathological image into a plurality of image blocks;
the preprocessing module is used for carrying out normalization preprocessing on each image block and normalizing the pixel values;
the labeling module is used for labeling the image blocks in an image-level manner, assigning each image block a tissue label vector;
the multi-label classification CNN network construction module is used for constructing a multi-label classification CNN network by taking Resnet38 as a network framework;
the multi-label classification CNN network training module is used for training a multi-label classification CNN network to generate a virtual mask based on image block supervision of image-level labeling;
the CAMD construction module is used for constructing a CAMD module, and the CAMD module adds attention to the feature map according to a set probability in each iteration process of the multi-label classification CNN network or zeros a region with a high response value of the feature map according to the set probability;
the class activation graph generation module is used for inputting the image blocks into the trained multi-label classification CNN network to generate a class activation graph;
the all-supervised segmented network model building module is used for building an all-supervised segmented network model;
the network training and segmenting module is used for training a fully supervised segmentation network based on a virtual mask and inputting image blocks into the trained fully supervised segmentation network to obtain segmentation results;
and the splicing module is used for performing sliding processing by adopting a sliding window method, splicing the segmentation result of each image block and obtaining the segmentation result of the whole image.
In order to achieve the third object, the invention adopts the following technical scheme:
a storage medium stores a program which, when executed by a processor, implements the above-described non-small cell lung cancer digital pathology image tissue automatic segmentation method.
In order to achieve the fourth object, the invention adopts the following technical scheme:
a computing device comprises a processor and a memory for storing a program executable by the processor, wherein the processor executes the program stored in the memory to realize the non-small cell lung cancer digital pathological image tissue automatic segmentation method.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The method adopts weakly supervised semantic segmentation: it processes the large digital slide image with a sliding window, trains a deep classification model to perform multi-label classification on the resulting image blocks, generates class activation maps with the trained classification model, and fuses the class activation maps of the various classes to output the final tissue segmentation result. It achieves high accuracy in segmenting digital pathological images of non-small cell lung cancer.
(2) The invention generates pixel-level tissue segmentation results from image-level data labels. Image-level labeling is far simpler than pixel-level labeling, makes the tissue segmentation results more objective, enables further quantification of digital pathological images, and helps promote research and clinical application of computational pathology.
(3) The invention adopts a CAMD (Class Activation Mapping Drop) module to solve the problem that the salient regions of class activation maps are overly concentrated in weakly supervised semantic segmentation, for which current implementations provide no solution; compared with patch-level segmentation methods, the invention achieves pixel-level segmentation, making the tissue segmentation results more accurate.
(4) The invention adopts a multivariate supervision scheme, which addresses the problem of noisy labels in the virtual masks, reduces the noise rate in the virtual labels, and yields a more refined segmentation result.
Drawings
FIG. 1 is a schematic flow chart of a method for automatically segmenting non-small cell lung cancer digital pathological image tissues according to the present invention;
FIG. 2 is a schematic diagram of image block labeling according to the present invention;
FIG. 3 is a schematic diagram of a tissue segmentation model training process according to the present invention;
FIG. 4 is a schematic diagram of a CAMD module of the present invention;
FIG. 5 is a schematic illustration of the multivariate supervision of the present invention;
FIG. 6 is a flow chart of the sliding window method according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
As shown in fig. 1, the present embodiment provides a method for automatically segmenting tissues in digital pathological images of non-small cell lung cancer, which consists of two main steps. Step one: pathological images with image-level labels are used for supervised training to generate virtual masks, and a CAMD module is proposed to solve the problem that the salient regions of the class activation map are overly concentrated. Step two: a fully supervised segmentation network is trained with the virtual masks, and a multivariate supervised training method is proposed to solve the problem of noisy labels in the virtual masks.
Step one: virtual mask generation.
Data introduction: the digital pathological image of non-small cell lung cancer is divided into a number of image blocks, and each image block is normalized so that its pixel values lie between 0 and 1. As shown in fig. 2, the data set is labeled in an image-level manner: each image block (patch) corresponds to a four-element tissue label vector whose entries correspond to tumor tissue, necrotic tissue, lymphoid tissue, and interstitial tissue. A value of 0 indicates that the tissue is absent from the patch, and a value of 1 indicates that it is present.
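The preprocessing and labeling described above can be sketched as follows. This is an illustrative NumPy version (the function names and non-overlapping tiling are assumptions, not from the patent): the slide is cut into blocks, pixel values are scaled into [0, 1], and each block receives a four-element presence vector.

```python
import numpy as np

def tile_and_normalize(slide, patch_size):
    """Cut a slide array (H, W, 3, uint8) into non-overlapping patches
    and scale pixel values to [0, 1], as in the preprocessing step."""
    h, w = slide.shape[:2]
    patches, coords = [], []
    for r in range(0, h - patch_size + 1, patch_size):
        for c in range(0, w - patch_size + 1, patch_size):
            patch = slide[r:r + patch_size, c:c + patch_size].astype(np.float32) / 255.0
            patches.append(patch)
            coords.append((r, c))
    return patches, coords

def make_label(tumor, necrosis, lymph, stroma):
    """Image-level tissue label vector: [tumor, necrosis, lymph, stroma],
    1 = tissue present in the patch, 0 = absent."""
    return np.array([tumor, necrosis, lymph, stroma], dtype=np.float32)
```

The label vector is what supervises the multi-label classification CNN; no pixel-level annotation is needed at this stage.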
Virtual mask (pseudo-mask) generation: as shown in fig. 3, the pseudo-masks are obtained by training a tissue classification network model, with the following specific steps:
a multi-label classification CNN with Resnet38 as the backbone is trained to perform multi-label classification on the image patches; the loss function used in training is cross entropy, and network training stops when the training loss converges;
using the trained multi-label classification CNN, 4 class activation maps (CAMs) are generated for each image patch. For each input patch the trained network outputs the probability $P_k$ ($1 \le k \le 4$) of each of the 4 classes; when $P_k$ is greater than a set threshold $T_k$ ($1 \le k \le 4$), the corresponding class activation map $M_k$ is retained rather than zeroed;
in this embodiment, the final feature map of Resnet38 has size 4096 × 28 × 28. Global pooling of this feature map gives a 4096 × 1 feature vector, which enters a fully connected layer with 4096 input channels and 4 output channels. Taking the weights of this fully connected layer and using them to form the corresponding weighted sum over the final feature map compresses the 4096 × 28 × 28 feature map to 4 × 28 × 28, i.e. four channels in total, where each channel corresponds to the class activation map of one tissue class;
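The weighted-sum compression of the feature map into per-class CAMs can be sketched in a few lines. This is a minimal NumPy illustration of $M_C(x,y) = \sum_k w_k^C f_k(x,y)$ with small toy shapes (the real shapes are 4096 channels and 28 × 28 spatial size):

```python
import numpy as np

def class_activation_maps(features, fc_weights):
    """Compress a (K, H, W) final feature map into per-class CAMs using the
    fully connected layer's weights of shape (num_classes, K):
    M_c(x, y) = sum_k w_k^c * f_k(x, y)."""
    K, H, W = features.shape
    cams = fc_weights @ features.reshape(K, H * W)   # (C, H*W)
    return cams.reshape(fc_weights.shape[0], H, W)   # (C, H, W)
```

Each output channel is the class activation map of one tissue class; up-sampling it to the patch size gives the localization map used to build the pseudo-mask.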
To address the problem that the salient regions in the multi-label classification network are too concentrated, this embodiment adopts the CAMD module. Specifically, during each training iteration, attention is added to the feature map with a certain probability (the important map is applied to the feature map point-wise), or the salient region of the feature map (i.e. the region with high response values) is zeroed with a certain probability, preventing the network from attending only to the most salient region when making a decision.
As shown in fig. 4, the self-attention map of this embodiment is generated as follows: the class activation maps of the next iteration are generated with the network weights of the previous iteration and superimposed pixel-wise to obtain the attention map. The top 10% of pixels with the largest values in the attention map are taken as the salient region.
Here the symbol D denotes the Dropout operation, S the Sigmoid function, R a random selection by probability, and M element-wise multiplication;
The important map is generated as follows: the attention map is input into a sigmoid function, mapping pixel values to between 0 and 1.
The drop mask is generated as follows: the salient region of the attention map is set to 0 and the non-salient region to 1.
Attention operation: the important map is point-multiplied with the feature map.
Dropout operation: the drop mask is point-multiplied with the feature map; applying the drop mask to the network's feature map constitutes the drop operation.
When only the drop mask is applied, the multi-label classification CNN cannot observe the discriminative region of the target object, which reduces its classification performance and directly harms its target localization ability. When only the important map is applied, the network focuses largely on the discriminative regions of the target object, which benefits the classification task but overfits the segmentation task. Combining the important map and the drop mask lets the CNN obtain both optimal classification performance and optimal localization of the target object;
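One CAMD iteration can be sketched as below. This is an illustrative NumPy version under stated assumptions: `drop_prob` and the 10% salient fraction are taken from the text, but the sigmoid applied to the raw attention values and the duck-typed `rng` argument are implementation choices of this sketch, not of the patent.

```python
import numpy as np

def camd_step(feature_map, attention_map, drop_prob=0.5, top_frac=0.10, rng=None):
    """One CAMD iteration: with probability drop_prob, zero the top `top_frac`
    most-activated region (drop mask); otherwise reweight the feature map with
    a sigmoid 'important map'. Shapes: feature_map (K, H, W), attention_map (H, W)."""
    rng = rng or np.random.default_rng()
    if rng.random() < drop_prob:
        # drop mask: 0 over the salient region, 1 elsewhere
        thresh = np.quantile(attention_map, 1.0 - top_frac)
        mask = (attention_map < thresh).astype(feature_map.dtype)
    else:
        # important map: attention values squashed into (0, 1)
        mask = 1.0 / (1.0 + np.exp(-attention_map))
    return feature_map * mask  # point-wise, broadcast over channels
```

Randomly alternating the two branches is what keeps the network from relying only on the most salient region while preserving its classification accuracy.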
Virtual mask generation is based on the class activation map: the CAM of a particular class indicates the discriminative image regions the CNN uses to identify objects of that class, and the magnitude of the pixel values in the CAM reflects the importance of the corresponding locations in the original image for classification. Before the final output layer of the network, the last feature map is globally pooled and fed into a fully connected layer to obtain the prediction probabilities of all classes. With this simple structure, the weights of the fully connected layer can be carried back to the feature map before the global average pooling layer and used for a weighted accumulation, so the contribution of the pixel at each position of the last feature map to the classification result can be computed.
Global average pooling outputs the spatial average of each feature channel of the last convolutional layer, and the final network output is a weighted sum of these values; likewise, the weighted sum of the last convolutional layer's feature maps yields the CAM.
This embodiment describes the CAM generation process concretely. Given an input image, let $f_k(x, y)$ be the activation value at $(x, y)$ of the $k$-th channel of the feature map output by the last convolutional layer. The result of global average pooling for channel $k$ is $F_k = \sum_{x,y} f_k(x, y)$. Thus, for a given class $C$, the input of the Softmax layer is $S_C = \sum_k w_k^C F_k$, where $w_k^C$ is the weight of class $C$ corresponding to channel $k$ and essentially describes the importance of $F_k$ for class $C$. The Softmax output for class $C$ is $P_C = \exp(S_C) / \sum_{C'} \exp(S_{C'})$. Substituting $F_k$ into $S_C$ gives $S_C = \sum_{x,y} \sum_k w_k^C f_k(x, y)$.
Defining the class activation map of class $C$ as $M_C(x, y) = \sum_k w_k^C f_k(x, y)$, it follows that $S_C = \sum_{x,y} M_C(x, y)$; hence $M_C(x, y)$ directly indicates the importance of the activation value at spatial position $(x, y)$ for classifying the input image into class $C$. Finally, by simply up-sampling the CAM to the size of the original input image, the image regions most relevant to a particular class can be identified.
Step two (fully supervised network training):
as shown in fig. 5, the multiple virtual masks in the multivariate supervision method are generated from feature maps at different levels of the CNN, namely the b4_3, b5_2, and b7 layers;
The CAM-based weakly supervised segmentation method essentially segments by classification, so its results are coarser than those of a fully supervised segmentation method. This embodiment therefore uses the virtual masks generated by the weakly supervised method to train a fully supervised segmentation network, which can learn a more detailed segmentation through training and output results directly end to end. Because the virtual masks of this embodiment are generated by a weakly supervised method, they contain many incorrectly labeled noisy pixels, which would strongly interfere with training the fully supervised network. This embodiment therefore applies multiple groups of CAM-based virtual masks simultaneously to the training to reduce the influence of noisy labels: three groups of virtual masks are generated from three different convolutional layers of the CNN and used as three supervision signals for the fully supervised segmentation network; the segmentation loss is computed against each of the three groups, giving target losses loss1, loss2, and loss3, and the total target loss is the weighted sum of loss1, loss2, and loss3.
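The multivariate supervision objective can be sketched as a weighted sum of per-mask losses. This illustrative NumPy version (names, the pixelwise negative log-likelihood form, and the uniform default weights are assumptions of this sketch) computes one loss per pseudo-mask group and combines them:

```python
import numpy as np

def multi_mask_loss(pred_logprobs, masks, weights=(1.0, 1.0, 1.0)):
    """pred_logprobs: (C, H, W) log-probabilities from the segmentation net.
    masks: three (H, W) integer pseudo-masks (e.g. from the b4_3, b5_2, b7
    feature levels). Returns the weighted sum of per-mask pixelwise NLL losses."""
    losses = []
    for mask in masks:
        # negative log-likelihood of each pixel's pseudo-label, averaged
        nll = -np.take_along_axis(pred_logprobs, mask[None], axis=0).mean()
        losses.append(nll)
    return sum(w * l for w, l in zip(weights, losses))
```

Because the three pseudo-masks disagree exactly where they are noisy, averaging their losses dampens the gradient contribution of incorrectly labeled pixels.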
A series of post-processing operations are required to obtain the segmentation result from the CAM. The class activation map M_k is upsampled to the original image size, and the max-min method is used to normalize M_k; at this point the value of each pixel in M_k represents the probability that the pixel belongs to tissue k. A densely connected conditional random field is then used to post-process M_k, optimizing M_k by combining the features of the image itself. Finally the class activation maps M_k are fused: the classification result of each pixel is the tissue k corresponding to the maximum probability, and when the maximum probability is less than a set threshold T_b, the pixel is classified as background;
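The upsampling and max-min normalization steps can be sketched as follows. Nearest-neighbor upsampling is used here purely for brevity (the patent does not name an interpolation method), and the dense-CRF refinement step is omitted:

```python
import numpy as np

def upsample_nearest(cam, out_h, out_w):
    """Resize a 2-D CAM to the original image size.
    Nearest-neighbor interpolation is an assumption for this sketch."""
    h, w = cam.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return cam[np.ix_(rows, cols)]

def max_min_normalize(cam, eps=1e-8):
    """Max-min normalization: map the activation values into [0, 1]
    so each pixel can be read as a tissue probability."""
    lo, hi = cam.min(), cam.max()
    return (cam - lo) / (hi - lo + eps)
```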
the CAM is normalized by the max-min method, so that the activation values of the CAM lie between 0 and 1; the normalized CAM is as follows:
The pathological image contains four tissues, namely tumor, necrosis, lymph and fibrosis, and also contains background regions such as blank areas, macrophages and bleeding areas. This embodiment simply converts the patch into a grayscale map and then regards areas with a grayscale value greater than 200 as blank areas, i.e. the mask of the blank area is:
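The blank-area rule above can be sketched as follows. The RGB-to-grayscale weights are an assumption (the patent only says the patch is converted to grayscale); the threshold 200 is from the text:

```python
import numpy as np

def blank_mask(patch_rgb, threshold=200):
    """Blank-area mask: pixels whose grayscale value exceeds the threshold.
    patch_rgb: (H, W, 3) uint8 image block."""
    # ITU-R BT.601 luma weights -- an assumed conversion for this sketch.
    gray = patch_rgb.astype(np.float64) @ [0.299, 0.587, 0.114]
    return gray > threshold
```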
If the maximum of the activation values in the CAM is still less than the set threshold θ_other, the region is regarded as not belonging to any of the tumor, necrosis, lymph or fibrotic stroma classes, i.e. the mask of the other regions is:
The final segmentation result is determined as follows: for the pixel at position (x, y) in the original image, the classification result is:
wherein the given symbols represent a total of five activation values at position (x, y), and the function Argmax() returns the index corresponding to the maximum element of the input.
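The fusion rule above (pick the tissue of maximum probability, fall back to "other" when every activation is below θ_other) can be sketched as follows; the threshold value used here is illustrative, not specified by the patent:

```python
import numpy as np

def fuse_maps(cams, theta_other=0.3):
    """cams: (4, H, W) normalized activation maps for tumor, necrosis,
    lymph and stroma. Returns per-pixel labels 0-3, or 4 ('other')
    wherever every activation is below theta_other (assumed value)."""
    labels = np.argmax(cams, axis=0)
    labels[np.max(cams, axis=0) < theta_other] = 4
    return labels
```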
The generated virtual masks are used to train the fully supervised segmentation network; DeepLab V3+ is adopted in this embodiment;
and the test data are input into the trained fully supervised segmentation network, which generates a final segmentation result for each patch.
As shown in fig. 6, the sliding window method is used for sliding processing, and the segmentation results of the patches are stitched to obtain the segmentation result of the whole-slide digital image.
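The stitching step can be sketched as follows, under the simplifying assumption of non-overlapping windows on a regular grid (the patent does not state the stride or overlap):

```python
import numpy as np

def stitch_patches(patch_labels, grid_h, grid_w):
    """Stitch per-patch label maps (a row-major list of (P, P) arrays)
    into the whole-slide segmentation. Non-overlapping windows assumed."""
    rows = [np.hstack(patch_labels[r * grid_w:(r + 1) * grid_w])
            for r in range(grid_h)]
    return np.vstack(rows)
```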
Example 2
The embodiment provides a non-small cell lung cancer digital pathological image tissue automatic segmentation system, which comprises: the system comprises an image dividing module, a preprocessing module, a labeling module, a multi-label classification CNN network construction module, a multi-label classification CNN network training module, a CAMD construction module, a class activation graph generation module, a full supervision segmentation network model construction module, a network training segmentation module and a splicing module;
the image dividing module is used for dividing the non-small cell lung cancer digital pathological image into a plurality of image blocks; the preprocessing module is used for performing normalization preprocessing on each image block and normalizing the pixel values; the labeling module is used for labeling the image blocks in an image-level labeling mode, the labels of each image block being organized into a labeling vector; the multi-label classification CNN network construction module is used for constructing a multi-label classification CNN network with Resnet38 as the network framework; the multi-label classification CNN network training module is used for training the multi-label classification CNN network, supervised by the image-level labels of the image blocks, to generate virtual masks; the CAMD construction module is used for constructing a CAMD module, which in each iteration of the multi-label classification CNN network either adds attention to the feature map according to a set probability or zeros the regions of the feature map with high response values according to the set probability; the class activation map generation module is used for inputting the image blocks into the trained multi-label classification CNN network to generate class activation maps; the fully supervised segmentation network model construction module is used for constructing a fully supervised segmentation network model; the network training and segmentation module is used for training the fully supervised segmentation network based on the virtual masks and inputting the image blocks into the trained fully supervised segmentation network to obtain segmentation results; and the stitching module is used for performing sliding processing by the sliding window method and stitching the segmentation result of each image block to obtain the segmentation result of the whole image.
Example 3
The present embodiment provides a storage medium, which may be various storage media capable of storing program codes, such as ROM, RAM, magnetic disk, optical disk, etc., and the storage medium stores one or more programs, and when the programs are executed by a processor, the method for automatically segmenting the non-small cell lung cancer digital pathological image tissue according to embodiment 1 is implemented.
Example 4
The embodiment provides a computing device, which may be a desktop computer, a notebook computer, a smart phone, a PDA handheld terminal, a tablet computer, or other terminal devices with a display function, where the computing device includes a processor and a memory, where the memory stores one or more programs, and when the processor executes the programs stored in the memory, the method for automatically segmenting the non-small cell lung cancer digital pathological image tissue according to embodiment 1 is implemented.
A processor may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Programmable Logic Devices (PLDs), field-programmable gate arrays (FPGAs), controllers, micro-controllers, electronic devices, as well as other electronic units designed to perform the functions described herein, or a combination thereof.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
Claims (10)
1. A method for automatically segmenting tissues of a non-small cell lung cancer digital pathological image is characterized by comprising the following steps:
dividing the non-small cell lung cancer digital pathological image into a plurality of image blocks, performing normalization preprocessing on each image block, and normalizing the pixel values;
labeling the image blocks in an image-level labeling mode, wherein the labels of each image block are organized into a labeling vector;
constructing a multi-label classification CNN network by taking Resnet38 as a network framework, and training the multi-label classification CNN network to generate a virtual mask based on image block supervision of image-level labels;
constructing a CAMD module, wherein the CAMD module adds attention to the feature map according to a set probability in each iteration process of the multi-label classification CNN network, or zeros a region with a high response value of the feature map according to the set probability;
inputting the image blocks into a trained multi-label classification CNN network to generate a plurality of groups of virtual masks;
constructing a full-supervision segmentation network model, training the full-supervision segmentation network based on a plurality of groups of virtual masks, and inputting image blocks into the trained full-supervision segmentation network to obtain a segmentation result;
and performing sliding processing by adopting a sliding window method, and splicing the segmentation result of each image block to obtain the segmentation result of the whole image.
2. The method according to claim 1, wherein the labeling vector is set as a four-bit tissue labeling vector whose bits correspond to tumor tissue, necrotic tissue, lymphatic tissue and interstitial tissue respectively; a value of 0 in the tissue labeling vector indicates that the corresponding tissue class does not exist in the image block, and a value of 1 indicates that the corresponding tissue class exists in the image block.
3. The method for automatically segmenting the non-small cell lung cancer digital pathological image tissue according to claim 1, wherein in the training process of the multi-label classification CNN network, a class activation map for the next iteration is generated based on the network weights of the previous iteration; the class activation maps are superimposed pixel by pixel to obtain an attention map, and the region with the largest pixel values in the attention map is taken as the salient region;
inputting the attention map into a sigmoid function to map the pixel values to between 0 and 1, obtaining an important map; and setting the salient region of the attention map to 0 and the non-salient region to 1, obtaining a drop map;
and calculating an output from the important map and the drop map based on the ReLU function, performing an element-wise multiplication with the input feature map, and continuing to propagate the obtained feature map forward.
4. The method for automatically segmenting the non-small cell lung cancer digital pathological image tissue according to claim 1, wherein the specific steps of generating the class activation map comprise:
given an input image, denoting by the given symbol the activation value at (x, y) of the k-th channel of the feature map output by the last convolutional layer;
and performing global average pooling on channel k, the result after global average pooling being:
for a given class C, the inputs to the Softmax layer are:
the Softmax output for category C is:
5. The method of claim 1, wherein in training the fully supervised segmentation network based on the virtual masks, the virtual masks are generated by using feature maps of different levels in the CNN, specifically the feature maps of the b4_3, b5_2 and b7 layers of the CNN;
the virtual masks generated from feature maps of different levels in the CNN are respectively used as multiple supervision signals of the fully supervised segmentation network; segmentation losses loss1, loss2 and loss3 are then calculated against the respective masks, and the total target loss is obtained as the weighted sum of loss1, loss2 and loss3.
6. The method for automatically segmenting the non-small cell lung cancer digital pathological image tissues according to claim 1, wherein the image blocks are input into a trained fully supervised segmentation network to obtain segmentation results, and the method comprises the following specific steps:
the class activation map M_k is upsampled to the original size and normalized, the value of each pixel in M_k representing the probability that the pixel belongs to tissue k;
a densely connected conditional random field is used to post-process the class activation map M_k, which is optimized by combining the features of the image itself; finally the class activation maps M_k are fused, the classification result of each pixel being the tissue k corresponding to the maximum probability; when the maximum probability is less than a set threshold T_b, the pixel is classified as background;
converting the image block into a gray scale image, wherein the mask of the blank area is as follows:
if the maximum of the activation values in the CAM is still less than the set threshold θ_other, the mask of the other regions not belonging to tumor, necrosis, lymph or fibrotic stroma is:
wherein the given symbols denote the masks of the tumor, necrosis, lymph and interstitial regions respectively;
7. The method of claim 6, wherein the class activation map M_k is normalized, specifically by using the max-min method; the normalized activation map is given by:
8. A non-small cell lung cancer digital pathological image tissue automatic segmentation system is characterized by comprising:
the system comprises an image dividing module, a preprocessing module, a labeling module, a multi-label classification CNN network construction module, a multi-label classification CNN network training module, a CAMD construction module, a class activation graph generation module, a full supervision segmentation network model construction module, a network training segmentation module and a splicing module;
the image dividing module is used for dividing the non-small cell lung cancer digital pathological image into a plurality of image blocks;
the preprocessing module is used for carrying out normalization preprocessing on each image block and normalizing the pixel values;
the labeling module is used for labeling the image blocks in an image-level labeling mode, the labels of each image block being organized into a labeling vector;
the multi-label classification CNN network construction module is used for constructing a multi-label classification CNN network by taking Resnet38 as a network framework;
the multi-label classification CNN network training module is used for training a multi-label classification CNN network to generate a virtual mask based on image block supervision of image-level labeling;
the CAMD construction module is used for constructing a CAMD module, and the CAMD module adds attention to the feature map according to a set probability in each iteration process of the multi-label classification CNN network or zeros a region with a high response value of the feature map according to the set probability;
the class activation graph generation module is used for inputting the image blocks into the trained multi-label classification CNN network to generate a class activation graph;
the all-supervised segmented network model building module is used for building an all-supervised segmented network model;
the network training and segmenting module is used for training a fully supervised segmentation network based on a virtual mask and inputting image blocks into the trained fully supervised segmentation network to obtain segmentation results;
and the splicing module is used for performing sliding processing by adopting a sliding window method, splicing the segmentation result of each image block and obtaining the segmentation result of the whole image.
9. A storage medium storing a program, wherein the program, when executed by a processor, implements the method for automatically segmenting the non-small cell lung cancer digital pathology image tissue according to any one of claims 1-7.
10. A computing device comprising a processor and a memory for storing a program executable by the processor, wherein the processor, when executing the program stored in the memory, implements the method for automatically segmenting the non-small cell lung cancer digital pathology image tissue according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110754856.8A CN113674288B (en) | 2021-07-05 | 2021-07-05 | Automatic segmentation method for digital pathological image tissue of non-small cell lung cancer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110754856.8A CN113674288B (en) | 2021-07-05 | 2021-07-05 | Automatic segmentation method for digital pathological image tissue of non-small cell lung cancer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113674288A true CN113674288A (en) | 2021-11-19 |
CN113674288B CN113674288B (en) | 2024-02-02 |
Family
ID=78538577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110754856.8A Active CN113674288B (en) | 2021-07-05 | 2021-07-05 | Automatic segmentation method for digital pathological image tissue of non-small cell lung cancer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113674288B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114565761A (en) * | 2022-02-25 | 2022-05-31 | 无锡市第二人民医院 | Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image |
CN114612482A (en) * | 2022-03-08 | 2022-06-10 | 福州大学 | Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images |
CN115100467A (en) * | 2022-06-22 | 2022-09-23 | 北京航空航天大学 | Pathological full-slice image classification method based on nuclear attention network |
CN115496744A (en) * | 2022-10-17 | 2022-12-20 | 上海生物芯片有限公司 | Lung cancer image segmentation method, device, terminal and medium based on mixed attention |
CN115880262A (en) * | 2022-12-20 | 2023-03-31 | 桂林电子科技大学 | Weakly supervised pathological image tissue segmentation method based on online noise suppression strategy |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110599448A (en) * | 2019-07-31 | 2019-12-20 | 浙江工业大学 | Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network |
CN111985536A (en) * | 2020-07-17 | 2020-11-24 | 万达信息股份有限公司 | Gastroscope pathological image classification method based on weak supervised learning |
CN111986150A (en) * | 2020-07-17 | 2020-11-24 | 万达信息股份有限公司 | Interactive marking refinement method for digital pathological image |
CN112017191A (en) * | 2020-08-12 | 2020-12-01 | 西北大学 | Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism |
CN112288026A (en) * | 2020-11-04 | 2021-01-29 | 南京理工大学 | Infrared weak and small target detection method based on class activation diagram |
- 2021-07-05: application CN202110754856.8A, publication CN113674288B, status Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110599448A (en) * | 2019-07-31 | 2019-12-20 | 浙江工业大学 | Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network |
CN111985536A (en) * | 2020-07-17 | 2020-11-24 | 万达信息股份有限公司 | Gastroscope pathological image classification method based on weak supervised learning |
CN111986150A (en) * | 2020-07-17 | 2020-11-24 | 万达信息股份有限公司 | Interactive marking refinement method for digital pathological image |
CN112017191A (en) * | 2020-08-12 | 2020-12-01 | 西北大学 | Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism |
CN112288026A (en) * | 2020-11-04 | 2021-01-29 | 南京理工大学 | Infrared weak and small target detection method based on class activation diagram |
Non-Patent Citations (1)
Title |
---|
李宾皑; 李颖; 郝鸣阳; 顾书玉: "A Survey of Weakly Supervised Learning Methods for Semantic Segmentation" (弱监督学习语义分割方法综述), 数字通信世界 (Digital Communication World), no. 07, pages 1 - 3 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114565761A (en) * | 2022-02-25 | 2022-05-31 | 无锡市第二人民医院 | Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image |
CN114612482A (en) * | 2022-03-08 | 2022-06-10 | 福州大学 | Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images |
CN114612482B (en) * | 2022-03-08 | 2024-06-07 | 福州大学 | Gastric cancer nerve infiltration digital pathological section image positioning and classifying method and system |
CN115100467A (en) * | 2022-06-22 | 2022-09-23 | 北京航空航天大学 | Pathological full-slice image classification method based on nuclear attention network |
CN115100467B (en) * | 2022-06-22 | 2024-06-11 | 北京航空航天大学 | Pathological full-slice image classification method based on nuclear attention network |
CN115496744A (en) * | 2022-10-17 | 2022-12-20 | 上海生物芯片有限公司 | Lung cancer image segmentation method, device, terminal and medium based on mixed attention |
CN115880262A (en) * | 2022-12-20 | 2023-03-31 | 桂林电子科技大学 | Weakly supervised pathological image tissue segmentation method based on online noise suppression strategy |
CN115880262B (en) * | 2022-12-20 | 2023-09-05 | 桂林电子科技大学 | Weak supervision pathological image tissue segmentation method based on online noise suppression strategy |
US11935279B1 (en) | 2022-12-20 | 2024-03-19 | Guilin University Of Electronic Technology | Weakly supervised pathological image tissue segmentation method based on online noise suppression strategy |
Also Published As
Publication number | Publication date |
---|---|
CN113674288B (en) | 2024-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10902245B2 (en) | Method and apparatus for facial recognition | |
CN113674288A (en) | Automatic segmentation method for non-small cell lung cancer digital pathological image tissues | |
CN110059589B (en) | Iris region segmentation method in iris image based on Mask R-CNN neural network | |
CN110796199B (en) | Image processing method and device and electronic medical equipment | |
CN113642390B (en) | Street view image semantic segmentation method based on local attention network | |
EP3588380A1 (en) | Information processing method and information processing apparatus | |
CN112801146A (en) | Target detection method and system | |
CN111932577B (en) | Text detection method, electronic device and computer readable medium | |
CN112581462A (en) | Method and device for detecting appearance defects of industrial products and storage medium | |
CN112116599A (en) | Sputum smear tubercle bacillus semantic segmentation method and system based on weak supervised learning | |
CN113822951A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN115187530A (en) | Method, device, terminal and medium for identifying ultrasonic automatic breast full-volume image | |
CN112149526A (en) | Lane line detection method and system based on long-distance information fusion | |
CN113239883A (en) | Method and device for training classification model, electronic equipment and storage medium | |
CN117636298A (en) | Vehicle re-identification method, system and storage medium based on multi-scale feature learning | |
CN112818774A (en) | Living body detection method and device | |
CN112801960B (en) | Image processing method and device, storage medium and electronic equipment | |
CN116777929A (en) | Night scene image semantic segmentation method, device and computer medium | |
CN115937596A (en) | Target detection method, training method and device of model thereof, and storage medium | |
CN115587616A (en) | Network model training method and device, storage medium and computer equipment | |
CN116091763A (en) | Apple leaf disease image semantic segmentation system, segmentation method, device and medium | |
CN111798376B (en) | Image recognition method, device, electronic equipment and storage medium | |
CN113609957A (en) | Human behavior recognition method and terminal | |
CN115424250A (en) | License plate recognition method and device | |
Zhou et al. | FENet: Fast Real-time Semantic Edge Detection Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |