CN111079862B - Deep learning-based thyroid papillary carcinoma pathological image classification method - Google Patents
- Publication number: CN111079862B (application CN201911415563.6A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Combinations of networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06V2201/032 — Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.
Abstract
The invention discloses a deep learning-based method for classifying pathological images of papillary thyroid carcinoma, which mainly addresses the poor classification performance of prior-art methods on thyroid cancer pathological images. The scheme is as follows: 1) read a pathological section image of papillary thyroid carcinoma at 20x magnification and input it into an improved VGG-f convolutional neural network to obtain an attention heat map; 2) normalize the attention map to obtain the position of the discriminative region, then read the thyroid cancer pathological image at 40x magnification and extract image blocks at the discriminative region position; 3) input the image blocks into the original VGG-f network, construct a loss function, and train the network under supervision; 4) extract the convolution features of the trained VGG-f network and classify them to obtain the category of each image block; 5) judge the category of the thyroid cancer pathological image from the categories of its image blocks. The method has high classification accuracy and can be used for computer-aided classification of pathological images of papillary thyroid carcinoma.
Description
Technical Field
The invention belongs to the technical field of image processing and further relates to an image classification method that can be used to classify thyroid cancer pathological images.
Background
Thyroid cancer is the most common malignancy of the endocrine system and the malignancy whose global incidence is growing most rapidly. Among the many thyroid cancer examination methods, pathological biopsy has the highest sensitivity and specificity. A pathological diagnosis of thyroid cancer must be made by a pathologist viewing the stained biopsy under a microscope. Among the four major pathological types of thyroid cancer, papillary thyroid carcinoma accounts for about 85% or more of cases, and its prognosis is very good, with a ten-year survival rate of about 90%. However, its rate of lymphatic metastasis is as high as 40%-50%, so early diagnosis of papillary thyroid carcinoma and selection of an appropriate treatment plan are of great significance for saving patients' lives. Pathological image classification results are mainly obtained by a pathologist observing slides under a microscope. Histopathological analysis is highly time-consuming professional work that depends on the pathologist's experience and is affected by factors such as fatigue and lapses of attention. Automatic benign/malignant classification of papillary thyroid carcinoma therefore plays an important role in improving diagnostic accuracy and relieving the pressure on doctors.
At present, researchers have conducted extensive and intensive studies applying deep learning to mitosis detection and image classification in breast cancer pathological images, automatic classification of lung nodules in chest CT, segmentation of colon cancer pathological images, Alzheimer's disease, and other problems. Chen et al. use an efficient coarse-search model and a knowledge-transfer fine-discrimination model to detect mitosis in breast cancer histological images. Spanhol et al. propose a breast cancer pathological image classification method that trains a CNN on extracted image patches to classify breast cancer pathological images. Han et al. propose a multi-class classification approach for breast cancer. Xu et al., taking reduced memory use and computational complexity as their main goals, adopt a quantization method to reduce overfitting in FCNs and thereby achieve accurate segmentation of colon cancer pathological tissue.
Such methods have been developed for the computer-aided diagnosis of diseases such as breast cancer and prostate cancer, but the diagnosis of thyroid cancer has been studied far less. On the one hand, thyroid cancer pathological images are complex: the differences between benign and malignant images are small, the morphological differences among malignant slices are large, and differences in the slicing and staining processes give the pathological slices a wide color distribution, so traditional classification models perform poorly on thyroid cancer pathological images. On the other hand, large annotated data sets are lacking, so existing deep learning classification models cannot learn features well, which limits classification accuracy.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a deep learning-based method for classifying pathological images of papillary thyroid carcinoma, so as to improve the classification accuracy of thyroid cancer pathological images.
The technical scheme of the invention is as follows: thyroid cancer pathological images at different magnifications are used as input. First, at 20x magnification, an attention mechanism is used to find suspected cancerous regions; these regions are then enlarged to 40x and divided into blocks for further judgment; finally, the block classification results are fused to obtain the final classification result of the thyroid cancer pathological image. The specific implementation comprises the following steps:
(1) Read all thyroid cancer pathological image data at magnification 20 and divide it into two parts: the hematoxylin-eosin (H&E) stained pathological section data with clear, visible nucleus outlines is used as the source domain data X_S, and the remaining data, i.e. the data with unclear nuclei, is used as the target domain data X_T;
(2) Improve the structure of the 5 convolution layers and 3 fully connected layers of the original VGG-f convolutional neural network: remove the last 2 fully connected layers, set the number of convolution kernels of the fifth convolution layer to 512, add a pooling layer after this convolution layer, and set the number of neurons of the remaining fully connected layer to 2, obtaining the improved VGG-f network;
(3) Construct a loss function L = L_cls + λ·M(X_S, X_T), where L_cls is the cross-entropy loss function, M(X_S, X_T) is the maximum mean discrepancy loss function, and λ is a regularization weight measuring the relative importance of the two terms;
(4) Input the source domain data X_S and the target domain data X_T into the improved VGG-f network, and train the network under supervision with the loss function L until L converges, obtaining the trained improved VGG-f network;
(5) Input the source domain data X_S and the target domain data X_T into the trained improved VGG-f network, extract the output feature maps of its last convolution layer together with the category weights of the feature maps, and perform a weighting operation to obtain the attention heat map;
(6) Normalize the obtained attention heat map and take the positions of the brightest 30% of pixels in the attention map as the discriminative region position DR_20 of the pathological image at magnification 20;
(7) From the discriminative region position DR_20, obtain the discriminative region position of the pathological image at magnification 40 as DR_40 = 2·DR_20; slide a window over the pathological image at magnification 40 to crop image blocks of size 112 x 112, and keep the image blocks that overlap position DR_40 as the discriminative image blocks;
(8) Input the discriminative image blocks into the original VGG-f network and train the network under supervision with the cross-entropy classification loss function L_cls until L_cls converges, obtaining the trained VGG-f network;
(9) Extract the convolution features of the last convolution layer of the trained VGG-f network and classify them with a softmax classifier to obtain the category of each discriminative image block;
(10) Obtain the category of the pathological image from the categories of its discriminative image blocks: if all the discriminative image blocks are benign, the pathological image is benign; if one or more of them are malignant, the pathological image is malignant.
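The overall training objective of steps (3)-(4) can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the patent's implementation: the function names are invented, a linear kernel replaces the multi-kernel MMD for brevity, and λ = 0.5 is an arbitrary example value.

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Binary cross-entropy L_cls: p holds true labels in {0, 1},
    q holds predicted probabilities for the positive class."""
    q = np.clip(q, eps, 1 - eps)
    return float(np.mean(-(p * np.log(q) + (1 - p) * np.log(1 - q))))

def mmd_linear(xs, xt):
    """Maximum mean discrepancy with a linear kernel: the squared
    distance between the source and target feature means."""
    diff = xs.mean(axis=0) - xt.mean(axis=0)
    return float(diff @ diff)

def total_loss(p, q, xs_feats, xt_feats, lam=0.5):
    """L = L_cls + lambda * M(X_S, X_T), as constructed in step (3)."""
    return cross_entropy(p, q) + lam * mmd_linear(xs_feats, xt_feats)
```

When the source and target features are identically distributed the MMD term vanishes and L reduces to the plain cross-entropy loss.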
Compared with the prior art, the invention has the following advantages:
1) The invention uses the self-attention mechanism of a convolutional neural network to obtain the most discriminative region of the pathological section image at low magnification and then further judges that region at high magnification. This process imitates a pathologist's diagnostic procedure and classifies the pathological image using information at different magnifications, so the classification accuracy is higher than that of the prior art.
2) Tailored to the characteristics of pathological images, the thyroid cancer pathological images are divided into source domain data and target domain data according to the characteristics of the stained cell nuclei, and classification accuracy is improved by computing the maximum mean discrepancy of the convolution-layer features.
3) The invention obtains the category of a pathological image by fusing the categories of its discriminative image blocks. This process imitates a pathologist's decision procedure and further improves classification accuracy.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
fig. 2 is a schematic diagram of the present invention.
Detailed Description
Embodiments and effects of the present invention are further described below with reference to the accompanying drawings.
Referring to fig. 1 and 2, the present invention is embodied as follows:
step 1, reading a thyroid cancer pathological image with 20 times magnification and dividing the thyroid cancer pathological image into two parts.
Thyroid cancer pathology image data with magnification of 20 are read and divided into source domain data X S And target domain data X T Pathological section data with clear cell nucleus boundary after dyeing is used as source domain data X S The rest data, namely the data with unclear nucleus, is taken as target domain data X T 。
Step 2. Improve the original VGG-f network and train the improved network.
2.1) Improve the original VGG-f convolutional neural network:
the original VGG-f convolutional neural network comprises 5 convolution layers and 3 fully connected layers. The improvement in this step is to remove the last 2 fully connected layers, set the number of convolution kernels of the 5th convolution layer to 512, add a pooling layer of size 13 x 13 after the last convolution layer, and set the number of neurons of the remaining fully connected layer to 2, obtaining the improved VGG-f network;
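A quick size check, consistent with the 13 x 13 pooling layer added above, explains the design: if the last convolution layer produces 13 x 13 spatial maps with 512 channels, a 13 x 13 pool collapses each channel to a single value before the 2-neuron fully connected layer. The helper below is a generic sketch (not from the patent) using the standard floor convention for output sizes.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Collapsing a 13x13 map with a 13x13 pool leaves a 1x1 map,
# so the final fully connected layer sees 512 inputs.
final_spatial = conv_out(13, kernel=13, stride=13)
fc_inputs = 512 * final_spatial * final_spatial
```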
2.2) Construct the overall loss function L from the cross-entropy loss function L_cls and the maximum mean discrepancy loss function M(X_S, X_T):

L = L_cls + λ·M(X_S, X_T),

where λ is a regularization weight measuring the relative importance of the two terms;

L_cls = -[p_i·log q_i + (1-p_i)·log(1-q_i)],

where i is the sample index, p_i is the true label of each sample, and q_i is the predicted label;

M(X_S, X_T) = Σ_{l∈P} || (1/n_s)·Σ_{x_s∈X_S} φ(F_l(x_s)) - (1/n_t)·Σ_{x_t∈X_T} φ(F_l(x_t)) ||²_H,

where x_s ∈ X_S and x_t ∈ X_T denote source domain and target domain samples, n_s and n_t are the numbers of source and target samples, F_l(·) is the feature of convolution layer l, φ(·) is a kernel mapping of the source and target features into a reproducing kernel Hilbert space H, P denotes the 5 convolution layers of the improved VGG-f network, and H is a combination of k single kernels;
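As an illustration of the maximum mean discrepancy term, the sketch below computes a squared MMD between two feature batches with a small bank of Gaussian kernels standing in for the patent's combination H of k single kernels. The function names and kernel bandwidths are invented for the example.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian kernel matrix: k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(xs, xt, gammas=(0.5, 1.0, 2.0)):
    """Squared MMD between source and target samples, averaged over
    several single kernels: E[k(s,s)] + E[k(t,t)] - 2 E[k(s,t)]."""
    val = 0.0
    for g in gammas:
        val += (rbf_kernel(xs, xs, g).mean()
                + rbf_kernel(xt, xt, g).mean()
                - 2 * rbf_kernel(xs, xt, g).mean())
    return val / len(gammas)
```

The value is near zero when the two batches come from the same distribution and grows as the source and target feature distributions separate, which is exactly what the regularizer penalizes during training.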
2.3) Input the source domain data X_S and the target domain data X_T into the improved VGG-f network and train the network under supervision with the loss function L until L converges, obtaining the trained improved VGG-f network.
Step 3. Obtain the attention heat map.
Extract the output feature maps of the last convolution layer of the trained improved VGG-f network together with the category weights of the feature maps, and perform the weighting operation by the following formula to obtain the attention heat map:

S = ReLU( Σ_k w_k^c · F_conv(k) ),

where ReLU is the activation function, k indexes the output feature maps of the convolution layer, F_conv(k) is the k-th feature map of the last convolution layer, w_k^c is the weight of the k-th feature map for category c, and S, the result of the weighting operation, is the attention heat map.
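A minimal NumPy rendering of this weighting operation (function name invented; the feature maps are assumed to be stacked channel-first):

```python
import numpy as np

def attention_heatmap(feature_maps, class_weights):
    """S = ReLU(sum_k w_k^c * F_conv(k)): weight each feature map of the
    last conv layer by its class weight, sum over k, clamp negatives.

    feature_maps: array of shape (K, H, W); class_weights: shape (K,)."""
    s = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    return np.maximum(s, 0.0)  # ReLU
```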
Step 4. Obtain the position of the discriminative region.
Normalize the obtained attention heat map and take the positions of the brightest 30% of pixels in the normalized attention map as the discriminative region position DR_20 of the thyroid cancer pathological image at 20x magnification.
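The brightest-30% selection can be sketched as follows (an illustrative helper, not the patent's code; min-max normalization is assumed):

```python
import numpy as np

def discriminative_region(heatmap, fraction=0.30):
    """Normalize the heat map to [0, 1] and return a boolean mask over
    the brightest `fraction` of pixels, i.e. the region DR_20."""
    h = heatmap.astype(float)
    rng = h.max() - h.min()
    norm = (h - h.min()) / rng if rng > 0 else np.zeros_like(h)
    thresh = np.quantile(norm, 1.0 - fraction)
    return norm >= thresh
```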
Step 5. Obtain the discriminative image blocks of the pathological image at 40x magnification and train a network.
5.1) From the discriminative region position DR_20, obtain the discriminative region position of the thyroid cancer pathological image at 40x magnification as DR_40 = 2·DR_20;
5.2) Slide a window over the thyroid cancer pathological image at 40x magnification to crop image blocks of size 112 x 112, and keep the image blocks that overlap position DR_40 as the discriminative image blocks;
5.3) Input the discriminative image blocks into the original VGG-f network and train the network under supervision with the cross-entropy classification loss function L_cls until L_cls converges, obtaining the trained original VGG-f network.
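Steps 5.1)-5.2) amount to upscaling the 20x mask by a factor of 2 in each axis and keeping the 112 x 112 windows that touch it. A sketch under those assumptions (helper names invented; non-overlapping windows used for simplicity):

```python
import numpy as np

def map_region_to_40x(dr20_mask):
    """DR_40 = 2 * DR_20: scale the 20x discriminative mask by a factor
    of 2 per axis so it indexes the 40x image."""
    return np.kron(dr20_mask, np.ones((2, 2), dtype=bool))

def discriminative_patches(image_40x, dr40_mask, size=112, stride=112):
    """Slide a size x size window over the 40x image and keep patches
    that overlap the discriminative region."""
    patches = []
    h, w = dr40_mask.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            if dr40_mask[y:y + size, x:x + size].any():
                patches.append(image_40x[y:y + size, x:x + size])
    return patches
```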
Step 6. Obtain the image classification result.
6.1) Extract the convolution features of the last convolution layer of the trained VGG-f network and classify them with a softmax classifier to obtain the category of each discriminative image block.
6.2) Judge the category of the thyroid cancer pathological image from the categories of its discriminative image blocks: if the number n of discriminative image blocks whose category is malignant satisfies n ≥ 1, the thyroid cancer pathological image is malignant; otherwise it is a benign pathological image.
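This fusion rule is an any-malignant vote; a one-line sketch (function name invented):

```python
def image_label(patch_labels):
    """A slide is malignant if any discriminative patch is malignant,
    and benign only if every patch is benign."""
    return "malignant" if "malignant" in patch_labels else "benign"
```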
The effect of the present invention can be further illustrated by the following experiments.
Experimental conditions:
the computer used for the experiment was configured to: intel (R) Core (TM) i8 [email protected],128GB memory, and a single block NVIDIA GTX TITAN GPU.
The software environment was Matlab R2014b and the MatConvNet deep learning toolbox, installed under a 64-bit Ubuntu 14.04 operating system.
All networks are trained with the backpropagation algorithm to compute the residuals of each layer, and the network parameters are updated with a stochastic gradient descent algorithm that includes a momentum (kinetic energy) term and a weight decay term.
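One parameter update of that optimizer can be written out as follows (an illustrative NumPy sketch; the learning rate, momentum, and weight decay values are placeholders, not the patent's settings):

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=5e-4):
    """Stochastic gradient descent with a momentum (kinetic energy) term
    and an L2 weight decay term folded into the gradient."""
    velocity = momentum * velocity - lr * (grad + weight_decay * w)
    return w + velocity, velocity
```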
Evaluation indexes: patient-level recognition rate (PRR), image-level recognition rate (IRR), accuracy, specificity, sensitivity, and F1 score.
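The last four indexes derive from the binary confusion matrix; a sketch with malignant taken as the positive class (function name invented):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, specificity, sensitivity and F1 score from the counts
    of true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)  # also called recall
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, specificity, sensitivity, f1
```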
Experimental contents
Experiment 1: the discriminative image blocks of the 40x-magnified pathological images obtained by the invention and the image blocks obtained by the existing random acquisition method are classified separately and evaluated with the above evaluation indexes. The evaluation results are shown in Table 1.
Table 1 Comparison of different image block acquisition methods

Method | PRR | IRR | Accuracy | Specificity | Sensitivity | F1 score
---|---|---|---|---|---|---
Random acquisition | 0.784 | 0.772 | 0.726 | 0.598 | 0.991 | 0.838
Method of the invention | 0.954 | 0.955 | 0.957 | 0.957 | 0.958 | 0.957
As can be seen from Table 1, randomly acquired image blocks give higher sensitivity, indicating stronger recognition of malignant samples. This is because randomly acquired image blocks are more representative and diverse. However, random acquisition also produces more noisy image blocks, which interfere with training on the image blocks and degrade the classification results. Thus, although this approach achieves high sensitivity, the sensitivity comes at the expense of accuracy. On the remaining indexes, the image blocks obtained by the method of the invention yield better classification results than those obtained by the random method.
Experiment 2: the classification method of the invention and six existing methods are used to classify the thyroid cancer pathological images and evaluated with the above evaluation indexes. The evaluation results are shown in Table 2.
Table 2 Comparison of classification results of different classification methods

Method | PRR | IRR | Accuracy | Specificity | Sensitivity | F1 score
---|---|---|---|---|---|---
ResNet-50 | 0.899 | 0.894 | 0.884 | 0.858 | 0.933 | 0.902
VGG-16 | 0.890 | 0.894 | 0.884 | 0.875 | 0.940 | 0.916
VGG-f | 0.902 | 0.897 | 0.899 | 0.873 | 0.925 | 0.906
Yan Xu | 0.936 | 0.934 | 0.947 | 0.951 | 0.913 | 0.932
Fabio A. Spanhol | 0.588 | 0.566 | 0.639 | 0.824 | 0.319 | 0.420
Lingqiao Li | 0.924 | 0.921 | 0.949 | 0.944 | 0.885 | 0.918
Method of the invention | 0.954 | 0.955 | 0.957 | 0.951 | 0.958 | 0.957
Here, ResNet-50 is the method that won first place in the classification task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC15);
VGG-16 is the method that won the classification prize at ILSVRC14;
VGG-f is an existing deep learning classification model;
Yan Xu is an existing deep learning method that uses AlexNet as its base network;
Fabio A. Spanhol is an existing patch-based method;
Lingqiao Li is a fine-grained classification method for large-scale complex pathological images based on a deep network.
As can be seen from Table 2, compared with ResNet-50, VGG-16, and VGG-f, the method of the invention improves accuracy by about 5% and specificity by about 10%. The method of Fabio A. Spanhol gives the worst classification results because its image block size is too small to suit the characteristics of thyroid cancer pathological images. Compared with the methods of Yan Xu and Lingqiao Li, the method of the invention obtains better results because it extracts the features of thyroid cancer pathological images more effectively and, by using multiple magnifications and image blocks, reduces the adverse effect of insufficient training data.
Claims (4)
1. A deep learning-based papillary thyroid carcinoma pathological image classification system, characterized by comprising the following steps:
(1) Read all thyroid cancer pathological image data at magnification 20 and divide it into two parts: the hematoxylin-eosin (H&E) stained pathological section data with clear, visible nucleus outlines is used as the source domain data X_S, and the remaining data, i.e. the data with unclear nuclei, is used as the target domain data X_T;
(2) Improve the structure of the 5 convolution layers and 3 fully connected layers of the original VGG-f convolutional neural network: remove the last 2 fully connected layers, set the number of convolution kernels of the fifth convolution layer to 512, add a pooling layer after this convolution layer, and set the number of neurons of the remaining fully connected layer to 2, obtaining the improved VGG-f network;
(3) Construct a loss function L = L_cls + λ·M(X_S, X_T), where L_cls is the cross-entropy loss function, M(X_S, X_T) is the maximum mean discrepancy loss function, and λ is a regularization weight measuring the relative importance of the two terms;
(4) Input the source domain data X_S and the target domain data X_T into the improved VGG-f network, and train the network under supervision with the loss function L until L converges, obtaining the trained improved VGG-f network;
(5) Input the source domain data X_S and the target domain data X_T into the trained improved VGG-f network, extract the output feature maps of its last convolution layer together with the category weights of the feature maps, and perform a weighting operation to obtain the attention heat map;
(6) Normalize the obtained attention heat map and take the positions of the brightest 30% of pixels in the attention map as the discriminative region position DR_20 of the pathological image at magnification 20;
(7) From the discriminative region position DR_20, obtain the discriminative region position of the pathological image at magnification 40 as DR_40 = 2·DR_20; slide a window over the pathological image at magnification 40 to crop image blocks of size 112 x 112, and keep the image blocks that overlap position DR_40 as the discriminative image blocks;
(8) Input the discriminative image blocks into the original VGG-f network and train the network under supervision with the cross-entropy classification loss function L_cls until L_cls converges, obtaining the trained VGG-f network;
(9) Extract the convolution features of the last convolution layer of the trained VGG-f network and classify them with a softmax classifier to obtain the category of each discriminative image block;
(10) Obtain the category of the pathological image from the categories of its discriminative image blocks: if all the discriminative image blocks are benign, the pathological image is benign; if one or more of them are malignant, the pathological image is malignant.
2. The system of claim 1, wherein the maximum mean discrepancy loss function M(X_S, X_T) in step (3) is calculated by the following formula:

M(X_S, X_T) = Σ_{l∈P} || (1/n_s)·Σ_{x_s∈X_S} φ(F_l(x_s)) - (1/n_t)·Σ_{x_t∈X_T} φ(F_l(x_t)) ||²_H,

where x_s ∈ X_S and x_t ∈ X_T denote source domain and target domain samples, n_s and n_t are the numbers of source and target samples, F_l(·) is the feature of convolution layer l, φ(·) is a kernel mapping of the source and target features into a reproducing kernel Hilbert space H, P denotes the 5 convolution layers of the improved VGG-f network, and H is a combination of k single kernels.
3. The system of claim 1, wherein the cross-entropy loss function L_cls of step (3) is expressed as follows:

L_cls = -[p_i·log q_i + (1-p_i)·log(1-q_i)],

where i is the sample index, p_i is the true label of each sample, and q_i is the predicted label.
4. The system of claim 1, wherein the weighting operation in step (5) is performed by the formula:

S = ReLU( Σ_k w_k^c · F_conv(k) ),

where ReLU is the activation function, k indexes the output feature maps of the convolution layer, F_conv(k) is the k-th feature map of the last convolution layer, w_k^c is the weight of the k-th feature map for category c, and S is the result of the weighting operation.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911415563.6A (CN111079862B) | 2019-12-31 | 2019-12-31 | Deep learning-based thyroid papillary carcinoma pathological image classification method
Publications (2)

Publication Number | Publication Date
---|---
CN111079862A | 2020-04-28
CN111079862B | 2023-05-16
Family ID: 70320839

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201911415563.6A (CN111079862B, Active) | Deep learning-based thyroid papillary carcinoma pathological image classification method | 2019-12-31 | 2019-12-31
Citations (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN107240102A * | 2017-04-20 | 2017-10-10 | 合肥工业大学 (Hefei University of Technology) | Computer-aided early diagnosis method for malignant tumors based on a deep learning algorithm
Legal Events

Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| GR01 | Patent grant