CN116664911A - Breast tumor image classification method based on interpretable deep learning - Google Patents
- Publication number
- CN116664911A (application CN202310433791.6A)
- Authority
- CN
- China
- Prior art keywords
- attention
- image
- prototype
- loss
- breast tumor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30068—Mammography; Breast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
A breast tumor image classification method based on interpretable deep learning belongs to the technical field of medical image processing and comprises the following steps: first, saving an attention prototype generation module for subsequent comparison against the stored classification attention prototypes; second, designing a separation loss term and an aggregation loss term in the loss function and, at the same time, labeling fine-grained information on the breast image data to obtain a fine-grained loss term, so as to train a classification network model; finally, comparing and classifying medical images acquired in real time with the classification network model. The invention greatly relaxes the limits that poor interpretability places on deep learning models for medical image classification; it assists the decision maker and aims at better overall human-machine cooperation.
Description
Technical Field
The invention discloses a breast tumor image classification method based on interpretable deep learning, belonging to the technical field of medical image processing.
Background
Surveys show that breast cancer is the most common malignancy in women and that its incidence is increasing year by year; early diagnosis and treatment can effectively reduce breast cancer mortality. At present, breast magnetic resonance imaging is a common method for detecting breast cancer, but early breast data are numerous and poorly imaged, so early diagnosis is very difficult. An experienced breast specialist can analyze complex tumor structures, but such judgment is subjective and time-consuming, and physician fatigue may also lead to diagnostic errors, increased workload, and reduced overall quality of service. To address these problems, modern processing techniques such as artificial intelligence (AI) have been widely applied in the field of medical image processing. A deep learning model can autonomously extract valuable features from an image to complete specified tasks such as object detection, image classification, and image segmentation.
However, to make a greater clinical contribution, future methods must be able to assist with more difficult tasks, such as 'Does a patient need a biopsy of the lesion?' In breast cancer screening, most biopsy results are benign, yet every biopsied patient undergoes an invasive test, which causes pain to the patient and increases social medical costs. Even for experienced breast cancer specialists, recommending whether a patient should be biopsied remains a challenging decision, and on this difficult task the consistency of different doctors' biopsy recommendations is relatively low.
The prior art discloses related patent technology for this:
Chinese patent document CN114004806A discloses a training method for a breast tumor detection model and a breast tumor detection method, in which a feature module determines a plurality of first feature maps of a training breast image; a pyramid module determines a plurality of second feature maps based on the first feature maps; a prediction module determines a positive sample set and a negative sample set based on the second feature maps and derives a loss function from them; and the breast tumor detection model to be trained is trained with this loss function. The method obtains a breast tumor detection model from training data and then learns the global spatial information of fully automatic breast ultrasound images, improving the accuracy of lesion-region detection, and its visual discrimination results are intuitive for diagnosticians. However, because the algorithm is embedded, medical staff cannot see the corresponding discrimination process, the database cannot be effectively updated, the discrimination results cannot be continuously improved, and there is a risk of misdiagnosis suggestions.
Another patent document discloses a method and device for detecting breast tumor tissue based on a deep learning algorithm, which significantly improves the detection rate of breast tumor tissue and can accurately classify breast lesions as benign or malignant. Contrast-enhanced spectral mammography combining the low-energy image with the high/low-energy subtraction image overcomes the shortcoming of detection from a single low-energy image, and combining the blood-perfusion change rate with lesion tissue classification improves the ability to distinguish benign from malignant disease, making false-negative and false-positive detections less likely. However, that document still faces the risk that medical staff cannot see the corresponding discrimination process and misdiagnosis suggestions may be generated.
Chinese patent document CN112508943A discloses a breast tumor recognition method based on ultrasound images, comprising: collecting breast tumor ultrasound images and labeling them; preprocessing the labeled images; training on the preprocessed images in a distillation neural network to obtain a trained breast tumor recognition model; and inputting the breast ultrasound image to be identified into the model to obtain its recognition result. However, that document still faces the risk that medical staff cannot see the corresponding discrimination process and misdiagnosis suggestions may be generated.
Chinese patent document CN112348794A discloses an automatic ultrasound breast tumor segmentation method based on an attention-enhanced U-shaped network. It extracts lesion regions from breast ultrasound images, effectively improving the accuracy of breast tumor segmentation and helping doctors locate lesion regions quickly and accurately, but it too faces the risk that medical staff cannot see the corresponding discrimination process and misdiagnosis suggestions may be generated.
In summary, although the prior art uses neural networks to learn and build models that can effectively classify patients, bias in the training set means that the diagnostic result most likely depends on the composition quality of that training set; new recognition results deepen the original model's bias, and the model cannot complement and interact with the diagnostician's experience.
Disclosure of Invention
In order to solve the problems in the prior art of difficult medical image classification and the limited interpretability of deep learning models in this field, the invention provides a breast tumor image classification method based on interpretable deep learning.
The technical scheme of the invention is as follows:
a breast tumor image classification method based on interpretable deep learning, comprising the steps of:
first: saving an attention prototype generation module for subsequent comparison against the stored classification attention prototypes;
second: adopting the idea of contrastive learning to design a separation loss term and an aggregation loss term in the loss function and, at the same time, labeling fine-grained information on the breast image data to obtain a fine-grained loss term, so as to train a classification network model;
finally: comparing and classifying medical images acquired in real time with the classification network model.
According to the invention, the breast tumor image classification method based on interpretable deep learning specifically comprises the following steps:
step 1, constructing a classification network model;
the classifying network model comprises a feature extraction network, an attention prototype generation module and a classifier;
step 2, feature labeling is carried out on the breast tumor image, a training data set is constructed, and then the classification network model is trained; wherein the breast tumor image comprises a breast benign tumor image and a breast malignant tumor image;
and step 3, acquiring the breast tumor image to be classified in real time, and sending the breast tumor image to a classification network model after training is completed, so as to obtain a classification result of the current breast medical image.
According to the invention, a basic convolutional neural network is preferably adopted as the feature extraction network, built from a basic convolutional backbone such as VGG16, ResNet50, or DenseNet161. Each backbone consists of several stages, each containing convolutional layers; at each stage the spatial size of the feature map is halved and the number of channels is doubled. When a training image x_i (a breast tumor image used for training) is input into the feature extraction network, the output feature map f(x_i) serves as the output feature of the feature extraction network.
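The stage-wise shape rule described above (spatial size halved, channel count doubled per stage) can be traced with a minimal plain-Python sketch; the starting channel count of 64 and the four-stage depth below are illustrative assumptions, not values fixed by the patent.

```python
def backbone_shapes(c_in, h, w, num_stages):
    """Trace feature-map shapes through a generic CNN backbone in which each
    stage halves the spatial size and doubles the channel count, as the
    description specifies for VGG16/ResNet50/DenseNet161-style extractors."""
    shapes = [(c_in, h, w)]
    c = c_in
    for _ in range(num_stages):
        c, h, w = c * 2, h // 2, w // 2
        shapes.append((c, h, w))
    return shapes

# Hypothetical example: a 64 x 224 x 224 tensor after the stem, four stages.
stages = backbone_shapes(64, 224, 224, 4)
```

After four such stages the 224 x 224 map has shrunk to 14 x 14 while the channels have grown to 1024, which is why the 1 x 1 prototypes of the next section compare against a small spatial grid of high-dimensional patches.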
Preferably according to the invention, the attention prototype generation module comprises a multi-scale spatial attention τ and an attention prototype layer ρ. The feature map f(x_i) first passes through the multi-scale spatial attention τ to obtain a combination of distinguishing features τ(f(x_i)) ∈ R^{C×W×H}, where C is the number of feature channels, W the width of the feature map, and H its height.
The multi-scale spatial attention τ is expressed as formula (1).
In formula (1), Conv1(x), Conv2(x), and Conv3(x) are convolutions with different dilation (void) rates: the larger the dilation rate, the larger the receptive field of the convolution kernel and the more comprehensive the learned spatial features. SA is the spatial attention used to acquire distinguishing spatial features; f_{n×n} denotes a convolution kernel of size n×n; AvgPool(x) and MaxPool(x) denote mean pooling and max pooling, respectively, which integrate the feature channel information; and the Sigmoid function computes the spatial importance of each pixel.
After the attention feature τ(f(x_i)) is obtained, it is used to compute attention prototypes: an attention prototype layer ρ containing m attention prototypes learned from the training set is constructed. Each attention prototype is an image patch with the same number of channels as τ(f(x_i)) but a spatial scale of 1×1. Because an attention prototype has the same channel count but a spatial scale much smaller than the attention feature map, it can be understood as a prototypical activation pattern of its class, visualized as a patch of the training image in which it appears. For example, the model learns attention prototype representations of benign and malignant breast tumor images and stores them in the attention prototype layer ρ for comparison with subsequent test images. Each attention prototype AP_t in ρ is compared with the input image by computing the l2 distance d_{t,j} between AP_t and every 1×1 image patch of the attention feature τ(f(x_i)), and converting it into a similarity score S_{t,j}.
The similarity score S_{t,j} is given by formula (2).
In formula (2), τ(f(x_i))_j denotes the j-th 1×1 image patch of the attention feature τ(f(x_i)), and ε is a very small positive number that prevents the denominator from being 0.
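As a hedged illustration of the distance-to-similarity conversion, the sketch below uses the log form log((d + 1)/(d + ε)) common in prototype-based networks; the exact functional form of formula (2) is an assumption here, and only the monotone-decreasing behavior and the ε-guarded denominator come from the description.

```python
import numpy as np

def similarity_from_distance(d, eps=1e-4):
    """Convert an l2 patch-to-prototype distance d_{t,j} into a similarity
    score S_{t,j}: small distance -> large score, with eps keeping the
    denominator away from 0 (log form is an illustrative assumption)."""
    return float(np.log((d + 1.0) / (d + eps)))

near = similarity_from_distance(0.0)    # patch nearly identical to prototype
far  = similarity_from_distance(10.0)   # distant patch, low similarity
```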
If an input image x_i is a benign tumor, its attention feature τ(f(x_i)) will contain patches τ(f(x_i))_j that lie very close to one or more attention prototypes AP_t of the attention prototype layer ρ that represent benign tumors, so the computed similarity scores S_{t,j} between these prototypes and the benign patches are very large. Conversely, the l2 distance between a patch representing a malignant tumor and an attention prototype representing a benign tumor is larger, i.e. their similarity score is lower. The similarity scores between the patches τ(f(x_i))_j and an attention prototype AP_t are spatially combined into a similarity map, denoted σ(AP_t, f(x_i)) for the t-th attention prototype and the feature map f(x_i). Converting the similarity map into a similarity score with top_k average pooling is more convincing than using max pooling alone or average pooling alone; top_k average pooling finds the k highest similarity scores and computes their average,
top_k average pooling is represented by:
S_j = AVGPOOL(top_k({S_t,j})) (3)
In formula (3), S_j is the similarity score that finally represents the feature; AVGPOOL is the average pooling operation; and top_k finds the k highest similarity scores.
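Formula (3) can be sketched directly in numpy. Setting k = 1 recovers plain max pooling and setting k to the full map size recovers plain average pooling, which is why top_k average pooling sits between the two extremes the description compares against.

```python
import numpy as np

def top_k_average_pool(similarity_map, k):
    """top_k average pooling as in formula (3): find the k highest
    similarity scores in the map and return their mean."""
    flat = np.asarray(similarity_map, dtype=float).ravel()
    top_k = np.sort(flat)[-k:]          # the k largest values
    return float(top_k.mean())

# Toy 2x2 similarity map: the two strongest activations are 0.9 and 0.8.
s = top_k_average_pool([[0.9, 0.1], [0.8, 0.2]], k=2)
```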
Preferably according to the invention, the classifier comprises a fully connected layer FC_1 and a SoftMax function. The fully connected layer FC_1 multiplies the similarity scores S_j by the corresponding weight matrix; each neuron in the fully connected layer has a corresponding weight value, and the layer finally outputs scores for the benign and malignant classes. The SoftMax function normalizes these two output scores into the classification probabilities y_benign and y_malignant, a normalization process that will be apparent to those skilled in the art.
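A minimal numpy sketch of the classifier: the weight matrix W below is hypothetical (each prototype simply votes for one class); only the FC-then-SoftMax structure comes from the description.

```python
import numpy as np

def softmax(z):
    """Normalize a score vector into probabilities."""
    z = z - np.max(z)                   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(similarity_scores, weights):
    """Fully connected layer FC_1 followed by SoftMax: similarity scores are
    multiplied by a weight matrix to give benign/malignant output scores,
    which SoftMax turns into y_benign and y_malignant."""
    logits = weights @ similarity_scores
    return softmax(logits)

# Hypothetical weights: prototype 0 votes benign, prototype 1 votes malignant.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
probs = classify(np.array([2.0, 0.5]), W)   # [y_benign, y_malignant]
```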
According to a preferred embodiment of the invention, the total loss function L_final of the classification network model is:
L_final = L_cls + λ_c·L_cluster + λ_s·L_separate + λ_a·L_annotation (4)
In formula (4), L_cls is the classification cross-entropy loss; L_cluster the aggregation loss; L_separate the separation loss; and L_annotation the fine-grained annotation loss. λ_c, λ_s, and λ_a are balance parameters, the weights that balance the loss terms. The classification cross-entropy loss L_cls, given by formula (5), associates the convolution feature f(x_i) of the input image with its class.
In formula (5), y_i is the one-hot ground-truth label of the input image; T denotes transpose; cls denotes the classifier; and p_i ∈ R^N is the prediction score vector, where N is the number of classes, here N = 2. This also ensures that the learned convolution feature f(x_i) and the attention prototypes are related to the class of the predicted image.
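A numpy sketch of the classification cross-entropy L_cls of formula (5), assuming the usual one-hot-label dot log-probability form averaged over the batch (the averaging convention is an assumption; the equation body did not survive extraction).

```python
import numpy as np

def softmax(z):
    """Row-wise SoftMax over a batch of logit vectors."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_loss(logits, one_hot_labels):
    """Classification cross-entropy L_cls: each one-hot label y_i is dotted
    with log p_i, where p_i in R^N (N = 2) is the SoftMax prediction score
    vector; the result is averaged over the batch."""
    p = softmax(logits)
    return float(-(one_hot_labels * np.log(p)).sum(axis=1).mean())

logits = np.array([[4.0, 0.0], [0.0, 4.0]])   # confident, correct predictions
labels = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot truth labels
loss = cross_entropy_loss(logits, labels)
```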
To bring attention prototypes of the same class in the attention prototype layer ρ closer together, while minimizing the influence caused by other classes, an aggregation loss over same-class attention prototypes and a separation loss over different-class attention prototypes are added.
The aggregation loss L_cluster is given by formula (6).
The separation loss L_separate is given by formula (7).
In formulas (6) and (7), min_k computes, for the patches of the convolution feature f(x_i), the k smallest l2 distances to a prototype, and the mean of these distances is then taken; the prototype AP_t either belongs to the same class as the input image or does not. Minimizing the aggregation loss L_cluster pulls the attention prototypes of a class closer to the convolution features of training images of that class, so that same-class attention prototypes become more similar. The separation loss L_separate increases the l2 distance between attention prototypes and the convolution features of training images of other classes, helping to separate the attention prototypes of different classes. Using L_cluster and L_separate together makes the obtained attention prototypes more robust.
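The aggregation and separation losses can be sketched as follows; the min_k term and the l2 patch-to-prototype distances come from the description of formulas (6) and (7), while the exact weighting and sign conventions are illustrative assumptions.

```python
import numpy as np

def min_k_mean(distances, k):
    """Mean of the k smallest values: the min_k(...) term of formulas (6)-(7)."""
    return float(np.sort(np.asarray(distances))[:k].mean())

def cluster_and_separation_losses(patch_features, prototypes, proto_classes,
                                  image_class, k=1):
    """Sketch of L_cluster (pull same-class prototypes toward the image's
    patch features) and L_separate (push different-class prototypes away),
    using l2 patch-to-prototype distances."""
    l_cluster, l_separate = 0.0, 0.0
    for proto, cls in zip(prototypes, proto_classes):
        d = np.linalg.norm(patch_features - proto, axis=1)  # l2 per patch
        if cls == image_class:
            l_cluster += min_k_mean(d, k)    # minimized -> prototypes cluster
        else:
            l_separate -= min_k_mean(d, k)   # minimized -> distance grows
    return l_cluster, l_separate

# Two 2-D patches, one prototype per class; the image belongs to class 0.
patches = np.array([[0.0, 0.0], [1.0, 1.0]])
protos = np.array([[0.0, 0.0], [5.0, 5.0]])
lc, ls = cluster_and_separation_losses(patches, protos, [0, 1], image_class=0)
```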
Because the invention additionally uses images with expert annotations for training, the technical scheme adopts a fine-grained annotation loss L_annotation, which penalizes attention prototype activation on tumor-irrelevant areas of the training image, thereby reducing the influence of confounding information in the image.
The fine-grained annotation loss L_annotation is given by formula (8).
In formula (8), ⊙ denotes element-wise multiplication, and Upsampling({S_t,j}) is a bilinear upsampling operation that produces an activation map with the same dimensions as the expert-annotated fine mask s_i, so that the Hadamard product of the two can be computed. For a given training instance x_i, i ∈ d, where d denotes the expert-annotated images among the breast medical training images and is a subset of the training set (hereinafter the fine dataset), the expert-annotated fine mask s_i is 1 on tumor-irrelevant areas and 0 on tumor-relevant pixels, so that:
when the attention prototype and the training image belong to the same class, the first term of the fine-grained annotation loss L_annotation measures the prototype's activation on tumor-irrelevant areas and reduces it during training, which in turn promotes learning of attention prototypes on the tumor-relevant areas of the image;
when the attention prototype and the training image do not belong to the same class, the second term of L_annotation penalizes the activation of these different-class attention prototypes, so that they stay far from any convolution features of other classes and the attention prototypes of a particular class specifically represent that class.
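The same-class term of L_annotation can be sketched as a Hadamard product between a prototype activation map and the expert mask s_i; the bilinear upsampling step is omitted here by assuming both arrays already share a shape.

```python
import numpy as np

def annotation_penalty(similarity_map, expert_mask):
    """Sketch of the same-class term of L_annotation: the (upsampled)
    prototype activation map is element-wise multiplied with the expert
    mask s_i (1 on tumor-irrelevant pixels, 0 on tumor-relevant pixels),
    so only off-tumor activation contributes to the penalty."""
    sim = np.asarray(similarity_map, dtype=float)
    mask = np.asarray(expert_mask, dtype=float)
    return float((sim * mask).sum())

act = np.array([[0.9, 0.1],
                [0.2, 0.8]])
mask = np.array([[0, 1],        # 1 = tumor-irrelevant, 0 = tumor-relevant
                 [1, 0]])
penalty = annotation_penalty(act, mask)   # only the 0.1 and 0.2 cells count
```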
According to the invention, the specific process of the step 2 is as follows:
Step 2.1: construct the original dataset and the fine dataset, and augment the original images:
The collected original breast tumor images undergo data preprocessing by horizontal flipping and center cropping to achieve data augmentation, expanding the 1000 original breast medical images per class to 5000 images per class and forming an augmented dataset. The original breast tumor images, used mainly to construct and learn attention prototypes during training, form the original dataset. A radiologist labels the original breast tumor images at fine granularity, setting 1 on breast-tumor-irrelevant areas and 0 on tumor-relevant pixels; this fine-grained-annotated portion of the data forms the fine dataset, and the fine-grained annotation loss L_annotation makes the classification network model focus more accurately on the tumor region of interest.
Step 2.2: the augmented dataset, the original dataset, and the fine dataset are used as the training dataset and fed into the classification network model. In the first stage of training, the parameters of the feature extraction network and of the fully connected layer in the classifier are fixed, and the parameters in the attention prototype generation module are trained and optimized, so that the attention prototypes in its attention prototype layer can better learn distinguishing features from the original dataset; these attention prototypes are stored in the attention prototype layer. Then all learnable parameters of the model are released to improve the performance of the whole model and further optimize the attention prototypes; the aggregation loss and separation loss make same-class attention prototypes in the attention prototype layer more similar and enlarge the differences between attention prototypes of different classes. Finally, when the whole model has been trained to convergence, the trained classification network model is obtained.
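The two-stage schedule of step 2.2 reduces to toggling which parameter groups are trainable; the sketch below uses hypothetical group names and plain-Python flags in place of a real framework's requires_grad mechanism.

```python
def stage_one(trainable):
    """Stage 1: freeze the feature extraction network and the classifier's
    fully connected layer; train only the attention prototype module."""
    trainable["feature_extractor"] = False
    trainable["fc_classifier"] = False
    trainable["attention_prototype_module"] = True
    return trainable

def stage_two(trainable):
    """Stage 2: release all learnable parameters and train to convergence."""
    return {name: True for name in trainable}

flags = {"feature_extractor": True,
         "fc_classifier": True,
         "attention_prototype_module": True}
flags = stage_one(flags)
frozen_in_stage1 = [name for name, t in flags.items() if not t]
flags = stage_two(flags)                # everything trainable again
```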
According to the present invention, the specific process of the step 3 is as follows:
firstly, the breast tumor image to be classified is fed into the feature extraction network to obtain a feature map f(x_i), which is then input into the attention prototype generation module; the multi-scale spatial attention yields a feature map that focuses more on the tumor location. This feature map is compared for similarity against the attention prototypes in the attention prototype layer: image features of the same category as an attention prototype receive a high similarity score, while images of a different category receive a low one. Finally, the classification result for the current image is obtained through the fully connected layer and the SoftMax function in the classifier.
The invention has the beneficial technical effects that:
the invention greatly alleviates the limited interpretability of deep learning models for medical image classification. First, an effective feature representation of the breast tumor image is obtained through a basic convolutional neural network, improving the feature extraction capability of the model. Through the unique training data set and the multi-scale spatial attention and fine-grained loss term used in the attention prototype generation module, the features extracted by the classification network focus on the tumor location in the image rather than on other confusing information; these features are used to construct the attention prototype layer, yielding attention prototype representations focused on benign and malignant tumors respectively, and the attention prototypes built from tumor-location features further improve the performance of the model. The aggregation and separation loss terms, which incorporate the idea of contrastive learning, make same-class attention prototypes in the attention prototype layer more similar and enlarge the differences between classes. Finally, the model adopts case-based reasoning to faithfully visualize the whole model, generating high-quality attention prototypes through the proposed attention prototype generation module. Similar to the clinical diagnosis process of an oncologist, when a test image enters the model, its feature representation is first extracted and then compared for similarity with the attention prototypes in the attention prototype layer to obtain similarity scores; finally the image is classified by the classifier. The model assists human decision-making and aims at better overall human-computer cooperation.
In addition, the invention covers the following application scenario: its reasoning process is provided to medical expert partners to give useful help in difficult, high-risk medical decisions. Unlike existing deep learning "black box" models, the explicit reasoning process of the invention can be understood and validated by a physician, and an explanation can be provided for each case, displaying its decision process. The invention focuses on the tumor portion of a breast tumor image, explains that it considers this portion of the image similar to a typical case seen before, calculates the similarity between the two, and finally yields the probabilities of benign and malignant. The reasoning process that the invention explains to humans is the very reasoning process it uses to understand images. Furthermore, because the data set is small, the invention engages a medical team as experts to finely annotate the data set so as to extract more information from the limited data; this allows better generalization with fewer images and enables a high-quality reasoning and prediction process. Meanwhile, the invention provides an attention prototype generation module that produces higher-quality attention prototypes through multi-scale spatial attention focusing, and jointly uses the original data and the finely annotated data, so that the model focuses on the tumor location in the image, reducing the influence of confusing information in the image and further enhancing the interpretability of the model.
Drawings
FIG. 1 is a flow chart of a classification method according to the present invention;
FIG. 2 is a schematic diagram of the overall structure of a classification network model according to the present invention;
fig. 3 is a schematic structural diagram of an attention prototype generating module in a classification network model according to the present invention.
Detailed Description
The invention will be described in further detail with reference to the drawings and the detailed description, but is not limited thereto.
Example 1
A breast tumor image classification method based on interpretable deep learning, comprising:
first: constructing an attention prototype generation module and saving the attention prototypes it produces for subsequent comparison and classification;
secondly: designing a separation loss term and an aggregation loss term in the loss function by adopting the idea of contrastive learning, and obtaining a fine-grained loss term from the fine-grained annotation information of the breast image data, so as to train the classification network model;
finally: and comparing and classifying medical images acquired in real time by adopting the classification network model.
The breast tumor image classification method based on interpretable deep learning specifically comprises the following steps:
as shown in fig. 1 and 2: step 1, constructing a classification network model;
the classifying network model comprises a feature extraction network, an attention prototype generation module and a classifier;
step 2, feature labeling is carried out on the breast tumor image, a training data set is constructed, and then the classification network model is trained; wherein the breast tumor image comprises a breast benign tumor image and a breast malignant tumor image;
and step 3, acquiring the breast tumor image to be classified in real time, and sending the breast tumor image to a classification network model after training is completed, so as to obtain a classification result of the current breast medical image.
A basic convolutional neural network is adopted as the feature extraction network, which may be built from basic convolutional networks such as VGG16, ResNet50 or DenseNet161. Each such network consists of several stages, each containing convolutional layers; at each stage the spatial size of the feature map is halved and the number of channels is doubled. When a training image x_i, i.e., a breast tumor image used for training, is input into the feature extraction network, the output feature map f(x_i) serves as the output feature of the feature extraction network.
As shown in fig. 3: the attention prototype generation module comprises a multi-scale spatial attention τ and an attention prototype layer ρ. The feature map f(x_i) first passes through the multi-scale spatial attention τ to obtain a combination of distinguishing features τ(f(x_i)) ∈ R^{C×W×H}, wherein C denotes the number of feature channels, W the width of the feature map, and H the height of the feature map;
the multi-scale spatial attention τ is expressed as:

τ(f(x_i)) = SA(Conv1(f(x_i)) + Conv2(f(x_i)) + Conv3(f(x_i))),
SA(F) = Sigmoid(f^{n×n}([AvgPool(F); MaxPool(F)])) ⊗ F    (1)
in formula (1), Conv1(·), Conv2(·) and Conv3(·) are convolution kernels with different dilation rates; the larger the dilation rate, the larger the receptive field of the kernel and the more comprehensive the learned spatial features. SA is the spatial attention used to acquire distinguishing spatial features; f^{n×n} denotes a convolution kernel of size n×n; AvgPool(·) and MaxPool(·) denote mean pooling and maximum pooling operations, respectively, which integrate the feature channel information; the Sigmoid function computes the spatial importance of each pixel;
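The multi-scale branches and the spatial-attention gate described above can be sketched in PyTorch; how the three dilated branches are fused (an elementwise sum here) and the kernel sizes are assumptions, since the patent does not reproduce formula (1) in printable form:

```python
import torch
import torch.nn as nn

class MultiScaleSpatialAttention(nn.Module):
    """Sketch of the multi-scale spatial attention tau: three convolutions
    with different dilation rates widen the receptive field, then a spatial
    attention SA (channel-wise mean/max pooling, an n x n convolution f_nxn,
    and a Sigmoid) weights the importance of each spatial position."""

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        # Conv1/Conv2/Conv3: convolution kernels with different dilation rates.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=3, dilation=3)
        # f_{n x n}: convolution over the pooled 2-channel descriptor.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # Assumed fusion of the multi-scale branches: elementwise sum.
        u = self.conv1(f) + self.conv2(f) + self.conv3(f)
        avg = u.mean(dim=1, keepdim=True)    # mean pooling over channels
        mx, _ = u.max(dim=1, keepdim=True)   # max pooling over channels
        gate = torch.sigmoid(self.spatial(torch.cat([avg, mx], dim=1)))
        return u * gate                      # pixel-importance weighting
```

Feeding a feature map f(x_i) of shape (batch, C, H, W) through this module returns an attended map of the same shape, which the attention prototype layer then consumes.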
after the attention feature τ(f(x_i)) is obtained, it is used to compute attention prototypes, and an attention prototype layer ρ containing m attention prototypes learned from the training set is constructed. Each attention prototype is an image block with the same number of channels as τ(f(x_i)) but a spatial scale of 1×1. Because the attention prototypes share the channel count of the attention feature while having a much smaller spatial scale, each attention prototype can be understood as representing the prototypical activation pattern of its class, and can be visualized as a patch of a training image in which that pattern appears. For example, the model learns attention prototype representations of benign and malignant breast tumor images and stores these attention prototypes in the attention prototype layer ρ for comparison with subsequent test images. Each attention prototype AP_t in the attention prototype layer ρ is compared with the input image by computing the l_2 distance d_{t,j} between AP_t and every 1×1 image block of the attention feature τ(f(x_i)), and converting it into a similarity score S_{t,j};
The similarity score S_{t,j} is:

S_{t,j} = log( (d_{t,j} + 1) / (d_{t,j} + ε) )    (2)

In formula (2), τ(f(x_i))_j, j ∈ {(1,1), …, (W,H)}, denotes the j-th 1×1 image block of the attention feature τ(f(x_i)); ε denotes a very small positive number that prevents the denominator from being 0;
if an input image x_i is a benign tumor, the attention feature τ(f(x_i)) will contain image blocks τ(f(x_i))_j that are very close to one or more attention prototypes AP_t in the attention prototype layer ρ, so the computed similarity scores S_{t,j} between these attention prototypes and the image blocks representing the benign tumor are very large. Conversely, the l_2 distance between an image block representing a malignant tumor and an attention prototype representing a benign tumor is larger, i.e., their similarity score is lower. The similarity scores between the image blocks τ(f(x_i))_j and an attention prototype AP_t are spatially combined into a similarity map, denoted {S_{t,j}} = σ(AP_t, f(x_i)), where σ(·) is the function that computes the similarity map between the t-th attention prototype AP_t and the image blocks of the feature map f(x_i). Compared with using maximum pooling or average pooling alone to convert the similarity map into a similarity score, top_k average pooling is adopted to make the similarity score more convincing: for the similarity map {S_{t,j}}, top_k average pooling finds the k highest similarity scores and computes their average,
top_k average pooling is represented by:
S_j = AVGPOOL(top_k({S_{t,j}}))    (3)

In formula (3), S_j denotes the similarity score of the final representation feature; AVGPOOL denotes the average pooling operation; top_k denotes finding the k highest similarity scores.
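The patch-to-prototype comparison and the top_k average pooling of formulas (2) and (3) can be sketched as follows; the log-ratio conversion from distance to similarity is an assumption borrowed from ProtoPNet-style models such as the cited IAIA-BL, since the patent's formula (2) is not reproduced in printable form:

```python
import torch

def prototype_similarity(feat: torch.Tensor, prototypes: torch.Tensor,
                         k: int = 5, eps: float = 1e-4) -> torch.Tensor:
    """Compare every 1x1 image block of the attended feature map `feat`
    (shape (C, W, H)) with each C-channel attention prototype via the l2
    distance d_{t,j}, convert distances to similarity scores S_{t,j}, and
    reduce each similarity map with top_k average pooling to one score S_j
    per prototype."""
    C = feat.shape[0]
    patches = feat.reshape(C, -1).t()          # (W*H, C) 1x1 image blocks
    d = torch.cdist(prototypes, patches) ** 2  # squared l2 distances (m, W*H)
    sim = torch.log((d + 1) / (d + eps))       # similarity maps {S_{t,j}}
    topk, _ = sim.topk(k, dim=1)               # k highest scores per prototype
    return topk.mean(dim=1)                    # top_k average pooling -> S_j
```

With m prototypes the function returns an m-dimensional score vector, which the fully connected layer FC_1 then maps to the benign/malignant output scores.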
The classifier comprises a fully connected layer FC_1 and a SoftMax function. The fully connected layer FC_1 multiplies the similarity scores S_j by the corresponding weight matrix, each neuron in the fully connected layer having a corresponding weight value, to finally obtain output scores for benign tumor and malignant tumor; the two output scores are normalized with the SoftMax function to obtain the classification probabilities y_benign and y_malignant. The process of normalizing the two output scores into classification probabilities with a SoftMax function will be apparent to those skilled in the art.
Example 2
A breast tumor image classification method based on interpretable deep learning as in embodiment 1, wherein the total loss function L_final of the classification network model is:

L_final = L_cls + λ_c L_cluster + λ_s L_separate + λ_a L_annotation    (4)
In formula (4), L_cls denotes the classification cross-entropy loss, L_cluster the aggregation loss, L_separate the separation loss, and L_annotation the fine-grained annotation loss; λ_c, λ_s and λ_a are balance parameters, the weights that balance each loss term. The classification cross-entropy loss L_cls, used as the classification loss, is computed from the feature f(x_i) as:

L_cls = −(1/n) Σ_{i=1}^{n} ŷ_i^T log(p_i)    (5)

In formula (5), ŷ_i is the ground-truth label of the input image, represented as a one-hot vector; T denotes the transpose; cls denotes the classifier that produces the prediction score vector p_i ∈ R^N, where N is the number of classes (here N = 2). This also ensures that the learned convolution feature f(x_i) and the attention prototypes are related to the class of the predicted image;
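Formula (4) is a weighted sum that is straightforward to assemble; a minimal sketch, with illustrative (not patent-specified) balance parameters λ:

```python
import torch
import torch.nn.functional as F

def total_loss(logits, target, l_cluster, l_separate, l_annotation,
               lam_c=0.8, lam_s=0.08, lam_a=0.01):
    """Total loss of equation (4): classification cross-entropy (equation
    (5)) plus the aggregation, separation and fine-grained annotation
    terms, each weighted by its balance parameter lambda."""
    l_cls = F.cross_entropy(logits, target)  # expects raw class scores
    return l_cls + lam_c * l_cluster + lam_s * l_separate + lam_a * l_annotation
```

Note that F.cross_entropy already combines SoftMax with the one-hot log-likelihood of equation (5), so the targets are passed as class indices rather than one-hot vectors.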
in order to bring same-class attention prototypes in the attention prototype layer ρ closer together while minimizing the influence of other classes, an aggregation loss over attention prototypes of the same class and a separation loss over attention prototypes of different classes are added,
aggregation loss L_cluster is:

L_cluster = (1/n) Σ_{i=1}^{n} mean( min_k_{t: AP_t ∈ P_{y_i}} || f(x_i)_j − AP_t ||_2^2 )    (6)

separation loss L_separate is:

L_separate = −(1/n) Σ_{i=1}^{n} mean( min_k_{t: AP_t ∉ P_{y_i}} || f(x_i)_j − AP_t ||_2^2 )    (7)
in formulas (6) and (7), min_k computes the k minimum l_2 distances between each image block of the convolution feature f(x_i) and the j-th prototype; the overbar computes a mean; AP_t ∈ P_{y_i} indicates that prototype AP_t belongs to the same class as the input image, and AP_t ∉ P_{y_i} that it does not. Minimizing the aggregation loss L_cluster makes the attention prototypes in the attention prototype layer that belong to the same class closer to the convolution features of training images of that class, so that same-class attention prototypes become more similar; the separation loss L_separate increases the l_2 distance between the convolution features of a training image and the attention prototypes in the attention prototype layer that do not belong to its class, which helps separate attention prototypes of different classes. Using L_cluster and L_separate together makes the obtained attention prototypes more robust;
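The min_k-based aggregation and separation terms can be sketched for a single training image; the prototype-to-class assignment and the mean reduction are assumptions about the unprinted formulas (6) and (7):

```python
import torch

def cluster_separation_losses(feat, prototypes, proto_class, label, k=3):
    """Aggregation (cluster) and separation losses for one image: pull the
    convolution features toward the k nearest same-class attention
    prototypes and push them away from other-class prototypes.
    feat: (C, W, H) feature map; prototypes: (m, C); proto_class: (m,)
    class index of each prototype; label: class index of the image."""
    C = feat.shape[0]
    patches = feat.reshape(C, -1).t()          # (W*H, C) image blocks
    d = torch.cdist(patches, prototypes) ** 2  # squared l2 distances (W*H, m)
    same = proto_class == label
    # min_k: the k smallest distances, then the overbar (mean) over them.
    l_cluster = d[:, same].reshape(-1).topk(k, largest=False).values.mean()
    # Negated so that minimising the loss *increases* the distance.
    l_separate = -d[:, ~same].reshape(-1).topk(k, largest=False).values.mean()
    return l_cluster, l_separate
```

Minimising l_cluster draws same-class prototypes toward the image's patches, while the negative sign in l_separate grows the distance to other-class prototypes.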
because the invention additionally uses images with expert annotations for training, the technical scheme adopts the fine-grained annotation loss L_annotation, which penalizes attention prototype activation on tumor-irrelevant areas of the training images and thus reduces the influence of confusing information in the images;
the fine-grained annotation loss L_annotation is:

L_annotation = Σ_{i∈d} ( Σ_{t: AP_t ∈ P_{y_i}} || s_i ⊙ Upsampling({S_{t,j}}) ||_2 + Σ_{t: AP_t ∉ P_{y_i}} || Upsampling({S_{t,j}}) ||_2 )    (8)

In formula (8), ⊙ denotes the element-wise multiplication operation; Upsampling({S_{t,j}}) is a bilinear upsampling operation that produces an activation map with the same dimensions as the expert-annotated fine mask s_i, so that the Hadamard product of the two can be computed. For a given training instance x_i, i ∈ d, where d denotes the expert-annotated images among the breast medical training images, a subset of the training images hereinafter called the fine data set; the expert-annotated fine mask s_i is 1 on tumor-irrelevant areas and 0 on tumor-relevant pixels, so:
when the attention prototype and the training image belong to the same class, the first term of the fine-grained annotation loss L_annotation measures the activation of the attention prototype on tumor-irrelevant areas and reduces it during training, which in turn promotes the learning of the attention prototype on tumor-relevant areas of the image;

when the attention prototype and the training image do not belong to the same class, the second term of the fine-grained annotation loss L_annotation penalizes the activation of these other-class attention prototypes, so that they stay far from any convolution features of classes other than their own and each attention prototype of a particular class specifically represents that class.
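The masked-activation penalty can be sketched for one expert-annotated image; the l2-norm reduction and the equal weighting of the two terms are assumptions about the unprinted formula (8):

```python
import torch
import torch.nn.functional as F

def fine_annotation_loss(sim_maps, mask, proto_class, label):
    """Fine-grained annotation loss for one image: bilinearly upsample each
    prototype's similarity map {S_{t,j}} to the resolution of the expert
    mask s_i (1 on tumour-irrelevant pixels, 0 on tumour pixels) and take
    the Hadamard product. Same-class prototypes are penalised only on
    irrelevant areas; other-class prototypes are penalised everywhere."""
    up = F.interpolate(sim_maps.unsqueeze(1), size=tuple(mask.shape),
                       mode="bilinear", align_corners=False).squeeze(1)
    same = proto_class == label
    loss_same = (up[same] * mask).norm()  # activation on irrelevant areas
    loss_diff = up[~same].norm()          # any activation of the wrong class
    return loss_same + loss_diff
```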
Example 3
The specific procedure of the step 2 is as follows, which is a breast tumor image classification method based on interpretable deep learning as described in embodiments 1 and 2:
step 2.1, constructing an original data set and a fine data set by amplifying the original image:
the acquired original breast tumor images are preprocessed by horizontal flipping and center cropping to achieve data expansion, so that the 1000 original breast medical images in each category are expanded to 5000 images per category, constructing an augmentation data set. The original breast tumor images are mainly used to construct and learn attention prototypes during training, constructing an original data set; a radiologist annotates the original breast tumor images at fine granularity, setting 1 on tumor-irrelevant areas and 0 on tumor-relevant pixels, and this fine-grained-annotated portion of the data is used as fine data to construct a fine data set, with the fine-grained annotation loss L_annotation making the classification network model focus more accurately on the tumor region of interest;
step 2.2, the augmentation data set, the original data set and the fine data set are used together as the training data set and fed into the classification network model. In the first stage of training, the parameters of the feature extraction network and of the fully connected layer in the classifier are fixed, and only the parameters of the attention prototype generation module are trained and optimized, so that the attention prototypes in the module's attention prototype layer better learn distinguishing features from the original data set within the training data set; the learned attention prototypes are stored in the attention prototype layer. Then all learnable parameters of the model are released to improve the performance of the whole model and further optimize the attention prototypes: the aggregation loss and the separation loss make same-class attention prototypes in the attention prototype layer more similar and enlarge the differences between attention prototypes of different classes. Finally, when the whole model has been trained to convergence, the trained classification network model is obtained.
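The two training stages amount to toggling requires_grad on parameter groups; a minimal sketch, in which the attribute names (backbone, fc, prototype_module) are assumptions about the model layout:

```python
import torch.nn as nn

def set_stage(model: nn.Module, stage: int) -> None:
    """Stage 1: fix the feature-extraction backbone and the classifier's
    fully connected layer, optimise only the attention prototype
    generation module. Stage 2: release all learnable parameters."""
    if stage == 1:
        for p in model.backbone.parameters():
            p.requires_grad = False          # fix feature extraction
        for p in model.fc.parameters():
            p.requires_grad = False          # fix fully connected layer
        for p in model.prototype_module.parameters():
            p.requires_grad = True           # train attention prototypes
    else:
        for p in model.parameters():
            p.requires_grad = True           # release everything
```

In practice the optimizer is rebuilt (or given only the currently trainable parameters) after each stage switch.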
Example 4
The method for classifying breast tumor images based on interpretable deep learning according to embodiments 1, 2 and 3 comprises the following specific procedures in step 3:
firstly, the breast tumor image to be classified is fed into the feature extraction network to obtain a feature map f(x_i), which is then input into the attention prototype generation module; the multi-scale spatial attention yields a feature map that focuses more on the tumor location. This feature map is compared for similarity against the attention prototypes in the attention prototype layer: image features of the same category as an attention prototype receive a high similarity score, while images of a different category receive a low one. Finally, the classification result for the current image is obtained through the fully connected layer and the SoftMax function in the classifier.
In summary, the breast tumor image classification method based on interpretable deep learning with the fused attention prototype generation module combines a convolutional neural network with an improved attention-module method to classify breast tumor images, enhancing the feature extraction capability of the basic convolutional network to the greatest extent. In the classification network model, the proposed attention prototype generation module adopts case-based reasoning; the similarity comparison between a test image and the learned attention prototypes resembles a physician's clinical diagnosis process, which improves the interpretability of the deep learning model, addresses its "black box" nature, and also controls overfitting on smaller training-set tasks. The contrastive-learning loss terms designed in the loss function, together with the novel fine-grained loss term, improve the classification performance of the network model. The invention thus addresses the problems of insufficient interpretability of deep learning models for breast tumor image classification, inability to focus accurately on the tumor location, and attention mechanisms attending only to the most salient features of an object.
Claims (8)
1. A breast tumor image classification method based on interpretable deep learning, comprising the steps of:
first: constructing an attention prototype generation module and saving the attention prototypes it produces for subsequent comparison and classification;
secondly: designing a separation loss term and an aggregation loss term in the loss function, and obtaining a fine-grained loss term from the fine-grained annotation information of the breast image data, so as to train a classification network model;
finally: and comparing and classifying medical images acquired in real time by adopting the classification network model.
2. The breast tumor image classification method based on interpretable deep learning of claim 1, wherein the classification method specifically comprises the following steps:
step 1, constructing a classification network model;
the classifying network model comprises a feature extraction network, an attention prototype generation module and a classifier;
step 2, feature labeling is carried out on the breast tumor image, a training data set is constructed, and then the classification network model is trained; wherein the breast tumor image comprises a breast benign tumor image and a breast malignant tumor image;
and step 3, acquiring the breast tumor image to be classified in real time, and sending the breast tumor image to a classification network model after training is completed, so as to obtain a classification result of the current breast medical image.
3. The breast tumor image classification method based on interpretable deep learning of claim 2, wherein a basic convolutional neural network is used as the feature extraction network: a training image x_i is input into the feature extraction network, and the output feature map f(x_i) serves as the output feature of the feature extraction network.
4. A breast tumor image classification method based on interpretable deep learning according to claim 3, wherein the attention prototype generation module comprises a multi-scale spatial attention τ and an attention prototype layer ρ; the feature map f(x_i) first passes through the multi-scale spatial attention τ to obtain a combination of distinguishing features τ(f(x_i)) ∈ R^{C×W×H}, wherein C denotes the number of feature channels, W the width of the feature map, and H the height of the feature map;
the multi-scale spatial attention τ is expressed as:

τ(f(x_i)) = SA(Conv1(f(x_i)) + Conv2(f(x_i)) + Conv3(f(x_i))),
SA(F) = Sigmoid(f^{n×n}([AvgPool(F); MaxPool(F)])) ⊗ F    (1)

In formula (1), Conv1(·), Conv2(·) and Conv3(·) are convolution kernels with different dilation rates, respectively; SA is the spatial attention used to acquire distinguishing spatial features; f^{n×n} denotes a convolution kernel of size n×n; AvgPool(·) and MaxPool(·) denote mean pooling and maximum pooling operations, respectively, which integrate the feature channel information; the Sigmoid function computes the spatial importance of each pixel;
after the attention feature τ(f(x_i)) is obtained, it is used to compute attention prototypes, and an attention prototype layer ρ containing m attention prototypes learned from the training set is constructed; each attention prototype AP_t in the attention prototype layer ρ is compared with the input image by computing the l_2 distance d_{t,j} between AP_t and every 1×1 image block of the attention feature τ(f(x_i)) and converting it into a similarity score S_{t,j};
The similarity score S_{t,j} is:

S_{t,j} = log( (d_{t,j} + 1) / (d_{t,j} + ε) )    (2)

In formula (2), τ(f(x_i))_j, j ∈ {(1,1), …, (W,H)}, denotes the j-th 1×1 image block of the attention feature τ(f(x_i)); ε denotes a very small positive number that prevents the denominator from being 0;
the similarity scores between the image blocks τ(f(x_i))_j and an attention prototype AP_t are spatially combined into a similarity map, denoted {S_{t,j}} = σ(AP_t, f(x_i)), where σ(·) is the function that computes the similarity map between the t-th attention prototype AP_t and the image blocks of the feature map f(x_i); top_k average pooling is adopted: for the similarity map {S_{t,j}}, top_k average pooling finds the k highest similarity scores and computes their average,
top_k average pooling is represented by:
S_j = AVGPOOL(top_k({S_{t,j}}))    (3)

In formula (3), S_j denotes the similarity score of the final representation feature; AVGPOOL denotes the average pooling operation; top_k denotes finding the k highest similarity scores.
5. The method for classifying breast tumor images based on interpretable deep learning of claim 4, wherein the classifier comprises a fully connected layer FC_1 and a SoftMax function; the fully connected layer FC_1 multiplies the similarity scores S_j by the corresponding weight matrix to finally obtain output scores for benign tumor and malignant tumor, and the two output scores are normalized with the SoftMax function to obtain the classification probabilities y_benign and y_malignant.
6. The method for classifying breast tumor images based on interpretable deep learning of claim 5, wherein the total loss function L_final of the classification network model is:

L_final = L_cls + λ_c L_cluster + λ_s L_separate + λ_a L_annotation    (4)
In formula (4), L_cls denotes the classification cross-entropy loss, L_cluster the aggregation loss, L_separate the separation loss, and L_annotation the fine-grained annotation loss; λ_c, λ_s and λ_a are balance parameters, the weights that balance each loss term; wherein the classification cross-entropy loss L_cls is:

L_cls = −(1/n) Σ_{i=1}^{n} ŷ_i^T log(p_i)    (5)

In formula (5), ŷ_i is the ground-truth label of the input image, represented as a one-hot vector; T denotes the transpose; cls denotes the classifier that produces the prediction score vector p_i ∈ R^N, where N is the number of classes;
aggregation loss L_cluster is:

L_cluster = (1/n) Σ_{i=1}^{n} mean( min_k_{t: AP_t ∈ P_{y_i}} || f(x_i)_j − AP_t ||_2^2 )    (6)

separation loss L_separate is:

L_separate = −(1/n) Σ_{i=1}^{n} mean( min_k_{t: AP_t ∉ P_{y_i}} || f(x_i)_j − AP_t ||_2^2 )    (7)

In formulas (6) and (7), min_k computes the k minimum l_2 distances between each image block of the convolution feature f(x_i) and the j-th prototype; the overbar computes a mean; AP_t ∈ P_{y_i} indicates that prototype AP_t belongs to the same class as the input image, and AP_t ∉ P_{y_i} that it does not;
the fine-grained annotation loss L_annotation is:

L_annotation = Σ_{i∈d} ( Σ_{t: AP_t ∈ P_{y_i}} || s_i ⊙ Upsampling({S_{t,j}}) ||_2 + Σ_{t: AP_t ∉ P_{y_i}} || Upsampling({S_{t,j}}) ||_2 )    (8)

In formula (8), ⊙ denotes the element-wise multiplication operation; Upsampling({S_{t,j}}) is a bilinear upsampling operation that produces an activation map with the same dimensions as the expert-annotated fine mask s_i; for a given training instance x_i, i ∈ d, where d denotes the expert-annotated images among the breast medical training images, a subset of the training images hereinafter called the fine data set; the expert-annotated fine mask s_i is 1 on tumor-irrelevant areas and 0 on tumor-relevant pixels, so:
when the attention prototype and the training image belong to the same class, the first term of the fine-grained annotation loss L_annotation measures the activation of the attention prototype on tumor-irrelevant areas;

when the attention prototype and the training image do not belong to the same class, the second term of the fine-grained annotation loss L_annotation penalizes the activation of these other-class attention prototypes, so that each attention prototype of a particular class specifically represents that class.
7. The breast tumor image classification method based on interpretable deep learning of claim 2, wherein the specific process of step 2 is as follows:
step 2.1, constructing an original data set and a fine data set by amplifying the original image:
carrying out data preprocessing on the acquired original breast tumor images by horizontal flipping and center cropping to realize data expansion and construct an augmentation data set; a breast tumor expert carries out fine-grained annotation on the original breast tumor images, setting 1 on tumor-irrelevant areas and 0 on tumor-relevant pixels, and this fine-grained-annotated portion of the data is used as fine data to construct a fine data set;
step 2.2, the augmentation data set, the original data set and the fine data set are used together as the training data set and fed into the classification network model; in the first stage of training, the parameters of the feature extraction network and of the fully connected layer in the classifier are fixed, the parameters in the attention prototype generation module are trained and optimized, and the attention prototypes are stored in the attention prototype layer; then all learnable parameters of the model are released to improve the performance of the whole model and further optimize the attention prototypes, with the aggregation loss and the separation loss making same-class attention prototypes in the attention prototype layer more similar and enlarging the differences between attention prototypes of different classes; finally, when the whole model has been trained to convergence, the trained classification network model is obtained.
8. The breast tumor image classification method based on interpretable deep learning of claim 2, wherein the specific process of step 3 is as follows:
firstly, the breast tumor image to be classified is fed into the feature extraction network to obtain a feature map f(x_i), which is then input into the attention prototype generation module; the multi-scale spatial attention yields a feature map that focuses more on the tumor location. This feature map is compared for similarity against the attention prototypes in the attention prototype layer: image features of the same category as an attention prototype receive a high similarity score, while images of a different category receive a low one. Finally, the classification result for the current image is obtained through the fully connected layer and the SoftMax function in the classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310433791.6A CN116664911A (en) | 2023-04-17 | 2023-04-17 | Breast tumor image classification method based on interpretable deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310433791.6A CN116664911A (en) | 2023-04-17 | 2023-04-17 | Breast tumor image classification method based on interpretable deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116664911A true CN116664911A (en) | 2023-08-29 |
Family
ID=87716078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310433791.6A Pending CN116664911A (en) | 2023-04-17 | 2023-04-17 | Breast tumor image classification method based on interpretable deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116664911A (en) |
- 2023-04-17: CN application CN202310433791.6A filed; status: Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114170533A (en) * | 2021-12-08 | 2022-03-11 | 西安电子科技大学 | Landslide identification method and system based on attention mechanism and multi-mode characterization learning |
CN115631369A (en) * | 2022-10-09 | 2023-01-20 | 中国石油大学(华东) | Fine-grained image classification method based on convolutional neural network |
Non-Patent Citations (2)
Title |
---|
ALINA JADE BARNETT ET AL.: "IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography", arXiv, pages 1-38 *
HUAZHONG JIN ET AL.: "Semantic segmentation of remote sensing images based on dilated convolution and spatial-channel attention mechanism", Journal of Applied Remote Sensing, pages 1-17 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117036830A (en) * | 2023-10-07 | 2023-11-10 | 之江实验室 | Tumor classification model training method and device, storage medium and electronic equipment |
CN117036830B (en) * | 2023-10-07 | 2024-01-09 | 之江实验室 | Tumor classification model training method and device, storage medium and electronic equipment |
CN117893792A (en) * | 2023-12-14 | 2024-04-16 | 中山大学附属第一医院 | Bladder tumor classification method based on MR signals and related device |
CN118016283A (en) * | 2024-04-09 | 2024-05-10 | 北京科技大学 | Interpreted breast cancer new auxiliary chemotherapy pCR prediction method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Shah et al. | A robust approach for brain tumor detection in magnetic resonance images using finetuned efficientnet | |
Wang et al. | Breast cancer detection using extreme learning machine based on feature fusion with CNN deep features | |
CN109493308B (en) | Medical image synthesis and classification method for generating confrontation network based on condition multi-discrimination | |
CN108364006B (en) | Medical image classification device based on multi-mode deep learning and construction method thereof | |
Qi et al. | Automated diagnosis of breast ultrasonography images using deep neural networks | |
CN106682435B (en) | System and method for automatically detecting lesion in medical image through multi-model fusion | |
AU2019311336B2 (en) | Computer classification of biological tissue | |
CN116664911A (en) | Breast tumor image classification method based on interpretable deep learning | |
Wan et al. | Hierarchical temporal attention network for thyroid nodule recognition using dynamic CEUS imaging | |
CN112750115A (en) | Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network | |
Songsaeng et al. | Multi-scale convolutional neural networks for classification of digital mammograms with breast calcifications | |
Rahman et al. | BreastMultiNet: A multi-scale feature fusion method using deep neural network to detect breast cancer | |
Hu et al. | A multi-instance networks with multiple views for classification of mammograms | |
Li et al. | Medical image identification methods: a review | |
Karthiga et al. | Automated diagnosis of breast cancer from ultrasound images using diverse ML techniques | |
Pavithra et al. | An Overview of Convolutional Neural Network Architecture and Its Variants in Medical Diagnostics of Cancer and Covid-19 | |
CN115409812A (en) | CT image automatic classification method based on fusion time attention mechanism | |
CN115880245A (en) | Self-supervision-based breast cancer disease classification method | |
Al-Shouka et al. | A Transfer Learning for Intelligent Prediction of Lung Cancer Detection | |
Cao et al. | Deep learning based mass detection in mammograms | |
Yin et al. | Hybrid regional feature cutting network for thyroid ultrasound images classification | |
Gowri et al. | An improved classification of MR images for cervical cancer using convolutional neural networks | |
CN116823767B (en) | Method for judging lung transplantation activity grade based on image analysis | |
Wibisono et al. | Segmentation-based knowledge extraction from chest X-ray images | |
Zhao et al. | Key techniques for classification of thorax diseases based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||