CN113822252B - Pathological image cell robust detection method under microscope - Google Patents

Pathological image cell robust detection method under microscope

Info

Publication number: CN113822252B
Application number: CN202111397880.7A
Authority: CN (China)
Prior art keywords: pathological image, image, features, feature, depth
Legal status: Active
Other versions: CN113822252A (Chinese, zh)
Inventors: 亢宇鑫, 崔磊, 杨林, 刘欢
Current Assignee: Hangzhou Diyingjia Technology Co ltd
Original Assignee: Hangzhou Diyingjia Technology Co ltd
Application filed by Hangzhou Diyingjia Technology Co ltd
Priority to CN202111397880.7A
Publication of CN113822252A
Application granted
Publication of CN113822252B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/048 - Activation functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 70/00 - ICT specially adapted for the handling or processing of medical references
    • G16H 70/60 - ICT specially adapted for the handling or processing of medical references relating to pathologies


Abstract

The invention discloses a method for robust detection of cells in pathological images under a microscope, which comprises the following steps: existing digital pathological images and pathological images under the microscope are alternately input into an encoder for feature extraction to obtain digital pathological image depth features and microscope pathological image depth features, respectively; after the two kinds of depth features are extracted, the game-playing idea of an adversarial learning mechanism is used, and an image-level discriminator and an instance-level discriminator are respectively adopted to classify the two kinds of features so as to judge their consistency. The invention combines adversarial learning with the training of a target detection network, trains a robust cell detection model for pathological images under the microscope with only a labeled digital pathological image dataset, and can effectively improve the accuracy of cell detection in microscope pathological images.

Description

Pathological image cell robust detection method under microscope
Technical Field
The invention relates to the field of medical technology, and in particular to a method for robust detection of cells in pathological images under a microscope.
Background
A pathology image often contains hundreds of thousands of cells, and the pathologist needs to analyze the tumor cells in a section in detail under the microscope. Effective and accurate detection and classification of positive cells under the microscope is therefore one of the auxiliary capabilities that pathologists currently need most.
In recent years, with the development of deep learning and of digital scanners, many cell detection algorithms based on digital pathological images have been studied. These algorithms fit optimal models by producing large amounts of tumor cell annotations on digital pathological images and combining them with deep learning, thereby achieving assisted detection. However, in current clinical practice pathologists mostly diagnose under a microscope, so such algorithms do not strictly fit the medical diagnosis workflow and end up abandoned in the clinic. Studying an assisted detection method based on deep learning directly on images under the microscope, on the other hand, faces the following difficulties: image heterogeneity is severe, because under a microscope the images often exhibit different marginal distributions owing to illumination and focal length, and this heterogeneity directly reduces model generalization, i.e., cells of the same type are detected as different types under different marginal distributions; and image annotation is costly. Therefore, how to use the existing annotated digital pathological images (first images), combine them with the pathological images under the microscope (second images), and apply the idea of transfer learning to study a robust cell detection method for pathological images under the microscope is one of the challenges in bringing computer-aided pathological diagnosis into clinical application.
Among digital pathological image cell detection methods, target detection networks based on deep learning are widely used. A target detection network typically encodes effective multi-scale features and their context information with a convolutional neural network, then applies a region generation network (region proposal network) to the encoded depth features to obtain candidate region boxes, and finally regresses the category and position of the candidate boxes through technical means such as non-maximum suppression and fully connected classification, thereby detecting and classifying cells. A target detection network needs rich context information and local fine-grained feature information. At the same time, it is particularly sensitive to the feature domain of the dataset samples: when the model is trained close to fitting on data from a single domain, its parameters become very sensitive to image information from other domains, and different noise distributions degrade its performance.
At present, style transfer methods are mainly used to address robust cell detection in pathological images under the microscope. These methods share a two-stage training scheme: first, the digital pathological images are taken as the input of a deep network, and a generative adversarial network transfers the image distribution of the source domain to the distribution of pathological images under the microscope; second, a target detection network is trained with the style-transferred pseudo-microscope pathological images and their original annotations; finally, testing is performed on pathological images under the microscope. However, because the marginal distributions of pathological images under the microscope differ from one another, the network cannot effectively extract their common features during style transfer, so the trained detection model is often inaccurate. Transferring to every distribution of microscope pathological images would make the training cost too large and the time too long, and is not an intelligent choice.
Disclosure of Invention
The invention aims to provide a method for robust detection of cells in pathological images under a microscope, so as to solve the problems raised in the background art above.
In order to achieve the above purpose, the invention provides the following technical scheme: a pathological image cell robust detection method under a microscope, comprising the following steps:
S1, preprocessing the input images: the existing digital pathological images and pathological images under the microscope are alternately input into an encoder for feature extraction to obtain the digital pathological image depth features and the microscope pathological image depth features, respectively;
S2, after the digital pathological image depth features and the microscope pathological image depth features are extracted, the game-playing idea of an adversarial learning mechanism is used, and an image-level discriminator and an instance-level discriminator are respectively adopted to classify the two kinds of features so as to judge their consistency;
S3, on the basis of step S2, an image-level category regularization module is adopted to predict a multi-label classification result of the digital pathological image from the digital pathological image depth features, and a classification consistency regularization module then performs a classification consistency judgment between the obtained multi-label classification result and the instance-level classification result so as to constrain the target detection classification result; a schematic sketch of this training procedure is given below.
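As an illustration only, the following is a minimal PyTorch-style sketch of one alternating training step covering S1 to S3. All module objects and method names (encoder, rpn, det_head.supervised_loss, cat_reg.consistency_loss) are assumed placeholders rather than anything specified by the patent.

```python
import torch
import torch.nn.functional as F

def train_step(encoder, rpn, det_head, d_img, d_ins, cat_reg, optimizer,
               x_src, gt_src, x_tgt):
    # S1: the shared encoder sees labeled digital images (source) and
    # unlabeled microscope images (target) alternately.
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)

    # S2 (image level): source features labeled 1, target features labeled 0.
    p_src, p_tgt = d_img(f_src), d_img(f_tgt)
    l_img = F.binary_cross_entropy(p_src, torch.ones_like(p_src)) \
          + F.binary_cross_entropy(p_tgt, torch.zeros_like(p_tgt))

    # S2 (instance level): the same game on candidate-box features.
    r_src, r_tgt = rpn(f_src), rpn(f_tgt)
    q_src, q_tgt = d_ins(r_src), d_ins(r_tgt)
    l_ins = F.binary_cross_entropy(q_src, torch.ones_like(q_src)) \
          + F.binary_cross_entropy(q_tgt, torch.zeros_like(q_tgt))

    # Strong supervision (classification + box regression) on the labeled
    # source domain only.
    cls_logits, boxes = det_head(r_src)
    l_det = det_head.supervised_loss(cls_logits, boxes, gt_src)

    # S3: image-level multi-label prediction and consistency constraint.
    l_cc = cat_reg.consistency_loss(f_src, cls_logits, gt_src)

    # A faithful implementation would make the encoder MAXIMIZE the two
    # discriminator losses (e.g. via a gradient reversal layer); the plain
    # sum below is kept only for brevity.
    loss = l_det + l_img + l_ins + l_cc
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```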
Preferably, the specific steps by which the instance-level discriminator classifies the two kinds of features in step S2 include:
S201, the extracted digital pathological image depth features and microscope pathological image depth features are input into a region generation network to generate first candidate box features and second candidate box features, respectively;
S202, the obtained first candidate box features and second candidate box features are screened, and an instance-level coordinate regressor and an instance-level classifier are then used to perform coordinate regression and classification on the candidate box features;
S203, the first candidate box features and the second candidate box features are respectively input into the instance-level discriminator to obtain a first instance feature consistency confidence and a second instance feature consistency confidence, respectively.
Preferably, the multi-label classification result in step S3 is subjected to the classification consistency judgment against the first classification result obtained by the instance-level classifier.
Preferably, the region generation network in step S201 is composed of a series of convolutional layers.
Preferably, the instance-level coordinate regressor and the instance-level classifier in step S202 each consist of three fully connected layers, and the instance-level category prediction results obtained are, respectively: the first positioning result $P_s$, the first classification result $C_s$, the second positioning result $P_t$ and the second classification result $C_t$; the first positioning result $P_s$ and the first classification result $C_s$ are strongly supervised with a cross-entropy loss:

$$L_{cls} = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{C} y_{ijk}\log p_{ijk}$$

$$L_{reg} = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{K} y_{ijk}\log p_{ijk}$$

$L_{cls}$ and $L_{reg}$ are the strong supervision losses for the first classification result and the first positioning result, where N denotes the total number of digital pathological images, i denotes the i-th digital pathological image, M denotes the number of labels contained in the i-th digital pathological image, j denotes the j-th label in the i-th digital pathological image, C denotes the total number of prediction categories, K denotes the number of coordinates of the current label box, k denotes the current k-th category or the k-th label coordinate, y denotes the label value of the current label, and p denotes the predicted probability of the current category of the current label (or, in $L_{reg}$, the predicted value of the label box coordinate).
Preferably, the instance-level discriminator consists of three convolutional layers, a max pooling layer, a fully connected layer and a sigmoid activation layer; the instance-level discriminator loss function is as follows:

$$L_{ins} = -\frac{1}{N}\sum_{i=1}^{N}\Big[\log D_{ins}\big(F_s^i\big) + \log\big(1 - D_{ins}\big(F_t^i\big)\big)\Big]$$

in the formula, $F_s$ denotes the digital pathological image depth features and $F_t$ denotes the microscope pathological image depth features.
Preferably, the image-level discriminator consists of three convolutional layers, a max pooling layer, a fully connected layer and a sigmoid activation layer; its loss function is as follows:

$$L_{img} = -\frac{1}{N}\sum_{i=1}^{N}\Big[\log D\big(F_s^i\big) + \log\big(1 - D\big(F_t^i\big)\big)\Big]$$

wherein $D$ denotes the discriminator, $F_s$ denotes the digital pathological image depth features, and $F_t$ denotes the microscope pathological image depth features.
Compared with the prior art, the invention provides a method for robust detection of cells in pathological images under a microscope, which has the following beneficial effects:
(1) The method combines adversarial learning with the training of a target detection network, trains a robust cell detection model for pathological images under the microscope with only a labeled digital pathological image dataset, and can effectively improve the accuracy of cell detection in microscope pathological images.
(2) The invention uses the idea of adversarial learning: the discriminators of the adversarial generation network distinguish the digital pathological image depth features from the microscope pathological image depth features, thereby guiding the network to extract, and effectively exploit, the features that are consistent between the two.
(3) The image-level category regularization module uses the weak localization property of the multi-label classifier to encourage the network to attend to the important regions that contain the main targets.
(4) After the image-level multi-label classification result is obtained, the classification consistency regularization module computes the consistency between the image-level and instance-level predictions, which further constrains the target detection classification result and thereby improves the network's ability to detect hard samples.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it:
FIG. 1 is a diagram of a cell robust detection network model structure of a pathological image under a microscope according to the present invention;
FIG. 2 is a schematic diagram of a dense connection block structure according to the present invention;
FIG. 3 is a graph of the effect of directly using a model trained on digital pathology images to predict microscope pathology images;
FIG. 4 is a graph of the effect of using the present method to predict microscope pathology images.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment discloses a pathological image cell robust detection method under a microscope, which is built on a target detection network and comprises the following steps:

The existing digital pathological images and the pathological images under the microscope are alternately input into an encoder for feature extraction to obtain the digital pathological image depth features and the microscope pathological image depth features, respectively. Specifically, the digital pathological images and the microscope pathological images are input into the encoder alternately, so that the encoder parameters are shared between the digital pathological images and the second images (the pathological images under the microscope).

The encoder consists of five densely connected blocks. Each densely connected block may include M coding layers and one pooling layer, the pooling layer being the last layer of the block. Each coding layer consists of a dilated convolution layer, a LeakyReLU activation layer and a batch normalization layer. The scale of the feature at the m-th scale extracted by the m-th coding layer is larger than the scale of the feature at the (m-1)-th scale extracted by the (m-1)-th coding layer; the M-th scale is the largest of the multiple scales, M is a positive integer greater than or equal to 2, and 2 ≤ m ≤ M.

As an example, M may be 5. In this case, as shown in fig. 2, the first coding layer of the plurality of coding layers (denoted "1" in fig. 2) is first used to extract the feature of the first scale (of size 512 × (N - 24)) from the input feature map (of size 512 × N). Second, the second coding layer (denoted "2" in fig. 2) extracts the feature of the second scale (of size 512 × (N + 24)) based on the feature of the first scale. Third, based on the features of the first and second scales, the two are spliced along the channel dimension, and the third coding layer (denoted "3" in fig. 2) extracts the feature of the third scale (of size 512 × (N + 48)). Next, the features of the first, second and third scales are spliced along the channel dimension, and the fourth coding layer (denoted "4" in fig. 2) extracts the feature of the fourth scale (of size 512 × (N + 72)). Finally, the features of the first, second, third and fourth scales are spliced along the channel dimension, and the fifth coding layer (denoted "5" in fig. 2) extracts the feature of the fifth scale (of size 512 × (N + 96)). It should be noted that the number of channels of the feature map at each scale differs from that at the previous scale by 24. In the densely connected block, the splicing operation enhances feature reuse and feature propagation, while the dilated convolutions continuously enlarge the receptive field, effectively ensuring that the extracted features carry multi-scale information.
The dilation rate is a hyperparameter selected according to the depth of the encoded feature; as an example, it is {1, 2, 4, 8, 16} from shallow to deep. After the pooling layer, the feature size at the fifth scale changes from 512 × (N + 96) to 256 × (N + 96), i.e., the spatial size is halved. After the five densely connected blocks, the final encoded feature size is 16 × (N + 96). After encoder feature extraction, the digital pathological image and the pathological image under the microscope respectively yield, at the final coding layer, the digital pathological image depth features $F_s^i = E(x_s^i)$ and the microscope pathological image depth features $F_t^i = E(x_t^i)$, where $x_s^i$ denotes the i-th digital pathology image, $x_t^i$ denotes the i-th pathology image under the microscope, and $E$ denotes the encoder.
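For concreteness, a minimal PyTorch sketch of one densely connected block follows. The five coding layers, the 24-channel growth, the LeakyReLU and batch normalization layers and the trailing pooling layer follow the text; the kernel size, the LeakyReLU slope and the use of all five dilation rates inside a single block are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    # One densely connected block: five coding layers plus a pooling layer.
    # Each coding layer = dilated conv + LeakyReLU + batch normalization,
    # and each layer consumes the channel-wise splice of all earlier outputs.
    def __init__(self, in_channels, growth=24, dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.LeakyReLU(0.1),
                nn.BatchNorm2d(growth),
            ))
            ch += growth  # the next layer sees all previous features spliced
        self.pool = nn.MaxPool2d(2)  # halves the spatial size, as in the text

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        # Note: the channel bookkeeping of fig. 2 (ending at N + 96) is
        # ambiguous; this sketch uses the standard DenseNet convention
        # (input channels + 5 x 24 growth) instead.
        return self.pool(torch.cat(feats, dim=1))
```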
After the digital pathological image depth features and the microscope pathological image depth features are extracted, the game-playing idea of the adversarial learning mechanism is used, and an image-level discriminator is adopted to classify the two kinds of features so as to judge their consistency. Specifically, the digital pathological image depth features $F_s$ and the microscope pathological image depth features $F_t$ are respectively input into the image-level discriminator to obtain a consistency confidence for the digital pathological image depth features and a consistency confidence for the microscope pathological image depth features, where the digital pathological image depth features are labeled 1 and the microscope pathological image depth features are labeled 0. The final goal of adversarial learning is reached when the classifier can no longer effectively separate the two kinds of features, which means that their distributions are very similar; the features are therefore called consistent features. The image-level discriminator consists of three convolutional layers, a max pooling layer, a fully connected layer and a sigmoid activation layer. The image-level discriminator loss function is as follows:

$$L_{img} = -\frac{1}{N}\sum_{i=1}^{N}\Big[\log D\big(F_s^i\big) + \log\big(1 - D\big(F_t^i\big)\big)\Big]$$

wherein $D$ denotes the discriminator. When the discriminator loss $L_{img}$ reaches 0.5, the discriminator can no longer distinguish the two kinds of features, and the digital pathological image depth features $F_s$ and the microscope pathological image depth features $F_t$ extracted by the feature extractor at that point are consistent features.
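A minimal PyTorch sketch of such a discriminator is given below; the three convolutions, the max pooling layer, the fully connected layer and the sigmoid output follow the text, while the channel widths, kernel sizes and the LeakyReLU activations between convolutions are illustrative assumptions. The same shape also serves the instance-level discriminator described further below.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Three convolutional layers, a max pooling layer, a fully connected
    # layer and a sigmoid output, as specified in the text.
    def __init__(self, in_channels, hidden=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveMaxPool2d(1),  # the max pooling layer
        )
        self.fc = nn.Linear(hidden, 1)  # the fully connected layer

    def forward(self, f):
        h = self.features(f).flatten(1)
        return torch.sigmoid(self.fc(h))  # consistency confidence in (0, 1)
```

With $F_s$ labeled 1 and $F_t$ labeled 0, the loss $L_{img}$ above is the ordinary binary cross-entropy computed on these sigmoid outputs.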
Meanwhile, after the digital pathological image depth features and the microscope pathological image depth features are extracted, the game-playing idea of the adversarial learning mechanism is used, and the instance-level discriminator is adopted to classify the two kinds of features so as to judge their consistency. The specific classification process is as follows:
(1) The obtained digital pathological image depth features are input into the region generation network (a region proposal network) to obtain the first candidate boxes, and the microscope pathological image depth features are input into the region generation network to obtain the second candidate boxes. That is, after the digital pathological image depth features $F_s$ and the microscope pathological image depth features $F_t$ are extracted, the features are respectively input into the region generation network to obtain the region candidate boxes $R_s$ and $R_t$, where the region generation network is composed of a series of convolutional layers. The feature map first passes through a 3 × 3 convolution to obtain a 256 × 16 × 16 feature map, and then through two 1 × 1 convolutions to obtain an 18 × 16 × 16 feature map and a 36 × 16 × 16 feature map, respectively; at each position, two score features express whether the candidate box is a target or not, and four coordinate features are the four coordinates of the box (consistent with 9 anchors per position: 18 = 9 × 2 and 36 = 9 × 4). $R_s$ and $R_t$ denote the first candidate box features and the second candidate box features.
(2) After the first candidate boxes $R_s$ and the second candidate boxes $R_t$ are obtained, the candidate boxes are screened by non-maximum suppression: the candidate boxes are sorted by region confidence, and the top T candidate boxes are taken as the final candidate boxes. T is a hyperparameter and must be set manually; in this method, T = 10. The final first candidate boxes and the final second candidate boxes are thus obtained. An instance-level coordinate regressor and an instance-level classifier, each composed of three fully connected layers, then perform coordinate regression and classification on the candidate boxes, yielding the first positioning result $P_s$, the first classification result $C_s$, the second positioning result $P_t$ and the second classification result $C_t$. Since the first positioning result and the first classification result have labels, they are strongly supervised with a cross-entropy loss:

$$L_{cls} = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{C} y_{ijk}\log p_{ijk}$$

$$L_{reg} = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{K} y_{ijk}\log p_{ijk}$$

$L_{cls}$ and $L_{reg}$ are the supervision losses for the first classification result and the first positioning result. Here N denotes the total number of digital pathological images, i denotes the i-th digital pathological image, M denotes the number of labels contained in the i-th digital pathological image, j denotes the j-th label in the i-th digital pathological image, C denotes the total number of prediction categories, K denotes the number of coordinates of the current label box, k denotes the current k-th category or k-th label coordinate, y denotes the label value of the current label, and p denotes the predicted probability of the current category of the current label (or, in $L_{reg}$, the predicted value of the label box coordinate). The closer the values of $L_{cls}$ and $L_{reg}$ are to 0, the more accurate the detection classification and positioning results.
(3) The first candidate box features and the second candidate box features are respectively input into the instance-level discriminator to obtain a first instance feature consistency confidence and a second instance feature consistency confidence, respectively, where the first instance features are labeled 1 and the second instance features are labeled 0. The final goal of adversarial learning is reached when the classifier can no longer effectively separate the two kinds of features, which means that their distributions are very similar; the features are therefore called consistent features. The instance-level discriminator consists of three convolutional layers, a max pooling layer, a fully connected layer and a sigmoid activation layer. The instance-level discriminator loss function is as follows:

$$L_{ins} = -\frac{1}{N}\sum_{i=1}^{N}\Big[\log D_{ins}\big(R_s^i\big) + \log\big(1 - D_{ins}\big(R_t^i\big)\big)\Big]$$

wherein $D_{ins}$ denotes the instance-level discriminator. When the discriminator loss $L_{ins}$ reaches 0.5, the discriminator cannot distinguish the two kinds of features, and the first candidate box features $R_s$ and the second candidate box features $R_t$ extracted by the feature extractor are consistent features. This loss is additionally weighted by λ, which is obtained from the classification consistency regularization module described below.
While the image-level discriminator classifies the two kinds of obtained features to judge their consistency, for the labeled digital pathological images an image-level category regularization module is adopted to predict the multi-label classification result of the digital pathological image on the basis of the digital pathological image depth features. The image-level category regularization module consists of three convolutional layers, a max pooling layer, a fully connected layer and a softmax activation layer. As shown in fig. 1, the digital pathological image contains three categories, namely highly diseased cells, moderately diseased cells and mildly diseased cells; after prediction by the image-level category regularization module, the multi-label classification result $C_{img}$ (highly diseased cells, moderately diseased cells, mildly diseased cells) is obtained. This module uses the weak localization property of the multi-label classifier to encourage the network to attend to the important regions that contain the main targets.
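A minimal sketch of the image-level category regularization module: three convolutions, max pooling, a fully connected layer and a softmax activation over the three lesion categories, per the text. Channel widths and the intermediate activations are assumptions; a sigmoid output would be the more common choice for multi-label classification, but softmax is kept as the text specifies.

```python
import torch.nn as nn

class ImageLevelCategoryRegularizer(nn.Module):
    # Three convolutional layers, a max pooling layer, a fully connected
    # layer and a softmax activation over the three lesion categories.
    def __init__(self, in_channels, num_classes=3, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveMaxPool2d(1),
            nn.Flatten(),
            nn.Linear(hidden, num_classes),
            nn.Softmax(dim=1),
        )

    def forward(self, f):
        return self.net(f)  # the multi-label classification result
```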
For the labeled digital pathological images, after the multi-label classification result $C_{img}$ is obtained, the classification consistency regularization module performs a classification consistency judgment between it and the first classification result $C_s$ obtained by the instance-level classifier, further constraining the target detection classification result and thereby improving the network's ability to detect hard samples. Specifically, taking the first classification result $C_s$ as the reference, the module judges for each predicted category whether it is contained in the multi-label classification result $C_{img}$: if it is contained, the weight of that category is 1; otherwise, the weight is 3. This yields the classification consistency weight matrix λ, which is the λ referred to in the instance-level discriminator loss above; the weight matrix λ is used to apply a weight constraint to the instance-level discriminator loss on the first images.
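The weight-matrix construction just described admits a direct sketch; the input and output conventions (predicted category indices per candidate box, the set of image-level labels) are illustrative assumptions.

```python
import torch

def consistency_weights(instance_classes, image_level_labels):
    # With the instance-level classification result as reference, a category
    # that also appears in the image-level multi-label result gets weight 1,
    # otherwise weight 3, yielding the weight matrix lambda.
    present = set(int(c) for c in image_level_labels)
    weights = torch.ones(len(instance_classes))
    for n, cls in enumerate(instance_classes):
        if int(cls) not in present:
            weights[n] = 3.0
    return weights
```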
To verify the effectiveness of the method, a training dataset was prepared for comparative experiments. The training dataset comprises cervical cancer TCT digital pathological images (first images, with annotations) and cervical cancer TCT microscope pathological images (second images, without annotations). Models were trained on this dataset with the baseline method Faster R-CNN and with the present method, yielding model 1 and model 2, respectively. The performance of model 1 and model 2 was then verified on cervical cancer TCT microscope pathological image data, giving the qualitative results of fig. 3 and fig. 4, where fig. 3 shows the detection results of model 1, fig. 4 shows the detection results of model 2, and the cells inside the boxes in fig. 4 are diseased cells. Comparing fig. 3 with fig. 4, model 1 cannot effectively locate the diseased cells in the cervical cancer TCT microscope pathological images, while model 2 can. It can therefore be concluded that the generalization performance of model 2 is higher than that of model 1, which also demonstrates the effectiveness of the method. The boxes in fig. 4 show the lesion probability value of 99 overlaid on the image, which is the detection result of the model.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (5)

1. A pathological image cell robust detection method under a microscope, characterized by comprising the following steps:
S1, preprocessing the input images: existing digital pathological images and pathological images under the microscope are alternately input into an encoder for feature extraction to obtain digital pathological image depth features and microscope pathological image depth features, respectively;
S2, after the digital pathological image depth features and the microscope pathological image depth features are extracted, the game-playing idea of an adversarial learning mechanism is used, and an image-level discriminator and an instance-level discriminator are respectively adopted to classify the two kinds of features so as to judge their consistency;
S3, on the basis of step S2, an image-level category regularization module is adopted to predict a multi-label classification result of the digital pathological image from the digital pathological image depth features, and a classification consistency regularization module then performs a classification consistency judgment between the obtained multi-label classification result and the instance-level classification result so as to constrain the target detection classification result;
the image-level discriminator in the S2 consists of three convolution layers, a maximum pooling layer, a full-connection layer and a sigmoid activation layer; the loss function is as follows:
Figure 830633DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 937129DEST_PATH_IMAGE002
the presence of the discriminator is indicated by the expression,
Figure 289744DEST_PATH_IMAGE003
representing the depth characteristics of the digital pathological image,
Figure 874309DEST_PATH_IMAGE004
representing a microscopic pathology image depth feature;
the example-level discriminator consists of three convolution layers, a maximum pooling layer, a full-connection layer and a sigmoid activation layer; example level discriminator loss function is as follows
Figure 245248DEST_PATH_IMAGE006
In the formula,
Figure 991487DEST_PATH_IMAGE003
Representing the depth characteristics of the digital pathological image,
Figure 283928DEST_PATH_IMAGE004
representing the microscopic pathology image depth feature.
2. The pathological image cell robust detection method under a microscope according to claim 1, characterized in that the specific steps by which the instance-level discriminator classifies the two kinds of features in step S2 include:
S201, the extracted digital pathological image depth features and microscope pathological image depth features are input into a region generation network to generate first candidate box features and second candidate box features, respectively;
S202, the obtained first candidate box features and second candidate box features are screened, and an instance-level coordinate regressor and an instance-level classifier are then used to perform coordinate regression and classification on the candidate box features;
S203, the first candidate box features and the second candidate box features are respectively input into the instance-level discriminator to obtain a first instance feature consistency confidence and a second instance feature consistency confidence, respectively.
3. The pathological image cell robust detection method under a microscope according to claim 2, characterized in that the multi-label classification result in step S3 is subjected to the classification consistency judgment against the first classification result obtained by the instance-level classifier.
4. The method for cell robust detection of pathological image under microscope according to claim 2, wherein the region generation network in step S201 is composed of a series of convolutional layers.
5. The pathological image cell robust detection method under a microscope according to claim 2, characterized in that the instance-level coordinate regressor and the instance-level classifier in step S202 each consist of three fully connected layers, and the instance-level category prediction results obtained are, respectively: the first positioning result $P_s$, the first classification result $C_s$, the second positioning result $P_t$ and the second classification result $C_t$; the first positioning result $P_s$ and the first classification result $C_s$ are strongly supervised with a cross-entropy loss:

$$L_{cls} = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{C} y_{ijk}\log p_{ijk}$$

$$L_{reg} = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{K} y_{ijk}\log p_{ijk}$$

$L_{cls}$ and $L_{reg}$ are the strong supervision losses for the first classification result and the first positioning result, where N denotes the total number of digital pathological images, i denotes the i-th digital pathological image, M denotes the number of labels contained in the i-th digital pathological image, j denotes the j-th label in the i-th digital pathological image, C denotes the total number of prediction categories, K denotes the number of coordinates of the current label box, k denotes the current k-th category or the k-th label coordinate, y denotes the label value of the current label, and p denotes the predicted probability of the current category of the current label (or, in $L_{reg}$, the predicted value of the label box coordinate).
CN202111397880.7A 2021-11-24 2021-11-24 Pathological image cell robust detection method under microscope Active CN113822252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111397880.7A CN113822252B (en) 2021-11-24 2021-11-24 Pathological image cell robust detection method under microscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111397880.7A CN113822252B (en) 2021-11-24 2021-11-24 Pathological image cell robust detection method under microscope

Publications (2)

Publication Number Publication Date
CN113822252A CN113822252A (en) 2021-12-21
CN113822252B true CN113822252B (en) 2022-04-22

Family

ID=78919731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111397880.7A Active CN113822252B (en) 2021-11-24 2021-11-24 Pathological image cell robust detection method under microscope

Country Status (1)

Country Link
CN (1) CN113822252B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078985B (en) * 2023-10-17 2024-01-30 Zhejiang Lab (之江实验室) Scene matching method and device, storage medium and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105612514A (en) * 2013-08-05 2016-05-25 脸谱公司 Systems and methods for image classification by correlating contextual cues with images
CN109670489A (en) * 2019-02-18 2019-04-23 广州视源电子科技股份有限公司 Weakly supervised formula early-stage senile maculopathy classification method based on more case-based learnings
CN112489218A (en) * 2020-11-30 2021-03-12 江苏科技大学 Single-view three-dimensional reconstruction system and method based on semi-supervised learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
iFAN: Image-Instance Full Alignment Networks for Adaptive Object Detection; Chenfan Zhuang et al.; The Thirty-Fourth AAAI Conference on Artificial Intelligence; 2020-03-09; pp. 13122-13129 *

Also Published As

Publication number Publication date
CN113822252A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
Roy et al. Patch-based system for classification of breast histology images using deep learning
CN111882560B (en) Lung parenchyma CT image segmentation method based on weighted full convolution neural network
Negahbani et al. PathoNet introduced as a deep neural network backend for evaluation of Ki-67 and tumor-infiltrating lymphocytes in breast cancer
Ashwin et al. Efficient and reliable lung nodule detection using a neural network based computer aided diagnosis system
JP7422235B2 (en) Non-tumor segmentation to aid tumor detection and analysis
Pan et al. Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks
CN112150442A (en) New crown diagnosis system based on deep convolutional neural network and multi-instance learning
Khan et al. Classification and region analysis of COVID-19 infection using lung CT images and deep convolutional neural networks
Xu et al. Using transfer learning on whole slide images to predict tumor mutational burden in bladder cancer patients
Batool et al. Lightweight EfficientNetB3 model based on depthwise separable convolutions for enhancing classification of leukemia white blood cell images
Mahanta et al. IHC-Net: A fully convolutional neural network for automated nuclear segmentation and ensemble classification for Allred scoring in breast pathology
US20190080146A1 (en) Systems and methods for automatic generation of training sets for machine interpretation of images
JP2022547722A (en) Weakly Supervised Multitask Learning for Cell Detection and Segmentation
Wetteland et al. Automatic diagnostic tool for predicting cancer grade in bladder cancer patients using deep learning
CN113822252B (en) Pathological image cell robust detection method under microscope
Scheurer et al. Semantic segmentation of histopathological slides for the classification of cutaneous lymphoma and eczema
CN114580501A (en) Bone marrow cell classification method, system, computer device and storage medium
Chen et al. Identifying cardiomegaly in chest x-rays using dual attention network
Tang et al. M-SEAM-NAM: multi-instance self-supervised equivalent attention mechanism with neighborhood affinity module for double weakly supervised segmentation of COVID-19
Nalluri et al. Pneumonia screening on chest X-rays with optimized ensemble model
Aljuhani et al. Uncertainty aware sampling framework of weak-label learning for histology image classification
Ovi et al. Infection segmentation from covid-19 chest ct scans with dilated cbam u-net
Taher et al. Morphology analysis of sputum color images for early lung cancer diagnosis
Kadirappa et al. Histopathological carcinoma classification using parallel, cross‐concatenated and grouped convolutions deep neural network
Taher et al. Automatic sputum color image segmentation for lung cancer diagnosis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant