CN112215790A - KI67 index analysis method based on deep learning - Google Patents


Info

Publication number
CN112215790A
Authority
CN
China
Prior art keywords
cell
tumor cells
cells
positive
image
Prior art date
Legal status
Pending
Application number
CN201910548431.4A
Other languages
Chinese (zh)
Inventor
崔灿
杨林
谢园普
石永华
朱思汉
徐建红
Current Assignee
Hangzhou Diyingjia Technology Co ltd
Original Assignee
Hangzhou Diyingjia Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Diyingjia Technology Co ltd
Priority to CN201910548431.4A
Publication of CN112215790A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0012: Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 7/11: Region-based segmentation (G06T 7/10 Segmentation; Edge detection)
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods (G06T 7/70)
    • G06T 2207/10056: Microscopic image (G06T 2207/10 Image acquisition modality)
    • G06T 2207/30096: Tumor; Lesion (G06T 2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The invention relates to a KI67 index analysis method based on deep learning, comprising: S10, acquiring a digitized pathology whole-field image as the original pathology whole-field image; S20, dividing the original whole-field image into regions of interest and regions of no interest; S30, filtering out the regions of no interest in the original whole-field image, performing semantic segmentation on all cells in the remaining regions of interest, and locating the central position of each cell nucleus; S40, determining the cell class to which each nucleus belongs, the classes comprising tumor cells and normal cells, and specifically positive tumor cells, positive normal cells, negative tumor cells and negative normal cells; S50, calculating the KI67 index from the number of positive tumor cells and the total number of tumor cells. In this method, the nuclei of positive tumor cells, positive normal cells, negative tumor cells and negative normal cells are marked in the image with annotation points of different colors, and a cell classification model is built through deep learning, so that the accuracy of the KI67 index is greatly improved.

Description

KI67 index analysis method based on deep learning
Technical Field
The invention relates to the field of cell detection, in particular to a KI67 index analysis method based on deep learning.
Background
In the diagnosis of many tumors, such as breast cancer, ovarian cancer, gastric cancer and colorectal cancer, the proliferative index is taken into account. KI67 is a protein closely related to cell proliferation: it can be detected in cells during the active phases of the cell cycle (G1, S, G2 and M), and is absent in resting cells (G0). The KI67 index is therefore regarded in many tumor pathologies as an indicator of tumor-cell proliferation: the higher the KI67 index, the faster the tumor cells proliferate. It is generally used for tumor grading, for assessing patient prognosis and for determining whether a patient is likely to benefit from chemotherapy, among other purposes. However, not all cells strongly expressing KI67 are tumor cells; highly proliferative cells such as stem cells and blast cells also express it strongly, while some slowly proliferating tumor cells do not express KI67 strongly. The current mainstream analysis method uses image processing to classify cells as negative or positive according to their nuclear staining and then counts the two classes. This method, however, cannot accurately detect positive normal nuclei or negative tumor cells, so the resulting KI67 index is inaccurate and affects the reliability of the tumor assessment.
Disclosure of Invention
To address the problem that traditional methods cannot accurately detect positive normal cell nuclei and negative tumor cells, making the calculated KI67 index inaccurate, the invention provides a deep-learning-based KI67 index analysis method.
The invention achieves this purpose through the following technical scheme. The deep-learning-based KI67 index analysis method comprises: S10, acquiring a digitized pathology whole-field image as the original pathology whole-field image; S20, dividing the original whole-field image into regions of interest and regions of no interest; S30, filtering out the regions of no interest in the original whole-field image, performing semantic segmentation on all cells in the remaining regions of interest, and locating the central position of each cell nucleus; S40, determining the cell class to which each nucleus belongs, the classes comprising tumor cells and normal cells, and specifically positive tumor cells, positive normal cells, negative tumor cells and negative normal cells; S50, calculating the KI67 index from the positive tumor cells and the total number of tumor cells.
Further, step S20 specifically comprises: reading in the original pathology whole-field image and cutting out image regions of size n × n one by one, where n is a pixel value; inputting each image region into a region filtering model, which outputs the attribute of the input region, the attribute being either region of interest or region of no interest.
Further, the obtaining of the region filtering model comprises: S201, acquiring a certain number of pictures labeled as regions of interest and pictures labeled as regions of no interest as sample data, the sample data having the same size as the image regions; S202, performing data enhancement and normalization on the sample data; and S203, using the processed pictures as input to a neural network model and training it to obtain the region filtering model.
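As an illustration, the enhancement and normalization of step S202 can be sketched in numpy as follows. The function names, the flip/rotation choices and the [0, 1] scaling are assumptions for illustration; color jitter and scaling within a range are omitted for brevity.

```python
import numpy as np

def center_crop(image, size=256):
    """Cut a size x size patch from the centre of a sample (step S202)."""
    h, w = image.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return image[top:top + size, left:left + size]

def augment(image, rng):
    """Random flips and a random 90-degree rotation; colour jitter omitted."""
    if rng.random() < 0.5:
        image = np.flipud(image)
    if rng.random() < 0.5:
        image = np.fliplr(image)
    return np.rot90(image, k=int(rng.integers(0, 4)))

def normalize(image):
    """Scale pixel values to [0, 1]."""
    return image.astype(np.float32) / 255.0

# Tiny demonstration on a 6 x 6 "sample" with a 2 x 2 central crop.
sample = np.arange(36, dtype=np.uint8).reshape(6, 6)
crop = center_crop(sample, size=2)
aug = augment(crop, np.random.default_rng(0))
norm = normalize(crop)
```

Flips and 90-degree rotations preserve the pixel multiset, so the augmented patch stays a valid training sample for the same label.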
Further, step S30 specifically comprises: filtering out the regions of no interest in the original pathology whole-field image, cutting n × n patches one by one from the remaining regions of interest with a sliding window, converting each patch to grayscale and using it as an input image for a semantic segmentation neural network model; the model generates a corresponding feature probability matrix for each input image, and all maximum points are found in the generated matrix; the positions of these maximum points are the central positions of the detected cell nuclei.
Further, step S40 specifically comprises: cutting out an image patch of size w × w centered on each nucleus center, where w is a pixel value, the patch containing the cell nucleus; inputting the patches into a cell classification model, which outputs the cell class; recording each nucleus center and the corresponding cell class in a detection list, and counting the number of positive tumor cells and the total number of tumor cells.
Further, the obtaining of the cell classification model comprises: acquiring a certain number of pictures of positive tumor cells, positive normal cells, negative tumor cells and negative normal cells; cutting out a small picture of size w × w centered on the nucleus center in each picture; and feeding the cropped small pictures as training samples into a neural network model, which is trained to obtain the cell classification model.
Further, the steps of obtaining the positive tumor cell, positive normal cell, negative tumor cell and negative normal cell pictures comprise: from pictures in which positive tumor cells, positive normal cells, negative tumor cells and negative normal cells are labeled, cutting out a picture of size m × m centered on the central pixel of each cell nucleus, where m is a pixel value and w < m < n; and performing data enhancement on the cropped m × m pictures, the four cell classes being labeled in different ways.
Further, the KI67 index of step S50 is the number of positive tumor cells divided by the total number of tumor cells.
Compared with the prior art, the invention has the following substantial effects. In this KI67 index analysis method, the nuclei of positive tumor cells, positive normal cells, negative tumor cells and negative normal cells are marked in the image with annotation points of different colors, and a cell classification model is built through deep learning, achieving accurate, efficient and stable cell localization, classification and counting. On the one hand, this greatly improves the accuracy of the KI67 index; on the other hand, it helps doctors diagnose and analyze histopathological features, improving working efficiency.
Drawings
FIG. 1 is a flow chart of the KI67 index analysis method of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
the KI67 index analysis method based on deep learning, as shown in FIG. 1, comprises the following steps:
s10, obtaining a 20-fold or 40-fold magnification, putting the digital pathology whole field image into a digital pathology whole field image, taking the digital pathology whole field image as an original pathology whole field image, scanning pathological sections through a scanner to obtain the original pathology whole field image, selecting a representative area in the whole field image to capture images, performing exhaustive labeling on cell nucleuses on the captured images, and dividing cells in the images into four types: and marking positive cancer cells, positive normal cells, negative cancer cells and negative normal cells by drawing points at the centers of different types of cell nuclei by using brushes with different colors during marking.
S20, region detection: read in the original pathology whole-field image at a low magnification, for example 2×, and use a sliding-window method to cut out regions of a fixed size, such as 256 × 256, one by one as input images for the region detection network. The model predicts each input image as 0 (region of no interest) or 1 (region of interest), and this information is recorded in a list; regions of no interest are not processed in subsequent analysis, so most invalid regions are filtered out and the efficiency and speed of the algorithm are improved. Of course, the doctor may also perform KI67 index analysis on a designated region; if a region is designated, the region detection step is skipped automatically. In a pathology whole-field image, a relatively large portion consists of regions of no interest to doctors, such as blank areas, and the region filtering model performs a binary classification of each image at a relatively low magnification to filter them out.
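A minimal sketch of the sliding-window region detection of step S20. The trained region classifier is a neural network; `mostly_nonblank` below is only an illustrative mean-intensity stand-in, and the function names are assumptions.

```python
import numpy as np

def tile_coordinates(height, width, tile=256):
    """Top-left corners of non-overlapping tile x tile windows."""
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            yield y, x

def build_roi_list(slide, is_roi, tile=256):
    """Run a 0/1 region classifier over every tile and record the results."""
    records = []
    for y, x in tile_coordinates(slide.shape[0], slide.shape[1], tile):
        patch = slide[y:y + tile, x:x + tile]
        records.append((y, x, int(is_roi(patch))))
    return records

def mostly_nonblank(patch, blank_level=220):
    """Illustrative placeholder for the trained region-detection network."""
    return patch.mean() < blank_level

# Blank 512 x 512 "slide" with one darker (tissue-like) tile.
slide = np.full((512, 512), 255, dtype=np.uint8)
slide[0:256, 0:256] = 100
roi = build_roi_list(slide, mostly_nonblank)
```

Only tiles flagged 1 would be passed to the later segmentation and classification stages, which is what makes the whole-field analysis tractable.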
The obtaining of the region filtering model comprises the following steps:
S201, acquire a certain number of pictures labeled as regions of interest and pictures labeled as blank regions as sample data, the sample data having the same size as the image regions;
S202, perform data enhancement and normalization on the sample data: specifically, randomly rotate and flip the samples, scale the pictures within a certain range, adjust brightness, contrast, saturation, hue and the like as data enhancement, cut a 256 × 256 patch from the center of each sample, and finally normalize;
S203, use the processed pictures as input to a neural network model and train it to obtain the region filtering model. The neural network may be a model with good classification performance, such as Inception or ResNet, with binary cross entropy (Binary Cross Entropy) as the loss function and stochastic gradient descent (SGD) as the optimization method; during training, the accuracy and loss of the model on the training and validation sets are plotted, and training ends when these two indicators no longer change significantly.
S30, cell detection: perform semantic segmentation on all cells in the regions of interest of the pathology whole-field image and locate the central position of each cell nucleus. Specifically: after the regions of no interest have been filtered out, cut patches of size 256 × 256 or 512 × 512 one by one from the original whole-field image with a sliding window, convert them to grayscale and use them as input images for the semantic segmentation neural network model. The model generates a corresponding feature probability matrix for each input image; all maximum points are found in the generated matrix, and their positions are the central positions of the detected cell nuclei. Training the cell classification model: this model performs a four-way classification, assigning each cell to one of positive tumor cells, positive normal cells, negative tumor cells and negative normal cells.
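The maximum-point search on the feature probability matrix can be sketched in pure numpy: a nucleus center is taken as a pixel that exceeds a threshold and is not smaller than any of its eight neighbours. The 3 × 3 neighbourhood and the 0.5 threshold are assumptions; the description fixes neither.

```python
import numpy as np

def nucleus_centers(prob, threshold=0.5):
    """Maximum points of the feature probability matrix: pixels above the
    threshold that are not smaller than any of their 8 neighbours."""
    h, w = prob.shape
    padded = np.pad(prob, 1, mode="constant", constant_values=-np.inf)
    neighbours = np.stack([
        padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    ])
    is_max = (prob >= neighbours.max(axis=0)) & (prob > threshold)
    return [(int(r), int(c)) for r, c in np.argwhere(is_max)]

# Synthetic probability map with two peaks.
prob = np.zeros((6, 6))
prob[2, 2] = 0.9
prob[4, 5] = 0.8
centers = nucleus_centers(prob)
```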
The training process of the semantic segmentation neural network model comprises:
S301: cut a certain number of pictures from the original pathology whole-field image, exhaustively label all cell nuclei on them, and mark different classes of nuclei with brushes of different colors;
S302: extract the labels and convert them into binary mask images; the cropped pictures and their corresponding binary masks serve as the sample data;
S303: convert the cropped pictures to grayscale and normalize them, then apply data enhancement such as flipping and rotation to the processed pictures and their corresponding binary masks;
S304: feed the processed pictures and binary masks as training data into a neural network model and train it to obtain the semantic segmentation neural network model. The model is a U-Net or a similar fully convolutional network, trained with a combination of intersection over union (IoU) and binary cross entropy (BCE) as the loss function and gradient descent as the optimization method. The loss curve is plotted, and the segmentation quality is assessed by periodically inspecting the binary masks generated by the network; training stops when the loss curve has stabilized within a preset range and the generated masks reach the expected quality.
S40, cell classification: determine the cell class to which each nucleus belongs; the classes comprise tumor cells and normal cells, specifically positive tumor cells, positive normal cells, negative tumor cells and negative normal cells. Specifically: cut out 40 × 40 image patches centered on each nucleus center, input them into the cell classification model, which outputs the cell class; record each nucleus center and its class in a detection list, and count the number of positive tumor cells and the total number of tumor cells.
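Cropping the patch around each detected nucleus center can be sketched as follows. Zero-padding at slide borders is an assumption, since the description does not say how nuclei near the edge are handled.

```python
import numpy as np

def crop_patch(image, center, w=40):
    """Cut a w x w patch centred on a nucleus; zero-pad at slide borders."""
    half = w // 2
    r, c = center
    padded = np.pad(image, half, mode="constant")
    return padded[r:r + w, c:c + w]

# 10 x 10 toy image; crop a 4 x 4 patch around the corner nucleus (0, 0).
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
patch = crop_patch(img, (0, 0), w=4)
```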
The obtaining of the cell classification model comprises: select slice samples of a certain size for exhaustive labeling, marking positive tumor cells, positive normal cells, negative tumor cells and negative normal cells in different ways; cut out 40 × 40 pictures centered on each annotation point and apply data enhancement to obtain processed pictures of positive tumor cells, positive normal cells, negative tumor cells and negative normal cells as initial samples; then cut 32 × 32 patches centered on the nucleus centers in the initial samples, and feed the cropped patches into a neural network model to train the cell classification model. The model uses an Inception network with cross entropy (Cross Entropy) as the loss function, trained by gradient descent; the loss curve is plotted, the classification performance is evaluated with a confusion matrix, and training stops when the loss curve has stabilized within a preset range and the confusion-matrix evaluation meets expectations.
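The confusion-matrix evaluation of the four-class classifier can be sketched as follows; the class ordering and label names are arbitrary assumptions for illustration.

```python
import numpy as np

CLASSES = ["positive_tumor", "positive_normal", "negative_tumor", "negative_normal"]

def confusion_matrix(true_idx, pred_idx, n_classes=len(CLASSES)):
    """Rows are true classes, columns are predicted classes."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_idx, pred_idx):
        m[t, p] += 1
    return m

# 5 toy predictions: one positive-tumor cell misread as positive-normal.
cm = confusion_matrix([0, 0, 1, 2, 3], [0, 1, 1, 2, 3])
accuracy = np.trace(cm) / cm.sum()
```

Off-diagonal cells show exactly which confusions occur, e.g. whether positive normal cells are being mistaken for positive tumor cells, the failure mode the method is designed to avoid.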
S50, calculate the KI67 index from the positive tumor cells and the total number of tumor cells: the KI67 index is the number of positive tumor cells divided by the total number of tumor cells.
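The counting of step S50 reduces to a few lines over the detection list; the class-label strings are hypothetical names for the four recorded categories.

```python
def ki67_index(detections):
    """detections: (center, class name) records from the detection list.
    KI67 index = positive tumor cells / all tumor cells."""
    tumor = [cls for _, cls in detections
             if cls in ("positive_tumor", "negative_tumor")]
    if not tumor:
        return 0.0
    return tumor.count("positive_tumor") / len(tumor)

# Toy detection list: 3 tumor cells, 2 of them positive; normal cells ignored.
detections = [((12, 30), "positive_tumor"),
              ((40, 55), "negative_tumor"),
              ((61, 10), "positive_normal"),
              ((72, 81), "positive_tumor")]
index = ki67_index(detections)
```

Note that positive normal cells are excluded from both numerator and denominator, which is precisely how this method improves on thresholding-only approaches.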
The foregoing illustrates and describes the principles, main features and advantages of the present invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed.

Claims (8)

1. A KI67 index analysis method based on deep learning, characterized by comprising the following steps:
s10, acquiring a digital pathology whole field image as an original pathology whole field image;
s20, dividing the original pathology full-field map into an interested region and an uninteresting region;
s30, filtering the regions of no interest in the original pathology whole-field map, performing semantic segmentation on all cells in the remaining regions of interest, and segmenting the central position of the cell nucleus;
s40, determining the cell types to which the cell nucleus belongs, wherein the cell types comprise tumor cells and normal cells, and specifically comprise positive tumor cells, positive normal cells, negative tumor cells and negative normal cells;
s50, KI67 index was calculated based on positive tumor cells and total number of tumor cells.
2. The deep learning-based KI67 index analysis method according to claim 1,
step S20 specifically includes:
reading in the original pathology whole-field image, and cutting out image regions of size n × n one by one from it, where n is a pixel value;
inputting each image region into a region filtering model, which outputs the attribute of the input image region, the attribute being either region of interest or region of no interest.
3. The deep learning-based KI67 index analysis method according to claim 2,
the obtaining of the region filtering model comprises:
s201, acquiring a certain number of pictures marked as regions of interest and pictures marked as regions of no interest as sample data, wherein the size of the sample data is the same as that of the image region;
s202, performing data enhancement and normalization processing on the sample data;
and S203, taking the processed picture as the input of the neural network model, and training to obtain the region filtering model.
4. The deep learning-based KI67 index analysis method according to claim 1, wherein the step S30 specifically comprises:
filtering out the regions of no interest in the original pathology whole-field image, cutting out n × n patches one by one from the remaining regions of interest with a sliding window, converting them to grayscale and using them as input images for a semantic segmentation neural network model;
and the semantic segmentation neural network model generates a corresponding feature probability matrix for each input image; all maximum points are found in the generated matrix, and the positions of these maximum points are the central positions of the detected cell nuclei.
5. The deep learning-based KI67 index analysis method according to claim 4, wherein the step S40 specifically comprises:
cutting out an image patch of size w × w centered on the central position of the cell nucleus, where w is a pixel value;
inputting the image segments into a cell classification model, and outputting cell classes by the cell classification model;
recording the central position of each cell nucleus and the corresponding cell class in a detection list, and counting the number of positive tumor cells and the total number of tumor cells.
6. The method for deep learning-based KI67 index analysis according to claim 5 wherein the obtaining of the cell classification model includes:
respectively obtaining a certain number of positive tumor cell pictures, positive normal cell pictures, negative tumor cell pictures and negative normal cell pictures;
cutting out a small picture of size w × w centered on the central position of the cell nucleus in the picture;
and inputting the intercepted small picture as a training sample into a neural network model, and training to obtain a cell classification model.
7. The deep learning-based KI67 index analysis method according to claim 6, wherein the steps of obtaining the positive tumor cell picture, the positive normal cell picture, the negative tumor cell picture and the negative normal cell picture comprise:
from pictures in which positive tumor cells, positive normal cells, negative tumor cells and negative normal cells are labeled, cutting out a picture of size m × m centered on the central pixel of each cell nucleus, where m is a pixel value and w < m < n;
and performing data enhancement on the cropped m × m pictures, the four cell classes being labeled in different ways.
8. The method for deep learning-based KI67 index analysis according to claim 1, wherein the KI67 index in step S50 is the number of positive tumor cells divided by the number of all tumor cells.
CN201910548431.4A, filed 2019-06-24: KI67 index analysis method based on deep learning (status: Pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910548431.4A CN112215790A (en) 2019-06-24 2019-06-24 KI67 index analysis method based on deep learning


Publications (1)

Publication Number Publication Date
CN112215790A 2021-01-12

Family

ID=74047029


Country Status (1)

Country Link
CN (1) CN112215790A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884725A (en) * 2021-02-02 2021-06-01 杭州迪英加科技有限公司 Correction method for neural network model output result for cell discrimination
CN112991263A (en) * 2021-02-06 2021-06-18 杭州迪英加科技有限公司 Method and equipment for improving calculation accuracy of TPS (acute respiratory syndrome) of PD-L1 immunohistochemical pathological section
CN113033287A (en) * 2021-01-29 2021-06-25 杭州依图医疗技术有限公司 Pathological image display method and device
CN113096086A (en) * 2021-04-01 2021-07-09 中南大学 Ki67 index determination method and system
CN114299490A (en) * 2021-12-01 2022-04-08 万达信息股份有限公司 Tumor microenvironment heterogeneity evaluation method
CN114638782A (en) * 2022-01-10 2022-06-17 武汉中纪生物科技有限公司 Method for detecting cervical exfoliated cell specimen
CN114693646A (en) * 2022-03-31 2022-07-01 中山大学中山眼科中心 Corneal endothelial cell active factor analysis method based on deep learning
CN117373695A (en) * 2023-10-12 2024-01-09 北京透彻未来科技有限公司 Extreme deep convolutional neural network-based diagnosis system for diagnosis of cancer disease
CN117523205A (en) * 2024-01-03 2024-02-06 广州锟元方青医疗科技有限公司 Segmentation and identification method for few-sample ki67 multi-category cell nuclei

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327508A (en) * 2016-08-23 2017-01-11 马跃生 Ki67 index automatic analysis method
US20170372117A1 (en) * 2014-11-10 2017-12-28 Ventana Medical Systems, Inc. Classifying nuclei in histology images
CN109389129A (en) * 2018-09-15 2019-02-26 北京市商汤科技开发有限公司 A kind of image processing method, electronic equipment and storage medium
CN109554432A (en) * 2018-11-30 2019-04-02 苏州深析智能科技有限公司 A kind of cell type analysis method, analytical equipment and electronic equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210112