CN111798416B - Intelligent glomerulus detection method and system based on pathological image and deep learning - Google Patents

Intelligent glomerulus detection method and system based on pathological image and deep learning

Info

Publication number
CN111798416B
CN111798416B · Application CN202010560815.0A
Authority
CN
China
Prior art keywords
image
detection
feature map
images
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010560815.0A
Other languages
Chinese (zh)
Other versions
CN111798416A
Inventor
李明
周晓霜
李荣山
郝芳
王晨
李心宇
岳俊宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Shanxi Provincial Peoples Hospital
Original Assignee
Taiyuan University of Technology
Shanxi Provincial Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology and Shanxi Provincial Peoples Hospital
Publication of CN111798416A
Application granted granted Critical
Publication of CN111798416B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30084Kidney; Renal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of intelligent assisted reading of digital kidney pathology images. Traditional algorithms and known deep learning algorithms require extensive manual preprocessing to classify or detect regions of interest, and an effective fine-grained pathological image detection algorithm has been lacking. The invention therefore applies the Faster R-CNN method to region-of-interest detection and provides an intelligent glomerulus detection method and system for kidney pathology images based on Faster R-CNN. Image data enhancement is performed inside the deep neural network, and by extracting features from images of the same category and computing the correlation between each image and the extracted features, the generation of recommended regions for target detection becomes more targeted. The system locates and detects the regions of interest and returns the number of glomeruli on the whole slide; when performing fine-grained target detection on pathological images, the algorithm converges faster and the detection results are more accurate and more targeted, realizing an assisted slide-reading function.

Description

Intelligent glomerulus detection method and system based on pathological image and deep learning
Technical Field
The invention relates to an intelligent assisted slide-reading system for digital kidney pathology images, and in particular to an intelligent detection method and system for regions of interest in kidney tissue based on pathological images and deep learning.
Background
Chronic kidney disease (CKD) causes a gradual loss of kidney function, eventually leading to end-stage renal disease requiring dialysis or kidney transplantation. At present there are approximately 120 million chronic kidney disease patients in China, a prevalence of 10.8%. Among chronic kidney diseases in China, glomerulonephritis accounts for the largest share, at 40%. Early symptoms of chronic kidney disease are usually not obvious, so the disease may reach late-stage renal failure and endanger the patient's life before it is recognized. Moreover, the clinical manifestations of most types of chronic kidney disease, such as glomerular disease, are hematuria and proteinuria, with no disease-specific clinical signs; pathological examination of renal biopsy tissue is therefore the gold standard for determining the type of chronic kidney disease.
When a pathologist diagnoses renal pathological tissue sections, the shapes and numbers of glomeruli, arterioles and renal tubules play a key role in diagnosing the disease. The pathologist therefore needs to compare the five commonly used stained sections of the same case and confirm all the regions of interest on each stained section one by one.
Given the current severe shortage of pathologists, an intelligent kidney pathology assisted slide-reading system is urgently needed to take over a large amount of simple, repetitive work: locating all glomeruli, arterioles and tubular atrophy areas on a slide, and counting the numbers of normal and hardened glomeruli and the area of the tubular atrophy regions, so that pathologists can devote their time to complex work such as difficult and rare cases. At present, relatively little research and development, at home or abroad, has focused on artificial-intelligence diagnostic slide-reading systems for kidney pathology images. Pathological images have the characteristics of fine-grained images: the differences between different categories of detection targets are small, as are the differences between targets and background, so common target detection methods perform poorly on pathological images.
At present, traditional algorithms and known deep learning algorithms can classify or detect regions of interest on kidney pathology images only after extensive manual preprocessing of the digital pathology image. Traditional algorithms are not designed for the characteristics of fine-grained images, and an effective fine-grained pathological image detection algorithm is lacking; moreover, existing methods cannot return the positions and number of glomeruli on the whole pathological image, nor whether hardened glomeruli or renal tubular atrophy areas are present.
Disclosure of Invention
To address the defects of the prior art, the invention aims to provide an intelligent glomerulus detection method and system based on pathological images and deep learning. It provides a deep convolutional neural network algorithm capable of fine-grained recognition: by extracting the features of images of the same category and then computing the correlation between an image and the extracted features, fewer and more accurate anchor points are obtained, making the generation of recommended regions for target detection more targeted and yielding an algorithm with faster convergence and more accurate results.
In order to achieve the purpose, the invention provides the following technical scheme:
An intelligent glomerulus detection system based on pathological images and deep learning comprises an image preprocessing module, an image detection model training module and a result output module, wherein: the image preprocessing module performs format conversion, patient information desensitization, threshold-segmentation background removal and image cropping on the kidney pathology image; the image detection training module, based on the target detection network Faster R-CNN and aimed at fine-grained detection of kidney pathology images, trains on existing images to detect regions of interest of the kidney pathology image and obtains a model whose classification validation accuracy exceeds a threshold; and the result output module detects the regions of interest of the kidney pathology image to be examined and outputs the detection result.
Further, the validation accuracy threshold for the model of the image detection training module is 95%.
A glomerular intelligent detection method based on pathological images and deep learning comprises the following steps:
step 1, acquiring kidney tissue sections with the four commonly used stains, and scanning the sections with a slide scanner to obtain digital kidney pathology image Data;
step 2, carrying out image preprocessing on the image Data acquired in the step 1, and acquiring image Data _ T for fine-grained image recognition and corresponding real label information GT _ T comprising category information and position information;
step 3, training the image Data _ T by adopting a fine-grained deep neural network to obtain a Model _ f for detecting the kidney interested region and a corresponding deep network Model parameter Model _ T;
and step 4, detecting the regions of interest of the kidney pathology image to be examined, and outputting the detection result through the result output module.
Further, step 2 comprises the steps of:
step 2.1, performing format conversion on the image to obtain an image F in TIFF or SVS format, deleting the slice label layer to obtain a patient-information-desensitized image A, and naming image A by its slice code;
step 2.2, obtaining versions of image A at different resolutions with the Openslide package: by adjusting the Openslide zoom parameter level_count, obtain images of A compressed by factors of 1 and 2, and record all versions of image A at the different compression factors as image data Data_T;
step 2.3, reserving the image with the tissue part in the pathological image by using a threshold image segmentation method, and removing blank part images to obtain an image B only containing main tissues;
step 2.4, cutting the image B obtained in step 2.3 into a number of small images, generating image blocks C of 1000 × 1000 pixels, where adjacent blocks C overlap by 200 pixels and any region extending beyond image B by less than 1000 pixels is padded with black;
and 2.5, marking the image block C obtained in the step 2.4 after cutting to obtain image Data _ T with marking information and real label information GT _ T.
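The overlap cropping of step 2.4 can be illustrated with a minimal numpy sketch. This is a sketch under assumptions, not the patent's implementation; the function name tile_image and the toy image size are invented for illustration, while the 1000-pixel tile, 200-pixel overlap and black padding follow the step above.

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int = 1000, overlap: int = 200):
    """Cut an H x W x C image into tile x tile blocks whose neighbours
    overlap by `overlap` pixels; regions past the border are zero (black) padded."""
    stride = tile - overlap
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            block = np.zeros((tile, tile, img.shape[2]), dtype=img.dtype)
            patch = img[y:y + tile, x:x + tile]
            block[:patch.shape[0], :patch.shape[1]] = patch
            tiles.append(block)
    return tiles

# toy 1800 x 2600 "slide": 2 rows x 3 columns of overlapping 1000 px tiles
tiles = tile_image(np.ones((1800, 2600, 3), dtype=np.uint8))
```

With a stride of 800 pixels (1000 minus the 200-pixel overlap), every point of the source image falls in at least one tile.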
Further, step 3 comprises the steps of:
step 3.1, divide the image Data into two parts: training set Data _ train and verification set Data _ validation;
step 3.2, using a deep residual error neural network ResNet50 as a basic network to obtain a basic network Model _1 with natural image target detection capability;
step 3.3, adjusting all images in the training set Data _ train to be uniform in size to obtain a Data set Data _ train1, performing feature extraction on the Data set Data _ train1 by using a basic network Model _1 to obtain a convolution feature map F1 of the Data set Data _ train1, wherein the length, width and depth of the convolution feature map are P × Q × N;
step 3.4, on the convolution feature map F1 obtained in step 3.3, map the annotated position information in the real label information GT_T onto F1 and label the regions of interest; extract the convolution feature maps F2 of same-category regions of interest from the labeled regions of all categories, and analyze the feature maps F2 by principal component analysis (PCA) to obtain a feature vector F3 for each category;
step 3.5, compute, for each of the four region-of-interest categories, the covariance matrix Co (of size P × Q) between the category feature vector F3 and the convolution feature map F1; select the positions in F1 where the value in Co exceeds 0.5 as anchor points, generate 9 images for each anchor point, and take the images generated by all anchor points as the recommended-region feature maps F_RP;
step 3.6, each recommended-region feature map F_RP outputs 5 predicted values through a classification convolutional layer and a regression convolutional layer: the score of being a region of interest or background, and the bounding-box predictions that first correct the recommended-region feature map F_RP. From these 5 predicted values, compute the intersection-over-union IoU1 between F_RP and the real label information GT_T comprising the category information, and the intersection-over-union IoU2 among all recommended-region feature map F_RP images comprising the position information; obtain the category and position information of the screened recommended-region feature maps F_RP1, and compute the classification loss Loss_C1 and regression loss Loss_R1 of the recommended-region network Model_2;
step 3.7, input the recommended-region feature maps F_RP1 generated in step 3.6, their 5 corresponding predicted values, and the convolution feature map F1 jointly into the result detection network Model_3, and correct the bounding-box predictions of step 3.6 to obtain the final classification result of each recommended region and the position information of the target; compute the classification loss Loss_C2 and regression loss Loss_R2 of the detection network Model_3 from the predicted classification results, the target position information and the real label information GT_T, and compute the total loss of Model_f as Loss = λ1·Loss_C1 + λ2·Loss_R1 + λ3·Loss_C2 + λ4·Loss_R2, where the λi (i = 1, 2, 3, 4) are weights balancing the four losses; optimize the initial deep neural model parameters Model_t0 by stochastic gradient descent to obtain the deep neural model parameters Model_t1;
step 3.8, validate the deep neural model parameters Model_t1 on the validation set Data_validation; stop training if the classification validation accuracy exceeds 0.95 or the number of training epochs reaches 200, obtaining the trained deep neural network model parameters Model_t; otherwise repeat step 3 for the next round of training.
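The total loss of step 3.7 is a plain weighted sum of the two classification and two regression losses. A minimal sketch follows; the default λ values are an assumption, since the description above does not fix them.

```python
def total_loss(loss_c1, loss_r1, loss_c2, loss_r2,
               lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Loss = λ1*Loss_C1 + λ2*Loss_R1 + λ3*Loss_C2 + λ4*Loss_R2,
    combining the recommended-region network (Model_2) losses with the
    result detection network (Model_3) losses."""
    return sum(lam * l for lam, l in
               zip(lambdas, (loss_c1, loss_r1, loss_c2, loss_r2)))
```

With all λi equal to 1 this reduces to the usual Faster R-CNN multi-task loss sum.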
Further, the 9 images generated by each anchor point in step 3.5 cover 3 different pixel sizes, namely 128, 256 and 512 pixels, and each pixel size includes 3 recommended-region aspect ratios, namely 1:1, 1:2 and 2:1.
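The 3 scales × 3 aspect ratios per anchor point can be enumerated as below. The 1:1, 1:2 and 2:1 ratios are the conventional Faster R-CNN choices and, together with the equal-area convention (each box keeps area scale²), are assumptions here rather than text taken from the patent.

```python
def anchor_boxes(cx, cy, scales=(128, 256, 512),
                 ratios=((1, 1), (1, 2), (2, 1))):
    """Generate the 9 anchor boxes (3 scales x 3 aspect ratios) centred
    at (cx, cy). Each box is (x1, y1, x2, y2); its area equals scale**2
    and its width:height follows the ratio."""
    boxes = []
    for s in scales:
        for rw, rh in ratios:
            # keep area s*s while applying the width:height ratio rw:rh
            w = s * (rw / rh) ** 0.5
            h = s * (rh / rw) ** 0.5
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

anchors = anchor_boxes(0.0, 0.0)
```

Each anchor point on the feature map thus proposes 9 candidate boxes in the original image coordinates.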
Further, step 4 comprises the steps of:
step 4.1, obtaining an image T to be detected through the step 2, inputting the image T into a basic network Model _1 of the deep neural network Model _ T, and calculating a convolution characteristic diagram TF1;
step 4.2, calculating a covariance matrix TC of the convolution feature map TF1 and the feature vector F3 in the step 3.4, and acquiring a recommended region feature map TF _ RP of the image T according to the step 3.5;
step 4.3, compute the bounding-box and category confidence scores of the recommended-region feature maps TF_RP from step 4.2, and obtain the final recommended-region feature maps through non-maximum suppression;
step 4.4, sending the result of the step 4.3 to a result detection network Model _3 to obtain the category confidence score of the recommended area and the corrected position information of the boundary box to obtain the result of target detection;
and step 4.5, according to the target position information from step 4.4, crop, save and output the image T to be examined; at the same time draw the bounding boxes and categories of the detected targets in a new high-resolution image, then compress, save and output it.
Further, the threshold for non-maximum suppression is 0.7.
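Non-maximum suppression at an IoU threshold of 0.7 can be sketched in numpy as follows. This is a generic greedy NMS for illustration, not the patent's exact code; the box layout (x1, y1, x2, y2) is an assumption.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.7):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2) rows.
    Returns the indices of the kept boxes, highest score first."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the current best box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]   # drop boxes overlapping above threshold
    return keep

boxes = np.array([[0, 0, 10, 10], [0, 1, 10, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores, 0.7)
```

In the toy example the second box overlaps the first with IoU ≈ 0.82 and is suppressed, while the disjoint third box survives.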
In conclusion, the invention has the following beneficial effects:
the invention adopts a deep convolutional neural network which is based on fine-grained identification and can realize full-automatic target detection, and realizes the target detection of the interested area (the detection target is a normal glomerulus, a hardened glomerulus, an arteriole and a tubular atrophy area) on the digital kidney pathological image; the method does not need a large amount of preprocessing work in the using process, realizes the image data enhancement process in the deep neural network, realizes end-to-end output, reduces human intervention in the algorithm operation process, and only needs to convert the format of the digital pathological image into a usable svs or tiff format; aiming at the characteristic that the digital pathological image is a fine-grained image, a new recommended region-of-interest algorithm is proposed based on a target detection algorithm deep learning network fast R-CNN, so that fine targets on the image are concerned more, and when fine-grained target detection is carried out on the pathological image, the algorithm has higher convergence speed, more accurate detection result and more pertinence; the invention develops a system which can realize the positioning of the interested region (normal glomerulus, hardened glomerulus, arteriole and renal tubular atrophy region), detect whether the interested region is the above interested region and return the related information of the area, the number and the category of the interested region on the whole slice, thereby realizing the function of assisting in reading the slice in the true sense.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic flow chart of the pathology assisted slide-reading system of the present invention.
Fig. 3 is a schematic diagram of a normal glomerulus as a region of interest.
Fig. 4 is a schematic view of a hardened glomerulus of a region of interest.
Fig. 5 is a schematic illustration of arterioles of a region of interest.
Fig. 6 is a schematic diagram of renal tubular atrophy of a region of interest.
Fig. 7 is a schematic diagram showing the high-resolution results of all the regions of interest output by the intelligent glomerular detection system of the present invention.
Fig. 8 is a compression diagram of the identification result of the region of interest in the intelligent glomerular detection system of the present invention.
Fig. 9 is a partially enlarged view of the area a in fig. 8.
Fig. 10 is a partially enlarged view of the region B in fig. 8.
Fig. 11 is a partially enlarged view of the area C in fig. 8.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
An intelligent glomerulus detection system based on pathological images and deep learning comprises an image preprocessing module, an image detection model training module and a result output module, wherein:
the image preprocessing module is mainly used for preprocessing the acquired pathological images according to the characteristics of the pathological images, so that the pathological images can be adopted in the training, verifying and testing stages of the deep learning algorithm and are realized by the Python codes which can be processed in batches. The image preprocessing module completes format conversion, patient information desensitization, picture threshold segmentation background removal and image cutting operation on a kidney physiological image, and the preprocessing process mainly comprises the steps of picture format conversion, patient information desensitization, picture threshold segmentation background removal and image cutting of the kidney physiological image, wherein the picture format conversion is to convert a private format picture obtained by a scanner into a general KFB or TIFF format; desensitizing patient information refers to removing the label layer of the image; performing threshold segmentation on the picture to remove the background by adopting threshold segmentation of an averaging method; and image cutting, namely cutting the image to be detected into image blocks with fixed sizes. After the kidney physiological image to be detected is processed by the image preprocessing module, the training and the detection of an image detection model are facilitated.
The image detection model training module is based on a target detection network. For fine-grained detection of kidney pathology images, a region-of-interest detection algorithm is provided that improves the way the RPN network in Faster R-CNN selects anchor points: instead of the original scheme in which all P × Q points on the convolution feature map (of length and width P × Q) serve as anchor points, accurate features of the target to be detected are obtained by principal component analysis, and points with higher correlation are then selected as anchor points by computing covariance. This reduces the number of anchor points on the feature map and thus the amount of computation, and because the features are extracted from same-category images, target detection becomes more targeted. The region-of-interest detection algorithm is trained with kidney pathology images and their labels, yielding a deep neural network model with validation accuracy above 95% as the detection model; the image detection model training module is implemented by the constructed Faster R-CNN-based deep neural network.
The result output module detects the regions of interest of the kidney pathology image to be examined and outputs the detection result. The image to be examined undergoes image preprocessing and image detection by the image preprocessing module and the image detection model training module, yielding an image annotated with the category and position information of the regions of interest, and a high-resolution image containing only the regions of interest is returned.
The invention provides a glomerular intelligent detection method based on pathological images and deep learning, which comprises the following steps:
step 1, acquiring four common stains of kidney tissue sections, and then scanning the sections by using a glass slide scanner to obtain digital nephropathology image Data.
Step 2, performing image preprocessing on the image Data acquired in step 1, and obtaining image Data_T for fine-grained image recognition together with the corresponding real label information GT_T, which comprises category and position information. The real label information GT_T (ground truth) is annotated by professionals such as physicians. Step 2 comprises the following steps:
Step 2.1, performing format conversion on the image to obtain an image F in TIFF or SVS format; the lossless conversion adopted in this step is generally considered to introduce no information loss. Image F contains multiple layers, one of which is the slice label, whose content includes the hospital name, slice code and patient name. Image F is read in a Python environment and the slice label layer is deleted to obtain the patient-information-desensitized image A, which is named by its slice code.
Step 2.2, obtaining versions of image A at different resolutions with the Openslide package. The magnification of image A from step 2.1 is 40×, and scaling is controlled through the Openslide zoom parameter level_count: with level_count = 0 the image is not scaled and image A itself is obtained, while taking level_count = 1 and 2 yields images compressed by factors of 1 and 2 respectively. All versions of image A at the different compression factors are recorded as image data Data_T, which therefore contains data at three different resolutions, expanding the data volume and increasing the data diversity of Data_T.
Step 2.3, retaining the tissue-containing parts of the pathological image with a threshold image segmentation method and removing the blank parts. The specific procedure is: compute the gray-level mean AG of the image, i.e. the sum of all pixel gray values divided by the number of pixels, and keep all points of the image whose pixel value is below AG, obtaining an image B containing only the main tissue.
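The mean-gray thresholding of step 2.3 amounts to the following sketch; tissue is darker than the bright slide background, so pixels below the mean are kept. The function name tissue_mask and the toy values are illustrative assumptions.

```python
import numpy as np

def tissue_mask(gray: np.ndarray) -> np.ndarray:
    """Keep pixels darker than the image's mean gray level AG
    (tissue is darker than the bright slide background)."""
    ag = gray.mean()
    return gray < ag

img = np.full((4, 4), 240, dtype=float)   # bright background
img[1:3, 1:3] = 100                       # darker "tissue" block
mask = tissue_mask(img)
```

Here AG = 205, so exactly the 2 × 2 dark block is retained.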
Step 2.4, cutting the image B obtained in step 2.3, which retains only the main tissue, into a number of small images, generating image blocks C of 1000 × 1000 pixels; when cutting, adjacent blocks C overlap by 200 pixels, and any region extending beyond image B by less than 1000 pixels is padded with black.
Step 2.5, annotating the image blocks C obtained after cutting in step 2.4 to obtain image Data_T with annotation information and the real label information GT_T. The real label information GT_T is stored in an xml file and comprises category and position information. The kidney pathology images use a quasi-pixel-level annotation method, i.e. a rectangular bounding box is drawn around each region of interest to be recognized, and the labels of the annotated regions of interest are divided into: normal glomerulus, FSGS (hardened glomerulus), arteriole, and tubular atrophy area.
Step 3, training on the image Data_T and real label information GT_T with a fine-grained deep neural network to obtain a model Model_f for detecting kidney regions of interest and the corresponding deep network model parameters Model_t. Model_f comprises three parts. The first part is the base network Model_1, which extracts image features. Model_1 uses the residual neural network ResNet50, pre-trained on the million-scale natural image library ImageNet so that its classification validation accuracy on ImageNet reaches 95%. The ResNet50 structure consists of 1 input block and 4 residual blocks; the input block contains a 7 × 7 convolutional layer and a 3 × 3 max pooling layer, and each residual block is composed of a different number of convolution blocks. A convolution block consists of three layers: a dimension-changing convolutional layer (1 × 1 kernels), a fixed convolution component (3 × 3 kernels), and another dimension-changing convolutional layer (1 × 1 kernels). The residual blocks differ in their number of convolution blocks, and the layers within a convolution block differ in their numbers of kernels, so after an image passes through ResNet50 a convolution feature map of size P × Q × N is obtained, whose length and width are reduced and whose depth is increased.
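In a standard ResNet50 backbone the input block halves the resolution twice (stride-2 convolution, then stride-2 pooling) and stages 2 to 4 each halve it again, giving an overall stride of 32 and 2048 output channels. Under that assumed standard configuration, the feature map size P × Q × N is easy to compute; the sketch below is illustrative arithmetic, not the patent's code.

```python
def resnet50_feature_shape(h, w):
    """P, Q, N for a standard ResNet50 backbone: 7x7/2 conv and 3x3/2
    max-pool in the input block, then per-stage strides 1, 2, 2, 2,
    for an overall stride of 32 and 2048 channels after stage 4."""
    stride = 2 * 2 * 1 * 2 * 2 * 2  # input block (2*2) and four stages
    return h // stride, w // stride, 2048

shape = resnet50_feature_shape(512, 512)
```

For the 512 × 512 training images this gives P = Q = 16 and N = 2048.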
The second part is the recommended-region network Model_2, which implements steps 3.4 to 3.6 and generates the regions of interest. Model_2 consists of one dimension-changing convolutional layer (1 × 1 kernels), one SoftMax function layer and one region recommendation block. The region recommendation block computes the convolution feature maps of same-category targets and the covariance between them, and obtains the anchor points from which recommended regions can be generated according to the covariance matrix and the set threshold.
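A simplified numpy sketch of the region recommendation block's idea: score each spatial position of the P × Q × N feature map against a per-category feature vector (e.g. a PCA component of same-category region features) and keep positions above the 0.5 threshold as anchor points. The normalized-correlation form used here is an assumption, since the exact covariance computation is not spelled out above, and all names are illustrative.

```python
import numpy as np

def select_anchor_positions(fmap: np.ndarray, class_vec: np.ndarray,
                            thr: float = 0.5):
    """fmap: P x Q x N convolution feature map; class_vec: length-N
    category feature vector. Returns (row, col) positions whose
    normalized correlation with class_vec exceeds thr."""
    p, q, n = fmap.shape
    flat = fmap.reshape(-1, n)
    # normalize both sides so the score is a cosine-style correlation
    fn = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    cn = class_vec / (np.linalg.norm(class_vec) + 1e-8)
    corr = (fn @ cn).reshape(p, q)
    return np.argwhere(corr > thr)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 8, 16))
vec = fmap[3, 4].copy()            # plant a perfectly correlated position
pos = select_anchor_positions(fmap, vec)
```

Only a small subset of the 8 × 8 = 64 positions passes the threshold, which is precisely the computational saving the patent describes relative to using all P × Q points as anchors.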
A third part: a result detection network Model_3, which mainly classifies and regresses the recommended regions to generate the target detection result. The result detection network Model_3 is composed of a pooling layer (RoIPooling) and two fully-connected layers. The RoIPooling layer is responsible for collecting the recommended regions and calculating an integrated recommended region feature map. The RoI pooling layer has 2 inputs: the convolution feature map of the base network Model_1, and the classification and position prediction results of the recommended region feature map output by the recommended region network Model_2. Two results are output through the two fully-connected layers: the first is the classification result, obtained in combination with a SoftMax function, and the second is the final position prediction result.
In the training process of the network, the parameters of the Model _ f are all adjusted, and the Model training comprises the following steps:
Step 3.1, divide the image Data into two parts: a training set Data_train and a verification set Data_validation; the division ratio of the training set to the verification set is 4:1.
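The 4:1 split above can be sketched as follows; a minimal illustration in which the function name, the fixed seed, and the use of a simple shuffle are assumptions, not part of the patented method:

```python
import random

def split_dataset(items, train_ratio=0.8, seed=0):
    """Shuffle the items and split them into a training set and a
    verification set (4:1 when train_ratio is 0.8)."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# 100 image identifiers -> 80 for training, 20 for verification
train, val = split_dataset(list(range(100)))
```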
Step 3.2, using the deep residual neural network ResNet50 as the base network and pre-training on the data set of the natural image library ImageNet, a base network Model_1 with natural-image target detection capability on images in ImageNet is obtained.
Step 3.3, uniformly adjusting all images in the training set Data_train to a size of 512 × 512 pixels to obtain a Data set Data_train1. Batch training is adopted for Data_train1 with a batch size of 32, and during batch processing, data enhancement is performed on each batch independently. The image data enhancement specifically comprises (1) translation: translating each image by 128 pixels horizontally and vertically respectively; (2) horizontal flipping; (3) vertical flipping; (4) affine transformation: rotating each image by 90 degrees, 180 degrees and 270 degrees respectively, so that the image data of each batch and the corresponding label data set are enlarged to 8 times. Feature extraction is carried out on the Data set Data_train1 by using the base network Model_1 to obtain a convolution feature map F1 of Data_train1, whose length, width and depth are P × Q × N, namely N feature maps of length and width P × Q.
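The 8-fold enhancement above can be sketched as follows. This is a minimal illustration, assuming the 8 variants are the identity, two flips, three rotations, and two 128-pixel translations; `np.roll` gives a wrap-around translation as a stand-in for the shift described in the text:

```python
import numpy as np

def augment_8x(img, shift=128):
    """Expand one image into 8 variants: identity, horizontal/vertical flips,
    90/180/270-degree rotations, and 128-pixel horizontal/vertical shifts."""
    return [img,
            np.fliplr(img),               # horizontal flip
            np.flipud(img),               # vertical flip
            np.rot90(img, 1),             # rotate 90 degrees
            np.rot90(img, 2),             # rotate 180 degrees
            np.rot90(img, 3),             # rotate 270 degrees
            np.roll(img, shift, axis=1),  # horizontal translation (wrap-around)
            np.roll(img, shift, axis=0)]  # vertical translation (wrap-around)

# a batch of 32 images of 512 x 512 pixels grows to 256 images
batch = [np.zeros((512, 512, 3), dtype=np.uint8)] * 32
augmented = [v for img in batch for v in augment_8x(img)]
```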
Step 3.4, on the convolution feature map F1 obtained in step 3.3, mapping the labeling position information in the real label information GT_T onto the convolution feature map F1 and labeling the regions of interest. The identified regions of interest have four categories, namely glomerulus, sclerosed glomerulus, arteriole, and tubular atrophy region, and each category has a corresponding feature vector F3. For each category, the convolution feature maps F2 of the regions of interest of that category are extracted from the labeled regions of interest of all categories, and F2 is analyzed by a Principal Component Analysis (PCA) method to obtain the feature vector F3 (1 × n dimension) corresponding to that category, giving four feature vectors F3 in total.
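The per-category PCA step can be sketched as follows; a minimal illustration assuming each region's feature map is flattened into a row vector and the first principal component is taken as the 1 × n feature vector F3 (the function name is illustrative):

```python
import numpy as np

def category_feature_vector(feature_maps):
    """Given the region-of-interest feature maps of one category, return the
    first principal component (via SVD-based PCA) as the category's
    1 x n feature vector."""
    X = np.stack([f.ravel() for f in feature_maps])   # one row per region
    X = X - X.mean(axis=0)                            # center the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # rows of vt = components
    return vt[0]                                      # leading component

# e.g. ten 4 x 4 region feature maps of one category -> one 1 x 16 vector
rois = [np.random.rand(4, 4) for _ in range(10)]
f3 = category_feature_vector(rois)
```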
Step 3.5, respectively calculating the covariance matrices Cov (P × Q) between the feature vectors F3 of the four categories of regions of interest and the convolution feature map F1. Positions whose value in the covariance matrix Cov is larger than 0.5 are selected, and the corresponding positions in the convolution feature map F1 are used as anchor points. Each anchor point generates images of 3 different pixel sizes, namely 128 pixels, 256 pixels and 512 pixels; each pixel size has recommended regions of 3 aspect ratios, namely 1:1, 1:2 and 2:1. Each anchor point thus generates 9 images, and the images generated by all anchor points are used as the recommended region feature map F_RP.
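The anchor generation above (threshold 0.5 on the covariance values, then 3 pixel sizes × 3 aspect ratios per anchor point) can be sketched as follows; the box parameterization `(cy, cx, height, width)` and the function name are illustrative assumptions:

```python
import numpy as np

def anchors_from_covariance(cov, scales=(128, 256, 512),
                            ratios=(1.0, 0.5, 2.0), thresh=0.5):
    """At each position whose covariance value exceeds the threshold, generate
    9 boxes (3 pixel sizes x 3 aspect ratios), each as (cy, cx, height, width)."""
    boxes = []
    for cy, cx in zip(*np.nonzero(cov > thresh)):
        for s in scales:
            for r in ratios:  # r = height / width
                boxes.append((float(cy), float(cx),
                              s * np.sqrt(r), s / np.sqrt(r)))
    return boxes

cov = np.zeros((8, 8))
cov[3, 4] = 0.9                      # a single position above the threshold
boxes = anchors_from_covariance(cov)  # one anchor point -> 9 boxes
```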
Step 3.6, sending the recommended region feature map F_RP into two convolutional layers: the first convolutional layer obtains category information by classification, and the second obtains position information by regression. The classification convolutional layer outputs one predicted value, the score of being a region of interest or background; the regression convolutional layer outputs four predicted values, namely the bounding-box predicted values of the first-corrected recommended region feature map F_RP. Each recommended region feature map F_RP therefore outputs 5 predicted values after passing through the two convolutional layers. From these 5 predicted values, the intersection-over-union coefficient IoU1 between the region feature map F_RP and the real labeling information GT_T including category information, and the intersection-over-union coefficient IoU2 between all recommended region feature map F_RP images including position information, are calculated; the category information and position information of the screened recommended region feature map F_RP1 are obtained, and the classification Loss Loss_C1 and the regression Loss Loss_R1 of the recommended region network Model_2 are calculated.
IoU = (A ∩ B) / (A ∪ B)
In the formula, A and B represent the areas of the two images in the recommended region feature map for which the intersection-over-union coefficient IoU is calculated.
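The intersection-over-union coefficient can be sketched as follows for axis-aligned boxes given as `(x1, y1, x2, y2)`; the coordinate convention is an assumption for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # intersection area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```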
Non-maximum suppression: sort all recommended regions by confidence score, select the recommended region with the highest confidence score, and traverse all remaining recommended regions to calculate the area overlap IoU. If the IoU is greater than 0.7, the two recommended regions have probably detected the same target, so the recommended region feature map with the higher confidence score is kept and the one with the lower confidence score is deleted. After traversal, the unprocessed recommended region with the highest confidence score is selected from the remaining region feature maps and the process is repeated. If the intersection-over-union coefficient IoU between all recommended regions is less than 0.7, the recommended region feature map with the largest coefficient IoU is selected. Removing repeated recommended regions through non-maximum suppression retains a smaller number of higher-quality recommended region feature maps F_RP1 after the first position correction. Recommended region feature maps with IoU1 > 0.5 are used as positive samples and those with IoU1 < 0.3 as negative samples; maps that are neither positive nor negative samples are not counted and do not participate in subsequent training. The number of positive samples is 128; if 128 is not reached, the batch is filled with negative samples. IoU2 is the intersection-over-union coefficient between every two recommended region feature map F_RP images and is mainly obtained by the non-maximum suppression method, which reserves candidate frames with IoU2 greater than 0.7; if all IoU2 are less than 0.7, the candidate frame with the largest IoU2 is reserved, namely the recommended region feature map F_RP1.
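The greedy suppression loop described above can be sketched as follows; a minimal illustration with a self-contained IoU helper, assuming boxes are `(x1, y1, x2, y2)` tuples and the 0.7 threshold of the text:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.7):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and discard remaining boxes whose IoU with it exceeds the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

# boxes 0 and 1 overlap heavily (IoU 0.81 > 0.7), box 2 is separate
boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```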
Step 3.7, inputting the recommended region feature map F_RP1 generated in step 3.6, the 5 corresponding predicted values and the convolution feature map F1 together into the result detection network Model_3. The result detection network Model_3 comprises a RoIPooling layer and two fully-connected layers; the input recommended region feature map realizes fixed-length convolution feature output after passing through the RoIPooling layer. After the convolution features pass through the two fully-connected layers, they are expanded into feature vectors for classification or regression; each fully-connected layer comprises 4096 neurons and uses the activation function ReLU (rectified linear unit) and a dropout layer (random inactivation) with a dropout rate of 0.5. Finally, the two fully-connected layers finish the classification of each candidate region, and output the probability scores of the four categories of normal glomerulus, FSGS (sclerosed glomerulus), arteriole and renal tubular atrophy region, together with the 4 bounding-box predicted values of the second position correction. The results of the two fully-connected layers are combined with a SoftMax function to realize the fine correction of the position of each recommended region, namely the bounding-box predicted values of step 3.6 are corrected to obtain the final classification result of the recommended region and the position information of the target.
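The fixed-length output of the RoIPooling layer can be sketched as follows; a minimal 2-D max-pooling illustration (single channel, integer RoI coordinates), not the full Model_3, and the function name and grid size are assumptions:

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_size=7):
    """Crop an RoI (x1, y1, x2, y2) from a 2-D feature map and max-pool it
    onto a fixed out_size x out_size grid, so every RoI, whatever its shape,
    yields an output of the same length."""
    x1, y1, x2, y2 = roi
    crop = feature_map[y1:y2, x1:x2]
    h, w = crop.shape
    ys = np.linspace(0, h, out_size + 1).astype(int)   # row cell boundaries
    xs = np.linspace(0, w, out_size + 1).astype(int)   # column cell boundaries
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            cell = crop[ys[i]:max(ys[i + 1], ys[i] + 1),
                        xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out

fm = np.arange(100, dtype=float).reshape(10, 10)       # toy feature map
pooled = roi_max_pool(fm, (0, 0, 10, 10), out_size=2)  # 2 x 2 for brevity
```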
The classification Loss Loss_C2 and the regression Loss Loss_R2 of the result detection network Model_3 are calculated according to the predicted classification result, the position information of the target and the real label information GT_T, and the total Loss of the Model_f is calculated as Loss = λ1·Loss_C1 + λ2·Loss_R1 + λ3·Loss_C2 + λ4·Loss_R2, where λi (i = 1, 2, 3, 4) are the weights balancing the four losses; cross-entropy losses are adopted for Loss_C1 and Loss_C2, and Loss_R1 and Loss_R2 are position-offset losses. The initial deep neural Model parameter Model_t0 is optimized by a stochastic gradient descent algorithm to obtain the deep neural Model parameter Model_t1.
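The weighted total loss can be sketched as follows; the function name and the default unit weights are illustrative assumptions:

```python
def total_loss(loss_c1, loss_r1, loss_c2, loss_r2,
               weights=(1.0, 1.0, 1.0, 1.0)):
    """Total loss of Model_f:
    Loss = l1*Loss_C1 + l2*Loss_R1 + l3*Loss_C2 + l4*Loss_R2,
    where the lambdas balance the two classification and two regression losses."""
    l1, l2, l3, l4 = weights
    return l1 * loss_c1 + l2 * loss_r1 + l3 * loss_c2 + l4 * loss_r2

loss = total_loss(0.5, 0.25, 0.5, 0.25)
```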
Step 3.8, verifying the deep neural network Model parameter Model_t1 of Model_f by using the verification set Data_validation, and calculating the loss error of the prediction result. If the verification Accuracy of Model_t1 is less than or equal to 0.95, the next round of training is performed on the Model with the Data set Data_train1 and the parameter Model_t1 is continuously adjusted; training stops when the verification Accuracy of the prediction result is greater than 0.95 or the number of training periods (epochs) reaches 200; otherwise step 3 is repeated to continue the next round of training. The number of iterations during training is 10000; the weights are updated by the momentum gradient descent method with a momentum factor of 0.9 and a learning rate of 0.001. Before each activation function, Batch Normalization (BN) is used so that the mean of the output data is close to 0 and the standard deviation is close to 1; the combined function of the BN layer and the activation function is y = S(BN(ωx) + β), where BN is the batch normalization function, S is the activation function used by the activation function layer, ω is the weight, and β is a bias term. The BN layer accelerates convergence, controls overfitting and allows the network to use a larger learning rate, which works well for a deep network Model. 100 epochs are iterated, where 1 epoch means the whole Data set Data_train1 is trained once; after each epoch, the verification set Data_validation is used to verify Model_f, and finally the trained deep neural Model parameter Model_t is obtained.
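The combined BN-plus-activation function can be sketched as follows; a minimal illustration that normalizes over a single 1-D batch and uses ReLU as the activation S, with the reading y = S(BN(ωx) + β) assumed from the formula above:

```python
import numpy as np

def bn_activation(x, omega, beta, eps=1e-5):
    """y = S(BN(omega * x) + beta): scale by the weight, batch-normalize to
    near-zero mean and unit standard deviation, add the bias term, then apply
    the activation function S (ReLU here)."""
    z = omega * x
    z = (z - z.mean()) / np.sqrt(z.var() + eps)  # batch normalization
    return np.maximum(z + beta, 0.0)             # activation S = ReLU

rng = np.random.default_rng(0)
y = bn_activation(rng.standard_normal(1024), omega=2.0, beta=0.0)
```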
Step 4, detecting the region of interest of the renal pathological image to be detected, and outputting the detection result through the result output module, comprising the following steps:
Step 4.1, acquiring the image T to be detected through step 2, inputting it into the base network Model_1 of the Model_f for detecting the kidney region of interest, and calculating the convolution feature map TF1 of the image T.
Step 4.2, calculating the covariance matrix TC between the convolution feature map TF1 and the feature vector F3 of step 3.4, and acquiring the recommended region feature map TF_RP of the image T according to step 3.5.
Step 4.3, calculating the bounding-box and category confidence scores of the recommended region feature map TF_RP of step 4.2, and adopting non-maximum suppression to keep only the category with the highest confidence score among overlapping recommended regions, so that the same object is not repeatedly identified; that is, when the overlap of the bounding boxes of two recommended regions is greater than the non-maximum suppression threshold, the bounding box with the higher confidence score is kept, giving the recommended region feature map. The threshold of non-maximum suppression is 0.7.
Step 4.4, sending the result of step 4.3 to the result detection network Model_3 to obtain the category confidence score of the recommended region and the corrected position information of the bounding box, and obtaining the target detection result: the category with the maximum category confidence value is output as the category of the candidate frame, and the position of the bounding box is output as the position of the detection target.
Step 4.5, according to the position information of the target in step 4.4, cropping, saving and outputting the high-resolution image obtained in step 2.1; at the same time, the bounding box and category of the detected target are drawn in a new high-resolution image, which is compressed, saved and output.
In some embodiments, the four common stains described in step 1 are common stains for renal pathology: HE (hematoxylin–eosin), MASSON, PAS (periodic acid–Schiff stain), and PASM (periodic acid–silver methenamine stain), all of which are technical means well known to those skilled in the art. The immunohistochemical staining object may be selected from tissue or cells, such as in particular the connective tissue nucleus, cytoplasm or cytoskeleton; staining information for a particular protein or nucleic acid fragment; or staining of certain types of carbohydrates, lipids, amyloid, etc.
In some embodiments, the high-resolution digital pathology image of step 1 refers to an image obtained by scanning with an objective lens of not less than 20× magnification during slide scanning, with a resolution of 0.25 μm/pixel.
In some embodiments, the diagnostic reports of the renal pathological sections employed essentially comprise: diagnoses of minimal change disease, membranous nephropathy and glomerulosclerosis.
In some embodiments, the classification verification Accuracy is calculated as the ratio of the number of correctly predicted samples to the total number of predicted samples.
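The Accuracy calculation above can be sketched as follows; the function name is illustrative:

```python
def accuracy(predictions, labels):
    """Classification verification Accuracy: number of correctly predicted
    samples divided by the total number of predicted samples."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# 3 of 4 predictions match the labels -> Accuracy 0.75
acc = accuracy([1, 0, 1, 1], [1, 0, 0, 1])
```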
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (6)

1. An intelligent glomerulus detection method based on pathological images and deep learning, which performs detection by using an intelligent glomerulus detection system based on pathological images and deep learning, characterized in that: the intelligent glomerulus detection system based on pathological images and deep learning comprises an image preprocessing module, an image detection model training module and a result output module, wherein: the image preprocessing module performs preprocessing of format conversion, patient information desensitization, background removal by picture threshold segmentation, and image cutting on a kidney pathological image; the image detection model training module is based on the target detection network Faster R-CNN, aims at fine-grained detection of the kidney pathological image, and trains a detection algorithm for detecting the region of interest of the kidney pathological image by using existing images to obtain a model whose classification verification Accuracy is higher than a threshold value; the result output module detects the region of interest of a kidney pathological image to be detected and outputs the detection result of the kidney pathological image;
the intelligent glomerulus detection method based on pathological images and deep learning comprises the following steps:
step 1, acquiring kidney tissue sections with four common stains, and then scanning the sections by using a slide scanner to acquire digital nephropathology image Data;
step 2, carrying out image preprocessing on the image Data acquired in the step 1 to acquire image Data _ T for fine-grained image recognition and corresponding real label information GT _ T comprising category information and position information;
step 3, training the image Data _ T by adopting a fine-grained deep neural network to obtain a Model _ f for detecting the interested region of the kidney and a corresponding depth network Model parameter Model _ T;
the step 3 comprises the following steps:
step 3.1, divide the image Data into two parts: training set Data _ train and verification set Data _ validation;
step 3.2, using a deep residual error neural network ResNet50 as a basic network to obtain a basic network Model _1 with natural image target detection capability;
step 3.3, adjusting all images in the training set Data _ train to be uniform in size to obtain a Data set Data _ train1, performing feature extraction on the Data set Data _ train1 by using a basic network Model _1 to obtain a convolution feature map F1 of the Data set Data _ train1, wherein the length, width and depth of the convolution feature map are P × Q × N;
step 3.4, on the convolution feature map F1 obtained in the step 3.3, mapping the labeling position information in the real label information GT _ T to the convolution feature map F1 and labeling the region of interest, extracting convolution feature maps F2 of the regions of interest of the same category from the labeled regions of interest of all categories, analyzing the convolution feature map F2 by a Principal Component Analysis (PCA) method, and obtaining a feature vector F3 corresponding to each category;
step 3.5, respectively calculating feature vectors F3 of four categories of the region of interest and covariance matrixes Cov (P x Q) of the convolution feature map F1, selecting positions with values larger than 0.5 in the covariance matrixes Cov in the convolution feature map F1 as anchor points, generating 9 images by each anchor point, and generating images by all the anchor points as a recommended region feature map F _ RP;
step 3.6, each recommended region feature map F_RP outputs 5 predicted values through a classification convolutional layer and a regression convolutional layer, the predicted values comprising the score of being a region of interest or background and the bounding-box predicted values of the first-corrected recommended region feature map F_RP; from the 5 predicted values, the intersection-over-union coefficient IoU1 between the region feature map F_RP and the real labeling information GT_T comprising category information, and the intersection-over-union coefficient IoU2 between all recommended region feature map F_RP images comprising position information, are calculated; the category information and position information of the screened recommended region feature map F_RP1 are obtained, and the classification Loss Loss_C1 and the regression Loss Loss_R1 of the recommended region network Model_2 are calculated;
step 3.7, the recommended region feature map F_RP1 generated in step 3.6, its 5 corresponding predicted values and the convolution feature map F1 are jointly input into the result detection network Model_3, and the bounding-box predicted values of step 3.6 are corrected to obtain the final classification result of the recommended region and the position information of the target; the classification Loss Loss_C2 and the regression Loss Loss_R2 of the result detection network Model_3 are calculated according to the predicted classification result, the position information of the target and the real label information GT_T, and the total Loss of Model_f is calculated as Loss = λ1·Loss_C1 + λ2·Loss_R1 + λ3·Loss_C2 + λ4·Loss_R2, where λi (i = 1, 2, 3, 4) are the weights balancing the four losses; the initial deep neural Model parameter Model_t0 is optimized by a stochastic gradient descent algorithm to obtain the deep neural Model parameter Model_t1;
step 3.8, verifying the deep neural Model parameter Model _ t1 by using a verification set Data _ validation, and stopping training if the classification verification precision Accuracy is more than 0.95 or the training period number epoch reaches 200 times to obtain a trained deep neural network Model parameter Model _ t; otherwise, repeating the step 3 to continue the next round of training;
and 4, detecting the region of interest of the renal pathological image to be detected, and outputting a detection result through a result output module.
2. The intelligent glomerular detection method based on pathological image and deep learning of claim 1, which is characterized in that: the threshold value of the verification precision Accuracy of the model of the image detection model training module is 95%.
3. The intelligent glomerular detection method based on pathological image and deep learning of claim 1, which is characterized in that: the step 2 comprises the following steps:
step 2.1, carrying out format conversion on the image to obtain an image F converted into a TIFF or SVS image format, deleting a slice label image layer to obtain an image A with desensitized patient information, and naming the image A by using slice codes;
and 2.2, obtaining images A with different resolutions by using an Openslide package: respectively obtaining images with the compression multiple of 1 time and 2 times of the image A by adjusting the zoom level _ count of the Openslide, and recording all the images with different compression multiples of the image A as image Data _ T;
step 2.3, reserving the image with the tissue part in the pathological image by using a threshold image segmentation method, and removing blank part images to obtain an image B only containing main tissues;
step 2.4, cutting the image B obtained in the step 2.3 into a plurality of small images to generate a plurality of image blocks C with 1000 x 1000 pixels, wherein the overlapping area of the adjacent image blocks C is 200 pixel points, and the area which exceeds the image B and is not more than 1000 pixels is filled with black;
and 2.5, marking the image block C obtained in the step 2.4 after cutting to obtain image Data _ T with marking information and real label information GT _ T.
4. The intelligent glomerulus detection method based on pathological images and deep learning of claim 1, characterized in that: the 9 images generated by each anchor point in step 3.5 include images of 3 different pixel sizes, the 3 pixel sizes being 128 pixels, 256 pixels and 512 pixels respectively, and the image of each pixel size includes recommended regions of 3 aspect ratios, which are 1:1, 1:2 and 2:1 respectively.
5. The intelligent glomerular detection method based on pathological image and deep learning of claim 1, which is characterized in that: the step 4 comprises the following steps:
step 4.1, obtaining an image T to be detected through the step 2, inputting the image T into a basic network Model _1 of the deep neural network Model _ T, and calculating a convolution feature map TF1;
step 4.2, calculating a covariance matrix TC of the convolution feature map TF1 and the feature vector F3 in the step 3.4, and acquiring a recommended region feature map TF _ RP of the image T according to the step 3.5;
step 4.3, calculating the bounding-box and category confidence scores of the recommended region feature map TF_RP of step 4.2, and obtaining the recommended region feature map through non-maximum suppression;
step 4.4, sending the result of the step 4.3 to a result detection network Model _3 to obtain a category confidence score of the recommended area and corrected position information of the boundary box to obtain a target detection result;
and 4.5, according to the position information of the target in the step 4.4, cutting, storing and outputting the image T to be detected, simultaneously drawing the boundary frame and the category of the detected target in a new high-resolution image, compressing, storing and outputting.
6. The intelligent glomerular detection method based on pathological image and deep learning of claim 5, which is characterized in that: the threshold for non-maximum suppression is 0.7.
CN202010560815.0A 2019-06-20 2020-06-18 Intelligent glomerulus detection method and system based on pathological image and deep learning Active CN111798416B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910538571 2019-06-20
CN2019105385713 2019-06-20

Publications (2)

Publication Number Publication Date
CN111798416A CN111798416A (en) 2020-10-20
CN111798416B true CN111798416B (en) 2023-04-18

Family

ID=72804453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010560815.0A Active CN111798416B (en) 2019-06-20 2020-06-18 Intelligent glomerulus detection method and system based on pathological image and deep learning

Country Status (1)

Country Link
CN (1) CN111798416B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508854B (en) * 2020-11-13 2022-03-22 杭州医派智能科技有限公司 Renal tubule detection and segmentation method based on UNET
CN112669258A (en) * 2020-11-24 2021-04-16 淮阴工学院 FPC circuit defect detection system
CN112419292B (en) * 2020-11-30 2024-03-26 深圳云天励飞技术股份有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN113269747B (en) * 2021-05-24 2023-06-13 浙江大学医学院附属第一医院 Pathological image liver cancer diffusion detection method and system based on deep learning
CN113313685B (en) * 2021-05-28 2022-11-29 太原理工大学 Renal tubular atrophy region identification method and system based on deep learning
CN113393454A (en) * 2021-07-02 2021-09-14 北京邮电大学 Method and device for segmenting pathological target examples in biopsy tissues
CN113486974A (en) * 2021-07-22 2021-10-08 上海嘉奥信息科技发展有限公司 Pathological image recognition and classification method, system and medium based on glomerulus
CN114220485B (en) * 2021-11-05 2024-05-07 华南理工大学 Membranous glomerular data autonomous processing and intelligent typing method
CN114240836B (en) * 2021-11-12 2024-06-25 杭州迪英加科技有限公司 Nasal polyp pathological section analysis method, system and readable storage medium
CN114141339B (en) * 2022-01-26 2022-08-05 杭州未名信科科技有限公司 Pathological image classification method, device, equipment and storage medium for membranous nephropathy
CN114565761B (en) * 2022-02-25 2023-01-17 无锡市第二人民医院 Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
CN115240000B (en) * 2022-07-22 2023-05-02 司法鉴定科学研究院 Diabetes detection device and method for forensic identification

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107423760A (en) * 2017-07-21 2017-12-01 西安电子科技大学 Based on pre-segmentation and the deep learning object detection method returned
CN108492258A (en) * 2018-01-17 2018-09-04 天津大学 A kind of radar image denoising method based on generation confrontation network
CN108830172A (en) * 2018-05-24 2018-11-16 天津大学 Aircraft remote sensing images detection method based on depth residual error network and SV coding

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10740895B2 (en) * 2018-06-25 2020-08-11 International Business Machines Corporation Generator-to-classifier framework for object classification
JP7208480B2 (en) * 2018-10-12 2023-01-19 富士通株式会社 Learning program, detection program, learning device, detection device, learning method and detection method
CN109961059A (en) * 2019-04-10 2019-07-02 杭州智团信息技术有限公司 Detect the method and system in kidney tissue of interest region
CN111160135A (en) * 2019-12-12 2020-05-15 太原理工大学 Urine red blood cell lesion identification and statistical method and system based on improved Faster R-cnn

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN107423760A (en) * 2017-07-21 2017-12-01 西安电子科技大学 Based on pre-segmentation and the deep learning object detection method returned
CN108492258A (en) * 2018-01-17 2018-09-04 天津大学 A kind of radar image denoising method based on generation confrontation network
CN108830172A (en) * 2018-05-24 2018-11-16 天津大学 Aircraft remote sensing images detection method based on depth residual error network and SV coding

Also Published As

Publication number Publication date
CN111798416A (en) 2020-10-20

Similar Documents

Publication Publication Date Title
CN111798416B (en) Intelligent glomerulus detection method and system based on pathological image and deep learning
US20230186476A1 (en) Object detection and instance segmentation of 3d point clouds based on deep learning
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN111612008B (en) Image segmentation method based on convolution network
CN111784671A (en) Pathological image focus region detection method based on multi-scale deep learning
CN112819821B (en) Cell nucleus image detection method
CN112017192B (en) Glandular cell image segmentation method and glandular cell image segmentation system based on improved U-Net network
CN110853005A (en) Immunohistochemical membrane staining section diagnosis method and device
CN110517272B (en) Deep learning-based blood cell segmentation method
CN115909006B (en) Mammary tissue image classification method and system based on convolution transducer
CN112862830A (en) Multi-modal image segmentation method, system, terminal and readable storage medium
CN110930378A (en) Emphysema image processing method and system based on low data demand
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
CN112365973A (en) Pulmonary nodule auxiliary diagnosis system based on countermeasure network and fast R-CNN
CN114693971A (en) Classification prediction model generation method, classification prediction method, system and platform
CN114581709A (en) Model training, method, apparatus, and medium for recognizing target in medical image
CN114972202A (en) Ki67 pathological cell rapid detection and counting method based on lightweight neural network
CN115439654A (en) Method and system for finely dividing weakly supervised farmland plots under dynamic constraint
CN115239672A (en) Defect detection method and device, equipment and storage medium
Wu et al. Fast particle picking for cryo-electron tomography using one-stage detection
CN113313678A (en) Automatic sperm morphology analysis method based on multi-scale feature fusion
CN117095169A (en) Ultrasonic image disease identification method and system
CN114565626A (en) Lung CT image segmentation algorithm based on PSPNet improvement
Wu et al. A closer look at segmentation uncertainty of scanned historical maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant