CN113076909B - Automatic cell detection method - Google Patents

Automatic cell detection method

Info

Publication number
CN113076909B
CN113076909B
Authority
CN
China
Prior art keywords
module
network
cell
neural network
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110412349.6A
Other languages
Chinese (zh)
Other versions
CN113076909A (en
Inventor
易琳
陈丹
付晓琴
雷雅杰
周颖婷
范祉辰
林昌海
葛闯
陈一超
刘然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Chongqing University Cancer Hospital
Original Assignee
Chongqing University
Chongqing University Cancer Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University and Chongqing University Cancer Hospital
Priority to CN202110412349.6A
Publication of CN113076909A
Application granted
Publication of CN113076909B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic cell detection method comprising the following steps: 1) building a deep convolutional neural network for detecting cells; 2) training the built deep convolutional neural network; 3) inputting the cell image data to be detected into the trained deep convolutional neural network, which outputs the detection result. The deep convolutional neural network comprises an SD-DenseNet module for extracting feature maps, a region proposal network for generating preliminary target proposals, three RoI Align modules that resolve the misalignment introduced by the pooling operation, and three detectors for cell classification, localization and segmentation. The deep convolutional neural network built by the invention can efficiently and accurately perform cell detection tasks such as cell classification, localization and segmentation.

Description

Automatic cell detection method
Technical Field
The invention relates to the technical fields of cell classification, cell segmentation and convolutional neural networks, and in particular to an automatic cell detection method.
Background
Morphological characteristics of cells include shape, color, size, and so on. Obtaining these characteristics automatically requires classification, localization and segmentation of cells, i.e., instance segmentation, also known as cell segmentation in cervical cytology. Conventional cell segmentation methods typically employ general image segmentation algorithms, including threshold-based methods, clustering-based methods, contour-based methods, traditional machine learning methods, and so forth. For cells with complex morphological characteristics, such as cervical cells, these traditional cell segmentation methods often cannot meet the requirements of cell detection.
In the prior art, pixel-level recognition of a target can be achieved with Region-based Deep Convolutional Neural Networks (R-DCNN), including Faster R-CNN, Mask R-CNN, HyperNet, Cascade R-CNN and the like, which perform very well and are often used for object detection. Such a network first obtains candidate regions (region proposals) and then performs classification and bounding-box regression on them; this enables recognition and classification of image content, solves the localization of the target to be detected more accurately, and achieves pixel-level recognition and segmentation of multiple targets. Because of this excellent performance, R-DCNN has been used for the detection of faces, pedestrians, and other objects. However, since region-based deep convolutional neural networks were originally designed for object detection in natural scenes, the network needs to be restructured and optimized when used for automatic cell detection, so as to improve its performance on this task.
Disclosure of Invention
In view of the above, the present invention is directed to an automatic cell detection method, which solves the technical problem of automatically detecting complex morphological features of cells.
The automatic cell detection method comprises the following steps:
1) Building a deep convolutional neural network for detecting cells;
2) Training the built deep convolutional neural network;
3) Inputting the cell image data to be detected into the trained deep convolutional neural network, which outputs the detection result;
the deep convolutional neural network comprises an SD-DenseNet module for extracting feature maps, a region proposal network for generating preliminary target proposals, three RoI Align modules that resolve the misalignment introduced by the pooling operation, and three detectors for cell classification, localization and segmentation;
the SD-DenseNet module consists of a deformable convolution network, a first squeeze-and-excitation network, a first dense module, a second squeeze-and-excitation network, a first transition layer, a second dense module, a third squeeze-and-excitation network, a second transition layer, a third dense module, a fourth squeeze-and-excitation network, a third transition layer, a fourth dense module and a fifth squeeze-and-excitation network, connected in sequence;
the cell image data to be detected is input into the deformable convolution network of the SD-DenseNet module, and the fifth squeeze-and-excitation network of the SD-DenseNet module outputs a feature map; this feature map is input into the region proposal network; the output of the region proposal network, together with the feature map, is input into the first RoI Align module; the output of the first RoI Align module is input into the first detector; the bounding-box regression result of the first detector is input into the second RoI Align module; the output of the second RoI Align module is input into the second detector; the bounding-box regression result of the second detector is input into the third RoI Align module; the output of the third RoI Align module is input into the third detector; and the third detector outputs the cell detection result.
Further, in step 2), a Herlev dataset that has undergone data enhancement processing and sample data equalization is used to train the built deep convolutional neural network;
the data enhancement processing comprises applying random translation enhancement, rotation enhancement, scaling enhancement, ZCA whitening enhancement, feature standardization enhancement, horizontal mirror enhancement and vertical mirror enhancement to the original Herlev dataset;
the sample data equalization comprises: for each category, generating a batch of new samples by the seven methods of the data enhancement module, then randomly drawing the required number of samples from them and adding them to the original Herlev dataset, so that the number of cell images in each category is equal.
The invention has the beneficial effects that:
according to the automatic cell detection method, the built deep convolution neural network can efficiently and accurately realize cell detection tasks such as cell classification, cell positioning and cell division, the working efficiency is greatly improved compared with manual cell detection, and the problem that the manual detection result is easily influenced by professional technical level and subjective emotion is solved.
Drawings
FIG. 1 is a flowchart of an automated cell detection method according to an embodiment;
FIG. 2 is a diagram showing the structure of SD-Dense-Cascade R-CNN in the examples;
FIG. 3 is an internal architecture diagram of the SD-DenseNet sub-module;
FIG. 4 is an exemplary diagram of random translation to generate a new set of images from an original image of cells;
FIG. 5 is an exemplary diagram of generating a new set of images from an original image of cells by rotation;
FIG. 6 is an exemplary diagram of generating a new image from an original cell image by scaling;
FIG. 7 is an exemplary diagram of generating a new image from an original cell image by ZCA whitening;
FIG. 8 is an exemplary diagram of generating a new image from an original cell image by feature normalization;
FIG. 9 is a schematic diagram of data enhanced by horizontal and vertical mirroring, wherein (a) is the original image, (b) the horizontal mirror image, and (c) the vertical mirror image;
FIG. 10 is a graph of the change in loss value over the training set;
FIG. 11 is a graph of the variation of mAP on the validation set;
FIG. 12 is a diagram showing the cell segmentation results.
Detailed Description
The invention is further described below with reference to the figures and examples.
The automatic cell detection method in the embodiment comprises the following steps:
1) Build a deep convolutional neural network for detecting cells.
2) Train the built deep convolutional neural network with a Herlev dataset that has undergone, in sequence, data enhancement processing and sample data equalization.
The Herlev pap smear dataset (Herlev dataset for short) was prepared jointly by the Technical University of Denmark (TUD) and Herlev University Hospital (HUH). The images of the dataset are in BMP format. Each image contains a single whole cell, with an average resolution of about 150 × 140 pixels. The dataset comprises two sets of data: the first, called the oldData dataset, contains 500 single-cell images; the second, called the newData dataset, contains 917 single-cell images. Both sets of cell images were classified by cytologists into normal cells and abnormal cells. In the manual classification process, two cytologists classified all cells and compared their results; if their classifications of a cell differed, the cell was discarded. This gives the resulting dataset very high quality. The newData dataset is used in this embodiment.
The labeled raw Herlev dataset used in this embodiment is shown in Table 1. In machine learning, a dataset is typically divided into a training set, a validation set and a test set: the training set is used to train the model, and the test set to evaluate the trained model. In order to monitor the accuracy of the model on unseen data and to adjust the hyper-parameters during training, part of the training set must be set aside as the validation set (in this embodiment the validation set occupies about 10% of the training set, with the same number of samples in each category). For each category in Table 1, training set + test set = number of samples. The total number of samples is 917: 805 in the training set and 112 in the test set (about 12% of the whole dataset). The test set is randomly drawn from the samples of each category.
TABLE 1 labeled Herlev dataset (newData dataset)
(Table content provided as an image in the original document.)
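For illustration, the split can be sketched in a few lines of Python; this is a minimal sketch under assumed (image, label) pairs and the fractions quoted above, not code from the patent:

import random
from collections import defaultdict

def stratified_split(samples, test_fraction=0.12, val_fraction=0.10, seed=0):
    # samples: list of (image_path, label) pairs
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample in samples:
        by_class[sample[1]].append(sample)
    train, val, test = [], [], []
    for label, items in by_class.items():
        rng.shuffle(items)
        n_test = round(len(items) * test_fraction)   # test set drawn per class
        test.extend(items[:n_test])
        rest = items[n_test:]
        n_val = round(len(rest) * val_fraction)      # ~10% of the training set
        val.extend(rest[:n_val])
        train.extend(rest[n_val:])
    return train, val, test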
The data enhancement process includes random translation enhancement, rotation enhancement, scaling enhancement, ZCA whitening enhancement, feature standardization enhancement, horizontal mirror enhancement and vertical mirror enhancement of the original Herlev dataset.
In this embodiment, random translation enhancement expands the original cell image by d pixels on each side and fills the expanded region with zeros (that is, the pixel values are set to 0); the expansion ensures that the cell contour always stays inside the image when translated. The content of each image is then randomly translated several times in the four directions (up, down, left and right), yielding several new images. With high probability, the same cell contour sits at a different relative position in each image generated by random translation. In the training phase, the network treats cell images whose contours sit at different positions as different samples, which increases the number of training samples for the convolutional network.
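A minimal NumPy sketch of this step, assuming a 2-D (or channel-last) image array; the pad width d is left as a parameter because the patent does not fix its value:

import numpy as np

def random_translate(image, d, rng=None):
    # Zero-pad by d pixels on each side, then crop a window shifted by a
    # random offset in [-d, d] per axis, keeping the original image size.
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    pad = [(d, d), (d, d)] + [(0, 0)] * (image.ndim - 2)
    padded = np.pad(image, pad, mode="constant", constant_values=0)
    dy, dx = rng.integers(-d, d + 1, size=2)
    return padded[d + dy:d + dy + h, d + dx:d + dx + w]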
Rotation enhancement in this embodiment expands the original cell image by d pixels on each side and then takes the centre of the image as the rotation centre. The cell image is randomly rotated between -180 and 180 degrees, counterclockwise or clockwise at random. The contour coordinates in the label data are "rotated" accordingly. Each rotation yields a new image to be used as a training sample. Rotating the cell image slightly degrades image quality and slightly deforms the contour, but this is beneficial to the network's instance segmentation performance and is important for improving the deep convolutional neural network.
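A corresponding sketch of the rotation step, assuming SciPy is available; rotating the contour coordinates in the labels is omitted here:

import numpy as np
from scipy.ndimage import rotate

def random_rotate(image, d, rng=None):
    # Pad by d pixels, then rotate about the image centre by a random angle
    # in [-180, 180] degrees; the sign of the angle decides the direction.
    rng = rng or np.random.default_rng()
    pad = [(d, d), (d, d)] + [(0, 0)] * (image.ndim - 2)
    padded = np.pad(image, pad, mode="constant", constant_values=0)
    angle = rng.uniform(-180.0, 180.0)
    return rotate(padded, angle, axes=(1, 0), reshape=False, order=1, cval=0)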
In this embodiment, scaling enhancement enlarges the original image while keeping the aspect ratio unchanged, and the label data of the cell contour is processed accordingly; this avoids the image deformation or feature loss caused by naive scaling.
In this embodiment, the label data does not need to be changed for ZCA whitening enhancement. ZCA whitening removes redundancy in the image content; with less redundancy, the structure and features of the image stand out more.
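A standard ZCA-whitening sketch for a batch of flattened images (assumed shape [N, D]); the regulariser eps is an assumed, commonly used value, and for large images the D × D covariance would have to be approximated:

import numpy as np

def zca_whiten(X, eps=1e-5):
    Xc = X - X.mean(axis=0)                          # centre each feature
    U, S, _ = np.linalg.svd(np.cov(Xc, rowvar=False))
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T    # ZCA transform
    return Xc @ W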
In this embodiment, feature standardization enhancement normalizes the pixel values of the original cell image; after normalization, the image mean is approximately 0 and the variance approximately 1. The label data does not need to be changed for feature standardization.
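Feature standardization reduces to a two-line transform; the small constant that guards against division by zero is an assumption of this sketch:

import numpy as np

def standardize(image):
    x = image.astype(np.float32)
    return (x - x.mean()) / (x.std() + 1e-7)         # mean ~ 0, variance ~ 1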
In this embodiment, horizontal mirror enhancement and vertical mirror enhancement flip the image horizontally and vertically, respectively. When horizontal or vertical mirroring is performed, the corresponding label data is processed accordingly. Horizontal and vertical mirroring together increase the number of samples by a factor of 2.
The sample data equalization works as follows: for each category, a batch of new samples is generated by the seven methods of the data enhancement module; the required number of samples is then randomly drawn from these new samples and added to the original Herlev dataset, so that the number of cell images in each category is equal.
The numbers of training samples for the 7 classes in the Herlev dataset are rather unbalanced: the largest class (severe squamous epithelial dysplasia) has nearly 3 times as many samples as the smallest (intermediate squamous epithelium). This easily biases the deep convolutional neural network towards assigning cells to classes with many samples. Although this can yield a high sensitivity (the proportion of abnormal cells classified correctly), misclassifying a fraction of normal cells as abnormal is also undesirable in practice. This embodiment uses sample data equalization to solve the problem, which also improves the precision and convergence speed of the deep convolutional neural network during training. Sample data equalization makes the number of cell images in each category of the training set the same. Since the Herlev dataset contains 3 classes of normal cells and 4 classes of abnormal cells, the numbers of normal and abnormal samples after balancing the training set are also roughly comparable.
It should be noted that the number of new samples added by sample data equalization cannot be too large, otherwise the deep convolutional neural network model would overfit. This embodiment increases the number of samples in the original training set (805 samples in total) to about 9 times the original. From this, the number of samples that must be added for each category can be calculated, as shown in Table 2.
TABLE 2 Number of samples that need to be added to each class to equalize the training set
(Table content provided as an image in the original document.)
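The per-class additions in Table 2 follow from simple arithmetic; a sketch of that calculation, with the growth factor of 9 taken from the text above and the labels assumed to be the training-set class labels:

from collections import Counter

def additions_per_class(labels, growth=9):
    # Target: the enlarged training set is about `growth` times the original,
    # with every class the same size.
    counts = Counter(labels)
    target = round(growth * len(labels) / len(counts))
    return {c: max(target - n, 0) for c, n in counts.items()}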
3) Input the cell image data to be detected into the trained deep convolutional neural network, which outputs the detection result.
The deep convolutional neural network comprises an SD-DenseNet module for extracting feature maps, a region proposal network for generating preliminary target proposals, three RoI Align modules that resolve the misalignment introduced by the pooling operation, and three detectors for cell classification, localization and segmentation. The deep convolutional neural network of this embodiment is named the SD-Dense-Cascade R-CNN network.
The SD-DenseNet module consists of a deformable convolutional network (Deformable Convolutional Network), a first squeeze-and-excitation network (Squeeze-and-Excitation Network), a first dense module (Dense module), a second squeeze-and-excitation network, a first transition layer (Transition layer), a second dense module, a third squeeze-and-excitation network, a second transition layer, a third dense module, a fourth squeeze-and-excitation network, a third transition layer, a fourth dense module and a fifth squeeze-and-excitation network, connected in sequence. The deformable convolution module can replace an ordinary module of a conventional convolutional network and is trained end-to-end by back-propagation, producing a deformable convolutional neural network and improving the model's ability to handle deformed images. The squeeze-and-excitation network (the SE module in Table 3) learns the importance of each feature channel automatically, then strengthens the features useful for the current task and suppresses the less useful ones according to that importance; adding SE modules increases the network's attention to important information. The dense module connects feature maps across network layers, enhancing feature reuse and alleviating gradient vanishing and model degradation. The transition layers are added so that dense modules whose output feature maps differ in size can still be fused.
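As one concrete reading of the SE module described above, a minimal PyTorch sketch; the reduction ratio r=16 is a common default and an assumption here, not a value given in the patent:

import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),              # squeeze: [N,C,H,W] -> [N,C,1,1]
            nn.Conv2d(channels, channels // r, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1),
            nn.Sigmoid(),                         # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(x)                     # excitation: rescale channels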
The network architecture diagram of the SD-DenseNet module is shown in table 3:
TABLE 3 SD-DenseNet architecture (input image 1333 × 800)
(Table content provided as an image in the original document.)
The cell image data to be detected is input into the deformable convolutional network (Deformable Convolutional Network) of the SD-DenseNet module, and the fifth squeeze-and-excitation network (Squeeze-and-Excitation Network) of the SD-DenseNet module outputs a feature map. This feature map is input into the region proposal network (Region Proposal Network); the output of the region proposal network, together with the feature map, is input into the first RoI Align module; the output of the first RoI Align module is input into the first detector; the bounding-box regression result of the first detector is input into the second RoI Align module; the output of the second RoI Align module is input into the second detector; the bounding-box regression result of the second detector is input into the third RoI Align module; the output of the third RoI Align module is input into the third detector; and the third detector outputs the cell detection result.
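The cascade above can be summarised as a short forward pass; backbone, rpn, roi_aligns and detectors stand for the modules named in this paragraph and are assumed callables, not a published API:

def cascade_forward(image, backbone, rpn, roi_aligns, detectors):
    feats = backbone(image)                  # SD-DenseNet feature map
    proposals = rpn(feats)                   # preliminary target suggestions
    result = None
    for roi_align, detector in zip(roi_aligns, detectors):
        rois = roi_align(feats, proposals)   # fixed-size region features
        result = detector(rois)              # class scores, boxes, masks
        proposals = result["boxes"]          # refined boxes feed the next stage
    return result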
The structure of the detector in this embodiment is shown in Fig. 2 of the specification: each detector comprises a convolutional layer that processes the output of the RoI Align module, a Mask layer that processes the output of that convolutional layer, a first fully connected layer that processes the output of the RoI Align module, a second fully connected layer that processes the output of the first, a bounding-box regression layer that processes the output of the second fully connected layer, a third fully connected layer that processes the output of the first, and a softmax classifier that processes the output of the third fully connected layer. The RoI Align (region-of-interest alignment) module is a region feature aggregation method that well resolves the misalignment introduced by the pooling operation.
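RoI Align itself is available as a library operator; a minimal usage sketch with assumed shapes (boxes are [x1, y1, x2, y2] in image coordinates, and spatial_scale maps them onto the feature map):

import torch
from torchvision.ops import roi_align

feats = torch.randn(1, 256, 50, 84)                    # e.g. 1/16-scale features
boxes = [torch.tensor([[32.0, 48.0, 256.0, 320.0]])]   # one box for image 0
pooled = roi_align(feats, boxes, output_size=(7, 7), spatial_scale=1 / 16)
print(pooled.shape)                                    # torch.Size([1, 256, 7, 7])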
The cervical cell test set (112 samples) shown in Table 1 was tested with the cell detection method of this embodiment; the optimized SD-Dense-Cascade R-CNN hyper-parameters are listed in Table 2 below:
TABLE 2 Optimized SD-Dense-Cascade R-CNN hyper-parameters
(Table content provided as an image in the original document.)
Before testing, the SD-Dense-Cascade R-CNN network is trained: 700 samples are randomly drawn from the 7000 samples of the new training set as the validation set, and the remaining 6300 samples serve as the training set. The mAP and the loss value are saved after each round (epoch) during the training of SD-Dense-Cascade R-CNN. Figs. 10 and 11 plot the loss value on the training set and the variation of the mAP on the validation set, respectively.
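The per-epoch bookkeeping behind Figs. 10 and 11 can be sketched as follows; train_one_epoch and evaluate_map stand for the (assumed) training and validation routines:

import json

def fit(model, train_loader, val_loader, epochs, train_one_epoch, evaluate_map):
    history = {"loss": [], "mAP": []}
    for epoch in range(epochs):
        loss = train_one_epoch(model, train_loader)    # mean loss this epoch
        m_ap = evaluate_map(model, val_loader)         # mAP on validation set
        history["loss"].append(loss)
        history["mAP"].append(m_ap)
    with open("history.json", "w") as f:               # data behind Figs. 10-11
        json.dump(history, f)
    return history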
Cervical cell test results
Fig. 12 shows the cell segmentation results. The numbers in the figure are confidence scores, i.e., the network-predicted probability that a bounding box contains the target. As can be seen from Fig. 12, the various types of cells are accurately classified, localized and segmented. The cell contours in the original images corresponding to A1 and B1 are segmented well, with no obvious contour shrinkage. The original image corresponding to E1 suffers from poor contrast in the contour region and an extremely irregular cell contour, yet even in this case our method still produces a segmentation that roughly follows the ground-truth cell contour. These results indicate that our method has good cell segmentation performance.
Comparison of cell segmentation performance
Table 3 compares the performance of several state-of-the-art methods (Mask R-CNN, Dense R-CNN, Cascade R-CNN, SD-Dense-Cascade R-CNN) on the test set; the underlined data indicate the best performance among these methods. From Table 3 we can see that both the mAP and the mAR of Dense R-CNN are the lowest, suggesting that substituting DenseNet for ResNet causes the performance degradation, so the Mask R-CNN framework may not be suitable for DenseNet. During the experiments we found that the loss functions of the first three networks had converged by epoch = 200, whereas the loss of SD-Dense-Cascade R-CNN was still decreasing with a large slope beyond epoch 200 and only began to converge at epoch = 800; the experimental results of our method are therefore based on 800 epochs. Among all methods, SD-Dense-Cascade R-CNN achieves the highest mAP and mAR. This shows that our method works best for cervical cell detection and can be used by doctors as a detection aid according to clinical needs.
TABLE 3 Performance of different methods of cervical cell detection
(Table content provided as an image in the original document.)
Finally, the above embodiments are intended only to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the invention without departing from its spirit and scope, and all such modifications should be covered by the claims of the invention.

Claims (2)

1. An automatic cell detection method, comprising the following steps:
1) Building a deep convolutional neural network for detecting cells;
2) Training the built deep convolutional neural network;
3) Inputting the cell image data to be detected into the trained deep convolutional neural network, which outputs the detection result;
characterized in that: the deep convolutional neural network comprises an SD-DenseNet module for extracting feature maps, a region proposal network for generating preliminary target proposals, three RoI Align modules that resolve the misalignment introduced by the pooling operation, and three detectors for cell classification, localization and segmentation;
the SD-DenseNet module consists of a deformable convolution network, a first squeeze-and-excitation network, a first dense module, a second squeeze-and-excitation network, a first transition layer, a second dense module, a third squeeze-and-excitation network, a second transition layer, a third dense module, a fourth squeeze-and-excitation network, a third transition layer, a fourth dense module and a fifth squeeze-and-excitation network, connected in sequence;
the cell image data to be detected is input into the deformable convolution network of the SD-DenseNet module, and the fifth squeeze-and-excitation network of the SD-DenseNet module outputs a feature map; this feature map is input into the region proposal network; the output of the region proposal network, together with the feature map, is input into the first RoI Align module; the output of the first RoI Align module is input into the first detector; the bounding-box regression result of the first detector is input into the second RoI Align module; the output of the second RoI Align module is input into the second detector; the bounding-box regression result of the second detector is input into the third RoI Align module; the output of the third RoI Align module is input into the third detector; and the third detector outputs the cell detection result.
2. The automatic cell detection method according to claim 1, characterized in that: in step 2), a Herlev dataset that has undergone data enhancement processing and sample data equalization is used to train the built deep convolutional neural network;
the data enhancement processing comprises applying random translation enhancement, rotation enhancement, scaling enhancement, ZCA whitening enhancement, feature standardization enhancement, horizontal mirror enhancement and vertical mirror enhancement to the original Herlev dataset;
the sample data equalization comprises: for each category, generating a batch of new samples by the seven methods of the data enhancement module, then randomly drawing the required number of samples from them and adding them to the original Herlev dataset, so that the number of cell images in each category is equal.
CN202110412349.6A 2021-04-16 2021-04-16 Automatic cell detection method Expired - Fee Related CN113076909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110412349.6A CN113076909B (en) 2021-04-16 2021-04-16 Automatic cell detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110412349.6A CN113076909B (en) 2021-04-16 2021-04-16 Automatic cell detection method

Publications (2)

Publication Number Publication Date
CN113076909A CN113076909A (en) 2021-07-06
CN113076909B true CN113076909B (en) 2022-10-25

Family

ID=76617942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110412349.6A Expired - Fee Related CN113076909B (en) 2021-04-16 2021-04-16 Automatic cell detection method

Country Status (1)

Country Link
CN (1) CN113076909B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035184A (en) * 2018-06-08 2018-12-18 西北工业大学 A kind of intensive connection method based on the deformable convolution of unit
CN110119710A (en) * 2019-05-13 2019-08-13 广州锟元方青医疗科技有限公司 Cell sorting method, device, computer equipment and storage medium
CN110659718A (en) * 2019-09-12 2020-01-07 中南大学 Small convolution nuclear cell counting method and system based on deep convolution neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160069903A1 * 2014-09-10 2016-03-10 Fundació Institut de Ciències Fotòniques Method for detecting cells
US11730387B2 (en) * 2018-11-02 2023-08-22 University Of Central Florida Research Foundation, Inc. Method for detection and diagnosis of lung and pancreatic cancers from imaging scans
CN109614985B (en) * 2018-11-06 2023-06-20 华南理工大学 Target detection method based on densely connected feature pyramid network
CN109886922B (en) * 2019-01-17 2023-08-18 丽水市中心医院 Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035184A (en) * 2018-06-08 2018-12-18 西北工业大学 A kind of intensive connection method based on the deformable convolution of unit
CN110119710A (en) * 2019-05-13 2019-08-13 广州锟元方青医疗科技有限公司 Cell sorting method, device, computer equipment and storage medium
CN110659718A (en) * 2019-09-12 2020-01-07 中南大学 Small convolution nuclear cell counting method and system based on deep convolution neural network

Also Published As

Publication number Publication date
CN113076909A (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN110428432B (en) Deep neural network algorithm for automatically segmenting colon gland image
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
Tong et al. Salient object detection via bootstrap learning
CN111798416B (en) Intelligent glomerulus detection method and system based on pathological image and deep learning
CN112150493B (en) Semantic guidance-based screen area detection method in natural scene
CN108090906B (en) Cervical image processing method and device based on region nomination
CN110059586B (en) Iris positioning and segmenting system based on cavity residual error attention structure
CN111160407B (en) Deep learning target detection method and system
CN112102229A (en) Intelligent industrial CT detection defect identification method based on deep learning
CN111798409A (en) Deep learning-based PCB defect data generation method
CN108305253A (en) A kind of pathology full slice diagnostic method based on more multiplying power deep learnings
CN111027475A (en) Real-time traffic signal lamp identification method based on vision
CN112200121A (en) Hyperspectral unknown target detection method based on EVM and deep learning
CN113313149B (en) Dish identification method based on attention mechanism and metric learning
WO2024021461A1 (en) Defect detection method and apparatus, device, and storage medium
CN112926652A (en) Fish fine-grained image identification method based on deep learning
CN114648806A (en) Multi-mechanism self-adaptive fundus image segmentation method
CN114299383A (en) Remote sensing image target detection method based on integration of density map and attention mechanism
Ling et al. A model for automatic recognition of vertical texts in natural scene images
CN110826629A (en) Otoscope image auxiliary diagnosis method based on fine-grained classification
CN113658206A (en) Plant leaf segmentation method
CN116245855B (en) Crop variety identification method, device, equipment and storage medium
CN113076909B (en) Automatic cell detection method
CN110889418A (en) Gas contour identification method
CN114663658B (en) Small sample AOI surface defect detection method with cross-domain migration capability

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221025