WO2022012110A1 - Method, system, device and storage medium for identifying cells in embryo light microscope images - Google Patents

Method, system, device and storage medium for identifying cells in embryo light microscope images

Info

Publication number
WO2022012110A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
light microscope
prediction frame
initial
embryonic
Prior art date
Application number
PCT/CN2021/090357
Other languages
English (en)
French (fr)
Inventor
王剑波 (Wang Jianbo)
李伟忠 (Li Weizhong)
王文军 (Wang Wenjun)
张宁锋 (Zhang Ningfeng)
Original Assignee
中山大学 (Sun Yat-sen University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中山大学 (Sun Yat-sen University)
Publication of WO2022012110A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/698 Matching; Classification

Definitions

  • The invention relates to the technical field of artificial intelligence, and in particular to a method for identifying cells in embryo light microscope images, a system for identifying cells in embryo light microscope images, computer equipment and a computer storage medium.
  • The Faster RCNN model (Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]//Advances in Neural Information Processing Systems. 2015: 91-99.) is a general object detection framework proposed by Ren Shaoqing in 2015. It is an improved version of Fast RCNN that achieves faster detection at the same detection accuracy.
  • Faster RCNN is a deep convolutional neural network model comprising three modules: a feature extraction network, an RPN network, and a classification and regression network. In the original paper, the authors used the VGG16 convolutional neural network as the feature extraction network.
  • Faster RCNN performs object detection as follows: first, the Faster RCNN model is trained and the trained weights are saved. At detection time, the model loads the trained weights and the image enters the feature extraction network, yielding a feature map; the feature map is input to the RPN network to generate recommendation boxes; the recommendation boxes then enter the classification and regression network, which screens them and corrects their boundaries to obtain predicted boxes; finally, the NMS algorithm removes redundant predicted boxes, the predicted boxes are marked on the image, and the image is output.
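  • As a rough illustration of this two-stage flow, a minimal sketch follows; `model.backbone`, `model.rpn`, `model.head` and `nms` are placeholder names for the stages named above, not the patent's implementation:

```python
# Illustrative sketch of the two-stage Faster RCNN inference flow described
# above. The model attributes and nms() are placeholder names (assumptions).
def faster_rcnn_detect(image, model, nms):
    feature_map = model.backbone(image)               # feature extraction network
    proposals = model.rpn(feature_map)                # generate recommendation boxes
    boxes, classes, scores = model.head(feature_map, proposals)  # screen + correct
    keep = nms(boxes, scores)                         # remove redundant predictions
    return boxes[keep], classes[keep], scores[keep]
```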
  • Although the original Faster RCNN model is fast at detection, the NMS algorithm it uses has a high missed-detection rate for overlapping objects, so some highly overlapping cells are missed.
  • Soft NMS (Bodla N, Singh B, Chellappa R, et al. Soft-NMS — Improving Object Detection With One Line of Code[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 5561-5569.) is an algorithm proposed in 2017 for detecting overlapping objects.
  • NMS (the non-maximum suppression algorithm) is a basic component of Faster RCNN, used at the prediction stage to remove redundant overlapping detection boxes.
  • Soft NMS improves on the original NMS algorithm and performs better when detecting objects with low overlap, but it performs poorly on highly overlapping and occluded objects, so some highly overlapping cells are still missed.
  • Therefore, the prior art offers no technical method that can automatically and efficiently detect and quantitatively evaluate the embryonic development of in vitro fertilized eggs.
  • The technical problem to be solved by the present invention is to provide a method, system, computer device and computer storage medium for identifying cells in embryo light microscope images that can significantly reduce the missed-detection rate.
  • The present invention provides a method for identifying cells in embryo light microscope images, including: preprocessing an embryo light microscope image; labeling the preprocessed embryo light microscope image; inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results, the Faster RCNN recognition model including a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network; and performing cell identification according to the cell prediction results.
  • The step of inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results includes: inputting the labeled embryo light microscope image into the feature extraction network for feature extraction to obtain a feature map; inputting the feature map into the RPN network for identification and screening to obtain recommendation boxes; inputting the feature map and the recommendation boxes into the Roi Align network for mapping and pooling to obtain recommended feature maps; inputting the recommended feature maps into the classification and regression network for classification and regression to obtain the initial coordinates, initial category and initial confidence of the prediction boxes; and inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes.
  • The step of inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes includes: inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network; taking the prediction box with the largest initial confidence as the reference prediction box; computing, from the initial coordinates, the overlap ratio of each prediction box with the reference prediction box; taking the prediction boxes whose overlap ratio is greater than or equal to a preset confidence threshold as the prediction boxes to be adjusted; updating the initial confidence of each prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box; and, for each prediction box, taking the initial coordinates as the target coordinates, the initial category as the target category, and the updated initial confidence as the target confidence.
  • the feature extraction network is a ResNet50 fully convolutional network.
  • The RPN network includes three kinds of recommendation boxes whose length-to-width ratios are 1:1.5, 1:1 and 1.5:1 respectively, and the maximum number of recommendation boxes is 80-120.
  • the neighborhood histogram equalization method is used to preprocess the embryonic light microscope image.
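  • Neighborhood (local) histogram equalization can be sketched with OpenCV's CLAHE (contrast-limited adaptive histogram equalization); the clip limit and tile size below are illustrative assumptions, since the source names only the method:

```python
import cv2

def preprocess_embryo_image(path):
    # Local (neighborhood) histogram equalization of a grayscale embryo image.
    # clipLimit and tileGridSize are illustrative assumptions; the source
    # specifies only the neighborhood histogram equalization method itself.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    # The recognition model takes an RGB three-channel input, so replicate
    # the single equalized channel.
    return cv2.cvtColor(equalized, cv2.COLOR_GRAY2RGB)
```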
  • The present invention also provides a system for identifying cells in embryo light microscope images, including: a preprocessing module for preprocessing an embryo light microscope image; a labeling module for labeling the preprocessed embryo light microscope image; a prediction module for inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results, the Faster RCNN recognition model including a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network; and an identification module for performing cell identification according to the cell prediction results.
  • the present invention also provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above identification method when executing the computer program.
  • the present invention also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the steps of the above identification method.
  • The invention achieves precise extraction of cells in embryo light microscope images and can effectively assist physicians in determining the embryo with optimal development. Specifically:
  • The invention constructs a brand-new C-NMS network. The C-NMS network adopts a non-maximum suppression algorithm suited to crowded detection targets: prediction boxes are screened according to their initial confidence, the overlap ratio between prediction boxes and the area ratio between prediction boxes, which significantly reduces the missed-detection rate;
  • The present invention abandons the rounding operation of the ROI Pooling network used by the original model and instead uses the Roi Align network's bilinear interpolation to estimate the coordinate values of non-integer points of the recommendation boxes, then unifies the box sizes through a pooling operation, which greatly improves the accuracy of predicted box positions.
  • The present invention also introduces a ResNet50 fully convolutional network into the feature extraction network, with deeper layers and a residual structure, which greatly improves feature extraction;
  • the invention also optimizes the parameters of the RPN network according to the unique physical state of the embryonic cells, thereby improving the efficiency of feature extraction.
  • Fig. 1 is a flowchart of a first embodiment of the method for identifying cells in embryo light microscope images of the present invention;
  • Fig. 2 is a flowchart of a second embodiment of the method for identifying cells in embryo light microscope images of the present invention;
  • Fig. 3 is a schematic diagram of the Faster RCNN recognition model in the present invention;
  • Fig. 4 is a schematic structural diagram of the system for identifying cells in embryo light microscope images of the present invention;
  • Fig. 5 is a schematic structural diagram of the prediction module in the present invention.
  • FIG. 1 shows a flowchart of the first embodiment of the method for identifying cells in the embryonic light microscope image of the present invention, which includes:
  • The embryo light microscope images are taken under a light microscope without staining, so the images appear gray overall. The transparency and heavy overlap of the cells blur the cell boundaries; at the same time, the brightness differences between embryo light microscope images are small and the color distinction between foreground and background is insufficient, all of which makes cell identification very difficult.
  • The present invention adopts the neighborhood histogram equalization method to preprocess the embryo light microscope images, improving the contrast of the images and the brightness differences between them, so that the cell boundaries in the images become clearer; cell recognition accuracy increases by nearly 3%.
  • Every embryo light microscope image in the training set, validation set and test set needs to be labeled. Specifically:
  • The annotation information of the training set is used to compute the loss, and then the gradients for optimization;
  • The annotation information of the validation set is used to compute the validation accuracy under the current training conditions and to evaluate whether the current model has converged and whether it is overfitting;
  • The annotation information of the test set is compared with the predictions of the final model to compute the final model's accuracy.
  • The existing Faster RCNN framework consists of three parts: a feature extraction network, an RPN network and a classification and regression network.
  • The Faster RCNN recognition model of the present invention is deeply optimized for the task of detecting embryonic cells; it includes a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network.
  • the present invention also introduces a new C-NMS method, which is specially used for the detection of highly overlapping objects in embryonic light microscope pictures, so that the missed detection rate of cells is significantly reduced.
  • The Faster RCNN recognition model can be trained in three ways: alternating training, approximate joint training and non-approximate joint training. The present invention uses approximate joint training.
  • First, the feature extraction network is initialized with ResNet50 weights pre-trained on ImageNet; the weights of the other networks are then randomly initialized from a normal distribution with mean 0.1 and variance 0.01. Keras and TensorFlow are used as the deep learning framework; the optimization algorithm used in backpropagation is SGD (Stochastic Gradient Descent) with a learning rate of 0.025 (decaying to 0.001 with the number of iteration steps), and a total of 70,000 iterations are performed.
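  • A minimal sketch of this training configuration in Keras/TensorFlow follows; the shape of the decay curve (polynomial here) is an assumption, since the source gives only the start rate (0.025), end rate (0.001) and iteration count (70,000):

```python
import tensorflow as tf

# Sketch of the training schedule described above: SGD with a learning rate
# of 0.025 decaying to 0.001 over 70,000 iterations. The decay curve is an
# assumption; the source states only the endpoints.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=0.025,
    decay_steps=70_000,
    end_learning_rate=0.001,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)

# Non-backbone weights: normal distribution with mean 0.1 and variance 0.01
# (i.e. a standard deviation of 0.1), as stated above.
dense_init = tf.keras.initializers.RandomNormal(mean=0.1, stddev=0.1)
```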
  • The present invention thus uses artificial intelligence learning technology, adopting the deeply optimized Faster RCNN recognition model to automatically detect embryo light microscope images of in vitro fertilized eggs and obtain the number of normally dividing cells, assisting physicians in determining the embryo with optimal development.
  • Fig. 2 and Fig. 3 illustrate the second embodiment of the method for identifying cells in embryo light microscope images of the present invention, which includes:
  • the feature extraction network generally uses the VGG network.
  • the feature extraction network is a ResNet50 fully convolutional network.
  • the ResNet50 network has deeper layers than the original VGG network, and has a residual structure, which is superior in feature extraction.
  • ResNet50 is a fully convolutional network with a total of 50 convolutional layers.
  • The input of the ResNet50 fully convolutional network is an RGB three-channel embryo light microscope image; no particular image size is required. The output is a tensor with 1024 channels containing all the features of the embryo light microscope image. The size of the tensor depends on the size of the input image: the tensor's length and width are reduced to 1/16 of the image's length and width.
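  • A sketch of such a 1024-channel, stride-16 feature extractor, slicing tf.keras' ResNet50 at the end of its conv4 stage (the layer name follows tf.keras' ResNet50 and is an assumption about how the patent's backbone would be cut):

```python
import tensorflow as tf

# Sketch of a 1024-channel, stride-16 backbone: ResNet50 truncated after its
# conv4 stage. The layer name "conv4_block6_out" is tf.keras' naming and is
# an assumption about how the patent's backbone is sliced.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(None, None, 3))
backbone = tf.keras.Model(
    inputs=base.input, outputs=base.get_layer("conv4_block6_out").output)

# For an H x W RGB input, backbone outputs an (H/16, W/16, 1024) feature map.
```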
  • the feature map is input into the RPN network for identification and screening processing to obtain a recommendation frame.
  • The RPN (Region Proposal Network) is a three-layer convolutional network. Its role is to generate candidate regions of interest (also called recommendation boxes) on the feature map and to perform a preliminary screening of them; these candidate regions may represent the locations of cells in the original image.
  • In practice, a small window is slid over the feature map and is fully connected to it. Each sliding window yields a low-dimensional vector that is used for classification and regression of the candidate regions.
  • The sliding window size is chosen as 3×3, and the window's center point is mapped onto the feature map to extract recommendation boxes there. Each center point extracts 9 different recommendation boxes on the feature map (3 sizes and 3 aspect ratios).
  • These 9 different recommendation boxes are fed into the classification network and regression network inside the RPN network, where the classification network judges whether the recommendation boxes contain cells, and the regression network corrects the box boundaries so that the recommendation boxes frame the cells accurately.
  • The classification network is a convolutional network with a 1×1 convolution kernel that produces a score for each recommendation box; the higher the score, the higher the probability of a cell.
  • The regression network is also a convolutional network with a 1×1 convolution kernel that regresses four quantities for each recommendation box: the box's center coordinates (x, y) and its length and width. About 20,000 recommendation boxes are generated over the entire feature map; they are screened before entering the classification and regression networks, after which roughly 100 remain.
  • The RPN network includes three kinds of recommendation boxes. In the prior art, the length-to-width ratios of the three kinds are generally 1:2, 1:1 and 2:1, and the maximum number of recommendation boxes is 300.
  • In view of the unique physical state of embryonic cells (their roughly round shape and a count below 11), the present invention optimizes some hyperparameters of the RPN network. Specifically, the present invention changes the aspect ratios of the three recommendation boxes in the RPN network from the existing (1:2, 1:1, 2:1) to (1:1.5, 1:1, 1.5:1) and the maximum number of recommendation boxes from 300 to 80-120 (preferably 100), greatly improving the efficiency of feature extraction.
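  • A sketch of generating the 9 anchors per feature-map location with these aspect ratios follows; the base size and scales are illustrative assumptions, as the source does not list them:

```python
import numpy as np

def make_anchors(base_size=16, scales=(8, 16, 32), ratios=(1 / 1.5, 1.0, 1.5)):
    """Generate the 9 recommendation boxes (3 sizes x 3 ratios) per location.

    The width:height ratios mirror the patent's (1:1.5, 1:1, 1.5:1); the base
    size and scales are illustrative assumptions, as the source omits them."""
    anchors = []
    for scale in scales:
        for ratio in ratios:  # ratio = width / height
            w = base_size * scale * np.sqrt(ratio)
            h = base_size * scale / np.sqrt(ratio)
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])  # centered on origin
    return np.array(anchors)  # shape (9, 4): [x1, y1, x2, y2]
```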
  • The Roi Align network has two inputs: the feature map output by the feature extraction network and the recommendation boxes output by the RPN network. After the feature map and the recommendation boxes are input, the Roi Align network first maps the recommendation boxes onto the feature map and then pools each box on the feature map to generate fixed-size recommended feature maps.
  • The recommendation boxes output by the RPN layer vary in size and therefore cannot be fed to the fully connected layer for category determination; the function of the ROI Align network is to pool the input recommendation boxes to a single size (14×14).
  • The present invention abandons the rounding operation of the ROI Pooling network used by the original model and instead uses bilinear interpolation to estimate the coordinate values of non-integer points of the recommendation boxes, then unifies the box sizes through a pooling operation, which improves the accuracy of predicted box positions.
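  • The bilinear estimate at a non-integer sampling point can be sketched as follows (a minimal illustration of the interpolation step only, not the full Roi Align pooling):

```python
import numpy as np

def bilinear_sample(feature_map, x, y):
    """Estimate the feature value at a non-integer point (x, y) by bilinear
    interpolation, as Roi Align does instead of ROI Pooling's rounding.
    feature_map: 2D array indexed [row(y), col(x)]. Border handling is minimal."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, feature_map.shape[1] - 1)
    y1 = min(y0 + 1, feature_map.shape[0] - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * feature_map[y0, x0] + dx * feature_map[y0, x1]
    bottom = (1 - dx) * feature_map[y1, x0] + dx * feature_map[y1, x1]
    return (1 - dy) * top + dy * bottom
```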
  • S206 Input the recommended feature map into a classification and regression network for classification and regression processing, and obtain initial coordinates, initial categories, and initial confidence levels of the prediction frame.
  • The input of the classification and regression network is the equally sized recommended feature maps output by the Roi Align network. The classification and regression network performs bounding-box regression on the recommended feature maps once more to obtain higher-precision boxes. It comprises a classification network and a regression network. Specifically: the classification network identifies and judges the recommended feature maps, generating the initial category and initial confidence of each prediction box; the regression network corrects the boundaries of the recommended feature maps, generating the initial coordinates of each prediction box.
  • S207 Input the initial coordinates, the initial category and the initial confidence of the prediction frame into the C-NMS network for screening processing to obtain the target coordinates, target category and target confidence of the prediction frame.
  • NMS (the non-maximum suppression method) is the standard way for object detection models to remove redundant targets at the prediction stage; it addresses the problem by setting an overlap threshold for each object class. NMS first generates a series of detection boxes and corresponding scores in the image under detection. Once the detection box with the highest score is selected, any adjacent detection box whose overlap with it exceeds the overlap threshold is also removed. The biggest problem with the non-maximum suppression algorithm is that it forces the scores of adjacent detection boxes to zero: if a real object appears in the overlap region, its detection fails and the algorithm's average detection rate drops. Since NMS uses only the overlap ratio between detected objects and not the properties of the objects themselves, it performs well on ordinary object detection problems but easily misses detections when the overlap between targets is high.
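  • For reference, the classic NMS procedure described above can be sketched as follows (an illustrative implementation; the IoU threshold is an assumed parameter):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Classic NMS as described above: keep the highest-scoring box, then drop
    every neighbor whose overlap with it exceeds the threshold (their scores
    are effectively forced to zero). Returns indices of the kept boxes."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_threshold]
    return keep
```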
  • The present invention therefore introduces a new, unique C-NMS network. The C-NMS network adopts a non-maximum suppression algorithm suited to crowded detection targets: the prediction boxes are screened according to the size of their initial confidence, the overlap ratio between prediction boxes and the area ratio between prediction boxes, and the target coordinates, target category and target confidence of the finally retained prediction boxes are output.
  • the steps of inputting the initial coordinates, initial category and initial confidence of the prediction frame into the C-NMS network for screening, and obtaining the target coordinates, target category and target confidence of the prediction frame include:
  • The C-NMS in the present invention uses not only the overlap ratio between detected objects but also the ratio between the areas occupied by the detected objects themselves: the higher the overlap ratio with the reference box and the closer the two areas (area ratio near 1), the more likely a prediction box is to be suppressed.
  • The initial confidence of a prediction box to be adjusted is updated from its overlap ratio and area ratio with the reference prediction box according to the formula s_i = s_i · f(iou(M, b_i), ar(M, b_i)), where s_i is the initial confidence and f is the score penalty function, whose piecewise definition is disclosed only as a formula image (PCTCN2021090357-appb-000001) in the original filing. Here iou(M, b_i) is the overlap ratio (intersection area over union area) of the prediction box b_i to be adjusted with the reference prediction box M, and ar(M, b_i) = min(area(M), area(b_i)) / max(area(M), area(b_i)) is their area ratio. The larger iou(M, b_i) is, the more the score of the box b_i to be adjusted is reduced; likewise, the larger ar(M, b_i) is, the more the score is reduced. Moreover, iou(M, b_i) and ar(M, b_i) are independent.
  • In other words, based on the size of the overlap and the area ratio between adjacent detection boxes, C-NMS assigns adjacent detection boxes a score penalty instead of zeroing their scores outright; the variables of the penalty function are the overlap ratio between adjacent detection boxes and the area ratio between adjacent detection boxes. Put simply, if a detection box largely overlaps the reference prediction box and their area ratio is close to 1, it receives a very low score; if it overlaps the reference box only slightly, or their area ratio is below a certain threshold, its original detection score is not much affected. Moreover, C-NMS requires no additional training and is easy to implement.
  • the prediction frame takes the initial coordinates as the target coordinates, the initial category as the target category, and the updated initial confidence level as the target confidence level.
  • Through step S207, the target coordinates, target category and target confidence of the prediction boxes are thus determined; a minimal sketch of this screening procedure is given below.
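  • A minimal runnable sketch of the C-NMS screening steps follows. The exact piecewise penalty function f is disclosed only as a formula image in the filing, so the separable exponential penalty below is an assumed stand-in that reproduces only the stated monotonicity (a larger overlap ratio and a larger area ratio each reduce the score more):

```python
import numpy as np

def c_nms(boxes, scores, overlap_thresh=0.5, score_thresh=0.05):
    """Sketch of the C-NMS screening steps described above.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) initial confidences.
    The penalty exp(-iou**2) * exp(-ar**2) is an ASSUMPTION standing in for
    the patent's piecewise penalty f, which is given only as a formula image."""
    def area(r):
        return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        return inter / (area(a) + area(b) - inter + 1e-9)

    scores = np.asarray(scores, dtype=float).copy()
    remaining = list(range(len(scores)))
    keep = []
    while remaining:
        m = max(remaining, key=lambda i: scores[i])   # reference prediction box
        keep.append(m)
        remaining.remove(m)
        for i in remaining[:]:
            o = iou(boxes[m], boxes[i])
            if o >= overlap_thresh:                   # prediction box to be adjusted
                ar = min(area(boxes[m]), area(boxes[i])) / \
                     (max(area(boxes[m]), area(boxes[i])) + 1e-9)
                scores[i] *= np.exp(-o ** 2) * np.exp(-ar ** 2)  # assumed penalty
            if scores[i] < score_thresh:              # drop near-zero boxes
                remaining.remove(i)
    return keep, scores                               # kept indices, target scores
```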
  • Through deep optimization of the Faster RCNN network, the present invention achieves precise extraction of cells in embryo light microscope images and can effectively assist physicians in determining the embryo with optimal development.
  • The present invention constructs a C-NMS network that flexibly adjusts detection scores by examining the overlap ratio and area ratio between detected objects, significantly reducing the missed-detection rate; it introduces a ResNet50 fully convolutional network, with deeper layers and a residual structure, into the feature extraction network, greatly improving feature extraction; in addition, the present invention optimizes the parameters of the RPN network for the unique physical state of embryonic cells, improving the efficiency of feature extraction.
  • FIG. 4 shows the specific structure of the system 100 for identifying cells in embryo light microscope images of the present invention, which includes:
  • the preprocessing module 1 is used to preprocess the embryonic light microscope images.
  • The preprocessing module 1 uses the neighborhood histogram equalization method to preprocess the embryo light microscope images, improving the contrast of the images and the brightness differences between them, so that the cell boundaries in the embryo light microscope images are clearer.
  • The labeling module 2 is used to label the preprocessed embryo light microscope images. Before entering the Faster RCNN recognition model, every embryo light microscope image in the training, validation and test sets must be labeled by the labeling module 2, wherein the annotation information of the training set is used to compute the loss and then the gradients for optimization; the annotation information of the validation set is used to compute the validation accuracy under the current training conditions and to evaluate whether the current model has converged and whether it is overfitting; and the annotation information of the test set is compared with the predictions of the final model to compute the final model's accuracy.
  • the prediction module 3 is used to input the labeled embryonic light microscope pictures into the pre-trained Faster RCNN recognition model to generate cell prediction results.
  • the Faster RCNN recognition model includes feature extraction network, RPN network, Roi Align network, classification regression network and C-NMS network.
  • the identification module 4 is used for cell identification according to the cell prediction result.
  • The present invention thus uses computer-aided means and artificial intelligence learning technology, adopting the deeply optimized Faster RCNN recognition model to automatically detect embryo light microscope images of in vitro fertilized eggs and obtain the number of normally dividing cells, so as to assist doctors in determining the embryo with optimal development.
  • the prediction module 3 includes:
  • the feature extraction unit 31 is configured to input the labeled embryonic light microscope picture into a feature extraction network for feature extraction to obtain a feature map.
  • The feature extraction unit 31 introduces a ResNet50 fully convolutional network, which is deeper than the original VGG network, has a residual structure and is superior at feature extraction.
  • the RPN unit 32 is configured to input the feature map into the RPN network for identification and screening processing to obtain a recommendation frame.
  • The RPN unit 32 optimizes some hyperparameters of the RPN network. Specifically, it changes the aspect ratios of the three recommendation boxes in the RPN network from the existing (1:2, 1:1, 2:1) to (1:1.5, 1:1, 1.5:1) and the maximum number of recommendation boxes from 300 to 80-120 (preferably 100), greatly improving the efficiency of feature extraction.
  • the Roi Align unit 33 is used to input the feature map and the recommendation frame into the Roi Align network for mapping and pooling processing to obtain the recommended feature map.
  • the classification and regression unit 34 is used to input the recommended feature map into the classification and regression network for classification and regression processing, and obtain the initial coordinates, the initial category and the initial confidence of the prediction frame;
  • the C-NMS unit 35 is configured to input the initial coordinates, the initial category and the initial confidence of the prediction frame into the C-NMS network for screening processing to obtain the target coordinates, target category and target confidence of the prediction frame.
  • the C-NMS network adopts a non-maximum suppression algorithm suitable for crowded detection objects. According to the size of the initial confidence of the prediction frame, the overlap ratio between the prediction frames and the area ratio between the prediction frames, the prediction frames are screened. Output the target coordinates, target category and target confidence of the final screened prediction frame.
  • the present invention also provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above identification method when executing the computer program.
  • the present invention also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above identification method are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Apparatus Associated With Microorganisms And Enzymes (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

A method for identifying cells in embryo light microscope images, an identification system (100), a computer device and a computer-readable storage medium. The method comprises: preprocessing an embryo light microscope image (S101); labeling the preprocessed embryo light microscope image (S102); inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results (S103), the Faster RCNN recognition model comprising a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network; and performing cell identification according to the cell prediction results (S104). Through deep optimization of the Faster RCNN network, the method achieves precise extraction of cells in embryo light microscope images; it also constructs a brand-new C-NMS network which, by examining the overlap ratio and area ratio between detected objects, flexibly adjusts the detection scores and significantly reduces the missed-detection rate.

Description

Method, system, device and storage medium for identifying cells in embryo light microscope images

Technical Field

The present invention relates to the technical field of artificial intelligence, and in particular to a method for identifying cells in embryo light microscope images, a system for identifying cells in embryo light microscope images, computer equipment and a computer storage medium.
Background Art

With continuing medical progress, in vitro fertilization (IVF) techniques have matured, the number of patients attempting IVF keeps growing, and the workload and intensity for reproductive physicians grow accordingly. Counting embryonic cells and assessing embryo quality from embryo light microscope images demands both highly precise judgment and endlessly repeated image review. At present, embryonic cell counting and quality assessment are performed entirely by hand, with no corresponding automated assistive technology available at home or abroad. Reducing physicians' repetitive labor while handling the high cell overlap and indistinct boundaries in embryo images, and improving the accuracy of judgment, is an important and necessary task for current medical artificial intelligence.

There have been some attempts to apply deep learning to similar medical problems, for example:

Technique 1: The Faster RCNN model (Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]//Advances in Neural Information Processing Systems. 2015: 91-99.) is a general object detection framework proposed by Ren Shaoqing in 2015. It is an improved version of Fast RCNN that achieves faster detection at the same detection accuracy. Faster RCNN is a deep convolutional neural network model comprising three modules: a feature extraction network, an RPN network, and a classification and regression network. In the original paper, the authors used the VGG16 convolutional neural network as the feature extraction network. Faster RCNN performs object detection as follows: first, the Faster RCNN model is trained and the trained weights are saved; at detection time, the model loads the trained weights and the image passes through the feature extraction network to obtain a feature map; the feature map is then input to the RPN network to generate recommendation boxes; the recommendation boxes enter the classification and regression network for screening and boundary correction, yielding predicted boxes; finally, the NMS algorithm removes redundant predicted boxes, the predicted boxes are drawn on the image, and the image is output. Although the original Faster RCNN model is fast at detection, the NMS algorithm it uses has a high missed-detection rate for overlapping objects, causing some highly overlapping cells to be missed.

Technique 2: Soft NMS (Bodla N, Singh B, Chellappa R, et al. Soft-NMS — Improving Object Detection With One Line of Code[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 5561-5569.) is an algorithm proposed in 2017 for detecting overlapping objects. NMS (the non-maximum suppression algorithm) is a basic component of Faster RCNN, used to remove redundant overlapping detection boxes at the model's prediction stage. Soft NMS improves on the original NMS algorithm and performs better when detecting objects with low overlap, but it performs poorly on highly overlapping and occluded objects, causing some highly overlapping cells to be missed.

Therefore, the prior art offers no technical method that can automatically and efficiently detect and quantitatively evaluate the embryonic development of in vitro fertilized eggs.
Summary of the Invention

The technical problem to be solved by the present invention is to provide a method, system, computer device and computer storage medium for identifying cells in embryo light microscope images that can significantly reduce the missed-detection rate.

To solve the above technical problem, the present invention provides a method for identifying cells in embryo light microscope images, comprising: preprocessing an embryo light microscope image; labeling the preprocessed embryo light microscope image; inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results, the Faster RCNN recognition model comprising a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network; and performing cell identification according to the cell prediction results.

As an improvement of the above scheme, the step of inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results comprises: inputting the labeled embryo light microscope image into the feature extraction network for feature extraction to obtain a feature map; inputting the feature map into the RPN network for identification and screening to obtain recommendation boxes; inputting the feature map and the recommendation boxes into the Roi Align network for mapping and pooling to obtain recommended feature maps; inputting the recommended feature maps into the classification and regression network for classification and regression to obtain the initial coordinates, initial category and initial confidence of the prediction boxes; and inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes.

As an improvement of the above scheme, the step of inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes comprises: inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network; taking the prediction box with the largest initial confidence as the reference prediction box; computing, from the initial coordinates, the overlap ratio of each prediction box with the reference prediction box; taking the prediction boxes whose overlap ratio is greater than or equal to a preset confidence threshold as the prediction boxes to be adjusted; updating the initial confidence of each prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box; and, for each prediction box, taking the initial coordinates as the target coordinates, the initial category as the target category, and the updated initial confidence as the target confidence.

As an improvement of the above scheme, the step of updating the initial confidence of the prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box comprises: updating the initial confidence of the prediction box to be adjusted according to the formula s_i = s_i · f(iou(M, b_i), ar(M, b_i)), where s_i is the initial confidence and f is the score penalty function, whose piecewise definition is disclosed only as a formula image (PCTCN2021090357-appb-000001) in the original filing; a = iou(M, b_i) is the overlap ratio of the prediction box b_i to be adjusted with the reference prediction box M, and b = ar(M, b_i) is the area ratio of the prediction box b_i to be adjusted with the reference prediction box M.

As an improvement of the above scheme, the feature extraction network is a ResNet50 fully convolutional network.

As an improvement of the above scheme, the RPN network includes three kinds of recommendation boxes whose length-to-width ratios are 1:1.5, 1:1 and 1.5:1 respectively, and the maximum number of recommendation boxes is 80-120.

As an improvement of the above scheme, the embryo light microscope image is preprocessed using the neighborhood histogram equalization method.

Correspondingly, the present invention also provides a system for identifying cells in embryo light microscope images, comprising: a preprocessing module for preprocessing an embryo light microscope image; a labeling module for labeling the preprocessed embryo light microscope image; a prediction module for inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results, the Faster RCNN recognition model comprising a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network; and an identification module for performing cell identification according to the cell prediction results.

Correspondingly, the present invention also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the above identification method when executing the computer program.

Correspondingly, the present invention also provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the above identification method when executed by a processor.
Implementing the present invention has the following beneficial effects:

Through deep optimization of the Faster RCNN network, the present invention achieves precise extraction of cells in embryo light microscope images and can effectively assist physicians in determining the embryo with optimal development. Specifically:

The present invention constructs a brand-new C-NMS network. The C-NMS network adopts a non-maximum suppression algorithm suited to crowded detection targets, screening the prediction boxes according to the size of their initial confidence, the overlap ratio between prediction boxes and the area ratio between prediction boxes, which significantly reduces the missed-detection rate;

At the same time, the present invention abandons the rounding operation of the ROI Pooling network used by the original model and instead uses the Roi Align network's bilinear interpolation to estimate the coordinate values of non-integer points of the recommendation boxes, then unifies the sizes of the recommendation boxes through a pooling operation, greatly improving the accuracy of predicted box positions.

The present invention also introduces a ResNet50 fully convolutional network into the feature extraction network, with deeper layers and a residual structure, greatly improving feature extraction;

The present invention further optimizes the parameters of the RPN network for the unique physical state of embryonic cells, improving the efficiency of feature extraction.
Brief Description of the Drawings

Fig. 1 is a flowchart of a first embodiment of the method for identifying cells in embryo light microscope images of the present invention;

Fig. 2 is a flowchart of a second embodiment of the method for identifying cells in embryo light microscope images of the present invention;

Fig. 3 is a schematic diagram of the Faster RCNN recognition model in the present invention;

Fig. 4 is a schematic structural diagram of the system for identifying cells in embryo light microscope images of the present invention;

Fig. 5 is a schematic structural diagram of the prediction module in the present invention.

Detailed Description

To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described below in further detail with reference to the drawings.
Referring to Fig. 1, which shows a flowchart of the first embodiment of the method for identifying cells in embryo light microscope images of the present invention, the method comprises:

S101: Preprocess the embryo light microscope image.

Embryo light microscope images are taken under a light microscope without staining, so the images appear gray overall. The transparency and heavy overlap of the cells blur the cell boundaries; at the same time, the brightness differences between embryo light microscope images are small and the color distinction between foreground and background is insufficient, all of which makes cell identification very difficult.

To enhance the color differences between foreground and background and between individual cells, the present invention preprocesses the embryo light microscope images with the neighborhood histogram equalization method, improving the contrast of the images and the brightness differences between them, so that the cell boundaries in the images become clearer; cell recognition accuracy increases by nearly 3%.

S102: Label the preprocessed embryo light microscope image.

Before entering the Faster RCNN recognition model, every embryo light microscope image in the training, validation and test sets must be labeled. Specifically:

the annotation information of the training set is used to compute the loss, and then the gradients for optimization;

the annotation information of the validation set is used to compute the validation accuracy under the current training conditions and to evaluate whether the current model has converged and whether it is overfitting;

the annotation information of the test set is compared with the predictions of the final model to compute the final model's accuracy.

S103: Input the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results.

The existing Faster RCNN framework consists of three parts: a feature extraction network, an RPN network and a classification and regression network. Unlike the prior art, the Faster RCNN recognition model of the present invention is deeply optimized for the task of detecting embryonic cells and comprises a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network. The present invention also introduces a brand-new C-NMS method dedicated to detecting highly overlapping objects in embryo light microscope images, so that the cell missed-detection rate is significantly reduced.

Further, the Faster RCNN recognition model can be trained in three ways: alternating training, approximate joint training and non-approximate joint training.

The present invention uses approximate joint training. First, the feature extraction network is initialized with ResNet50 weights pre-trained on ImageNet; the weights of the other networks are then randomly initialized from a normal distribution with mean 0.1 and variance 0.01; Keras and TensorFlow are used as the deep learning framework, the optimization algorithm used in backpropagation is SGD (Stochastic Gradient Descent) with a learning rate of 0.025 (decaying to 0.001 with the number of iteration steps), and a total of 70,000 iterations are performed.

S104: Perform cell identification according to the cell prediction results.

As can be seen from the above, the present invention uses artificial intelligence learning technology, adopting the deeply optimized Faster RCNN recognition model to automatically detect embryo light microscope images of in vitro fertilized eggs and obtain the number of normally dividing cells, so as to assist physicians in determining the embryo with optimal development.
Referring to Fig. 2 and Fig. 3, which show the second embodiment of the method for identifying cells in embryo light microscope images of the present invention, the method comprises:

S201: Preprocess the embryo light microscope image.

S202: Label the preprocessed embryo light microscope image.

S203: Input the labeled embryo light microscope image into the feature extraction network for feature extraction to obtain a feature map.

In the prior art, the feature extraction network is generally a VGG network. Unlike the prior art, in the present invention the feature extraction network is a ResNet50 fully convolutional network. The ResNet50 network is deeper than the original VGG network and has a residual structure, making it superior at feature extraction.

ResNet50 is a fully convolutional network with 50 convolutional layers in total. The input of the ResNet50 fully convolutional network is an RGB three-channel embryo light microscope image, with no requirement on image size; the output is a tensor with 1024 channels containing all the features of the embryo light microscope image. The size of the tensor depends on the size of the input image: the tensor's length and width are reduced to 1/16 of the image's length and width.

S204: Input the feature map into the RPN network for identification and screening to obtain recommendation boxes.

The RPN (Region Proposal Network) is a three-layer convolutional network whose role is to generate candidate regions of interest (also called recommendation boxes) on the feature map and to perform a preliminary screening of them; these candidate regions may represent the locations of cells in the original image.

In practice, a small window is slid over the feature map and is fully connected to it. Each sliding window yields a low-dimensional vector that is used for classification and regression of the candidate regions. In fact, the sliding window size is chosen as 3×3, and the window's center point is mapped onto the feature map to extract recommendation boxes there. Each center point extracts 9 different recommendation boxes on the feature map (3 sizes and 3 aspect ratios). These 9 different recommendation boxes are fed into the classification network and regression network inside the RPN network, where the classification network judges whether the recommendation boxes contain cells, and the regression network corrects the box boundaries so that the recommendation boxes frame the cells accurately. The classification network is a convolutional network with a 1×1 convolution kernel that produces a score for each recommendation box; the higher the score, the higher the probability of a cell. The regression network is also a convolutional network with a 1×1 convolution kernel that regresses four quantities for each recommendation box: the box's center coordinates (x, y) and its length and width. About 20,000 recommendation boxes are generated over the entire feature map; they are screened before entering the classification and regression networks, after which roughly 100 remain.

The RPN network includes three kinds of recommendation boxes. In the prior art, the length-to-width ratios of the three kinds are generally 1:2, 1:1 and 2:1, and the maximum number of recommendation boxes is 300. In view of the unique physical state of embryonic cells (their roughly round shape and a count below 11), the present invention optimizes some hyperparameters of the RPN network. Specifically, the present invention changes the aspect ratios of the three recommendation boxes in the RPN network from the existing (1:2, 1:1, 2:1) to (1:1.5, 1:1, 1.5:1) and the maximum number of recommendation boxes from 300 to 80-120 (preferably 100), greatly improving the efficiency of feature extraction.

S205: Input the feature map and the recommendation boxes into the Roi Align network for mapping and pooling to obtain recommended feature maps.

The Roi Align network has two inputs: the feature map output by the feature extraction network and the recommendation boxes output by the RPN network. After the feature map and the recommendation boxes are input, the Roi Align network first maps the recommendation boxes onto the feature map and then pools them on the feature map to generate fixed-size recommended feature maps (proposal feature maps).

It should be noted that the recommendation boxes output by the RPN layer vary in size and cannot be fed to the fully connected layer for category determination; the role of the ROI Align network is to pool the input recommendation boxes to a single size (14×14). Unlike the prior art, the present invention abandons the rounding operation of the ROI Pooling network used by the original model and instead estimates the coordinate values of non-integer points of the recommendation boxes by bilinear interpolation, then unifies the box sizes through a pooling operation, which improves the accuracy of predicted box positions.

S206: Input the recommended feature maps into the classification and regression network for classification and regression to obtain the initial coordinates, initial category and initial confidence of the prediction boxes.

The input of the classification and regression network is the equally sized recommended feature maps output by the Roi Align network; the classification and regression network performs bounding-box regression on the recommended feature maps once more to obtain higher-precision boxes. The classification and regression network comprises a classification network and a regression network. Specifically: the classification network identifies and judges the recommended feature maps, generating the initial category and initial confidence of each prediction box; the regression network corrects the boundaries of the recommended feature maps, generating the initial coordinates of each prediction box.
S207: Input the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes.

Traditional object detection pipelines often use multi-scale sliding windows and compute features for each window scored by per-class foreground/background scores. However, adjacent windows tend to have correlated scores (confidences), which increases false positives in the detection results. To avoid this, NMS (the non-maximum suppression method) is usually applied to post-process the prediction boxes and obtain the final ones.

NMS is the standard way for object detection models to remove redundant targets at the prediction stage; it addresses the problem by setting an overlap threshold for each object class. NMS first generates a series of detection boxes and corresponding scores in the image under detection. Once the detection box with the highest score is selected, any adjacent detection box whose overlap with it exceeds the overlap threshold is also removed. The biggest problem with the non-maximum suppression algorithm is that it forces the scores of adjacent detection boxes to zero: if a real object appears in the overlap region, its detection fails and the algorithm's average detection rate drops. Since NMS uses only the overlap ratio between detected objects and not the properties of the objects themselves, it performs well on ordinary object detection problems but easily misses detections when the overlap between targets is high.

The present invention therefore introduces a brand-new, unique C-NMS network. The C-NMS network adopts a non-maximum suppression algorithm suited to crowded detection targets, screening the prediction boxes according to the size of their initial confidence, the overlap ratio between prediction boxes and the area ratio between prediction boxes, and outputting the target coordinates, target category and target confidence of the finally retained prediction boxes.

Specifically, the step of inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes comprises:

(1) inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network;

(2) taking the prediction box with the largest initial confidence as the reference prediction box;

(3) computing, from the initial coordinates, the overlap ratio of each prediction box with the reference prediction box;

(4) taking the prediction boxes whose overlap ratio is greater than or equal to the preset confidence threshold as the prediction boxes to be adjusted;

(5) updating the initial confidence of each prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box.

Thus the C-NMS of the present invention uses not only the overlap ratio between detected objects but also the ratio between the areas occupied by the detected objects themselves: the higher the overlap ratio with the reference box and the closer the two areas (area ratio near 1), the more likely a cell's prediction box is to be suppressed. In ultra-high-overlap detection tasks such as embryonic cell detection, this algorithm significantly reduces the missed-detection rate.
Specifically, the method of updating the initial confidence of the prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box is: update the initial confidence of the prediction box to be adjusted according to the formula s_i = s_i · f(iou(M, b_i), ar(M, b_i)), where s_i is the initial confidence and f is the score penalty function, whose piecewise definition is disclosed only as a formula image (PCTCN2021090357-appb-000002) in the original filing. Here a = iou(M, b_i) is the overlap ratio of the prediction box b_i to be adjusted with the reference prediction box M, i.e. iou(M, b_i) = (intersection area of M and b_i)/(union area of M and b_i); and b = ar(M, b_i) is the area ratio of the prediction box b_i to be adjusted with the reference prediction box M, i.e. ar(M, b_i) = min(M, b_i)/max(M, b_i).

It should be noted that the larger iou(M, b_i) is, the more the score of the prediction box b_i to be adjusted is reduced; likewise, the larger ar(M, b_i) is, the more the score is reduced. Moreover, iou(M, b_i) and ar(M, b_i) are independent.

In summary, based on the size of the overlap and the area ratio between adjacent detection boxes, C-NMS assigns adjacent detection boxes a score penalty function instead of zeroing their scores outright; the variables of this penalty function are the overlap ratio between adjacent detection boxes and the area ratio between adjacent detection boxes. Put simply, if a detection box largely overlaps the reference prediction box and their area ratio is close to 1, it receives a very low score; if it overlaps the reference prediction box only slightly, or their area ratio is below a certain threshold, its original detection score is not much affected. Moreover, C-NMS requires no additional training and is easy to implement.
(6) For each prediction box, the initial coordinates are taken as the target coordinates, the initial category as the target category, and the updated initial confidence as the target confidence.

Step S207 thus further determines the target coordinates, target category and target confidence of the prediction boxes.

S208: Perform cell identification according to the cell prediction results.

As can be seen from the above, through deep optimization of the Faster RCNN network, the present invention achieves precise extraction of cells in embryo light microscope images and can effectively assist physicians in determining the embryo with optimal development. Specifically: the present invention constructs a C-NMS network that flexibly adjusts detection scores by examining the overlap ratio and area ratio between detected objects, significantly reducing the missed-detection rate; it introduces a ResNet50 fully convolutional network, with deeper layers and a residual structure, into the feature extraction network, greatly improving feature extraction; and it further optimizes the parameters of the RPN network for the unique physical state of embryonic cells, improving the efficiency of feature extraction.
Referring to Fig. 4, which shows the specific structure of the system 100 for identifying cells in embryo light microscope images of the present invention, the system comprises:

a preprocessing module 1 for preprocessing the embryo light microscope images. The preprocessing module 1 uses the neighborhood histogram equalization method to preprocess the embryo light microscope images, improving the contrast of the images and the brightness differences between them, so that the cell boundaries in the images are clearer;

a labeling module 2 for labeling the preprocessed embryo light microscope images. Before entering the Faster RCNN recognition model, every embryo light microscope image in the training, validation and test sets must be labeled by the labeling module 2, wherein the annotation information of the training set is used to compute the loss and then the gradients for optimization; the annotation information of the validation set is used to compute the validation accuracy under the current training conditions and to evaluate whether the current model has converged and whether it is overfitting; and the annotation information of the test set is compared with the predictions of the final model to compute the final model's accuracy;

a prediction module 3 for inputting the labeled embryo light microscope images into the pre-trained Faster RCNN recognition model to generate cell prediction results, the Faster RCNN recognition model comprising a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network;

an identification module 4 for performing cell identification according to the cell prediction results.

Thus, by computer-aided means and artificial intelligence learning technology, the present invention adopts the deeply optimized Faster RCNN recognition model to automatically detect embryo light microscope images of in vitro fertilized eggs and obtain the number of normally dividing cells, so as to assist physicians in determining the embryo with optimal development.

As shown in Fig. 5, the prediction module 3 comprises:

a feature extraction unit 31 for inputting the labeled embryo light microscope image into the feature extraction network for feature extraction to obtain a feature map. The feature extraction unit 31 introduces a ResNet50 fully convolutional network, which is deeper than the original VGG network, has a residual structure and is superior at feature extraction;

an RPN unit 32 for inputting the feature map into the RPN network for identification and screening to obtain recommendation boxes. The RPN unit 32 optimizes some hyperparameters of the RPN network: specifically, it changes the aspect ratios of the three recommendation boxes in the RPN network from the existing (1:2, 1:1, 2:1) to (1:1.5, 1:1, 1.5:1) and the maximum number of recommendation boxes from 300 to 80-120 (preferably 100), greatly improving the efficiency of feature extraction;

a Roi Align unit 33 for inputting the feature map and the recommendation boxes into the Roi Align network for mapping and pooling to obtain recommended feature maps;

a classification and regression unit 34 for inputting the recommended feature maps into the classification and regression network for classification and regression to obtain the initial coordinates, initial category and initial confidence of the prediction boxes;

a C-NMS unit 35 for inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes.

It should be noted that the C-NMS unit 35 incorporates the unique C-NMS network, which adopts a non-maximum suppression algorithm suited to crowded detection targets, screens the prediction boxes according to the size of their initial confidence, the overlap ratio between prediction boxes and the area ratio between prediction boxes, and outputs the target coordinates, target category and target confidence of the finally retained prediction boxes.
In operation, the C-NMS unit 35 first takes the prediction box with the largest initial confidence as the reference prediction box; it then computes, from the initial coordinates, the overlap ratio of each prediction box with the reference prediction box, and takes the prediction boxes whose overlap ratio is greater than or equal to the preset confidence threshold as the prediction boxes to be adjusted; finally, it updates the initial confidence of each prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box. Specifically, the initial confidence is updated according to the formula s_i = s_i · f(iou(M, b_i), ar(M, b_i)), where s_i is the initial confidence and f is the score penalty function, whose piecewise definition is disclosed only as formula images (PCTCN2021090357-appb-000003 and PCTCN2021090357-appb-000004) in the original filing; a = iou(M, b_i) is the overlap ratio of the prediction box b_i to be adjusted with the reference prediction box M, i.e. iou(M, b_i) = (intersection area of M and b_i)/(union area of M and b_i); and b = ar(M, b_i) is the area ratio of the prediction box b_i to be adjusted with the reference prediction box M, i.e. ar(M, b_i) = min(M, b_i)/max(M, b_i).
Correspondingly, the present invention also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the above identification method when executing the computer program. The present invention likewise provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the above identification method when executed by a processor.

The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements are also regarded as falling within the scope of protection of the present invention.

Claims (10)

  1. A method for identifying cells in embryo light microscope images, characterized by comprising:
    preprocessing an embryo light microscope image;
    labeling the preprocessed embryo light microscope image;
    inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results, the Faster RCNN recognition model comprising a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network;
    performing cell identification according to the cell prediction results.
  2. The method for identifying cells in embryo light microscope images of claim 1, characterized in that the step of inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results comprises:
    inputting the labeled embryo light microscope image into the feature extraction network for feature extraction to obtain a feature map;
    inputting the feature map into the RPN network for identification and screening to obtain recommendation boxes;
    inputting the feature map and the recommendation boxes into the Roi Align network for mapping and pooling to obtain recommended feature maps;
    inputting the recommended feature maps into the classification and regression network for classification and regression to obtain the initial coordinates, initial category and initial confidence of the prediction boxes;
    inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes.
  3. The method for identifying cells in embryo light microscope images of claim 2, characterized in that the step of inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network for screening to obtain the target coordinates, target category and target confidence of the prediction boxes comprises:
    inputting the initial coordinates, initial category and initial confidence of the prediction boxes into the C-NMS network;
    taking the prediction box with the largest initial confidence as the reference prediction box;
    computing, from the initial coordinates, the overlap ratio of each prediction box with the reference prediction box;
    taking the prediction boxes whose overlap ratio is greater than or equal to a preset confidence threshold as the prediction boxes to be adjusted;
    updating the initial confidence of each prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box;
    for each prediction box, taking the initial coordinates as the target coordinates, the initial category as the target category, and the updated initial confidence as the target confidence.
  4. The method for identifying cells in embryo light microscope images of claim 3, characterized in that the step of updating the initial confidence of the prediction box to be adjusted according to its overlap ratio and area ratio with the reference prediction box comprises:
    updating the initial confidence of the prediction box to be adjusted according to the formula s_i = s_i · f(iou(M, b_i), ar(M, b_i)), wherein
    s_i is the initial confidence, and f is the score penalty function, whose piecewise definition is given only as a formula image (PCTCN2021090357-appb-100001) in the original filing;
    a = iou(M, b_i) is the overlap ratio of the prediction box b_i to be adjusted with the reference prediction box M;
    b = ar(M, b_i) is the area ratio of the prediction box b_i to be adjusted with the reference prediction box M.
  5. The method for identifying cells in embryo light microscope images of claim 2, characterized in that the feature extraction network is a ResNet50 fully convolutional network.
  6. The method for identifying cells in embryo light microscope images of claim 2, characterized in that the RPN network includes three kinds of recommendation boxes whose length-to-width ratios are 1:1.5, 1:1 and 1.5:1 respectively, and the maximum number of recommendation boxes is 80-120.
  7. The method for identifying cells in embryo light microscope images of claim 1, characterized in that the embryo light microscope image is preprocessed using the neighborhood histogram equalization method.
  8. A system for identifying cells in embryo light microscope images, characterized by comprising:
    a preprocessing module for preprocessing an embryo light microscope image;
    a labeling module for labeling the preprocessed embryo light microscope image;
    a prediction module for inputting the labeled embryo light microscope image into a pre-trained Faster RCNN recognition model to generate cell prediction results, the Faster RCNN recognition model comprising a feature extraction network, an RPN network, a Roi Align network, a classification and regression network and a C-NMS network;
    an identification module for performing cell identification according to the cell prediction results.
  9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
PCT/CN2021/090357 2020-07-17 2021-04-27 Method, system, device and storage medium for identifying cells in embryo light microscope images WO2022012110A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010689861.0 2020-07-17
CN202010689861.0A CN112069874B (zh) 2020-07-17 2020-07-17 Method, system, device and storage medium for identifying cells in embryo light microscope images

Publications (1)

Publication Number Publication Date
WO2022012110A1 true WO2022012110A1 (zh) 2022-01-20

Family

ID=73657532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090357 WO2022012110A1 (zh) 2020-07-17 2021-04-27 Method, system, device and storage medium for identifying cells in embryo light microscope images

Country Status (2)

Country Link
CN (1) CN112069874B (zh)
WO (1) WO2022012110A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115825A (zh) * 2022-05-27 2022-09-27 腾讯科技(深圳)有限公司 图像中的对象检测方法、装置、计算机设备和存储介质
CN115937214A (zh) * 2023-03-08 2023-04-07 深圳丹伦基因科技有限公司 一种基于深度学习的间充质干细胞衰老检测方法
CN116051560A (zh) * 2023-03-31 2023-05-02 武汉互创联合科技有限公司 基于胚胎多维度信息融合的胚胎动力学智能预测***
CN116778482A (zh) * 2023-08-17 2023-09-19 武汉互创联合科技有限公司 胚胎图像卵裂球目标检测方法、计算机设备及存储介质
CN117095180A (zh) * 2023-09-01 2023-11-21 武汉互创联合科技有限公司 基于分期识别的胚胎发育阶段预测与质量评估方法
CN118015006A (zh) * 2024-04-10 2024-05-10 武汉互创联合科技有限公司 基于动态圆形卷积的胚胎细胞空泡检测方法及电子设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069874B (zh) * 2020-07-17 2022-07-05 中山大学 胚胎光镜图像中细胞的识别方法及***、设备及存储介质
CN112580786B (zh) * 2020-12-22 2021-09-28 之江实验室 一种用于ReID的神经网络构造方法及其训练方法
CN112819821B (zh) * 2021-03-01 2022-06-17 南华大学 一种细胞核图像检测方法
CN113111879B (zh) * 2021-04-30 2023-11-10 上海睿钰生物科技有限公司 一种细胞检测的方法和***
CN117649660B (zh) * 2024-01-29 2024-04-19 武汉互创联合科技有限公司 基于全局信息融合的细胞***均衡度评估方法及终端

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170206431A1 (en) * 2016-01-20 2017-07-20 Microsoft Technology Licensing, Llc Object detection and classification in images
US20190065817A1 (en) * 2017-08-29 2019-02-28 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
CN109598224A (zh) * 2018-11-27 2019-04-09 微医云(杭州)控股有限公司 基于区域推荐卷积神经网络的骨髓切片中白细胞检测方法
CN110110799A (zh) * 2019-05-13 2019-08-09 广州锟元方青医疗科技有限公司 细胞分类方法、装置、计算机设备和存储介质
CN110363218A (zh) * 2019-06-06 2019-10-22 张孝东 一种胚胎无创评估方法及装置
CN112069874A (zh) * 2020-07-17 2020-12-11 中山大学 胚胎光镜图像中细胞的识别方法及***、设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056101A (zh) * 2016-06-29 2016-10-26 哈尔滨理工大学 用于人脸检测的非极大值抑制方法
CN108427912B (zh) * 2018-02-05 2020-06-05 西安电子科技大学 基于稠密目标特征学习的光学遥感图像目标检测方法
CN108550133B (zh) * 2018-03-02 2021-05-18 浙江工业大学 一种基于Faster R-CNN的癌细胞检测方法
CN108537775A (zh) * 2018-03-02 2018-09-14 浙江工业大学 一种基于深度学习检测的癌细胞跟踪方法
CN109255320B (zh) * 2018-09-03 2020-09-25 电子科技大学 一种改进的非极大值抑制方法
CN109886128B (zh) * 2019-01-24 2023-05-23 南京航空航天大学 一种低分辨率下的人脸检测方法
CN110736747B (zh) * 2019-09-03 2022-08-19 深思考人工智能机器人科技(北京)有限公司 一种细胞液基涂片镜下定位的方法及***


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115825A (zh) * 2022-05-27 2022-09-27 腾讯科技(深圳)有限公司 图像中的对象检测方法、装置、计算机设备和存储介质
CN115115825B (zh) * 2022-05-27 2024-05-03 腾讯科技(深圳)有限公司 图像中的对象检测方法、装置、计算机设备和存储介质
CN115937214A (zh) * 2023-03-08 2023-04-07 深圳丹伦基因科技有限公司 一种基于深度学习的间充质干细胞衰老检测方法
CN116051560A (zh) * 2023-03-31 2023-05-02 武汉互创联合科技有限公司 基于胚胎多维度信息融合的胚胎动力学智能预测***
CN116051560B (zh) * 2023-03-31 2023-06-20 武汉互创联合科技有限公司 基于胚胎多维度信息融合的胚胎动力学智能预测***
CN116778482A (zh) * 2023-08-17 2023-09-19 武汉互创联合科技有限公司 胚胎图像卵裂球目标检测方法、计算机设备及存储介质
CN116778482B (zh) * 2023-08-17 2023-10-31 武汉互创联合科技有限公司 胚胎图像卵裂球目标检测方法、计算机设备及存储介质
CN117095180A (zh) * 2023-09-01 2023-11-21 武汉互创联合科技有限公司 基于分期识别的胚胎发育阶段预测与质量评估方法
CN117095180B (zh) * 2023-09-01 2024-04-19 武汉互创联合科技有限公司 基于分期识别的胚胎发育阶段预测与质量评估方法
CN118015006A (zh) * 2024-04-10 2024-05-10 武汉互创联合科技有限公司 基于动态圆形卷积的胚胎细胞空泡检测方法及电子设备

Also Published As

Publication number Publication date
CN112069874B (zh) 2022-07-05
CN112069874A (zh) 2020-12-11

Similar Documents

Publication Publication Date Title
WO2022012110A1 (zh) 胚胎光镜图像中细胞的识别方法及***、设备及存储介质
WO2018108129A1 (zh) 用于识别物体类别的方法及装置、电子设备
TWI742382B (zh) 透過電腦執行的、用於車輛零件識別的神經網路系統、透過神經網路系統進行車輛零件識別的方法、進行車輛零件識別的裝置和計算設備
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
CN111047609B (zh) 肺炎病灶分割方法和装置
CN111091109B (zh) 基于人脸图像进行年龄和性别预测的方法、***和设备
US10783643B1 (en) Segmentation-based damage detection
CN112750106B (zh) 一种基于非完备标记的深度学习的核染色细胞计数方法、计算机设备、存储介质
WO2020087838A1 (zh) 血管壁斑块识别设备、***、方法及存储介质
CN110264444B (zh) 基于弱分割的损伤检测方法及装置
CN108734108B (zh) 一种基于ssd网络的裂纹舌识别方法
CN112365497A (zh) 基于TridentNet和Cascade-RCNN结构的高速目标检测方法和***
CN110648322A (zh) 一种子宫颈异常细胞检测方法及***
CN110543906B (zh) 基于Mask R-CNN模型的肤质自动识别方法
WO2020029915A1 (zh) 基于人工智能的中医舌像分割装置、方法及存储介质
CN111160407A (zh) 一种深度学习目标检测方法及***
CN111814569A (zh) 一种人脸遮挡区域的检测方法及***
US11037299B2 (en) Region merging image segmentation algorithm based on boundary extraction
CN107871315B (zh) 一种视频图像运动检测方法和装置
CN114219936A (zh) 目标检测方法、电子设备、存储介质和计算机程序产品
CN116682109B (zh) 一种病理显微图像的分析方法、装置、设备及存储介质
CN117541574A (zh) 一种基于ai语义分割和图像识别的舌诊检测方法
CN112926694A (zh) 基于改进的神经网络对图像中的猪只进行自动识别的方法
CN113313678A (zh) 一种基于多尺度特征融合的***形态学自动分析方法
CN112037173A (zh) 染色体检测方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21843000

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21843000

Country of ref document: EP

Kind code of ref document: A1