TWI754945B - Artificial intelligence based cell detection method by using optical kinetics and system thereof - Google Patents
- Publication number
- TWI754945B (application number TW109117822A)
- Authority
- TW
- Taiwan
- Prior art keywords
- cells
- time point
- cell
- images
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Radiology & Medical Imaging (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Multimedia (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Artificial Intelligence (AREA)
- Quality & Reliability (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
- Investigating Or Analysing Biological Materials (AREA)
- Apparatus Associated With Microorganisms And Enzymes (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
The present invention describes an artificial-intelligence cell detection method and system using optical kinetics, in particular a method and system with artificial intelligence for assessing cell quality and identifying cell function.
With the rapid pace of modern life, women of reproductive age often become infertile because of work stress, dietary habits, lifestyle diseases, ovulation dysfunction, hormonal imbalance, or chronic illness. Infertility treatment today is a largely self-paid course of therapy; demand in Taiwan, China, and international markets is enormous and grows strongly every year. Many women choose in vitro fertilization (IVF) to treat infertility. In IVF, eggs and sperm are retrieved, fertilized outside the body, cultured into embryos, and the embryos are then transferred back into the mother. However, the success rate of existing infertility treatment is only about 30%. The crucial step of the treatment is embryo selection, yet embryos are still selected mainly by embryologists who subjectively judge their quality from photographs or time-lapse videos. Because there is no systematic, automated way to grade embryos, physicians choose which embryo to implant subjectively; the implantation success rate remains low, and embryo selection is therefore the bottleneck of current infertility treatment.
In other words, under current infertility treatment, physicians can only observe embryo cleavage subjectively. For example, a physician grades embryos from best to worst according to the number of cells during development, how evenly the cells divide, and how much fragmentation occurs during division. An embryo that divides evenly into an even number of cells is considered superior, whereas an embryo with incomplete, odd-numbered divisions and more fragmentation has poorer growth potential. As noted above, because embryo quality is currently judged mainly from a physician's experience, the better embryos are chosen by subjective judgment. The success rate of infertility treatment is therefore hard to improve and is easily affected by the subjective opinions of different physicians (for example, misjudgment).
One embodiment of the present invention provides an artificial-intelligence cell detection method using optical kinetics. The method includes: obtaining a plurality of cells; sampling N images of the cells between a first time point and a second time point; determining an analysis region for each cell; performing an image processing procedure on the analysis region of each cell according to the N images to generate a plurality of optical-kinetic vector parameters; generating, from those parameters, the deformation vector information of the cells between the first and second time points; obtaining at least one key deformation vector feature value of the cells from the deformation vector information; inputting the deformation vector information and the at least one key deformation vector feature value into a neural network to train it; and using the neural network to build, through an artificial-intelligence procedure, a cell quality detection model that detects cell quality and/or identifies cells. The first time point precedes the second time point, and N is a positive integer greater than 2.
Another embodiment of the present invention provides an artificial-intelligence cell detection system using optical kinetics. The system includes a carrier, a lens module, an image capture device, a processor, and a memory. The carrier has an accommodating slot for holding a plurality of cells. The lens module faces the carrier and magnifies the details of the cells. The image capture device faces the lens module and acquires images of the cells through it. The processor is coupled to the lens module and the image capture device to adjust the magnification of the lens module and to process the cell images. After the cells are placed in the accommodating slot of the carrier, the processor controls the image capture device to sample N images of the cells through the lens module between a first time point and a second time point. The processor determines an analysis region for each cell, performs an image processing procedure on each region according to the N images to generate a plurality of optical-kinetic vector parameters, generates from those parameters the deformation vector information of the cells between the first and second time points, and obtains at least one key deformation vector feature value from that information. The processor contains a neural network, which is trained with the deformation vector information and the at least one key deformation vector feature value. The processor then uses the neural network to build, through an artificial-intelligence procedure, a cell quality detection model to detect cell quality and/or identify cells. The first time point precedes the second time point, and N is a positive integer greater than 2.
100: cell detection system with artificial intelligence using optical kinetics
10: carrier
11: lens module
12: image capture device
13: processor
14: memory
S201 to S208: steps
S301 to S302: steps
D1: deformation vector information
D2: at least one key deformation vector feature value
D3: edge feature data
D4: number of divisions
D5: division time points
D6: cell quality output data
FIG. 1 is a block diagram of an embodiment of the cell detection system with artificial intelligence using optical kinetics of the present invention.
FIG. 2 is a flowchart of the cell detection method performed by the cell detection system of FIG. 1.
FIG. 3 is a schematic diagram of additional steps added to the cell detection system of FIG. 1 to enhance detection accuracy.
FIG. 4 is a schematic diagram of the input and output data of the neural-network-equipped processor in the cell detection system of FIG. 1.
FIG. 1 is a block diagram of an embodiment of the cell detection system 100 with artificial intelligence using optical kinetics of the present invention; for brevity, it is referred to below as "the cell detection system 100". The cell detection system 100 includes a carrier 10, a lens module 11, an image capture device 12, a processor 13, and a memory 14. The carrier 10 has an accommodating slot for holding a plurality of cells. For example, the carrier 10 may be a culture dish whose slot contains culture medium in which the cells develop. The cells may be germ cells, embryos, or any dividing cells to be observed. The lens module 11 faces the carrier 10 and magnifies the details of the cells; it may be any lens module with optical or digital zoom capability, such as a microscope module. The image capture device 12 faces the lens module 11 and acquires images of the cells through it. In the cell detection system 100, the image capture device 12 may be a camera with an image sensor or a hyperspectral imager, so the acquired cell images may be grayscale images, hyperspectral images, or depth-of-field composite images. If the image capture device 12 is a hyperspectral imager, the cell images may be (A) the cell image at any single wavelength of the hyperspectral data, or (B) a grayscale cell image synthesized from the images at all wavelengths. Any reasonable image format falls within the scope of this disclosure. The processor 13 is coupled to the lens module 11 and the image capture device 12 to adjust the magnification of the lens module 11 and to process the cell images. The processor 13 may be a central processing unit, a microprocessor, or any programmable processing unit. It contains a neural network, such as a deep neural network (DNN), and can perform machine learning and deep learning, so the neural network can be trained and serves as the artificial-intelligence processing core. The memory 14 is coupled to the processor 13 and stores training data and the analysis data produced during image processing.
In the cell detection system 100, after the cells are placed in the accommodating slot of the carrier 10, the processor 13 controls the image capture device 12 to sample N images of the cells through the lens module 11 between a first time point and a second time point. The processor 13 then determines an analysis region for each cell and, according to the N images, performs an image processing procedure on each region to generate a plurality of optical-kinetic vector parameters. From these parameters, the processor 13 generates the deformation vector information of the cells between the first and second time points and obtains at least one key deformation vector feature value. As mentioned above, the processor 13 contains a trainable neural network, so the deformation vector information and the at least one key deformation vector feature value can be used to train it. Once training is complete, the processor 13 can use the neural network to build, through an artificial-intelligence procedure, a cell quality detection model that detects cell quality and/or identifies cells. The first time point precedes the second time point, and N is a positive integer greater than 2. In other words, the cell detection system 100 trains the neural network with time-series cell image information between two different time points; after training, the system has an artificial-intelligence cell detection function that automatically assesses cell quality and/or identifies cells. How the system trains the neural network to perform this function is described below.
FIG. 2 is a flowchart of the cell detection method performed by the cell detection system 100. The method may include steps S201 to S208; any reasonable variation of the steps or technical modification falls within the scope of this disclosure. Steps S201 to S208 are as follows:

Step S201: obtain a plurality of cells.
Step S202: sample N images of the cells between a first time point and a second time point.
Step S203: determine an analysis region for each cell.
Step S204: perform an image processing procedure on the analysis region of each cell according to the N images to generate a plurality of optical-kinetic vector parameters.
Step S205: generate, from the optical-kinetic vector parameters, the deformation vector information of the cells between the first and second time points.
Step S206: obtain at least one key deformation vector feature value of the cells from the deformation vector information.
Step S207: input the deformation vector information and the at least one key deformation vector feature value into the neural network to train it.
Step S208: use the neural network to build, through an artificial-intelligence procedure, a cell quality detection model that detects cell quality and/or identifies cells.
For brevity, "cell" is exemplified below by "embryo"; the invention is not limited to this, and a cell may be a germ cell, a nerve cell, a tissue cell, an animal or plant cell, or any cell to be studied and observed. In step S201, researchers or medical personnel first obtain a plurality of embryos. In step S202, the processor 13 controls the image capture device 12 to sample N images of the embryos between the first and second time points. The image capture device 12 can acquire the N images in any manner: for example, it may record video (at, say, 30 fps or 60 fps) over the interval to obtain a set of frames, or it may photograph the embryos periodically. The first and second time points may be any two moments within the observation period during which the embryos develop and divide in the culture medium; for instance, day 1 and day 5 may be chosen to observe development and division, meaning the device photographs the embryos continuously from hour 0 to hour 120 to obtain the N images. In step S203, the processor 13 determines the analysis region of each embryo; medical personnel or researchers may also set it manually. The analysis region may be chosen as the region where two embryos differ most, or where superior and inferior embryos differ most: for example, the blastocyst region of each embryo or a user-defined region. The analysis region is smaller than a single embryo.
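As a concrete illustration of the sampling described above, the short sketch below computes N for a hypothetical periodic-sampling schedule over the hour-0 to hour-120 window. The 10-minute interval is an assumed example, not a value from the patent, which only requires N > 2.

```python
# Assumed sampling schedule for the hour-0 to hour-120 observation window;
# the 10-minute interval is an illustrative choice.
OBSERVATION_HOURS = 120
INTERVAL_MINUTES = 10

n_images = OBSERVATION_HOURS * 60 // INTERVAL_MINUTES
print(n_images)  # 720
```

Any schedule that yields more than two images per embryo satisfies the method's requirement on N.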
Next, in step S204, the processor 13 performs an image processing procedure on the analysis region of each embryo according to the N images to generate a plurality of optical-kinetic vector parameters. For example, the processor 13 may analyze the analysis region of each embryo with digital image correlation (DIC), a technique that tracks changes of object features over the interval between consecutive time points in order to compute the object's deformation. In other words, under DIC, the optical-kinetic vector parameters are obtained from the change in deformation of the embryos between the (M-1)-th image (before) and the M-th image (after) of the N images, where 2 ≤ M ≤ N. In the cell detection system 100, these parameters may include displacement vector information, strain vector information, and deformation velocity vector information, as follows. After analyzing the N images with DIC, the processor 13 obtains, for each pair of consecutive images (the (M-1)-th and M-th), each embryo's vertical displacement information, horizontal displacement information, strain vector distribution, and deformation velocity vector. Vertical (or horizontal) displacement information is the distance and coordinates of the vertical (or horizontal) shift of the embryo's analysis region between the two images. Strain vector distribution quantifies how much the analysis region expands or contracts between the two images. Deformation velocity is derived by dividing displacement or strain by the time difference: if the time difference between the (M-1)-th and M-th images is t, a horizontal displacement p gives a horizontal velocity component p/t, a vertical displacement q gives a vertical component q/t, and a strain r gives a strain-rate component r/t. Then, in step S205, the processor 13 generates, from these optical-kinetic vector parameters, the deformation vector information of the embryos between the first and second time points; the deformation vector information may be produced from the parameters by a linear or nonlinear formula.
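The displacement and velocity relations described above can be sketched as follows. This is a toy stand-in for full digital image correlation, assumed for illustration: it tracks one patch by exhaustive sum-of-squared-differences search between two frames, then divides by the frame interval exactly as the p/t, q/t, r/t relations state. The function names and the synthetic frames are not from the patent.

```python
import numpy as np

def track_patch(prev_frame, next_frame, top, left, size, search=5):
    """Estimate the (vertical, horizontal) displacement of a square patch
    between two frames by exhaustive search minimising the sum of squared
    differences (a minimal stand-in for full digital image correlation)."""
    patch = prev_frame[top:top + size, left:left + size].astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if (y < 0 or x < 0 or y + size > next_frame.shape[0]
                    or x + size > next_frame.shape[1]):
                continue  # candidate window would fall outside the frame
            err = np.sum((next_frame[y:y + size, x:x + size] - patch) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def deformation_velocity(q, p, r, t):
    """Velocity components q/t (vertical), p/t (horizontal) and r/t (strain
    rate) for two images separated by time difference t, as in the text."""
    return q / t, p / t, r / t

# Synthetic pair: frame2 is frame1 shifted down 2 px and right 1 px.
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(np.roll(frame1, 2, axis=0), 1, axis=1)
dy, dx = track_patch(frame1, frame2, top=20, left=20, size=16)
print(dy, dx)  # 2 1
print(deformation_velocity(dy, dx, 0.05, 0.5))
```

A production DIC implementation would use subpixel correlation over a dense grid of patches; the exhaustive integer search here only shows the principle.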
Next, in step S206, the processor 13 obtains at least one key deformation vector feature value of the embryos from the deformation vector information. For example, the processor 13 may analyze the analysis region of each embryo with DIC over the N images to find at least one local maximum of the strain in the deformation vector information; likewise, it may find at least one local minimum. The at least one key deformation vector feature value may include the strain at one or more local maxima and/or local minima. For ease of understanding, Table T1 lists the maximum strain at different time points:

| Time point | Maximum strain | Note |
|---|---|---|
| 0-00 | 0.0795 | |
| 0-01 | 0.227 | large strain |
| 0-02 | 0.113 | |
| 0-03 | 0.336 | large strain |
| 0-04 | 0.104 | |
| 0-05 | 0.147 | |
| 0-06 | 0.061 | |
| 0-07 | 0.138 | |

According to Table T1, the maximum strain at time point 0-01 is 0.227, clearly higher than 0.0795 at time point 0-00 and 0.113 at time point 0-02. The value 0.227 at 0-01 can therefore be defined as a local maximum of strain, and the strain distribution at 0-01 is annotated "large strain". Similarly, the maximum strain at time point 0-03 is 0.336, clearly higher than 0.113 at 0-02, 0.104 at 0-04, 0.147 at 0-05, 0.061 at 0-06, and 0.138 at 0-07, so it is likewise defined as a local maximum and annotated "large strain". At least one local minimum of strain can be obtained by the same kind of statistics. The processor 13 stores Table T1 and the strains at the local maxima and minima in the memory 14.
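The peak selection illustrated by Table T1 can be sketched as below. Note that a plain greater-than-both-neighbours test would also flag time point 0-05 (0.147), so the sketch adds a ratio threshold as one guess at what "clearly higher" means; the 1.5 factor is an assumption, not a value from the patent.

```python
# Maximum strain per time point, taken from Table T1.
time_points = ["0-00", "0-01", "0-02", "0-03", "0-04", "0-05", "0-06", "0-07"]
strains = [0.0795, 0.227, 0.113, 0.336, 0.104, 0.147, 0.061, 0.138]

def prominent_maxima(values, ratio=1.5):
    """Interior indices whose value exceeds both neighbours by at least
    `ratio`: an assumed reading of the text's 'clearly higher'."""
    return [i for i in range(1, len(values) - 1)
            if values[i] >= ratio * values[i - 1]
            and values[i] >= ratio * values[i + 1]]

peaks = [time_points[i] for i in prominent_maxima(strains)]
print(peaks)  # ['0-01', '0-03']
```

With the assumed threshold, the sketch recovers exactly the two "large strain" time points annotated in Table T1; local minima can be found symmetrically by inverting the comparisons.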
As mentioned above, the processor 13 contains a neural network, such as a deep neural network (DNN), capable of machine learning and deep learning, so the network can be trained. To train it, in step S207, the cell detection system 100 inputs the deformation vector information and the at least one key deformation vector feature value into the neural network of the processor 13. After training, in step S208, the processor 13 uses the neural network to build, through an artificial-intelligence procedure, a cell quality detection model that detects embryo quality and/or identifies embryos. In other words, detecting embryo quality and/or identifying embryos with artificial intelligence involves two phases. The first is the training phase: the cell detection system 100 feeds the neural network either the full time-series data or feature values from a time-reduced data set, optionally together with the at least one key deformation vector feature value, to build the cell quality detection model. The second is the artificial-intelligence detection phase: once training is complete, the processor 13 uses the trained model to judge embryo quality and identify embryos. The cell detection system 100 of the present invention thus avoids subjective judgment of embryo quality by medical personnel or researchers, so the conception success rate of infertility treatment can be greatly improved.
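Since the patent leaves the network architecture unspecified, the sketch below trains a small one-hidden-layer network in plain NumPy to illustrate the two-phase train-then-detect flow. Every feature, label, and hyperparameter here is a synthetic assumption; real training would use the deformation vector information extracted from embryo images.

```python
import numpy as np

rng = np.random.default_rng(42)
# Purely synthetic training set: each row holds assumed deformation features
# (mean strain, peak strain, number of strain peaks); label 1 = "good" embryo.
good = rng.normal([0.05, 0.30, 2.0], 0.02, size=(50, 3))
poor = rng.normal([0.12, 0.15, 4.0], 0.02, size=(50, 3))
X = np.vstack([good, poor])
y = np.array([1.0] * 50 + [0.0] * 50)

# One hidden layer trained by full-batch gradient descent, standing in for
# the deep neural network that the text leaves unspecified.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()         # predicted quality score
    g = (p - y)[:, None] / len(y)            # dLoss/dlogit for sigmoid + CE
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= gW1; b1 -= gb1; W2 -= gW2; b2 -= gb2   # learning rate 1.0

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = float(((p > 0.5) == (y > 0.5)).mean())
print(accuracy)
```

In the detection phase, the same forward pass applied to the features of a new embryo yields its quality score, corresponding to the cell quality output described later.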
FIG. 3 is a schematic diagram of additional steps added to the cell detection system 100 to enhance the accuracy of cell detection. To further improve the accuracy of judging embryo quality and identifying embryos, the cell detection system 100 can also introduce morphological detection techniques to strengthen the neural network's cell detection capability, as follows.
Step S201: obtain a plurality of cells.
Step S301: use a morphological edge detection technique to detect the edge feature data of the cells between the first and second time points, and input the edge feature data into the neural network.
Step S302: use a morphological ellipse detection method to detect, from the edge feature data, each cell's division time points and division counts between the first and second time points, and input each cell's division time points and division counts into the neural network to train it.
Again, for brevity, "cell" is exemplified below by "embryo", without limiting the invention; a cell may be a germ cell, a nerve cell, a tissue cell, an animal or plant cell, or any cell to be studied and observed. To further improve the accuracy of judging embryo quality and identifying embryos, after the plurality of cells (embryos) is obtained in step S201, the processor 13 in step S301 uses a morphological edge detection technique to detect the embryos' edge feature data between the first and second time points and inputs the data into the neural network. The edge feature data may be the contour of the whole embryo or of a specific part (for example, the blastocyst), expressed as a set of coordinates: on a two-dimensional plane, the contour of a single embryo is a closed polyline that can be written as (X1, Y1) to (XL, YL), where L is a positive integer and a larger L gives higher resolution. Then, in step S302, the processor 13 uses a morphological ellipse detection method to detect, from the edge feature data, each embryo's division time points and division counts between the first and second time points, and inputs them into the neural network for training. Specifically, the processor 13 can predefine at least one elliptical curve-fitting function. After step S301 has produced the coordinates (X1, Y1) to (XL, YL) of at least one closed contour, the processor 13 matches each contour against the elliptical fitting function to decide whether its coordinates form an elliptical closed curve. The processor 13 thereby obtains the number of successfully fitted elliptical closed curves in the image at a given time point and treats that number as the embryo's division count. From the information in the N images at different time points, the processor 13 can thus detect each embryo's division time points and division counts. In other words, compared with the cell detection method of steps S201 to S208 in FIG. 2, the cell detection system 100 can introduce the additional steps S301 to S302 to obtain more information (such as edge feature data and each embryo's division time points and division counts) for training the neural network. The training is therefore further optimized, which increases the accuracy of the artificial-intelligence assessment of cell quality.
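One way to realize the ellipse test of step S302 is the classical conic fit sketched below: fit a general conic to the contour coordinates (X1, Y1) to (XL, YL) by total least squares and accept the contour as an ellipse when the discriminant b² - 4ac is negative. This is an assumed concrete method; the patent does not specify the fitting function, and the two synthetic contours are illustrative.

```python
import numpy as np

def is_ellipse(xs, ys, tol=1e-6):
    """Fit the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to the
    contour points by total least squares (smallest right singular vector of
    the design matrix) and test the discriminant b^2 - 4ac < 0."""
    A = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(A)
    a, b, c, _, _, _ = Vt[-1]
    return bool(b * b - 4.0 * a * c < -tol)

t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
ellipse_x = 5.0 + 3.0 * np.cos(t)          # a closed elliptical contour
ellipse_y = 4.0 + 2.0 * np.sin(t)
line_x = np.linspace(0.0, 9.0, 100)        # a degenerate (non-ellipse) contour
line_y = 2.0 * line_x + 1.0

contours = [(ellipse_x, ellipse_y), (line_x, line_y)]
division_count = int(sum(is_ellipse(x, y) for x, y in contours))
print(division_count)  # 1
```

Counting the contours that pass the test at each time point gives the division count for that image, and the time points where the count increases give the division time points.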
FIG. 4 is a schematic diagram of the input and output data of the neural-network-equipped processor 13 in the cell detection system 100. As mentioned above, the image capture device 12 photographs the embryos between two different time points to produce N images. The cell detection system 100 analyzes the N images with DIC to generate the optical-kinetic vector parameters, which are used to produce the deformation vector information D1 and the at least one key deformation vector feature value D2; these in turn are used to train the neural network in the processor 13. Once training is complete, the processor 13 can judge embryo quality and/or identify embryos through an artificial-intelligence procedure and output the cell quality output data D6. It should be understood that each of the N images may be two-dimensional or three-dimensional. If each image is two-dimensional, the optical-kinetic vector parameters, the deformation vector information, and the at least one key deformation vector feature value are in a K-dimensional data format. For example, at time point T and at a specific wavelength λ of the hyperspectral imager, the optical signal of pixel S1 at coordinates (x, y) can be written as S1(λ, T, x, y), a four-dimensional signal format. Likewise, if each image is three-dimensional, those quantities are in a (K+1)-dimensional data format: the optical signal of pixel S2 at coordinates (x, y, z) can be written as S2(λ, T, x, y, z), a five-dimensional signal format. K is a positive integer greater than 2. As can be expected, a higher-dimensional data format raises the computational complexity of the cell detection system 100, while a lower-dimensional format lowers it.
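The S1(λ, T, x, y) format above maps naturally onto a 4-D array, and the sketch below also shows the two hyperspectral image options (A) and (B) mentioned earlier. The array sizes are arbitrary assumptions chosen only to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed sizes: 5 wavelengths, 6 time points, 32x32 pixels (and 8 z-slices).
S1 = rng.random((5, 6, 32, 32))        # S1(lambda, T, x, y): K = 4 dimensions
S2 = rng.random((5, 6, 32, 32, 8))     # S2(lambda, T, x, y, z): K + 1 = 5 dimensions

single_band = S1[2]                    # option (A): one wavelength, all time points
grayscale = S1.mean(axis=0)            # option (B): grayscale synthesized over wavelengths

print(S1.ndim, S2.ndim, single_band.shape, grayscale.shape)
# 4 5 (6, 32, 32) (6, 32, 32)
```

Because every added axis multiplies the element count, the 5-D format costs correspondingly more memory and computation than the 4-D one, matching the complexity remark above.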
Moreover, as mentioned above, after the neural network of the processor 13 has additionally been trained with the edge feature data D3 and each embryo's division time points D5 and division counts D4, it can refine the cell quality detection model through the artificial-intelligence procedure. As shown in FIG. 4, the neural network in the processor can therefore receive the deformation vector information D1, the at least one key deformation vector feature value D2, the edge feature data D3, the division counts D4, and the division time points D5. Once training is complete, the processor can use the artificial-intelligence procedure to screen the retrieved eggs/embryos of infertile women and output the cell quality output data D6. The output data D6 may take any form, such as a grading of embryo quality, a quality-ranking percentage for at least one embryo, or the cell quality of at least one embryo. Cell quality may be defined by the detailed chemical composition of the cell, its genetic quality, its developmental state within a specific period, or whether it shows pathology; for germ cells, it may also be defined by pregnancy outcome, newborn health, and sex.
In summary, the present invention describes an artificial-intelligence cell detection method and system using optical kinetics. The target users of the cell detection system may be infertile women. Medical personnel can first build an artificial-intelligence cell quality detection model from a large amount of cell data and then carry out the treatment. The neural network can receive various optical-kinetic and morphological parameters, such as deformation vector information, at least one key deformation vector feature value, edge feature data, division counts, and division time points, so its use avoids subjective judgment of embryo quality by medical personnel or researchers. An infertile woman can first have multiple embryos cultured; after the cell detection system selects the best embryo with the artificial-intelligence cell quality detection model, that embryo is implanted into the mother's uterus, increasing the success rate of conception.
The above are merely preferred embodiments of the present invention, and all equivalent changes and modifications made within the scope of the claims of the present invention shall fall within the coverage of the present invention.
Claims (9)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962941612P | 2019-11-27 | 2019-11-27 | |
US62/941,612 | 2019-11-27 |
Publications (2)
Publication Number | Publication Date |
---|---|
TW202120906A (en) | 2021-06-01 |
TWI754945B (en) | 2022-02-11 |
Family
ID=75996128
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW109117822A TWI754945B (en) | 2019-11-27 | 2020-05-28 | Artificial intelligence based cell detection method by using optical kinetics and system thereof |
TW109117840A TWI781408B (en) | 2019-11-27 | 2020-05-28 | Artificial intelligence based cell detection method by using hyperspectral data analysis technology |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW109117840A TWI781408B (en) | 2019-11-27 | 2020-05-28 | Artificial intelligence based cell detection method by using hyperspectral data analysis technology |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN112862742A (en) |
TW (2) | TWI754945B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100260406A1 (en) * | 2004-05-13 | 2010-10-14 | Paul Sammak | Methods and systems for imaging cells |
US20110013821A1 (en) * | 2008-03-24 | 2011-01-20 | Nikon Corporation | Image analysis method for cell observation, image-processing program, and image-processing device |
WO2013037119A1 (en) * | 2011-09-16 | 2013-03-21 | 长沙高新技术产业开发区爱威科技实业有限公司 | Device and method for erythrocyte morphology analysis |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1616286A1 (en) * | 2003-04-02 | 2006-01-18 | Amersham Biosciences UK Limited | Method of, and computer software for, classification of cells into subpopulations |
CN1563947A (en) * | 2004-03-18 | 2005-01-12 | 中国科学院上海技术物理研究所 | High microspectrum imaging system |
WO2007042044A1 (en) * | 2005-10-14 | 2007-04-19 | Unisense Fertilitech A/S | Determination of a change in a cell population |
US8744775B2 (en) * | 2007-12-28 | 2014-06-03 | Weyerhaeuser Nr Company | Methods for classification of somatic embryos comprising hyperspectral line imaging |
JP2009229274A (en) * | 2008-03-24 | 2009-10-08 | Nikon Corp | Method for analyzing image for cell observation, image processing program and image processor |
US9435732B2 (en) * | 2009-06-25 | 2016-09-06 | Yissum Research Development Of The Hebrew University Of Jerusalem Ltd. | Hyperspectral identification of egg fertility and gender |
US20140128667A1 (en) * | 2011-07-02 | 2014-05-08 | Unisense Fertilitech A/S | Adaptive embryo selection criteria optimized through iterative customization and collaboration |
EP2890781B1 (en) * | 2012-08-30 | 2020-09-23 | Unisense Fertilitech A/S | Automatic surveillance of in vitro incubating embryos |
AU2015331579A1 (en) * | 2014-10-17 | 2017-05-25 | Cireca Theranostics, Llc | Methods and systems for classifying biological samples, including optimization of analyses and use of correlation |
US9971966B2 (en) * | 2016-02-26 | 2018-05-15 | Google Llc | Processing cell images using neural networks |
CN106226247A (en) * | 2016-07-15 | 2016-12-14 | 暨南大学 | Cell detection method based on hyperspectral microscopic imaging technology |
JP2019537157A (en) * | 2016-12-01 | 2019-12-19 | バークレー ライツ,インコーポレイテッド | Automatic detection and relocation of minute objects by microfluidic devices |
CN106815566B (en) * | 2016-12-29 | 2021-04-16 | 天津中科智能识别产业技术研究院有限公司 | Face retrieval method based on multitask convolutional neural network |
CN107064019B (en) * | 2017-05-18 | 2019-11-26 | 西安交通大学 | Device and method for hyperspectral image acquisition and segmentation of stain-free pathological sections |
EP3639171A4 (en) * | 2017-06-16 | 2021-07-28 | Cytiva Sweden AB | Method for predicting outcome of and modelling of a process in a bioreactor |
CN108550133B (en) * | 2018-03-02 | 2021-05-18 | 浙江工业大学 | Cancer cell detection method based on Faster R-CNN |
TWI664582B (en) * | 2018-11-28 | 2019-07-01 | 靜宜大學 | Method, apparatus and system for cell detection |
CN109883966B (en) * | 2019-02-26 | 2021-09-10 | 江苏大学 | Method for detecting aging degree of eriocheir sinensis based on multispectral image technology |
CN110136775A (en) * | 2019-05-08 | 2019-08-16 | 赵壮志 | Anti-interference cell division detection system and method |
CN110390676A (en) * | 2019-07-26 | 2019-10-29 | 腾讯科技(深圳)有限公司 | Cell detection method for stained medical images under a microscope, and intelligent microscope system |
- 2020
- 2020-05-28 TW TW109117822A patent/TWI754945B/en active
- 2020-05-28 TW TW109117840A patent/TWI781408B/en active
- 2020-06-03 CN CN202010493873.6A patent/CN112862742A/en active Pending
- 2020-06-04 CN CN202010498111.5A patent/CN112862743A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN112862743A (en) | 2021-05-28 |
TW202120906A (en) | 2021-06-01 |
TWI781408B (en) | 2022-10-21 |
TW202121241A (en) | 2021-06-01 |
CN112862742A (en) | 2021-05-28 |
Similar Documents
Publication | Title |
---|---|
JP7072067B2 (en) | Systems and methods for estimating embryo viability |
Spalding et al. | Image analysis is driving a renaissance in growth measurement | |
WO2021139258A1 (en) | Image recognition based cell recognition and counting method and apparatus, and computer device | |
CN109544512A (en) | Multi-modality-based embryo pregnancy outcome prediction device |
WO2014021175A1 (en) | Device and method for detecting necrotic cell region and storage medium for storing computer processable program for detecting necrotic cell region | |
US11954926B2 (en) | Image feature detection | |
CN111931751B (en) | Deep learning training method, target object identification method, system and storage medium | |
EP3485458A1 (en) | Information processing device, information processing method, and information processing system | |
JP2009014355A (en) | Image processor and processing program | |
TWI754945B (en) | Artificial intelligence based cell detection method by using optical kinetics and system thereof | |
US10748288B2 (en) | Methods and systems for determining quality of an oocyte | |
US20220383986A1 (en) | Complex System for Contextual Spectrum Mask Generation Based on Quantitative Imaging | |
Chen et al. | Automating blastocyst formation and quality prediction in time-lapse imaging with adaptive key frame selection | |
CN113607736A (en) | Miniature intelligent sperm in-vitro detector and image processing method thereof | |
JP6329651B1 (en) | Image processing apparatus and image processing method | |
JP2005009949A (en) | Method of determining crystallized state of protein and system therefor | |
AU2019101174A4 (en) | Systems and methods for estimating embryo viability | |
US20240169523A1 (en) | Cell counting method, machine learning model construction method and recording medium | |
Sharma et al. | Exploring Embryo Development at the Morula Stage-an AI-based Approach to Determine Whether to Use or Discard an Embryo | |
RU2800079C2 (en) | Systems and methods of assessing the viability of embryos | |
Eswaran et al. | Deep Learning Algorithms for Timelapse Image Sequence-Based Automated Blastocyst Quality Detection | |
Tran et al. | Microscopic Video-Based Grouped Embryo Segmentation: A Deep Learning Approach | |
WO2024106231A1 (en) | Embryo classification method, computer program, and embryo classification apparatus | |
JP2024046836A (en) | Classification method and computer program | |
Sandhiya et al. | Varietal Seed Classification and Seed Germination Prediction System |