CN111968083A - Online tear film rupture time detection method based on deep learning - Google Patents

Online tear film rupture time detection method based on deep learning

Info

Publication number
CN111968083A
CN111968083A
Authority
CN
China
Prior art keywords
training
network
time
tear film
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010766235.7A
Other languages
Chinese (zh)
Other versions
CN111968083B (en)
Inventor
王崇阳
陈文光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mediworks Precision Instrument Co ltd
Original Assignee
Shanghai Mediworks Precision Instrument Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mediworks Precision Instrument Co ltd filed Critical Shanghai Mediworks Precision Instrument Co ltd
Priority to CN202010766235.7A priority Critical patent/CN111968083B/en
Publication of CN111968083A publication Critical patent/CN111968083A/en
Application granted granted Critical
Publication of CN111968083B publication Critical patent/CN111968083B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Ophthalmology & Optometry (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to an online tear film break-up time detection method based on deep learning, and belongs to the technical field of medical images. Compared with traditional digital image processing methods, the deep-learning-based detection is more robust and more accurate under illumination changes, noise, ring deformation, eyelash occlusion, blinking, and similar conditions. Compared with deep-learning detection that segments the original full-frame image, this method extracts the ring region with a fast positioning network before segmentation, which shrinks the segmentation network's sample space and input size; the segmentation network therefore computes less, runs in real time more readily, places lower demands on the hardware processor, and consumes less energy.

Description

Online tear film rupture time detection method based on deep learning
Technical Field
The invention relates to an online tear film break-up time detection method based on deep learning, and belongs to the technical field of medical images.
Background
Measuring tear film break-up time to determine whether tear secretion is insufficient is an important means of diagnosing dry eye. In the existing technology, tear film break-up time is commonly detected by acquiring, in real time, images of a Placido disc projected onto the human eye and then locating the break-up positions with digital image processing. The process removes eyelash artifacts with adaptive filtering and morphological closing, recognizes the Placido rings by marking ring contours with an elliptical scanning method, takes the first frame after the eyes open as a template, matches every subsequent open-eye frame against that template, and subtracts the matched frame from the template to find the break-up positions; the break-up time is then computed from the frame index of the break-up image, and dry eye is diagnosed from that time. However, this method copes poorly with eyelash occlusion, ring deformation, blinking, and similar problems. Deep-learning detection based on the original full-frame image takes the raw frame as input without first locating the effective region covering the Placido rings, so it is easily disturbed by irrelevant regions; the full-frame input also enlarges the sample space and raises the complexity of the segmentation network, which makes training harder, lowers accuracy, and slows inference, so real-time operation cannot be achieved on low-end hardware.
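For illustration, the classical subtraction pipeline described above can be approximated with OpenCV roughly as follows; the alignment method (phase correlation), kernel size, and difference threshold are illustrative assumptions rather than the prior art's exact choices.

    import cv2
    import numpy as np

    def classical_breakup_mask(first_open_frame, frame, diff_thresh=25):
        """Align a frame to the first open-eye template and subtract to expose
        regions where the Placido ring pattern has degraded (break-up)."""
        tmpl = cv2.cvtColor(first_open_frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Suppress eyelash artifacts with a morphological closing.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        tmpl = cv2.morphologyEx(tmpl, cv2.MORPH_CLOSE, kernel)
        gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
        # Rigid alignment by phase correlation (one simple matching choice).
        (dx, dy), _ = cv2.phaseCorrelate(np.float32(tmpl), np.float32(gray))
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(gray, m, (gray.shape[1], gray.shape[0]))
        # Bright residue in the difference image marks candidate break-up.
        diff = cv2.absdiff(aligned, tmpl)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        return mask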
Disclosure of Invention
The invention aims to overcome the technical defects of the existing tear film break-up time detection methods.
To solve the above problems, the technical solution adopted by the invention is to provide an online tear film break-up time detection method based on deep learning, comprising the following steps:
Step 1: acquiring a video of the human eye illuminated by a Placido disc in real time;
Step 2: with a classification network based on a deep convolutional network, obtaining the longest continuous eye-open segment as the effective video segment by identifying whether each image acquired in step 1 is a blink and keeping statistics of the blink times and the continuous eye-open durations; whenever the next blink occurs, recording the blink time and starting a new eye-open duration count, cyclically updating the video segment with the longest continuous eye-open time until video acquisition ends;
Step 3: with a fast positioning network based on a deep convolutional network, locating the ring region in the image acquired in step 1 and cropping out the ring-region image; this step raises the proportion of effective area in the segmentation network's input image and lowers the segmentation network's input resolution, so that the segmentation network has a smaller sample space, needs fewer training samples, learns more fully and efficiently, and can perform online real-time inference;
Step 4: with a fast segmentation network based on a deep convolutional network, segmenting the tear film break-up positions in the ring-region image obtained in step 3 in real time, recording only the first break-up time of each position;
Step 5: obtaining the tear film break-up time from the time difference between step 2 and step 4; a minimal end-to-end sketch of this pipeline is given below.
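For illustration only, the following Python sketch wires the five steps together. The three network callables and the frame source are placeholders, not the patented implementation, and the sketch is simplified to process the current open-eye segment rather than retaining the longest one.

    from typing import Callable, Dict, Iterable, List, Tuple
    import numpy as np

    Frame = np.ndarray
    Pos = Tuple[int, int]

    def detect_breakup_times(
        frames: Iterable[Frame],
        fps: float,
        is_eye_open: Callable[[Frame], bool],           # step 2: blink classifier
        crop_ring: Callable[[Frame], Frame],            # step 3: locator + crop
        break_positions: Callable[[Frame], List[Pos]],  # step 4: segmentation
    ) -> Dict[Pos, float]:
        """Return the per-position break-up time t1 - t0 (step 5)."""
        t0 = None                       # first open-eye time of the segment
        first_break: Dict[Pos, float] = {}
        for i, frame in enumerate(frames):
            t = i / fps
            if not is_eye_open(frame):  # blink: restart the open-eye segment
                t0, first_break = None, {}
                continue
            if t0 is None:
                t0 = t
            ring = crop_ring(frame)
            for pos in break_positions(ring):
                first_break.setdefault(pos, t)   # record only the first break
        return {pos: t1 - t0 for pos, t1 in first_break.items()}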
Preferably, in step 2, the classification network based on a deep convolutional network is a MobileNetV3 classification network that performs real-time binary classification of the eye images to identify blinks; the classification model is trained by building a MobileNetV3 classification network model; the input image is converted to grayscale; data augmentation uses rotation, translation, scaling, grayscale stretching, and random blurring as preprocessing; the loss function is the two-class cross entropy; the MobileNetV3 weights pretrained on the ImageNet dataset are used as initial weights and then fine-tuned; the dataset is divided into a training set and a validation set for iterative training, and the weights with the smallest loss gap between the training and validation sets are finally selected as the training result.
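A minimal sketch of this setup follows, assuming torchvision's MobileNetV3-Large as the concrete variant (the text does not specify large or small) together with its packaged ImageNet weights.

    import torch.nn as nn
    from torchvision import models, transforms

    weights = models.MobileNet_V3_Large_Weights.IMAGENET1K_V1
    blink_net = models.mobilenet_v3_large(weights=weights)
    # Swap the 1000-way ImageNet head for a 2-way open/closed head.
    blink_net.classifier[-1] = nn.Linear(blink_net.classifier[-1].in_features, 2)
    criterion = nn.CrossEntropyLoss()   # two-class cross entropy

    # Grayscale input replicated to 3 channels, as described in the text.
    preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),
        transforms.ToTensor(),
    ])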
Preferably, in step 3, the fast positioning network based on a deep convolutional network performs regression positioning on the ring edge, taking 4 points: the topmost, bottommost, leftmost, and rightmost points of the ring; using MobileNetV3 as the backbone network, the x and y values of the 4 points are regressed and output, and the ring-region image is then cropped according to the 4 points; a MobileNetV3 positioning network model is built with the backbone unchanged and only the final output layer modified; the input image is converted to grayscale; data augmentation uses translation, scaling, grayscale stretching, and random blurring as preprocessing; the regression loss function is the mean squared error (MSE); the MobileNetV3 weights pretrained on the ImageNet dataset are used as initial weights and then fine-tuned; the dataset is divided into a training set and a validation set for iterative training, and the weights with the smallest loss gap between the training and validation sets are finally selected as the training result.
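Under the same assumptions as the classifier sketch, the locator differs only in its head, which regresses 8 values (x and y for the four extreme ring points) and trains with MSE:

    import torch.nn as nn
    from torchvision import models

    locator = models.mobilenet_v3_large(
        weights=models.MobileNet_V3_Large_Weights.IMAGENET1K_V1)
    locator.classifier[-1] = nn.Linear(locator.classifier[-1].in_features, 8)
    loc_criterion = nn.MSELoss()   # mean squared error on the 8 coordinates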
Preferably, in step 4, the fast segmentation network based on a deep convolutional network is a DFANet segmentation network that segments the ring-region image in real time, classifying each pixel into two classes to obtain the segmentation result; Canny edge extraction is then applied to the segmentation result image, which is overlaid on the original image for display, and the tear film break-up positions are marked.
Preferably, when the DFANet segmentation network segments the ring-region image in real time, a DFANet segmentation network model is built with the backbone unchanged and only the final output layer modified; the input image is converted to grayscale; data augmentation uses rotation, translation, scaling, grayscale stretching, and random blurring as preprocessing; the loss function is the two-class cross entropy; the DFANet weights pretrained on the Cityscapes dataset are used as initial weights and then fine-tuned; the dataset is divided into a training set and a validation set for iterative training; and finally the weights with the smallest loss gap between the training and validation sets are selected as the training result.
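DFANet is not packaged in torchvision, so the sketch below applies the same head-and-loss recipe to a stand-in lightweight segmentation model; only the 2-channel output and the two-class cross entropy come from the text, and a real implementation would instead load a DFANet pretrained on Cityscapes.

    import torch.nn as nn
    from torchvision import models

    # Stand-in for DFANet: any real-time segmentation network with 2 classes.
    seg_net = models.segmentation.lraspp_mobilenet_v3_large(num_classes=2)
    seg_criterion = nn.CrossEntropyLoss()  # per-pixel two-class cross entropy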
Preferably, step 5 obtains the tear film break-up time from the time difference between step 2 and step 4: step 2 cyclically updates the longest continuous eye-open segment, whose first eye-open time is denoted t0; step 4 obtains the first break-up time of each position in real time, denoted t1; subtracting t0 from t1 gives the tear film break-up time of each position.
Compared with the prior art, the invention has the following beneficial effects:
compared with the traditional digital image processing method, the technical scheme has stronger robustness and more accurate detection on the conditions of illumination, noise, ring deformation, eyelash occlusion, blinking and the like. Compared with the tear film rupture time detection method based on deep learning of the original image, the method has the advantages that the rapid positioning network is used for extracting the ring area before segmentation, the segmented network sample space is reduced, the input size of the segmented network is reduced, the calculated amount of the segmented network is less, the instantaneity is better, the requirement on the configuration of a hardware processor is lower, and the energy consumption is less.
Drawings
FIG. 1 is an image of a human eye with a Placido disc;
FIG. 2 is a schematic view of locating the ring edge in a Placido disc eye image according to the invention; the arrows indicate the topmost, bottommost, leftmost, and rightmost points of the ring;
FIG. 3 is the ring-region image cropped according to the 4-point positioning of the invention;
FIG. 4 is the segmentation result obtained by the DFANet segmentation network segmenting the ring-region image in real time according to the invention;
FIG. 5 is a structural diagram of the DFANet segmentation network that segments the ring-region image in real time according to the invention;
FIG. 6 shows the segmentation result after Canny edge extraction, overlaid on the original image to display the tear film break-up positions; the arrows indicate the break-up positions.
Detailed Description
In order to make the invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings:
The invention provides an online tear film break-up time detection method based on deep learning, which comprises the following steps:
Step 1: acquiring a video of the human eye illuminated by a Placido disc in real time;
Step 2: with a classification network based on a deep convolutional network, obtaining the longest continuous eye-open segment as the effective video segment by identifying whether each image acquired in step 1 is a blink and keeping statistics of the blink times and the continuous eye-open durations; whenever the next blink occurs, recording the blink time and starting a new eye-open duration count, cyclically updating the video segment with the longest continuous eye-open time until video acquisition ends;
Step 3: with a fast positioning network based on a deep convolutional network, locating the ring region in the image acquired in step 1 and cropping out the ring-region image, as shown in fig. 3; this step raises the proportion of effective area in the segmentation network's input image and lowers the segmentation network's input resolution, so that the segmentation network has a smaller sample space, needs fewer training samples, learns more fully and efficiently, and can perform online real-time inference;
Step 4: with a fast segmentation network based on a deep convolutional network, segmenting the tear film break-up positions in the ring-region image obtained in step 3 in real time, recording only the first break-up time of each position;
Step 5: obtaining the tear film break-up time from the time difference between step 2 and step 4.
In step 1, a human eye image with a Placido disc is acquired in real time, as shown in fig. 1.
in the step 2, the classification network based on the deep convolutional network is adopted, and the MobileNetV3 classification network is adopted to perform secondary classification on the human eye images in real time so as to identify whether the human eye images blink or not. The label for open eyes is 1 and the label for closed eyes is 0. Training a classification model: and (3) building a MobileNet V3 classification network model, keeping the backbone network unchanged, only modifying the final output full-connection layer, and changing the output number 1000 into 2. The input image is grayed and 3 channels are reserved. And the data amplification adopts preprocessing such as rotation, translation, scaling, gray scale stretching, random blurring and the like. The loss function employs a cross-entropy function of two classes. And (3) using the training weight of the MobileNetV3 on the IMAGENET data set as an initial weight, and then performing fine tuning training. Scale the dataset 4: 1 is divided into a training set and a verification set, the iterative training is carried out for 60 rounds, the initial learning rate is 0.01, and the learning rate is reduced by 10 times in 20 rounds and 40 rounds respectively. And finally, selecting the training weight with the minimum loss value difference between the training set and the verification set as a training result.
In step 3, the fast positioning network based on a deep convolutional network performs regression positioning on the ring edge, taking 4 points in total: the topmost, bottommost, leftmost, and rightmost points of the ring, as shown in fig. 2. Using MobileNetV3 as the backbone network, the x and y values of the 4 points are regressed and output; the ring-region image is then cropped from these 4 points, as shown in fig. 3. A MobileNetV3 positioning network model is built with the backbone unchanged; only the final output layer is modified, changing the output count to 8. The input image is converted to grayscale while keeping 3 channels. Data augmentation uses translation, scaling, grayscale stretching, random blurring, and similar preprocessing. The regression loss function is the mean squared error (MSE). The MobileNetV3 weights pretrained on the ImageNet dataset are used as initial weights and then fine-tuned. The dataset is split 4:1 into a training set and a validation set and trained iteratively for 60 epochs with an initial learning rate of 0.01, decayed by a factor of 10 at epochs 20 and 40. Finally, the weights with the smallest loss gap between the training and validation sets are selected as the training result.
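Cropping from the four regressed points reduces to a padded bounding box; a minimal sketch, with the safety margin as an illustrative assumption not specified in the text:

    import numpy as np

    def crop_ring(image: np.ndarray, points_xy: np.ndarray,
                  margin: int = 10) -> np.ndarray:
        """image: H x W (x C) array; points_xy: (4, 2) array of (x, y) for the
        topmost, bottommost, leftmost, and rightmost ring points."""
        x0 = max(int(points_xy[:, 0].min()) - margin, 0)
        x1 = min(int(points_xy[:, 0].max()) + margin, image.shape[1])
        y0 = max(int(points_xy[:, 1].min()) - margin, 0)
        y1 = min(int(points_xy[:, 1].max()) + margin, image.shape[0])
        return image[y0:y1, x0:x1]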
In step 4, the fast segmentation network based on a deep convolutional network is the DFANet segmentation network shown in fig. 5, which segments the ring-region image in real time and classifies each pixel into two classes: 0 means no break-up and 1 means break-up, and a pixel whose break-up probability exceeds 0.8 is taken as tear film break-up, otherwise as background, yielding the segmentation result shown in fig. 4. Canny edge extraction is then applied to the segmentation result image, the edges are overlaid on the original image for display, and the tear film break-up positions are marked, indicated by arrows in fig. 6. A DFANet segmentation network model is built with the backbone unchanged; only the final output layer is modified, changing the output channels to 2. The input image is converted to grayscale while keeping 3 channels. Data augmentation uses rotation, translation, scaling, grayscale stretching, random blurring, and similar preprocessing. The loss function is the two-class cross entropy. The DFANet weights pretrained on the Cityscapes dataset are used as initial weights and then fine-tuned. The dataset is split 4:1 into a training set and a validation set and trained iteratively for 80 epochs with an initial learning rate of 0.01, decayed by a factor of 10 at epochs 30 and 60. Finally, the weights with the smallest loss gap between the training and validation sets are selected as the training result. In the DFANet structure, the input is the ring-region image cropped in step 3 and the corresponding output is a binary image in which 0 denotes background and 1 denotes the break-up region; 'conv' denotes a convolution with kernel size 3, 'enc' denotes a convolutional layer block, 'fc attention' denotes an attention module that captures semantic and category information, 'C' denotes a per-channel concatenation layer, and 'xN' denotes upsampling by a factor of N. A series of operations of feature extraction, channel fusion, and upsampling finally outputs the segmentation probability map.
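The post-processing can be sketched as follows; the probability map is assumed already resized to the frame, and the red marker colour is an arbitrary choice.

    import cv2
    import numpy as np

    def mark_breakup(frame_bgr: np.ndarray, prob_map: np.ndarray) -> np.ndarray:
        mask = np.uint8(prob_map > 0.8) * 255   # 1 = break-up, 0 = background
        edges = cv2.Canny(mask, 100, 200)       # contour of break-up regions
        marked = frame_bgr.copy()
        marked[edges > 0] = (0, 0, 255)         # draw contours in red (BGR)
        return marked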
Step 5 obtains the tear film break-up time from the time difference between step 2 and step 4: step 2 cyclically updates the longest continuous eye-open segment, whose first eye-open time is t0; step 4 obtains the first break-up time of each position in real time, recorded as t1; the tear film break-up time of each position is obtained by subtracting t0 from t1.
While the invention has been described above with reference to preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the invention. Equivalent changes, modifications, and evolutions made from the technical content disclosed above, without departing from the spirit of the invention, likewise fall within the protection scope of the technical solution of the invention.

Claims (6)

1. An online tear film break-up time detection method based on deep learning, characterized in that the method comprises the following steps:
step 1: acquiring a video of the human eye illuminated by a Placido disc in real time;
step 2: with a classification network based on a deep convolutional network, obtaining the longest continuous eye-open segment as the effective video segment by identifying whether each image acquired in step 1 is a blink and keeping statistics of the blink times and the continuous eye-open durations; whenever the next blink occurs, recording the blink time and starting a new eye-open duration count, cyclically updating the video segment with the longest continuous eye-open time until video acquisition ends;
step 3: with a fast positioning network based on a deep convolutional network, locating the ring region in the image acquired in step 1 and cropping out the ring-region image, so as to raise the proportion of effective area in the segmentation network's input image, lower the segmentation network's input resolution, shrink its sample space, require fewer training samples, let it learn more fully and efficiently, and achieve online real-time inference;
step 4: with a fast segmentation network based on a deep convolutional network, segmenting the tear film break-up positions in the ring-region image obtained in step 3 in real time, recording only the first break-up time of each position;
step 5: obtaining the tear film break-up time from the time difference between step 2 and step 4.
2. The online tear film break-up time detection method based on deep learning of claim 1, wherein: the classification network based on a deep convolutional network in step 2 is a MobileNetV3 classification network that performs real-time binary classification of the eye image to identify blinks; the classification model is trained by building a MobileNetV3 classification network model; the input image is converted to grayscale; data augmentation uses rotation, translation, scaling, grayscale stretching, and random blurring as preprocessing; the loss function is the two-class cross entropy; the MobileNetV3 weights pretrained on the ImageNet dataset are used as initial weights and then fine-tuned; the dataset is divided into a training set and a validation set for iterative training, and the weights with the smallest loss gap between the training and validation sets are finally selected as the training result.
3. The online tear film break-up time detection method based on deep learning of claim 1, wherein: in step 3, the fast positioning network based on a deep convolutional network performs regression positioning on the ring edge, taking 4 points: the topmost, bottommost, leftmost, and rightmost points of the ring; using MobileNetV3 as the backbone network, the x and y values of the 4 points are regressed and output, and the ring-region image is then cropped according to the 4 points; a MobileNetV3 positioning network model is built with the backbone unchanged and only the final output layer modified; the input image is converted to grayscale; data augmentation uses translation, scaling, grayscale stretching, and random blurring as preprocessing; the regression loss function is the mean squared error (MSE); the MobileNetV3 weights pretrained on the ImageNet dataset are used as initial weights and then fine-tuned; the dataset is divided into a training set and a validation set for iterative training, and the weights with the smallest loss gap between the training and validation sets are finally selected as the training result.
4. The online tear film break-up time detection method based on deep learning of claim 1, wherein: in step 4, the fast segmentation network based on a deep convolutional network is a DFANet segmentation network that segments the ring-region image in real time, classifying each pixel into two classes to obtain the segmentation result; Canny edge extraction is then applied to the segmentation result image, which is overlaid on the original image for display, and the tear film break-up positions are marked.
5. The online tear film break-up time detection method based on deep learning of claim 4, wherein: when the DFANet segmentation network segments the ring-region image in real time, a DFANet segmentation network model is built with the backbone unchanged and only the final output layer modified; the input image is converted to grayscale; data augmentation uses rotation, translation, scaling, grayscale stretching, and random blurring as preprocessing; the loss function is the two-class cross entropy; the DFANet weights pretrained on the Cityscapes dataset are used as initial weights and then fine-tuned; the dataset is divided into a training set and a validation set for iterative training; and finally the weights with the smallest loss gap between the training and validation sets are selected as the training result.
6. The online tear film break-up time detection method based on deep learning of claim 1, wherein: step 5 obtains the tear film break-up time from the time difference between step 2 and step 4; step 2 cyclically updates the longest continuous eye-open segment, whose first eye-open time is denoted t0; step 4 obtains the first break-up time of each position in real time, denoted t1; subtracting t0 from t1 gives the tear film break-up time of each position.
CN202010766235.7A 2020-08-03 2020-08-03 Online tear film rupture time detection method based on deep learning Active CN111968083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010766235.7A CN111968083B (en) 2020-08-03 2020-08-03 Online tear film rupture time detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010766235.7A CN111968083B (en) 2020-08-03 2020-08-03 Online tear film rupture time detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN111968083A (en) 2020-11-20
CN111968083B CN111968083B (en) 2024-05-14

Family

ID=73363801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010766235.7A Active CN111968083B (en) 2020-08-03 2020-08-03 Online tear film rupture time detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN111968083B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734701A (en) * 2018-04-25 2018-11-02 天津市索维电子技术有限公司 A kind of Placido rings image aspects variation recognizer
US20190365314A1 (en) * 2018-06-04 2019-12-05 Nidek Co., Ltd. Ocular fundus image processing device and non-transitory computer-readable medium storing computer-readable instructions
CN111062443A (en) * 2019-12-20 2020-04-24 浙江大学 Tear film rupture time detecting system based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734701A (en) * 2018-04-25 2018-11-02 天津市索维电子技术有限公司 A kind of Placido rings image aspects variation recognizer
US20190365314A1 (en) * 2018-06-04 2019-12-05 Nidek Co., Ltd. Ocular fundus image processing device and non-transitory computer-readable medium storing computer-readable instructions
CN111062443A (en) * 2019-12-20 2020-04-24 浙江大学 Tear film rupture time detecting system based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕永兴; 张汉华: "Tear film break-up time detection method based on video analysis" (基于视频分析的泪膜破裂时间检测方法), 智能计算机与应用 (Intelligent Computer and Applications), no. 01, 28 February 2017 (2017-02-28) *
贾小军; 魏远旺; 廖伟志; 曾丹: "Detecting the circular-hole size of electric meter wiring based on multiple thresholds and an improved Hough transform" (基于多阈值和改进的Hough变换检测电表接线圆孔尺寸), 光电子·激光 (Journal of Optoelectronics·Laser), no. 10, 15 October 2018 (2018-10-15) *

Also Published As

Publication number Publication date
CN111968083B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN109886273B (en) CMR image segmentation and classification system
CN108665456B (en) Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence
CN110097559B (en) Fundus image focus region labeling method based on deep learning
CN111832416A (en) Motor imagery electroencephalogram signal identification method based on enhanced convolutional neural network
CN109410204B (en) Cortical cataract image processing and enhancing method based on CAM
CN112184657A (en) Pulmonary nodule automatic detection method, device and computer system
CN110400288B (en) Sugar network disease identification method and device fusing binocular features
CN110838100A (en) Colonoscope pathological section screening and segmenting system based on sliding window
CN114140465B (en) Self-adaptive learning method and system based on cervical cell slice image
CN112070158A (en) Facial flaw detection method based on convolutional neural network and bilateral filtering
CN113011340B (en) Cardiovascular operation index risk classification method and system based on retina image
CN109034012A (en) First person gesture identification method based on dynamic image and video sequence
CN114648806A (en) Multi-mechanism self-adaptive fundus image segmentation method
CN114881105A (en) Sleep staging method and system based on transformer model and contrast learning
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
CN116030396A (en) Accurate segmentation method for video structured extraction
CN115775226A (en) Transformer-based medical image classification method
CN110084796B (en) Analysis method of complex texture CT image
CN112017165A (en) Lacrimal river height detection method based on deep learning
WO2021139447A1 (en) Abnormal cervical cell detection apparatus and method
CN113793357A (en) Bronchopulmonary segment image segmentation method and system based on deep learning
Xia et al. Retinal vessel segmentation via a coarse-to-fine convolutional neural network
Sachdeva et al. Automatic segmentation and area calculation of optic disc in ophthalmic images
CN111968083B (en) Online tear film rupture time detection method based on deep learning
CN115457009A (en) Three-dimensional medical image segmentation method based on Transformer and convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant