CN112800815A - Sight direction estimation method based on deep learning - Google Patents
Sight direction estimation method based on deep learning
- Publication number
- CN112800815A CN112800815A CN201911108890.7A CN201911108890A CN112800815A CN 112800815 A CN112800815 A CN 112800815A CN 201911108890 A CN201911108890 A CN 201911108890A CN 112800815 A CN112800815 A CN 112800815A
- Authority
- CN
- China
- Prior art keywords
- face
- pupil
- eye
- model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Ophthalmology & Optometry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a sight line direction estimation method based on deep learning, which comprises the following steps: S1, inputting a human face; S2, detecting the face and facial feature points; S3, extracting eye images; S4, improving the resolution; S5, detecting pupil contours; S6, estimating the eyeball model; S7, calculating the pupil diameter and determining the pupil position. The invention relates to the technical field of video analysis. By adopting a deep-learning-based eye-movement and gaze recognition approach, the method handles the recognition difficulties caused by illumination and other visual artifacts, and makes it possible to learn quickly and promptly of key shifts and changes in a subject's visual attention. This enables a range of complex attention-analysis tasks, such as assisting a public security bureau or court in analyzing a suspect's state during interrogation and trial, targeted advertisement delivery, and customer preference analysis.
Description
Technical Field
The invention relates to the technical field of video analysis, in particular to a sight line direction estimation method based on deep learning.
Background
A sight line estimation algorithm estimates the sight line direction and eyeball position of the eyes in a picture or video, computing and returning high-precision coordinates of the eye center and vectors of the sight line direction; in video, this enables real-time tracking of the sight line. Sight line estimation technology lets a user learn promptly of the focus and changes of attention, helping to complete a series of complex attention-analysis tasks such as targeted advertisement delivery, online-education student state analysis, and lie detection.
The sight line direction is an important indicator of mental states such as concentration and fatigue. Common sight line estimation methods rely on a head-mounted eye tracker or are limited to close-range operation in front of a screen: they need high-definition eye images and a small operating range. Special operations, by contrast, involve many job types, large motions, and a wide moving range, so a sight line direction estimation technique suited to open scenes is needed.
Based on deep learning, a sight line direction estimation framework is designed around the face image and the scene image. On the one hand, the head pose supplies effective face-orientation information when the eyes are occluded or unclear, reducing the algorithm's dependence on high-definition eye images; on the other hand, salient-region information estimated from the scene further corrects the sight line direction at longer distances and under large poses.
Traditional feature-based and model-based gaze estimation methods have proven to perform well with professional cameras and fixed lighting, but in unconstrained real-world environments, illumination and other visual artifacts make modeling very difficult.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a sight line direction estimation method based on deep learning, which extracts the eye region from a face image and then applies a deep convolutional neural network model to estimate the sight line direction.
(II) technical scheme
To achieve the above objective, the invention is realized by the following technical scheme: a sight line direction estimation method based on deep learning, comprising the following steps:
S1, firstly, inputting a face image acquired under any scene;
S2, detecting the face with a face detection technique, and obtaining key feature points on the face image with a facial feature point detection technique;
S3, extracting eye images according to the facial key points, and raising the image resolution with an interpolation algorithm; the higher eye-region resolution improves the probability of correct recognition under occlusion, reflections, illumination, and other interference;
S4, detecting the elliptical contour of the pupil on the image with a pupil contour detection technique, and then constructing a human eye model;
S5, using the match between the pupil contour detected on the image in step S4 and the pupil contour projected onto the image plane by the eyeball model as the correction signal, training a suitable model with a deep convolutional neural network whose input is an eye image and whose output is the correct eyeball model; then, from the eye model constructed in step S4, calculating the pupil diameter r, the pupil position, and the position of the pupil within the eye region;
S6, acquiring key feature points on the face image with a facial feature point detection technique, and solving the head pose by combining standard three-dimensional facial feature points with the feature points detected on the face image;
and S7, converting the face image, according to the head pose solved in step S6, into a space with fewer degrees of freedom (DOF) that is more favorable for recognition; in that space, estimating the sight line direction with the deep convolutional neural network (DCNN) trained in step S5 combined with scene information; and finally outputting the sight line direction of the face in the three-dimensional world, represented by a vector g(x, y, z).
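The interpolation upscaling of the eye crop in step S3 can be sketched as a plain bilinear resampler. This is a minimal illustration in Python/NumPy; the patent does not fix the interpolation algorithm, and a production system would more likely call an optimized routine such as OpenCV's resize:

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Upscale a 2-D grayscale eye crop by an integer `factor`
    using bilinear interpolation."""
    h, w = img.shape
    nh, nw = h * factor, w * factor
    # Source-image sample coordinates for every target pixel
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical blend weights, shape (nh, 1)
    wx = (xs - x0)[None, :]   # horizontal blend weights, shape (1, nw)
    # Blend the four neighbouring source pixels
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Doubling the resolution of a low-quality eye crop this way gives the pupil contour detector of step S4 more pixels to work with; no new information is created — the patent's claim is only that the probability of correct recognition improves.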
Preferably, the face image acquired in any scene in step S1 is a frame of face video captured during special operations.
Preferably, the human eye model in step S4 includes the eyeball center position, the spherical coordinates (γ, δ) of the pupil on the eyeball, and the pupil size r.
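Given the eyeball model above — eyeball center, spherical pupil coordinates (γ, δ), and pupil size r — the 3-D sight line direction can be read directly off the sphere. The patent gives no formula, so the axis convention below (γ as the horizontal angle, δ as the vertical angle, +z toward the camera) is an assumption:

```python
import math

def gaze_vector(gamma, delta):
    """Unit sight-line direction from the pupil's spherical coordinates
    (gamma, delta) on the eyeball. Axis convention is assumed:
    gamma = horizontal (yaw) angle, delta = vertical (pitch) angle."""
    x = math.cos(delta) * math.sin(gamma)
    y = math.sin(delta)
    z = math.cos(delta) * math.cos(gamma)
    return (x, y, z)
```

With γ = δ = 0 the pupil faces straight ahead and the vector is (0, 0, 1); the result is always unit length, since cos²δ(sin²γ + cos²γ) + sin²δ = 1.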
Preferably, the position of the pupil in the eye region in step S5 is one of five positions: the upper, lower, left, right, or middle part of the eye region.
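The five-way pupil position above can be implemented as a simple zone test on the normalized pupil center within the eye bounding box. The patent names the five labels but not the zone boundaries, so the central-third threshold here is an assumption:

```python
def pupil_position(cx, cy, box):
    """Classify the pupil centre (cx, cy) into one of five zones of the
    eye bounding box (x, y, w, h). The central-third 'middle' zone is an
    assumed threshold; the patent only names the five labels."""
    x, y, w, h = box
    u = (cx - x) / w   # normalised horizontal position in [0, 1]
    v = (cy - y) / h   # normalised vertical position in [0, 1]
    if abs(u - 0.5) < 1 / 6 and abs(v - 0.5) < 1 / 6:
        return "middle"
    # Outside the centre zone, pick the dominant offset axis
    if abs(v - 0.5) >= abs(u - 0.5):
        return "upper" if v < 0.5 else "lower"
    return "left" if u < 0.5 else "right"
```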
Preferably, the feature points detected on the face image in step S6 are two-dimensional image coordinates, and in step S6 the head pose is solved using only the solvePnP algorithm.
Preferably, the face detection technique and the facial feature point detection technique in step S2 each use one of an ANN-model, SVM-model, or Adaboost-model face detection algorithm.
(III) advantageous effects
The invention provides a sight line direction estimation method based on deep learning. Compared with the prior art, it has the following beneficial effects. The method proceeds as described above: a face image acquired in any scene is input (S1); the face and its key feature points are detected (S2); eye images are extracted and their resolution raised by interpolation (S3); the elliptical pupil contour is detected and a human eye model constructed (S4); a deep convolutional neural network is trained, corrected by the match between the detected pupil contour and the contour projected by the eyeball model, taking an eye image as input and outputting the correct eyeball model (S5); the head pose is solved from standard three-dimensional facial feature points combined with the feature points detected on the image (S6); and the face image is converted, according to that head pose, into a lower-DOF space in which the sight line direction is estimated (S7). By adopting this deep-learning-based eye-movement and gaze recognition approach, the invention overcomes the difficulty that traditional feature-based sight line estimation faces in unconstrained real environments due to illumination and other visual artifacts, and achieves quick, timely awareness of key shifts and changes in a subject's visual attention. It can therefore carry out a series of complex attention-analysis tasks, with good application prospects in assisting a public security bureau or court with suspect state analysis during interrogation and trial, targeted advertisement delivery, customer preference analysis, online-education student state analysis, and the like.
Drawings
FIG. 1 is a flow chart of the eye movement information extraction of the present invention;
FIG. 2 is a flow chart of the operation of gaze direction estimation of the present invention;
fig. 3 is a schematic view of eight iris edge coordinates and iris center coordinates of the iterative fitting method of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to figs. 1-3, the embodiments of the present invention provide three technical solutions for the sight line direction estimation method based on deep learning, illustrated by the following embodiments:
example 1
S1, firstly, inputting a face image acquired in any scene, where the face image is a frame of face video captured during special operations;
S2, detecting the face with a face detection technique and obtaining key feature points on the face image with a facial feature point detection technique, where both techniques use an ANN-model face detection algorithm;
S3, extracting eye images according to the facial key points and raising the image resolution with an interpolation algorithm; the higher eye-region resolution improves the probability of correct recognition under occlusion, reflections, illumination, and other interference;
S4, detecting the elliptical contour of the pupil with a pupil contour detection technique and then constructing a human eye model comprising the eyeball center position, the spherical coordinates (γ, δ) of the pupil on the eyeball, and the pupil size r;
S5, using the match between the pupil contour detected in step S4 and the pupil contour projected onto the image plane by the eyeball model as the correction signal, training a suitable model with a deep convolutional neural network whose input is an eye image and whose output is the correct eyeball model; then, from the eye model constructed in step S4, calculating the pupil diameter r, the pupil position, and the position of the pupil within the eye region, here the upper part of the eye region;
S6, obtaining key feature points on the face image with a facial feature point detection technique and solving the head pose by combining standard three-dimensional facial feature points with the feature points detected on the face image, which are two-dimensional image coordinates; in this step the head pose is solved using only the solvePnP algorithm;
S7, converting the face image, according to the head pose solved in step S6, into a space with fewer degrees of freedom (DOF) more favorable for recognition; in that space, estimating the sight line direction with the deep convolutional neural network (DCNN) trained in step S5 combined with scene information; and finally outputting the sight line direction of the face in the three-dimensional world, represented by a vector g(x, y, z).
Example 2
S1, firstly, inputting a face image acquired in any scene, where the face image is a frame of face video captured during special operations;
S2, detecting the face with a face detection technique and obtaining key feature points on the face image with a facial feature point detection technique, where both techniques use an SVM-model face detection algorithm;
S3, extracting eye images according to the facial key points and raising the image resolution with an interpolation algorithm; the higher eye-region resolution improves the probability of correct recognition under occlusion, reflections, illumination, and other interference;
S4, detecting the elliptical contour of the pupil with a pupil contour detection technique and then constructing a human eye model comprising the eyeball center position, the spherical coordinates (γ, δ) of the pupil on the eyeball, and the pupil size r;
S5, using the match between the pupil contour detected in step S4 and the pupil contour projected onto the image plane by the eyeball model as the correction signal, training a suitable model with a deep convolutional neural network whose input is an eye image and whose output is the correct eyeball model; then, from the eye model constructed in step S4, calculating the pupil diameter r, the pupil position, and the position of the pupil within the eye region, here the left part of the eye region;
S6, obtaining key feature points on the face image with a facial feature point detection technique and solving the head pose by combining standard three-dimensional facial feature points with the feature points detected on the face image, which are two-dimensional image coordinates; in this step the head pose is solved using only the solvePnP algorithm;
S7, converting the face image, according to the head pose solved in step S6, into a space with fewer degrees of freedom (DOF) more favorable for recognition; in that space, estimating the sight line direction with the deep convolutional neural network (DCNN) trained in step S5 combined with scene information; and finally outputting the sight line direction of the face in the three-dimensional world, represented by a vector g(x, y, z).
Example 3
S1, firstly, inputting a face image acquired in any scene, where the face image is a frame of face video captured during special operations;
S2, detecting the face with a face detection technique and obtaining key feature points on the face image with a facial feature point detection technique, where both techniques use an Adaboost-model face detection algorithm;
S3, extracting eye images according to the facial key points and raising the image resolution with an interpolation algorithm; the higher eye-region resolution improves the probability of correct recognition under occlusion, reflections, illumination, and other interference;
S4, detecting the elliptical contour of the pupil with a pupil contour detection technique and then constructing a human eye model comprising the eyeball center position, the spherical coordinates (γ, δ) of the pupil on the eyeball, and the pupil size r;
S5, using the match between the pupil contour detected in step S4 and the pupil contour projected onto the image plane by the eyeball model as the correction signal, training a suitable model with a deep convolutional neural network whose input is an eye image and whose output is the correct eyeball model; then, from the eye model constructed in step S4, calculating the pupil diameter r, the pupil position, and the position of the pupil within the eye region, here the middle part of the eye region;
S6, obtaining key feature points on the face image with a facial feature point detection technique and solving the head pose by combining standard three-dimensional facial feature points with the feature points detected on the face image, which are two-dimensional image coordinates; in this step the head pose is solved using only the solvePnP algorithm;
S7, converting the face image, according to the head pose solved in step S6, into a space with fewer degrees of freedom (DOF) more favorable for recognition; in that space, estimating the sight line direction with the deep convolutional neural network (DCNN) trained in step S5 combined with scene information; and finally outputting the sight line direction of the face in the three-dimensional world, represented by a vector g(x, y, z).
By training a stacked-hourglass network on synthetic data, the invention can calibrate real, unconstrained eye images directly, estimating from the input eye image the eyelid coordinates, iris coordinates, eyeball center coordinates, and radius.
As shown in FIG. 3, the output can be fed directly into model-based or feature-based gaze estimation: eight iris edge coordinates and the iris center coordinate are predicted and matched by iterative fitting to derive the gaze direction.
By adopting the deep-learning-based eye-movement and gaze recognition method, the invention overcomes the difficulty that traditional feature-based sight line estimation encounters in unconstrained real environments due to illumination and other visual artifacts, and achieves quick, timely awareness of key shifts and changes in a subject's visual attention. It can therefore carry out a series of complex attention-analysis tasks and has good application prospects in assisting a public security bureau or court with suspect state analysis during interrogation and trial, targeted advertisement delivery, customer preference analysis, online-education student state analysis, and the like.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (6)
1. A sight line direction estimation method based on deep learning is characterized in that: the method specifically comprises the following steps:
S1, firstly, inputting a face image acquired under any scene;
S2, detecting the face with a face detection technique, and obtaining key feature points on the face image with a facial feature point detection technique;
S3, extracting eye images according to the facial key points, and raising the image resolution with an interpolation algorithm so as to improve the resolution of the eye region;
S4, detecting the elliptical contour of the pupil on the image with a pupil contour detection technique, and then constructing a human eye model;
S5, using the match between the pupil contour detected on the image in step S4 and the pupil contour projected onto the image plane by the eyeball model as the correction signal, training a suitable model with a deep convolutional neural network whose input is an eye image and whose output is the correct eyeball model; then, from the eye model constructed in step S4, calculating the pupil diameter r, the pupil position, and the position of the pupil within the eye region;
S6, acquiring key feature points on the face image with a facial feature point detection technique, and solving the head pose by combining standard three-dimensional facial feature points with the feature points detected on the face image;
and S7, converting the face image, according to the head pose solved in step S6, into a space with fewer degrees of freedom more favorable for recognition; in that space, estimating the sight line direction with the deep convolutional neural network trained in step S5 combined with scene information; and finally outputting the sight line direction of the face in the three-dimensional world, represented by a vector g(x, y, z).
2. The gaze direction estimation method based on deep learning according to claim 1, characterized in that: the face image acquired in any scene in step S1 is a frame of face video captured during special operations.
4. The gaze direction estimation method based on deep learning according to claim 1, characterized in that: the position of the pupil in the eye area in step S5 is one of the five positions of the upper part, the lower part, the left part, the right part or the middle part of the eye area.
5. The gaze direction estimation method based on deep learning according to claim 1, characterized in that: the feature points detected on the face image in step S6 are two-dimensional image coordinates, and in step S6 the head pose is solved using only the solvePnP algorithm.
6. The gaze direction estimation method based on deep learning according to claim 1, characterized in that: in step S2, both the face detection technique and the face feature point detection technique use one of an ANN model face detection algorithm, an SVM model face detection algorithm, or an Adaboost model face detection algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911108890.7A CN112800815A (en) | 2019-11-13 | 2019-11-13 | Sight direction estimation method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112800815A true CN112800815A (en) | 2021-05-14 |
Family
ID=75803572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911108890.7A Withdrawn CN112800815A (en) | 2019-11-13 | 2019-11-13 | Sight direction estimation method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112800815A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113743254A (en) * | 2021-08-18 | 2021-12-03 | 北京格灵深瞳信息技术股份有限公司 | Sight estimation method, sight estimation device, electronic equipment and storage medium |
CN113743254B (en) * | 2021-08-18 | 2024-04-09 | 北京格灵深瞳信息技术股份有限公司 | Sight estimation method, device, electronic equipment and storage medium |
CN114037738A (en) * | 2021-11-18 | 2022-02-11 | 湖北三闾智能科技有限公司 | Human vision-driven upper limb auxiliary robot control method |
CN114240737A (en) * | 2021-12-14 | 2022-03-25 | 北京构力科技有限公司 | Method, apparatus, device and medium for generating digital model from drawings |
CN114706484A (en) * | 2022-04-18 | 2022-07-05 | Oppo广东移动通信有限公司 | Sight line coordinate determination method and device, computer readable medium and electronic equipment |
WO2023231479A1 (en) * | 2022-06-01 | 2023-12-07 | 同方威视科技江苏有限公司 | Pupil detection method and apparatus, and storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20210514 |