CN109034051A - Human-eye positioning method - Google Patents

Human-eye positioning method

Info

Publication number
CN109034051A
CN109034051A (application CN201810816204.0A)
Authority
CN
China
Prior art keywords
eyes
image
window
human
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810816204.0A
Other languages
Chinese (zh)
Inventor
崔志斌
陈宝远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN201810816204.0A
Publication of CN109034051A
Legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/197 - Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Human-eye positioning method. Existing human-eye positioning methods locate faces inaccurately in pictures with cluttered backgrounds or tilted faces. Template matching achieves higher accuracy, but its computation is complex because the fractal dimension must be calculated repeatedly. The method of the invention first obtains the template and the feature space used for deciding eyes and determines the faces used for verification, obtaining eigenfaces with the Karhunen-Loeve transform. Possible eye points are then detected in an image. All candidate eye points on the image are combined; for each pair, an eye window is selected from the original image and matched against the template of step 1, and pairs that meet the matching condition are taken as candidate eye pairs. The face region determined by each candidate eye pair is projected onto the eigenface space, its coefficient vector is found, the image is reconstructed from the coefficient vector, and comparing the original image with the reconstruction verifies whether the hypothesized eye pair is correct. The invention does not need to scale the input image repeatedly to obtain the size of each object, which greatly reduces the computational complexity in two respects.

Description

Human-eye positioning method
Technical field:
The present invention relates to a human-eye positioning method.
Background technique:
Eyes are the most salient organs of the human face and contain many useful features. Once the eyes have been extracted from a given image, the face needed for recognition can be obtained from the relationship between the eyes and the face, and other facial features can then be extracted. Eye localization is therefore often the first step of a face recognition system, and it is especially important for a high-performance automatic face recognition system. Existing eye localization methods usually require a large amount of computation. One approach binarizes the input image and takes the largest black region as the hair region; the lower boundary line of this region, fitted in two directions and combined with the symmetry of the eyes, gives the approximate eye positions, and the eyeball, eye corners, and upper and lower eyelids can be extracted with the help of an edge map to improve recognition efficiency. This method, however, requires the face to occupy most of the picture, the background color to be uniform, and the line between the two eyes to be roughly horizontal. Another approach projects the grayscale image horizontally: a trough appears at the hair position, and the part of the image above the trough is removed, eliminating the influence of hair; the remaining image is projected vertically, two small troughs appear at the eye positions, and combining them with the horizontal projection gives the approximate eye locations. A further approach binarizes the acquired image to obtain the approximate eye positions, uses the Hough transform to find circles in the binary image as hypothesized eyes, and checks the intensity contrast between the eyeball and its surroundings to decide whether a circle really is an eye. Template matching methods build an eye template and take the region of the image that best matches the template as the eye region. The common drawback of these methods is that detection is often inaccurate for pictures with cluttered backgrounds and tilted faces. Template matching achieves higher accuracy, but its computation is complex because the fractal dimension must be calculated repeatedly.
Summary of the invention:
The object of the present invention is to provide a human-eye positioning method.
The above object is achieved by the following technical scheme:
The human-eye positioning method first locates the eyes approximately, then determines the face region based on template matching and verifies the hypothesized eyes. The human-eye positioning method is realized by the following steps:
Step 1, preparation: obtain the template and the feature space used for deciding eyes, and determine the faces used during verification; principal component analysis is used in the verification process, and eigenfaces are obtained with the Karhunen-Loeve transform.
Step 2, detect the possible eye points in an image; these eye points appear as troughs in image space.
Step 3, combine all the candidate eye points on the image according to established criteria; for each pair of eyes, select an eye window from the original image and match it against the template of step 1; pairs that meet the matching condition are decided to be candidate eye pairs.
Step 4, verify the eye pairs: project the face region determined by each candidate eye pair onto the eigenface space, find its coefficient vector, reconstruct the image from the coefficient vector, and compare the original image with the reconstruction to verify whether the hypothesized eye pair is correct.
Advantageous effects:
This algorithm combines several existing eye localization techniques, including the gray-level analysis method based on a binary map and the template matching method; in the confirmation stage the Karhunen-Loeve transform is used to obtain a reconstructed image, and the reconstruction is compared with the original to check the accuracy of the localization. Compared with methods based on gray-level analysis, this method is more complex, but it places fewer demands on the input image and can accurately locate faces in pictures with complex backgrounds or tilted faces. Compared with methods based on templates, this method uses a pre-location step, so it does not need to search the input image exhaustively; only the specific windows that may contain eyes have to be matched. Because the size of the object is known before matching, the input image does not have to be scaled repeatedly to obtain the size of each object. In these two respects the computational complexity is greatly reduced.
Specific embodiment:
Specific embodiment 1:
The human-eye positioning method of this embodiment first locates the eyes approximately, then determines the face region based on template matching and verifies the hypothesized eyes. The human-eye positioning method is realized by the following steps:
Step 1, preparation: obtain the template and the feature space used for deciding eyes, and determine the faces used during verification; principal component analysis is used in the verification process, and eigenfaces are obtained with the Karhunen-Loeve transform.
Step 2, detect the possible eye points in an image; these eye points appear as troughs in image space.
Step 3, combine all the candidate eye points on the image according to established criteria; for each pair of eyes, select an eye window from the original image and match it against the template of step 1; pairs that meet the matching condition are decided to be candidate eye pairs.
Step 4, verify the eye pairs: project the face region determined by each candidate eye pair onto the eigenface space, find its coefficient vector, reconstruct the image from the coefficient vector, and compare the original image with the reconstruction to verify whether the hypothesized eye pair is correct.
Specific embodiment 2:
Unlike specific embodiment 1, in the human-eye positioning method of this embodiment the process of obtaining the template and the feature space for deciding eyes in step 1 is as follows. The eye template used in the method has a fixed size; although faces in real images differ in size and eye windows differ accordingly, once the eye positions are given the eye window can be determined and the input eye window can be scaled to the template size for matching as required. The eye template is constructed by averaging multiple face samples: standard ID photographs are selected and the face regions are marked by hand as face samples, from which the eyes are then taken; or the eyes of the standard photographs are located automatically or manually, and the eye window and the face region are determined from the distance between the two eyes. The eye points are given by hand, and the system intercepts the eye window and normalizes the face region: the face region is normalized to a unified size of 32 × 32 and the eye region to 32 × 8. Scaling uses the resampling method based on linear interpolation and is followed by mean-variance normalization of the gray levels. The eye windows so obtained are averaged to produce the eye template. The feature vectors of the face region, i.e. the eigenfaces, are generated with the Karhunen-Loeve transform; after the feature vectors are obtained they are saved, and the feature vectors corresponding to the 10 largest eigenvalues are chosen to form the feature space.
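As a concrete illustration, the preparation step can be sketched as follows. This is a minimal NumPy sketch: the 32 × 32 face size, 32 × 8 eye size, mean-variance normalization, averaging, and the 10-eigenvector cut-off follow the text, while the function names and the use of SVD to carry out the Karhunen-Loeve transform are implementation choices, not taken from the patent.

```python
import numpy as np

def normalize(img):
    """Mean-variance normalization of the gray levels, as in the text."""
    img = img.astype(np.float64)
    std = img.std()
    return (img - img.mean()) / (std if std > 0 else 1.0)

def build_eye_template(eye_windows):
    """Average several normalized 8-row x 32-column eye windows
    into a single fixed-size eye template."""
    return np.mean([normalize(w) for w in eye_windows], axis=0)

def build_eigenface_space(face_samples, k=10):
    """Karhunen-Loeve transform of the 32x32 face samples: keep the
    eigenvectors belonging to the k largest eigenvalues as the
    feature space (here obtained via SVD of the centered data)."""
    X = np.stack([normalize(f).ravel() for f in face_samples])  # (n, 1024)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    U = Vt[:k].T                      # (1024, k) eigenface basis
    return mean, U
```

The SVD route avoids forming the 1024 × 1024 covariance matrix explicitly while yielding the same K-L basis.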
Specific embodiment 3:
Unlike specific embodiment 1 or 2, in the human-eye positioning method of this embodiment the process of detecting the possible eye points in an image in step 2 is as follows. A possible eye point is a candidate eyeball. The eyeball is the part of the face with the smallest gray value, and it contrasts clearly with the surrounding region, so on the gray-level function f(x, y) of image space a trough appears at the eye position. The trough region can be detected with an average gradient operator. The input image is normalized to fixed mean and variance to eliminate the influence of illumination. To reduce the number of points to be tested, the image is first binarized; in the binary map the eyes and the hair are black, and the clothes of the background or of the person may also be black. Because the intensity contrast between the eyes and the surrounding eye sockets is large, the gradient varies strongly near the eyes. Two criteria are therefore set up from the gray value and the gray-level variation of the image, and points satisfying both can serve as eye candidate points; here f_binary(x, y) denotes the gray level in the binary map, f(x, y) denotes the gray value of the original image, and φ_v(x, y) reflects the gray-level variation around a point. If the gray value of a point is small (i.e. black after binarization, gray value 0) but the surrounding gradient varies little (φ_v(x, y) is small), the point probably lies in the hair, the background, or the clothes. The points of the binary map that satisfy the criteria are kept, the gray value of the remaining points is set to 255, and several possible eye regions are obtained.
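The two-criteria candidate test can be sketched in NumPy as below. The thresholds `black_thresh` and `grad_thresh`, the box radius `r`, and the use of a cumulative-sum box filter as the average gradient operator φ_v are assumptions for illustration; the patent gives no numeric values.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1) x (2r+1) box, via padded cumulative sums."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def eye_candidates(gray, black_thresh=100, grad_thresh=0.5, r=2):
    """Step-2 test: a pixel is an eye candidate if it is black after
    binarization AND the average gradient phi_v around it is large,
    which rejects flat dark areas such as hair or dark clothes."""
    g = gray.astype(np.float64)
    g = (g - g.mean()) / (g.std() + 1e-9)   # mean/variance normalization
    is_black = gray < black_thresh          # binary map: eyes, hair -> black
    gy, gx = np.gradient(g)
    phi_v = box_mean(np.hypot(gx, gy), r)   # average gradient operator
    return is_black & (phi_v > grad_thresh)
```

A small dark blob on a bright background passes both tests, while the interior of a large dark region (hair-like) is black but has near-zero average gradient and is rejected.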
Specific embodiment 4:
Unlike specific embodiment 3, in the human-eye positioning method of this embodiment the process of deciding candidate eye pairs in step 3 is as follows. The candidate points obtained in step 2 are combined two by two; a pair combined in this way is called a candidate eye pair. If N candidate points are obtained, there are N(N-1)/2 theoretical candidate eye pairs, a computational load that cannot be tolerated. According to the actual situation, the following cases are therefore excluded from consideration: (1) The distance between the two points is too large. For a face picture without background, the distance between the two eye centers does not exceed 1/2 of the picture width; otherwise the picture cannot contain complete face information. (2) The distance between the two points is too small. Even if the two points do represent eyes, the face region they determine is so small that the information it provides is insufficient for face recognition. (3) The angle between the line joining the two points and the horizontal exceeds 45 degrees. Although one advantage of the method described here is the localization of tilted faces, ordinary input pictures are not tilted very much; excluding large tilts has little effect on the overall localization and increases the operating speed of the system.
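The three pruning rules above can be sketched as follows. The minimum-distance value `min_dist` and the function names are illustrative assumptions; the patent fixes only the half-width bound and the 45-degree limit.

```python
import numpy as np
from itertools import combinations

def candidate_pairs(points, img_width, min_dist=8.0):
    """Combine candidate points two by two and drop pairs that are
    (1) farther apart than half the picture width, (2) too close to
    determine a usable face region, or (3) tilted more than 45 degrees
    from the horizontal."""
    kept = []
    for (y1, x1), (y2, x2) in combinations(points, 2):
        dx, dy = x2 - x1, y2 - y1
        dist = float(np.hypot(dx, dy))
        ang = abs(np.degrees(np.arctan2(dy, dx)))
        ang = min(ang, 180.0 - ang)           # angle to the horizontal
        if dist > img_width / 2 or dist < min_dist or ang > 45.0:
            continue
        kept.append(((y1, x1), (y2, x2)))
    return kept
```

This reduces the N(N-1)/2 combinations to the few pairs worth matching against the template.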
After excluding the above cases, several candidate eye pairs are obtained. From the coordinates of the two eyes, the length l of the line between the two points and the angle between the line and the horizontal are calculated, and the image is rotated to make the line horizontal. Taking l as the standard, a window of size 2l × (l/2) centered on the midpoint of the line is intercepted and scaled to 32 × 8 with the resampling method based on linear interpolation. The eye window obtained is matched against the eye template generated in step 1. Let the gray-level function of the eye template be f0(x, y), 0 ≤ x ≤ M, 0 ≤ y ≤ N, with mean μ0 and variance σ0, and let the gray-level function of the input eye window be f(x, y), 0 ≤ x ≤ M, 0 ≤ y ≤ N, with mean μ and variance σ. The correlation coefficient r(f0, f) between the two and the average deviation d(f0, f) of the gray values of corresponding pixels are then calculated.
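The exact formulas for r(f0, f) and d(f0, f) appear only in a figure that is not reproduced here; a sketch using the standard definitions (the Pearson correlation coefficient of the gray values, and the mean absolute deviation of corresponding pixels) would be:

```python
import numpy as np

def match_scores(template, window):
    """Correlation coefficient r(f0, f) and average gray-value
    deviation d(f0, f) between the eye template f0 and an input
    eye window f of the same size (both 32 x 8 after scaling).
    Standard definitions; the patent's figure may differ in detail."""
    f0 = template.astype(np.float64)
    f = window.astype(np.float64)
    r = ((f0 - f0.mean()) * (f - f.mean())).mean() / (f0.std() * f.std() + 1e-12)
    d = np.abs(f0 - f).mean()
    return r, d
```

A pair would be accepted as a candidate eye pair when r is high and d is low, against thresholds chosen empirically.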
Specific embodiment 5:
Unlike specific embodiment 4, in the human-eye positioning method of this embodiment the eye pairs decided in step 3 are confirmed further. The basis of the confirmation is the basic structure of the face: as a special recognition object, the face has fixed relations between its components and between the components and the whole. Once a candidate eye pair is determined, the entire face region can be marked, because the size of the face and the distance between the two eyes stand in a statistically established proportion. In the same way as the eye window is obtained in step 3, the input image is rotated according to the tilt of the eye pair, and a window of 2l × 2l is intercepted from the image as the face region; the distance from the eyes to the upper edge of the window is 0.5l, the distances to the left and right edges are each 0.5l, and the distance to the lower edge is 1.5l. The window obtained is scaled to 32 × 32, the same dimension as the feature vectors established in step 1. Because the Karhunen-Loeve transform is sensitive to illumination, the face image is normalized to fixed mean and variance. By projecting onto the eigenvector space, the coefficient vector of the face image f is found: y = U^T f, and the image is reconstructed from the coefficient vector. The reconstruction is compared with the original; if the difference is large, the input window is not a face window. If the difference lies within a certain range, the eyes corresponding to the eye window can be judged to be valid, i.e. the eye pair is determined. The difference between the reconstructed image and the original is measured by the signal-to-noise ratio of the reconstructed image; when the signal-to-noise ratio is below a threshold, the image can be judged not to be a face image.
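A sketch of this verification step is given below. The SNR expression (10·log10 of signal power over residual power) and the threshold value are assumptions, since the patent gives the SNR formula only as a figure; the projection y = U^T f and reconstruction from the coefficient vector follow the text.

```python
import numpy as np

def verify_face(face32, mean, U, snr_thresh=10.0):
    """Project the normalized 32x32 face window onto the eigenface
    space, reconstruct it, and accept the eye pair only if the
    reconstruction SNR (assumed form, in dB) exceeds a threshold."""
    f = face32.astype(np.float64).ravel()
    f = (f - f.mean()) / (f.std() + 1e-9)   # illumination normalization
    fc = f - mean                           # subtract the trained mean face
    y = U.T @ fc                            # coefficient vector y = U^T f
    rec = U @ y                             # reconstruction from coefficients
    noise = np.sum((fc - rec) ** 2) + 1e-12
    snr = 10.0 * np.log10(np.sum(fc ** 2) / noise + 1e-12)
    return snr >= snr_thresh, snr
```

A window that really lies near the face subspace reconstructs with little residual (high SNR); an arbitrary non-face window loses most of its energy in the projection and is rejected.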

Claims (5)

1. A human-eye positioning method, characterized in that: after the eyes are located approximately, the face region is determined based on template matching and the hypothesized eyes are verified; the human-eye positioning method is realized by the following steps:
Step 1, preparation: obtain the template and the feature space used for deciding eyes, and determine the faces used during verification; principal component analysis is used in the verification process, and eigenfaces are obtained with the Karhunen-Loeve transform;
Step 2, detect the possible eye points in an image; these eye points appear as troughs in image space;
Step 3, combine all the candidate eye points on the image according to established criteria; for each pair of eyes, select an eye window from the original image and match it against the template of step 1; pairs that meet the matching condition are decided to be candidate eye pairs;
Step 4, verify the eye pairs: project the face region determined by each candidate eye pair onto the eigenface space, find its coefficient vector, reconstruct the image from the coefficient vector, and compare the original image with the reconstruction to verify whether the hypothesized eye pair is correct.
2. The human-eye positioning method according to claim 1, characterized in that: the process of obtaining the template and the feature space for deciding eyes in step 1 is as follows: the eye template is constructed by averaging multiple face samples; standard ID photographs are selected and the face regions are marked by hand as face samples, from which the eyes are then taken; or the eyes of the standard photographs are located automatically or manually, and the eye window and the face region are determined from the distance between the two eyes; the eye points are given by hand, and the system intercepts the eye window and normalizes the face region, the face region being normalized to a unified size of 32 × 32 and the eye region to 32 × 8; scaling uses the resampling method based on linear interpolation and is followed by mean-variance normalization of the gray levels; the eye windows so obtained are averaged to produce the eye template; the feature vectors of the face region, i.e. the eigenfaces, are generated with the Karhunen-Loeve transform; after the feature vectors are obtained they are saved, and the feature vectors corresponding to the 10 largest eigenvalues are chosen to form the feature space.
3. The human-eye positioning method according to claim 1 or 2, characterized in that: the process of detecting the possible eye points in an image in step 2 is as follows: a possible eye point is a candidate eyeball; the eyeball is the part of the face with the smallest gray value and contrasts clearly with the surrounding region, so on the gray-level function f(x, y) of image space a trough appears at the eye position; the trough region can be detected with an average gradient operator; the input image is normalized to fixed mean and variance to eliminate the influence of illumination, and is first binarized; in the binary map the eyes and the hair are black, and the clothes of the background or of the person may also be black; two criteria set up from the gray value and the gray-level variation of the image are applied, and points satisfying both can serve as eye candidate points, where f_binary(x, y) denotes the gray level in the binary map, f(x, y) denotes the gray value of the original image, and the average gradient reflects the gray-level variation around a point; if the gray value of a point is small but the surrounding gradient varies little, the point probably lies in the hair, the background, or the clothes; the points of the binary map that satisfy the criteria are kept, the gray value of the remaining points is set to 255, and several possible eye regions are obtained.
4. The human-eye positioning method according to claim 3, characterized in that: the process of deciding candidate eye pairs in step 3 is as follows: the candidate points obtained in step 2 are combined two by two, and a pair combined in this way is called a candidate eye pair; if N candidate points are obtained, there are N(N-1)/2 theoretical candidate eye pairs; after exclusion, several candidate eye pairs are obtained; from the coordinates of the two eyes, the length l of the line between the two points and the angle between the line and the horizontal are calculated, and the image is rotated to make the line horizontal; taking l as the standard, a window of size 2l × (l/2) centered on the midpoint of the line is intercepted and scaled to 32 × 8 with the resampling method based on linear interpolation; the eye window obtained is matched against the eye template generated in step 1; the gray-level function of the eye template is f0(x, y), 0 ≤ x ≤ M, 0 ≤ y ≤ N, with mean μ0 and variance σ0, and the gray-level function of the input eye window is f(x, y), 0 ≤ x ≤ M, 0 ≤ y ≤ N, with mean μ and variance σ; the correlation coefficient r(f0, f) between the two and the average deviation d(f0, f) of the gray values of corresponding pixels are then calculated.
5. The human-eye positioning method according to claim 1, 2 or 4, characterized in that: the process of verifying the eye pairs is as follows: in this step, the eye pairs decided in step 3 are confirmed further; the basis of the confirmation is the basic structure of the face; as a special recognition object, the face has fixed relations between its components and between the components and the whole; once a candidate eye pair is determined, the entire face region can be marked, because the size of the face and the distance between the two eyes stand in a statistically established proportion; in the same way as the eye window is obtained in step 3, the input image is rotated according to the tilt of the eye pair and a window of 2l × 2l is intercepted from the image as the face region; the distance from the eyes to the upper edge of the window is 0.5l, the distances to the left and right edges are each 0.5l, and the distance to the lower edge is 1.5l; the window obtained is scaled to 32 × 32, the same dimension as the feature vectors established in step 1; because the Karhunen-Loeve transform is sensitive to illumination, the face image is normalized to fixed mean and variance; by projecting onto the eigenvector space, the coefficient vector of the face image f is found: y = U^T f, and the image is reconstructed from the coefficient vector; the reconstruction is compared with the original, and if the difference is large the input window is not a face window; if the difference lies within a certain range, the eyes corresponding to the eye window can be judged to be valid, i.e. the eye pair is determined; the difference between the reconstructed image and the original is measured by the signal-to-noise ratio of the reconstructed image, and when the signal-to-noise ratio is below a threshold the image can be judged not to be a face image.
CN201810816204.0A 2018-07-24 2018-07-24 Human-eye positioning method Pending CN109034051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810816204.0A CN109034051A (en) 2018-07-24 2018-07-24 Human-eye positioning method


Publications (1)

Publication Number Publication Date
CN109034051A true CN109034051A (en) 2018-12-18

Family

ID=64645422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810816204.0A Pending CN109034051A (en) 2018-07-24 2018-07-24 Human-eye positioning method

Country Status (1)

Country Link
CN (1) CN109034051A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6678389B1 (en) * 1998-12-29 2004-01-13 Kent Ridge Digital Labs Method and apparatus for embedding digital information in digital multimedia data
CN102194110A (en) * 2011-06-10 2011-09-21 淮海工学院 Eye positioning method in human face image based on K-L (Karhunen-Loeve) transform and nuclear correlation coefficient
CN105205480A (en) * 2015-10-31 2015-12-30 潍坊学院 Complex scene human eye locating method and system
CN105512630A (en) * 2015-12-07 2016-04-20 天津大学 Human eyes detection and positioning method with near real-time effect
CN106127160A (en) * 2016-06-28 2016-11-16 上海安威士科技股份有限公司 A kind of human eye method for rapidly positioning for iris identification


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘源: "Human-eye positioning method based on a template matching algorithm" (基于模板匹配算法的人眼定位方法), 《火力与指挥控制》 (Fire Control & Command Control) *
郁洪强 et al.: "Face position detection method based on the signal-to-noise-ratio feature of reconstructed images" (基于重建图像信噪比特征的脸部位置检测方法), 《北京生物医学工程》 (Beijing Biomedical Engineering) *
马桂英: "Research on face template matching algorithms based on eye feature localization" (基于眼睛特征定位的人脸模板匹配算法研究), 《信息技术》 (Information Technology) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543843A (en) * 2019-08-23 2019-12-06 北京工业大学 Human eye positioning and size calculation algorithm based on forward oblique projection and backward oblique projection
CN110543843B (en) * 2019-08-23 2023-12-15 北京工业大学 Human eye positioning and size calculating algorithm based on forward oblique projection and backward oblique projection
CN111160291A (en) * 2019-12-31 2020-05-15 上海易维视科技有限公司 Human eye detection method based on depth information and CNN
CN111160291B (en) * 2019-12-31 2023-10-31 上海易维视科技有限公司 Human eye detection method based on depth information and CNN

Similar Documents

Publication Publication Date Title
US11775056B2 (en) System and method using machine learning for iris tracking, measurement, and simulation
CN103093215B (en) Human-eye positioning method and device
KR100682889B1 (en) Method and Apparatus for image-based photorealistic 3D face modeling
KR101007276B1 (en) Three dimensional face recognition
KR0158038B1 (en) Apparatus for identifying person
CN105740780B (en) Method and device for detecting living human face
US20040223630A1 (en) Imaging of biometric information based on three-dimensional shapes
CN109409190A (en) Pedestrian detection method based on histogram of gradients and Canny edge detector
Puhan et al. Efficient segmentation technique for noisy frontal view iris images using Fourier spectral density
CN110175558A (en) A kind of detection method of face key point, calculates equipment and storage medium at device
JP6532642B2 (en) Biometric information authentication apparatus and biometric information processing method
JP4999731B2 (en) Face image processing device
Fukui et al. Facial feature point extraction method based on combination of shape extraction and pattern matching
CN101246544A (en) Iris locating method based on boundary point search and SUSAN edge detection
CN106203329B (en) A method of identity template is established based on eyebrow and carries out identification
CN107256410B (en) Fundus image classification method and device
JP2008204200A (en) Face analysis system and program
CN101533466A (en) Image processing method for positioning eyes
CN104036299B (en) A kind of human eye contour tracing method based on local grain AAM
JP4952267B2 (en) Three-dimensional shape processing apparatus, three-dimensional shape processing apparatus control method, and three-dimensional shape processing apparatus control program
CN109034051A (en) Human-eye positioning method
Conde et al. Automatic 3D face feature points extraction with spin images
CN106778491B (en) The acquisition methods and equipment of face 3D characteristic information
CN113436735A (en) Body weight index prediction method, device and storage medium based on face structure measurement
JP2004086929A5 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181218