CN109858342A - Face pose estimation method fusing a hand-crafted descriptor and deep features - Google Patents

Face pose estimation method fusing a hand-crafted descriptor and deep features

Info

Publication number
CN109858342A
CN109858342A
Authority
CN
China
Prior art keywords
deep feature
face
hand-crafted
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811580405.1A
Other languages
Chinese (zh)
Other versions
CN109858342B (en)
Inventor
赖剑煌
欧阳柳
吴卓亮
谢晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201811580405.1A priority Critical patent/CN109858342B/en
Publication of CN109858342A publication Critical patent/CN109858342A/en
Application granted granted Critical
Publication of CN109858342B publication Critical patent/CN109858342B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face pose estimation method that fuses a hand-crafted descriptor with deep features, applicable to face image quality evaluation for security surveillance and face recognition. The method uses the SIFT descriptor to extract the contour and local information of a face image, and the DeepID deep network to extract its appearance and structural information. The main steps are: train a deep neural network model for extracting deep features; input a detected face image and extract its deep features with the trained network; extract a SIFT feature vector with scale-space invariance; concatenate the SIFT features and deep features of the face image, feed the result to a trained SVM classifier, and determine the pose class of the face. The invention performs face pose estimation effectively and improves estimation accuracy.

Description

Face pose estimation method fusing a hand-crafted descriptor and deep features
Technical field
The present invention relates to the research fields of face recognition and face image quality evaluation, and in particular to a face pose estimation method fusing a hand-crafted descriptor and deep features.
Background technique
In face recognition applications and security surveillance, the uncertainty of image capture environments and conditions makes the quality of collected face images difficult to control; image quality varies widely, which affects face recognition performance. Face pose in particular is complex and variable, making face keypoint detection difficult, especially when the face is severely deflected. It is therefore necessary to predict and estimate the pose of faces in images so as to control the influence of pose on recognition performance. Accurately and rapidly estimating face pose in images remains an active research topic.
Existing face image evaluation methods fall broadly into two classes: face image quality evaluation based on face matching, and methods based on multiple attribute evaluations.
Face image quality evaluation based on face matching computes the similarity between two face images; the matching similarity can serve as a measure of face image quality. This is a full-reference evaluation method that requires information from a reference image and therefore places stricter requirements on the input data.
Methods based on multiple attribute evaluations extract features and attributes from an image, evaluate the corresponding indices, and weigh them into a comprehensive face image quality score. In practice, reference images are costly to obtain, so attribute-based methods have greater practical significance.
In most application scenarios, camera angles and subject movement make it difficult to capture frontal faces; the horizontal rotation angle of a face varies greatly, and profile views degrade face recognition. Estimating the horizontal deflection angle is therefore meaningful.
As the foregoing analysis shows, face pose is an important attribute in face quality evaluation, and pose estimation is a research focus of attribute-based face quality assessment methods. Realizing efficient and accurate face pose estimation is a key problem.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a face pose estimation method that fuses a hand-crafted descriptor with deep features; the method performs face pose estimation effectively and improves estimation accuracy.
The purpose of the present invention is realized by the following technical solution: a face pose estimation method fusing a hand-crafted descriptor and deep features, comprising the steps of:
S1: perform face detection on the image;
S2: filter the detected face image to remove noise;
S3: extract SIFT (Scale-Invariant Feature Transform) descriptor features of the face image as the hand-crafted descriptor, and extract deep features describing the face;
S4: effectively fuse the hand-crafted descriptor and the deep features;
S5: train on the fused features to generate a face pose classifier;
S6: use the face pose classifier to estimate the face pose in an image or video frame.
The features used by the present invention include a hand-crafted descriptor and deep features. The hand-crafted descriptor uses SIFT to extract the contour and local information of the face image, efficiently capturing the contour structure of the face pose. The deep features exploit the powerful learning capacity of convolutional neural networks to extract effective appearance features and structural information of the face. Fusing the multiple features balances the weighting between the hand-crafted descriptor and the deep features and makes full use of the effective information in each, thereby substantially improving accuracy.
In an exemplary embodiment of the present invention, step S1 performs face detection on the image using the faceDetector in OpenCV. Subsequent steps proceed only for images in which face detection confirms the presence of a face.
In an exemplary embodiment of the present invention, step S2 applies Gaussian smoothing to the detected face image as follows:
(2-1) compute the Gaussian filter G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)), where (x, y) are image coordinates and σ is the standard deviation;
(2-2) apply the Gaussian filter to the detected face image to remove noise by Gaussian smoothing.
In an exemplary embodiment of the present invention, in step S3 the SIFT descriptor is used to extract the contour and local information of the face image; this information serves as the hand-crafted descriptor.
In an exemplary embodiment of the present invention, in step S3 DeepID (Deep hidden IDentity feature) deep features are extracted.
In an exemplary embodiment of the present invention, in step S4 the hand-crafted descriptor features and the deep features are concatenated to obtain the fused feature.
In an exemplary embodiment of the present invention, in step S5 a support vector machine (SVM) is trained on the fused features to generate the face pose classifier, comprising: collecting a large number of face images with their corresponding poses as labels, and training the SVM on the fused features extracted from the images to generate the face pose classifier.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The method effectively chains together face detection, hand-crafted descriptor extraction, deep face feature extraction, feature fusion, and face pose classification, concatenating the features into a single fused feature vector to realize face pose estimation. Pose estimation with such multi-feature fusion improves estimation accuracy.
2. The present invention takes the face image region as the target region for pose estimation, eliminating the influence of the background region on image quality and face pose assessment.
3. The face pose classifier established by the present invention is easy to update and upgrade: if a new feature needs to be introduced, the change to the existing classifier is small, facilitating iterative updates of the classifier.
Brief description of the drawings
Specific embodiments of the present invention are described below with reference to the accompanying drawings, in which:
Fig. 1 is a flow diagram of the face pose estimation method fusing a hand-crafted descriptor and deep features provided by an embodiment of the present invention.
Fig. 2 shows examples from the face image dataset used by the present invention, comprising 13 horizontal deflection poses at 15° intervals.
Fig. 3 is a schematic diagram of the basic flow of the face pose estimation method based on fusing a hand-crafted descriptor and deep features.
Specific embodiments
To make the technical solution and advantages of the present invention clearer, exemplary embodiments of the invention are described in more detail below with reference to the accompanying drawings.
Obviously, the described embodiments are merely a part of the embodiments of the present invention, not an exhaustion of all embodiments. Moreover, in the absence of conflict, the embodiments in this description and the features therein may be combined with each other.
Embodiment
As shown in Figs. 1 and 3, the face pose estimation method of this embodiment, which fuses a hand-crafted descriptor and deep features, mainly comprises the steps of:
S1: input a face image or video frame. When training the face pose classifier, sample face images are used for training; in practical application, the input is the image to be estimated.
S2: face detection
This step determines whether a face exists in the image and locates the face region, because the subsequent pose prediction steps are carried out only on the premise that face detection succeeds; if the image fails face detection, the subsequent steps are unnecessary. In practical application, face detection is performed on the image using the faceDetector in OpenCV.
In this embodiment, Gaussian smoothing can be applied to the detected face image to remove noise, as follows:
(2-1) compute the Gaussian filter G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)); under normal circumstances a 5×5 Gaussian filter with standard deviation σ = 0.5 is generated and normalized to obtain the Gaussian filter template.
(2-2) apply the Gaussian filter to the detected face image to remove noise by Gaussian smoothing: I′ = G ∗ I, where I′ denotes the filtered image, I the original image, and G the Gaussian filter.
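Steps (2-1) and (2-2) can be sketched in NumPy as follows. The 5×5 size and σ = 0.5 come from the embodiment; the edge-replication padding and direct summation are implementation assumptions (the kernel is symmetric, so correlation equals convolution here).

```python
import numpy as np

def gaussian_kernel(size=5, sigma=0.5):
    """Step (2-1): sample G(x, y) = 1/(2*pi*sigma^2) * exp(-(x^2+y^2)/(2*sigma^2))
    on a size x size grid centred at the origin, then normalise to sum to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalised filter template

def gaussian_smooth(image, kernel):
    """Step (2-2): I' = G * I by direct 2-D convolution with edge replication."""
    half = kernel.shape[0] // 2
    padded = np.pad(image.astype(float), half, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2*half + 1, j:j + 2*half + 1] * kernel)
    return out

G = gaussian_kernel()
noisy = np.random.default_rng(0).normal(128.0, 20.0, size=(32, 32))
smoothed = gaussian_smooth(noisy, G)  # noise variance is reduced
```

In practice `cv2.GaussianBlur(image, (5, 5), 0.5)` would do the same job; the explicit loop above only makes the formula visible.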
S3: multi-feature extraction
This step is carried out on the filtered image. The extracted features include the hand-crafted descriptor and the deep features.
For the hand-crafted descriptor, scale-invariant SIFT features are extracted. SIFT features are invariant to rotation, scaling, and brightness changes, and maintain a degree of stability under viewpoint changes, affine transformations, and noise. The SIFT descriptor can be used to extract the contour and local information of the face, providing the structural information and appearance features of the facial contour. The specific method comprises:
(3-1) scale-space extremum detection: images at different scales are examined to identify candidate corner points that are potentially scale-invariant;
(3-2) keypoint localization: at each candidate corner position from step (3-1), a refined model is fitted to determine location and scale; keypoints are selected from the candidate corners according to their stability;
(3-3) orientation assignment: one or more orientations are assigned to each keypoint position based on local gradient directions;
(3-4) keypoint description: local image gradients are computed in the neighborhood of each keypoint; these gradients tolerate larger local shape deformation and illumination change;
(3-5) the extracted SIFT features are normalized and reduced to 96 dimensions using principal component analysis (PCA), yielding the SIFT descriptor features of the current face image.
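Step (3-5) can be sketched as follows. To keep the sketch self-contained, a random matrix stands in for the per-keypoint 128-d SIFT descriptors that steps (3-1)–(3-4) would produce (e.g. via `cv2.SIFT_create().detectAndCompute`); the L2 normalization, the PCA fitting set, and the mean pooling into a single image-level vector are all assumptions, since the patent only states "normalize, then PCA to 96 dimensions".

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for steps (3-1)-(3-4): real SIFT yields one 128-d descriptor per
# detected keypoint; here a random matrix plays that role.
descriptors = rng.random((300, 128))

# Step (3-5), part 1: L2-normalise each descriptor row.
normed = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)

# Step (3-5), part 2: PCA reduction to 96 dimensions. Fitting the basis on
# these descriptors is illustrative; in practice it would be learned on a
# training set and reused.
pca = PCA(n_components=96).fit(normed)
reduced = pca.transform(normed)

# Pool the per-keypoint vectors into one 96-d SIFT feature for the image
# (mean pooling assumed; the patent does not specify the pooling rule).
sift_feature = reduced.mean(axis=0)
```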
For the deep features, a 160-dimensional deep feature of the face is extracted using the DeepID deep network. DeepID can adaptively learn to extract pose-relevant features, and the extracted features describe facial contour features, appearance features, and related structural information well. The DeepID network is trained in advance on the training set.
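The shape of this step can be illustrated with a toy forward pass. This is not DeepID: the architecture below (one convolution, ReLU, a dense layer) and its random weights are placeholders standing in for DeepID's learned layers, shown only to make concrete how a face crop becomes a 160-d feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d_valid(img, kernels):
    """Naive valid convolution: img (H, W), kernels (n, k, k) -> (n, H-k+1, W-k+1)."""
    n, k, _ = kernels.shape
    H, W = img.shape
    out = np.zeros((n, H - k + 1, W - k + 1))
    for c in range(n):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[c, i, j] = np.sum(img[i:i + k, j:j + k] * kernels[c])
    return out

def toy_deep_feature(face, dim=160):
    """Conv -> ReLU -> flatten -> dense, producing a dim-d feature vector.
    A stand-in for DeepID's final hidden layer; weights here are random,
    whereas DeepID's are learned on a face training set."""
    kernels = rng.normal(0.0, 0.1, size=(4, 3, 3))
    fmap = relu(conv2d_valid(face, kernels))
    flat = fmap.reshape(-1)
    W = rng.normal(0.0, 0.05, size=(dim, flat.size))
    return relu(W @ flat)

face = rng.random((16, 16))  # placeholder pre-processed face crop
deep_feature = toy_deep_feature(face)  # 160-d, matching the patent's dimension
```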
S4: feature fusion
The extracted hand-crafted descriptor and deep features are fused by concatenation, and the resulting 256-dimensional face pose description feature is used for classifier training in the subsequent step.
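The tandem fusion reduces to a single concatenation: the 96-d SIFT feature followed by the 160-d deep feature gives the 256-d pose description vector (placeholder values stand in for real features here).

```python
import numpy as np

# Placeholder 96-d SIFT feature and 160-d deep feature from step S3.
sift_feature = np.random.default_rng(1).random(96)
deep_feature = np.random.default_rng(2).random(160)

# S4: tandem (concatenation) fusion -> 256-d face pose description feature.
fused = np.concatenate([sift_feature, deep_feature])
```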
S5: classification prediction
The fused features are trained with a support vector machine (SVM) to generate the face pose classifier, comprising:
(5-1) collect a face pose dataset with the horizontal deflection angle as the label; the angle ranges from −180° to 180°, every 15° span forms one class, 13 classes in total, as shown in Fig. 2.
(5-2) extract the fused features of the collected training set and train an SVM on the fused features extracted from the images to generate the face pose classifier.
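Step (5-2) can be sketched with scikit-learn. The patent only names an SVM, so the linear kernel is an assumption; the data below are synthetic clusters standing in for real fused features, and the 13 angle labels at 15° steps (here spanning −90° to 90°) are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set: 13 pose classes, each a tight
# cluster of 256-d "fused features" around a random class centre.
angles = [-90 + 15 * k for k in range(13)]          # illustrative class labels (degrees)
centres = rng.normal(0.0, 1.0, size=(13, 256))
X = np.vstack([c + 0.05 * rng.normal(size=(20, 256)) for c in centres])
y = np.repeat(angles, 20)

# (5-2): train the SVM on the fused features to obtain the pose classifier.
clf = SVC(kernel="linear").fit(X, y)
train_acc = clf.score(X, y)  # well-separated clusters -> near-perfect fit
```

At inference (step S6), `clf.predict(fused.reshape(1, -1))` would return the estimated deflection-angle class for a new fused feature vector.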
S6: face pose estimation
After a valid fused feature has been generated for the face image or video frame to be estimated, it is input to the face pose classifier, which effectively estimates the face pose in the image or video frame.
To illustrate the effect of the invention, a subset was extracted from the Multi-PIE dataset for experimental testing. The subset is organized by angle, with 4980 face images per angle and 13 angles in total, from −180° to 180° at 15° intervals; the face poses are thus divided into 13 classes.
Estimation experiments were carried out with the prior-art standalone LBP feature, standalone SIFT feature, DeepID feature, LBP+DeepID feature, and the feature of this embodiment; the experimental results are compared in Table 1.
Table 1: Comparison of experimental results
As the table shows, classification using the standalone SIFT method achieves a face pose classification accuracy of 61.3%; using the DeepID feature alone, 89.1%; using the fused SIFT+DeepID feature proposed in this embodiment, 91.6%. This demonstrates that the method of the present invention estimates face pose more accurately than the prior art, and can be applied to many technical fields such as face image quality evaluation, video security surveillance, and face recognition.
Obviously, the above embodiments of the present invention are merely examples made to clearly illustrate the invention, not limitations on its embodiments.
For those of ordinary skill in the art, other variations or changes in different forms can also be made on the basis of the above description. There is no need, and no way, to exhaust all embodiments.
The accompanying drawings are for illustration only and should not be understood as limiting this patent.
The techniques described in the present invention may be implemented by various means, for example in hardware, firmware, software, or a combination thereof. For hardware embodiments, the processing modules may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, electronic devices, other electronic units designed to perform the functions described in the invention, or a combination thereof.
For firmware and/or software embodiments, the techniques may be implemented with modules (e.g., procedures, steps, processes, etc.) that perform the functions described herein. The firmware and/or software code may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
The above embodiments are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other changes, modifications, substitutions, combinations, or simplifications made without departing from the spirit and principles of the present invention shall be equivalent replacements and are included within the scope of protection of the present invention.

Claims (7)

1. A face pose estimation method fusing a hand-crafted descriptor and deep features, characterized by comprising the steps of:
S1: performing face detection on the image;
S2: filtering the detected face image to remove noise;
S3: extracting SIFT descriptor features of the face image as the hand-crafted descriptor, and extracting deep features describing the face;
S4: effectively fusing the hand-crafted descriptor and the deep features;
S5: training on the fused features to generate a face pose classifier;
S6: estimating the face pose in an image or video frame using the face pose classifier.
2. The face pose estimation method fusing a hand-crafted descriptor and deep features according to claim 1, characterized in that step S1 performs face detection on the image using the faceDetector in OpenCV.
3. The face pose estimation method fusing a hand-crafted descriptor and deep features according to claim 1, characterized in that step S2 applies Gaussian smoothing to the detected face image as follows:
(2-1) computing the Gaussian filter G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)), where (x, y) are image coordinates and σ is the standard deviation;
(2-2) applying the Gaussian filter to the detected face image to remove noise by Gaussian smoothing.
4. The face pose estimation method fusing a hand-crafted descriptor and deep features according to claim 1, characterized in that in step S3 the SIFT descriptor is used to extract the contour and local information of the face image, and this information serves as the hand-crafted descriptor.
5. The face pose estimation method fusing a hand-crafted descriptor and deep features according to claim 1, characterized in that in step S3 DeepID deep features are extracted.
6. The face pose estimation method fusing a hand-crafted descriptor and deep features according to claim 1, characterized in that in step S4 the hand-crafted descriptor features and the deep features are concatenated to obtain the fused feature.
7. The face pose estimation method fusing a hand-crafted descriptor and deep features according to claim 1, characterized in that in step S5 a support vector machine is trained on the fused features to generate the face pose classifier, comprising: collecting a large number of face images with their corresponding poses as labels, and training the SVM on the fused features extracted from the images to generate the face pose classifier.
CN201811580405.1A 2018-12-24 2018-12-24 Human face posture estimation method integrating manual design descriptor and depth feature Active CN109858342B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811580405.1A CN109858342B (en) 2018-12-24 2018-12-24 Human face posture estimation method integrating manual design descriptor and depth feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811580405.1A CN109858342B (en) 2018-12-24 2018-12-24 Human face posture estimation method integrating manual design descriptor and depth feature

Publications (2)

Publication Number Publication Date
CN109858342A true CN109858342A (en) 2019-06-07
CN109858342B CN109858342B (en) 2021-06-25

Family

ID=66891984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811580405.1A Active CN109858342B (en) 2018-12-24 2018-12-24 Human face posture estimation method integrating manual design descriptor and depth feature

Country Status (1)

Country Link
CN (1) CN109858342B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457999A (en) * 2019-06-27 2019-11-15 广东工业大学 A kind of animal posture behavior estimation based on deep learning and SVM and mood recognition methods
CN112560669A (en) * 2020-12-14 2021-03-26 杭州趣链科技有限公司 Face posture estimation method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070031028A1 (en) * 2005-06-20 2007-02-08 Thomas Vetter Estimating 3d shape and texture of a 3d object based on a 2d image of the 3d object
CN101763503A (en) * 2009-12-30 2010-06-30 中国科学院计算技术研究所 Face recognition method of attitude robust
CN103198330A (en) * 2013-03-19 2013-07-10 东南大学 Real-time human face attitude estimation method based on depth video streaming
CN103218606A (en) * 2013-04-10 2013-07-24 哈尔滨工程大学 Multi-pose face recognition method based on face mean and variance energy images
CN103824089A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascade regression-based face 3D pose recognition method
US20150235073A1 (en) * 2014-01-28 2015-08-20 The Trustees Of The Stevens Institute Of Technology Flexible part-based representation for real-world face recognition apparatus and methods
CN105550634A (en) * 2015-11-18 2016-05-04 广东微模式软件股份有限公司 Facial pose recognition method based on Gabor features and dictionary learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Su Tieming et al., "Face pose classification and detection based on deep learning and fused gradient information", Journal of Data Acquisition and Processing *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457999A (en) * 2019-06-27 2019-11-15 广东工业大学 A kind of animal posture behavior estimation based on deep learning and SVM and mood recognition methods
CN110457999B (en) * 2019-06-27 2022-11-04 广东工业大学 Animal posture behavior estimation and mood recognition method based on deep learning and SVM
CN112560669A (en) * 2020-12-14 2021-03-26 杭州趣链科技有限公司 Face posture estimation method and device and electronic equipment

Also Published As

Publication number Publication date
CN109858342B (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN104091147B (en) A kind of near-infrared eyes positioning and eye state identification method
JP5604256B2 (en) Human motion detection device and program thereof
CN108875600A (en) A kind of information of vehicles detection and tracking method, apparatus and computer storage medium based on YOLO
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN105893946B (en) A kind of detection method of front face image
CN109685013B (en) Method and device for detecting head key points in human body posture recognition
KR101697161B1 (en) Device and method for tracking pedestrian in thermal image using an online random fern learning
CN104008370A (en) Video face identifying method
CN103942577A (en) Identity identification method based on self-established sample library and composite characters in video monitoring
Kulkarni et al. Real time face recognition using LBP features
CN106599785B (en) Method and equipment for establishing human body 3D characteristic identity information base
CN110298297A (en) Flame identification method and device
Rao et al. Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera.
Do et al. Real-time and robust multiple-view gender classification using gait features in video surveillance
CN101996308A (en) Human face identification method and system and human face model training method and system
CN110414571A (en) A kind of website based on Fusion Features reports an error screenshot classification method
Haji et al. Real time face recognition system (RTFRS)
CN112101208A (en) Feature series fusion gesture recognition method and device for elderly people
CN113850221A (en) Attitude tracking method based on key point screening
Gao et al. Occluded person re-identification based on feature fusion and sparse reconstruction
CN113205060A (en) Human body action detection method adopting circulatory neural network to judge according to bone morphology
CN109858342A (en) A kind of face pose estimation of fusion hand-designed description son and depth characteristic
Li et al. SKRWM based descriptor for pedestrian detection in thermal images
Rafi et al. A parametric approach to gait signature extraction for human motion identification
Joshi et al. Histograms of orientation gradient investigation for static hand gestures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared