WO2015165227A1 - Face recognition method - Google Patents

Face recognition method

Info

Publication number
WO2015165227A1
WO2015165227A1 PCT/CN2014/089652 CN2014089652W
Authority
WO
WIPO (PCT)
Prior art keywords
face
face recognition
model
feature
recognition method
Prior art date
Application number
PCT/CN2014/089652
Other languages
English (en)
French (fr)
Inventor
李俊
Original Assignee
珠海易胜电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 珠海易胜电子技术有限公司 filed Critical 珠海易胜电子技术有限公司
Publication of WO2015165227A1 publication Critical patent/WO2015165227A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships

Definitions

  • the invention relates to a face recognition method.
  • Face recognition technology has developed rapidly in the past few years, but it still cannot cope satisfactorily with real-life environments such as outdoor scenes and is mainly used indoors. The main difficulties remain illumination change, pose change, age change, and occlusion; these affect the algorithms used by face recognition systems to different degrees. The classes of face recognition methods, their advantages and disadvantages, and the difficulties that affect them are as follows:
  • An appearance-based face recognition method uses the pixel values of the face image to generate a face template; a geometric-feature-based method does not rely on pixel values, but generates the template from the geometric relationships between facial feature points (eyes, nose, mouth, ears, ...).
  • Compared with geometric-feature-based methods, appearance-based methods can extract rich facial features from every pixel of the image, so they deliver higher recognition performance than geometric methods that rely on only a few feature points; most successful face recognition methods are appearance-based.
  • However, appearance-based methods cannot cope well with illumination changes, which affect the pixel values.
  • Geometric-feature-based methods rely on geometric positional relationships instead, and can compensate for this shortcoming of appearance-based methods.
  • Depending on whether the whole face image or local regions are examined when generating the face template, face recognition methods are divided into global and local methods.
  • Global methods, which examine the whole face image, have the advantage of expressing both the local and the global features of the face, but cannot cope with pose changes.
  • Local methods are more robust to pose changes than global methods and reflect the local characteristics of the face well.
  • Elastic Bunch Graph Matching (EBGM) is a feature-point-based, local face recognition method and one of the most successful face recognition methods, but local methods cannot reflect the global characteristics of the face. To overcome this, methods combining global and local recognition have appeared and brought some performance improvement; however, both components are appearance-based and so cannot overcome the disadvantages of appearance-based recognition.
  • The technical problem to be solved by the present invention is to provide a face recognition method that finds facial feature points effectively, is independent of illumination changes, and is stable under pose changes.
  • The invention provides a face recognition method comprising the following steps:
  • step S1: generating a face elastic bunch graph;
  • step S2: generating an appearance-based face recognition model and computing the cosine similarity between it and the existing face model vectors in the database;
  • step S3: generating a geometric-feature-based face recognition model and computing the cosine similarity between it and the existing face model vectors in the database;
  • step S4: fusing the similarity scores of step S2 and step S3 using logistic regression;
  • step S5: deciding the face recognition result based on the result of step S4.
  • To generate the face elastic bunch graph, facial feature points are extracted from the detected face region by pattern detection based on Haar features.
  • To generate the face elastic bunch graph, four points are first extracted from the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point. These form the initial partial face model. In a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed, a two-dimensional affine transformation is generated, and this transformation is applied to the 30 feature points of the template graph to obtain the 30 feature points and their corresponding feature values, yielding the initial global face model. The correct convergence point is then sought for all 30 feature points of the initial global face model, and the face elastic bunch graph is generated with these as its feature points.
  • To generate the appearance-based face recognition model, Gabor jets are extracted at the 30 feature points of the face elastic bunch graph and the vector obtained by concatenating them serves as the initial appearance-based face model; the magnitudes of the complex Gabor jet coefficients are taken, forming a vector of 40 magnitude elements. PCA and LDA are applied to this initial model to obtain the appearance-based face recognition model.
  • To generate the geometric-feature-based face recognition model, the distances between the extracted facial feature points are computed, and PCA and LDA are applied to feature vectors whose elements are the ratios of the horizontal-axis and vertical-axis distance components, yielding the geometric-feature-based face recognition model.
  • The invention adopts a face recognition method that fuses the appearance-based and the geometric-feature-based face recognition methods at the similarity level; it can be applied satisfactorily in real-life environments, proposes a more effective method of finding facial feature points, and also proposes a geometric-feature-based face recognition method that is independent of illumination changes and stable under pose changes.
  • FIG. 1 is a schematic flow chart showing a face recognition method according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for recognizing a face, including the following steps:
  • In the present invention, four points are first extracted in the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point. These form the initial partial face model, together with a template graph having 30 feature points.
  • The relationship between each feature point and the four points of the initial partial face model is analyzed, and a two-dimensional affine transformation is generated.
  • Applying this transformation to the 30 feature points of the template graph yields the 30 feature points and their corresponding feature values, giving the initial global face model.
  • The first term of the transformation is associated with stretching and rotation; the second term is the translation term.
  • The translation term is computed simply as the difference between the centroids of the two partial models.
  • The four elements of the rotation-and-stretch matrix can be obtained from the relationships between the corresponding points of the two partial face models.
  • Since the partial face model consists of 4 points, a linear regression is performed so that the 4 corresponding point pairs are brought as close together as possible with minimum error, giving the rotation-and-stretch transformation matrix.
  • The obtained affine transformation is applied to the template graph to generate the initial global face model.
  • GI = T(GT), where GI is the face elastic bunch graph and GT is the template graph.
  • For each feature point, the confidence of the local region centered on it is computed; the confidence of neighboring points is also evaluated, and the feature point is updated to the point with the higher confidence.
  • In this way the correct convergence point is sought for every feature point, and the face elastic bunch graph with these as its feature points is generated.
  • Haar features are used instead of Gabor features: facial feature points are extracted from the detected face region by pattern detection with Haar features. Instead of examining the pixel value at each point, a Haar feature examines the sums of pixel values over regions; that is, it detects differences or sums of region pixel-value sums for various patterns in a candidate region. To achieve good object-detection performance, such Haar features must be rich; they are trained with a cascade classifier.
  • Detectors using Haar features are faster and more accurate than other detectors and are mostly used for object detection.
  • The Viola-Jones face detector based on Haar features is the most successful such detector; in the present invention, for each feature point, patches centered on that point are extracted from a large face database and a Viola-Jones detector is trained on them.
  • Gabor jets are extracted at the 30 feature points of the face elastic bunch graph, and the vector obtained by concatenating them serves as the initial appearance-based face model.
  • A Gabor jet is obtained by convolving Gabor filters with the image at the pixel of interest.
  • The type of a Gabor filter is determined by its wave vector.
  • A total of 40 Gabor filters are formed from 5 frequencies and 8 orientations.
  • The Gabor filter bank is thus a set of 40 complex coefficients.
  • The magnitudes of the complex Gabor jet coefficients are taken, giving a vector of 40 magnitude elements.
  • The appearance-based face recognition model is obtained by applying PCA and LDA to the initial face model.
  • the cosine similarity between the appearance-based face recognition model obtained above and the existing face model vector in the database is calculated.
  • In the present invention, the horizontal and the vertical components of each distance are examined independently.
  • A human head generally rotates only about the vertical and horizontal directions.
  • the face template generation phase is as follows:
  • n: the number of feature points
  • For each direction axis, all possible pairs (combinations) of distances are formed, and each pair yields the ratio of the two distances it contains.
  • The raw template vector obtained in this way contains many unnecessary features, and its discriminative power is low.
  • PCA (applied separately per direction axis) is therefore used to remove the unnecessary components and reduce the vector.
  • LDA is then applied to the reduced vector, producing a geometric-feature-based face recognition model with high discriminative power.
  • The cosine similarity between the geometric-feature-based face recognition model obtained above and the existing face model vectors in the database is computed; the cosine similarity is computed in the same way as in the appearance-based face recognition method.
  • The similarity scores of the appearance-based and the geometric-feature-based face recognition methods are fused using logistic regression, with the following formula:
  • The invention adopts a face recognition method that fuses the appearance-based and the geometric-feature-based face recognition methods at the similarity level; it can be applied satisfactorily in real-life environments, proposes a more effective method of finding facial feature points, and also proposes a geometric-feature-based face recognition method that is independent of illumination changes and stable under pose changes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a face recognition method comprising the following steps. S1: generate a face elastic bunch graph; S2: generate an appearance-based face recognition model and compute the cosine similarity between it and the existing face model vectors in the database; S3: generate a geometric-feature-based face recognition model and compute the cosine similarity between it and the existing face model vectors in the database; S4: fuse the similarity scores of steps S2 and S3 using logistic regression; S5: decide the face recognition result based on the result of step S4. The invention adopts a face recognition method that fuses appearance-based and geometric-feature-based face recognition at the similarity level and can be applied satisfactorily in real-life environments.

Description

Technical Field:
The present invention relates to a face recognition method.
Face recognition technology has developed rapidly over the past few years, but it still cannot cope satisfactorily with real-life environments such as outdoor scenes and is mainly used indoors. The main difficulties remain illumination change, pose change, age change, and occlusion; these affect the algorithms used by face recognition systems to different degrees. The classes of face recognition methods, their advantages and disadvantages, and the difficulties that affect them are as follows:
An appearance-based face recognition method uses the pixel values of the face image to generate a face template; a geometric-feature-based method does not rely on pixel values, but generates the template from the geometric relationships between facial feature points (eyes, nose, mouth, ears, ...). Compared with geometric-feature-based methods, appearance-based methods can extract rich facial features from every pixel of the image, so they have delivered higher recognition performance than geometric methods that rely on only a few feature points, and most successful face recognition methods today are appearance-based.
However, appearance-based methods still cannot cope well with illumination changes, which affect pixel values, whereas geometric-feature-based methods rely on geometric positional relationships and are insensitive to illumination, so they can compensate for this shortcoming of appearance-based methods.
Because they rely on facial feature points, such methods require the feature points to be extracted accurately in an earlier stage. Depending on whether the whole face image or local regions are examined when generating the face template, face recognition methods are divided into global and local methods. Global methods, which examine the whole face image, have the advantage of expressing both the local and the global features of the face, but cannot cope with pose changes; conversely, local methods are more robust to pose changes than global methods and reflect the local characteristics of the face well.
In the past, Elastic Bunch Graph Matching (EBGM), a feature-point-based face recognition method belonging to the local class, was one of the most successful face recognition methods, but local methods cannot reflect the global characteristics of the face. To overcome this, methods combining global and local recognition appeared and brought some performance improvement, but both components rest on appearance-based recognition and cannot overcome its disadvantages.
In real-life environments, variation in illumination, pose, age, occlusion, and so on makes face recognition difficult; face recognition technology therefore does not yet perform satisfactorily in such environments, and this has been the subject of intensive research. Much progress has been made in this field in recent years, but the results are still not satisfactory.
Summary of the Invention:
The technical problem to be solved by the present invention is to provide a face recognition method that finds facial feature points effectively, is independent of illumination changes, and is stable under pose changes.
The present invention provides a face recognition method comprising the following steps:
S1: generate a face elastic bunch graph;
S2: generate an appearance-based face recognition model and compute the cosine similarity between it and the existing face model vectors in the database;
S3: generate a geometric-feature-based face recognition model and compute the cosine similarity between it and the existing face model vectors in the database;
S4: fuse the similarity scores of steps S2 and S3 using logistic regression;
S5: decide the face recognition result based on the result of step S4.
Further, to generate the face elastic bunch graph, facial feature points are extracted from the detected face region by pattern detection based on Haar features.
Further, to generate the face elastic bunch graph, four points are first extracted from the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point. These form the initial partial face model. In a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed and a two-dimensional affine transformation is generated; applying this transformation to the 30 feature points of the template graph yields the 30 feature points and their corresponding feature values, giving the initial global face model. The correct convergence point is then sought for all 30 feature points of the initial global face model, and the face elastic bunch graph is generated with these as its feature points.
Further, to generate the appearance-based face recognition model, Gabor jets are extracted at the 30 feature points of the face elastic bunch graph and concatenated into a vector that serves as the initial appearance-based face model; the magnitudes of the complex Gabor jet coefficients are taken, forming a vector of 40 magnitude elements. PCA and LDA are applied to this initial model to obtain the appearance-based face recognition model.
Further, to generate the geometric-feature-based face recognition model, the distances between the extracted facial feature points are computed, and PCA and LDA are applied to feature vectors whose elements are the ratios of the horizontal-axis and vertical-axis distance components, yielding the geometric-feature-based face recognition model.
The invention adopts a face recognition method that fuses the appearance-based and the geometric-feature-based methods at the similarity level; it can be applied satisfactorily in real-life environments, proposes a more effective way of finding facial feature points, and proposes a geometric-feature-based face recognition method that is independent of illumination changes and stable under pose changes.
Brief Description of the Drawings:
The drawings described here provide a further understanding of the present invention and form a part of this application; the illustrative embodiments of the present invention and their description explain the invention and do not unduly limit it. In the drawings:
FIG. 1 schematically shows the flow of the face recognition method according to an embodiment of the present invention.
Detailed Description:
The present invention is described in detail below with reference to the drawings and in combination with embodiments.
An embodiment of the present invention provides a face recognition method comprising the following steps:
1. Generating the face elastic bunch graph
In the present invention, four points are first extracted in the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point. These form the initial partial face model. In a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed and a two-dimensional affine transformation is generated; applying this transformation to the 30 feature points of the template graph yields the 30 feature points and their corresponding feature values, giving the initial global face model.
The transformation formula is as follows:
Without loss of generality, let the transformation to be obtained be:
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced in this text]
The first term is associated with stretching and rotation; the second term is the translation term.
The translation term is computed simply as the difference between the centroids of the two partial models.
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]: the centroid of the initial partial face model formed by the 4 points obtained initially;
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]: the centroid of the partial face model formed by taking the 4 corresponding points in the prepared face model;
The four elements of the rotation-and-stretch matrix can be obtained from the relationships between the corresponding points of the two partial face models.
Since the partial face model consists of 4 points, a linear regression is performed so that the 4 corresponding point pairs are brought as close together as possible with minimum error, giving the rotation-and-stretch transformation matrix.
The obtained affine transformation is applied to the template graph to generate the initial global face model.
GI = T(GT);
GI: the face elastic bunch graph;
GT: the template graph;
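The affine fit described above (translation from the centroid difference, rotation and stretch by linear regression on the 4 corresponding point pairs) can be sketched in plain Python. The point coordinates below are illustrative values, not data from the patent:

```python
def estimate_affine(src, dst):
    """Estimate T(x) = A x + b mapping src points onto dst points by least
    squares: b follows from the centroid difference, and the 2x2 matrix A
    (rotation and stretch) from a linear regression on the centered points."""
    n = len(src)
    cs = [sum(p[k] for p in src) / n for k in (0, 1)]  # source centroid
    cd = [sum(q[k] for q in dst) / n for k in (0, 1)]  # destination centroid
    P = [(x - cs[0], y - cs[1]) for x, y in src]
    Q = [(x - cd[0], y - cd[1]) for x, y in dst]
    # Normal equations: A = M N^{-1} with M = sum q p^T, N = sum p p^T
    M = [[sum(q[r] * p[c] for p, q in zip(P, Q)) for c in (0, 1)] for r in (0, 1)]
    N = [[sum(p[r] * p[c] for p in P) for c in (0, 1)] for r in (0, 1)]
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    Ninv = [[N[1][1] / det, -N[0][1] / det], [-N[1][0] / det, N[0][0] / det]]
    A = [[sum(M[r][k] * Ninv[k][c] for k in (0, 1)) for c in (0, 1)] for r in (0, 1)]
    # b maps the source centroid onto the destination centroid
    b = [cd[0] - A[0][0] * cs[0] - A[0][1] * cs[1],
         cd[1] - A[1][0] * cs[0] - A[1][1] * cs[1]]
    return A, b

def apply_affine(A, b, pt):
    return (A[0][0] * pt[0] + A[0][1] * pt[1] + b[0],
            A[1][0] * pt[0] + A[1][1] * pt[1] + b[1])

# Illustrative 4-point partial face models (eye centers, mouth, chin);
# here dst is src shifted by (5, 5), so A should be the identity.
src = [(30.0, 40.0), (70.0, 40.0), (50.0, 70.0), (50.0, 90.0)]
dst = [(35.0, 45.0), (75.0, 45.0), (55.0, 75.0), (55.0, 95.0)]
A, b = estimate_affine(src, dst)
```

The same `estimate_affine` would then be applied to map the 30 template-graph feature points into the detected face.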
After the initial global face model is obtained, a confidence value is computed for the local region centered on each feature point; confidence values are also computed for the neighboring points, and the feature point is updated to the point with the higher confidence.
If there is no point with a higher confidence, the process terminates.
This continues until no feature point can be updated any further.
The correct convergence point is sought in this way for all 30 feature points of the initial global face model, and the face elastic bunch graph is generated with these as its feature points.
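The update loop just described is a local hill-climb: each point moves to whichever neighboring position scores a higher confidence, until none does. A toy sketch follows; the confidence function here is a made-up stand-in for the detector response, not the patent's:

```python
def refine_point(pt, confidence, max_iters=100):
    """Hill-climb a feature point to a local confidence maximum.
    Each step examines the 4-neighbourhood and moves to the neighbour
    with the highest confidence; the loop stops when no neighbour
    scores higher (the "convergence point" in the patent's terms)."""
    x, y = pt
    for _ in range(max_iters):
        best = (confidence(x, y), x, y)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            c = confidence(x + dx, y + dy)
            if c > best[0]:
                best = (c, x + dx, y + dy)
        if (best[1], best[2]) == (x, y):
            break  # no neighbour has higher confidence: converged
        x, y = best[1], best[2]
    return x, y

# Illustrative confidence surface peaked at (12, 7)
conf_surface = lambda x, y: -((x - 12) ** 2 + (y - 7) ** 2)
converged = refine_point((10, 10), conf_surface)
```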
In the present invention, Haar features are used instead of Gabor features: facial feature points are extracted from the detected face region by pattern detection with Haar features. Rather than examining the pixel value at each point, a Haar feature examines the sum of pixel values over a region; that is, it detects differences or sums of region pixel-value sums for the various patterns in a candidate region. To achieve good object-detection performance, such Haar features must be rich; they are trained with a cascade classifier.
Detectors using Haar features are faster and more accurate than other detectors and are mostly used for object detection; in particular, the Viola-Jones face detector based on Haar features is the most successful such detector. In the present invention, for each feature point, patches centered on that point are extracted from a large face database and a Viola-Jones detector is trained on them.
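A Haar feature of the kind described here is a difference of pixel sums over adjacent rectangles, computable in constant time from an integral image. A minimal illustration with a two-rectangle horizontal feature (the image values are made up):

```python
def integral_image(img):
    """ii[r][c] = sum of img over the rectangle of rows [0, r), cols [0, c)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row = 0
        for c in range(w):
            row += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in rows [r0, r1) and columns [c0, c1): four lookups."""
    return ii[r1][c1] - ii[r0][c1] - ii[r1][c0] + ii[r0][c0]

def haar_two_rect(ii, r0, c0, h, w):
    """Two-rectangle Haar feature: left half-sum minus right half-sum."""
    half = w // 2
    left = rect_sum(ii, r0, c0, r0 + h, c0 + half)
    right = rect_sum(ii, r0, c0 + half, r0 + h, c0 + w)
    return left - right

# Illustrative 4x4 image: bright left half, dark right half,
# so the feature responds strongly at this window.
img = [[9, 9, 1, 1]] * 4
ii = integral_image(img)
response = haar_two_rect(ii, 0, 0, 4, 4)
```

A cascade classifier in the Viola-Jones style thresholds many such responses in stages; the sketch above shows only the feature computation itself.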
2. Generating the appearance-based face recognition model and matching
Gabor jets are extracted at the 30 feature points of the face elastic bunch graph, and the vector obtained by concatenating them serves as the initial appearance-based face model.
A Gabor jet is obtained by convolving Gabor filters with the image at the pixel of interest.
The convolution of a Gabor filter with the image is computed with the following formula:
The Gabor filter is as follows:
[Formula figures, corrected under Rule 26 on 04.12.2014; not reproduced]
The type of the Gabor filter is determined by its wave vector [formula figure not reproduced]; in the invention, a total of 40 Gabor filters are constructed for 5 frequencies and 8 orientations.
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]
That is, the Gabor filter bank can be defined as a set of 40 complex coefficients:
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]
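The formula figures above are not reproduced in the text. For reference, the Gabor kernels commonly used with elastic bunch graph matching (e.g. in Wiskott et al.'s EBGM work) have the following form; whether the patent uses exactly these constants is an assumption:

```latex
\psi_j(\vec{x}) = \frac{k_j^2}{\sigma^2}
  \exp\!\Big(-\frac{k_j^2 x^2}{2\sigma^2}\Big)
  \Big[\exp(i\,\vec{k}_j\cdot\vec{x}) - \exp\!\Big(-\frac{\sigma^2}{2}\Big)\Big],
\qquad
\vec{k}_j = k_\nu \begin{pmatrix}\cos\varphi_\mu\\ \sin\varphi_\mu\end{pmatrix},
\quad k_\nu = 2^{-\frac{\nu+2}{2}}\pi,
\quad \varphi_\mu = \mu\,\frac{\pi}{8}
```

with frequency index v = 0, ..., 4 (5 frequencies) and orientation index u = 0, ..., 7 (8 orientations), giving the 5 x 8 = 40 complex coefficients of the jet as the convolution responses at the pixel of interest.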
In the present invention, the magnitude of each complex Gabor Jet coefficient is taken, forming a vector with the 40 magnitudes as its elements.
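The 40-coefficient jet and its magnitudes can be sketched as follows, using Wiskott-style kernel constants (the patent does not specify its constants, and the test image is illustrative):

```python
import cmath
import math

def gabor_jet(img, x, y, radius=4, sigma=2 * math.pi):
    """Compute a 40-element Gabor jet at pixel (x, y): the responses of a
    5-frequency x 8-orientation Gabor filter bank, followed by taking the
    magnitude of each complex coefficient. Kernel constants follow the
    common EBGM convention, an assumption rather than the patent's values."""
    jet = []
    for nu in range(5):                 # 5 frequencies
        k = (2 ** (-(nu + 2) / 2)) * math.pi
        for mu in range(8):             # 8 orientations
            kx = k * math.cos(mu * math.pi / 8)
            ky = k * math.sin(mu * math.pi / 8)
            acc = 0j
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    px, py = x + dx, y + dy
                    if 0 <= py < len(img) and 0 <= px < len(img[0]):
                        envelope = (k * k / sigma ** 2) * math.exp(
                            -(k * k) * (dx * dx + dy * dy) / (2 * sigma ** 2))
                        # complex wave minus a small DC-compensation term
                        wave = cmath.exp(1j * (kx * dx + ky * dy)) - math.exp(-sigma ** 2 / 2)
                        acc += img[py][px] * envelope * wave
            jet.append(abs(acc))        # magnitude of the complex coefficient
    return jet

# Illustrative 16x16 image with a vertical edge through the middle
img = [[255 if c < 8 else 0 for c in range(16)] for r in range(16)]
jet = gabor_jet(img, 8, 8)
```

In the method described here, one such 40-magnitude vector per feature point (30 points) would be concatenated into the initial appearance model.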
PCA and LDA are applied to the initial face model to obtain the appearance-based face recognition model.
In the matching stage between two face models, the cosine similarity between the appearance-based face recognition model obtained above and the existing face model vectors in the database is computed.
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]
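The cosine similarity referred to here is the inner product of the two vectors divided by the product of their norms; a minimal sketch (the example vectors are illustrative):

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = <u, v> / (||u|| * ||v||); 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

probe = [0.2, 0.5, 0.1]
gallery = [[0.4, 1.0, 0.2],   # same direction as the probe (scaled by 2)
           [0.9, 0.1, 0.0]]   # different direction
scores = [cosine_similarity(probe, g) for g in gallery]
```

Because only the direction of the model vector matters, the score is insensitive to a uniform scaling of the feature vector.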
3. Generating the geometric-feature-based face recognition model and matching.
Previously, the representative geometric-feature-based approaches were distance-based and ratio-based face recognition. The original ratio-based method used ratios of distances between facial feature points, but for an image rotated in depth the corresponding distance ratios change, so it has the disadvantage of being unstable under pose changes.
To overcome this disadvantage, the present invention examines the horizontal and the vertical components of each distance independently.
For two line segments in the same plane, when the plane rotates, the ratio of their components along the rotation direction stays the same, and both the lengths and the ratios of the components orthogonal to the rotation direction are unchanged.
A human head generally rotates only about the vertical and horizontal directions.
That is, line segments connecting feature points that lie in roughly the same plane of the face model are extracted, and the face template is built on the fact that, for the same person, the ratios of the horizontal-axis and vertical-axis components of corresponding segments are invariant across two face models.
The face template generation stage is as follows:
Among the feature points obtained above, points lying in roughly the same plane are selected, excluding strongly curved points such as the nose point and the ear points; without regard to their order, all possible pairs (combinations) are formed, and for each pair the distances along the vertical and horizontal axes are obtained.
[Formula figures, corrected under Rule 26 on 04.12.2014; not reproduced]
DHi: the i-th horizontal distance
DVi: the i-th vertical distance
(Xj, Yj): the coordinates of the j-th node
n: the number of feature points
Then, per direction axis and without regard to the order of the distances, all possible pairs (combinations) are formed, and each pair yields the ratio of the two distances it contains.
RHi = DHj / DHk,  j, k = 1, ..., m, j != k;  i = 1, ..., mC2
RVi = DVj / DVk,  j, k = 1, ..., m, j != k;  i = 1, ..., mC2
RHi: the i-th horizontal ratio
RVi: the i-th vertical ratio
m: the number of distances along one axis
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]
A vector whose elements are the ratios obtained per direction axis is generated; concatenating the two vectors gives the raw template:
Vo = (RH1, RH2, ..., RHt, RV1, RV2, ..., RVt),  t = mC2
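The steps above (axis-wise distances for every pair of points, then axis-wise ratios for every pair of distances, concatenated horizontal-first) can be sketched as follows; the example points are illustrative:

```python
from itertools import combinations

def ratio_template(points):
    """Build the raw geometric template Vo: per-axis absolute distances for
    every pair of feature points (m = nC2 per axis), then per-axis ratios
    for every pair of distances (t = mC2 per axis), concatenated as
    (RH1..RHt, RV1..RVt)."""
    dh = [abs(a[0] - b[0]) for a, b in combinations(points, 2)]  # horizontal
    dv = [abs(a[1] - b[1]) for a, b in combinations(points, 2)]  # vertical
    rh = [x / y for x, y in combinations(dh, 2)]
    rv = [x / y for x, y in combinations(dv, 2)]
    return rh + rv

# 3 illustrative points -> m = 3 distances per axis, t = 3 ratios per axis
pts = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0)]
vo = ratio_template(pts)
```

Because every element is a ratio of distances along one axis, the template is unchanged by a uniform scaling of the face, which is the invariance the method relies on.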
The raw template vector obtained in this way contains many unnecessary features, and its discriminative power is low.
Therefore PCA (applied separately per direction axis) is used here to remove the unnecessary components and reduce the vector.
LDA is then applied to the reduced vector, producing a geometric-feature-based face recognition model with high discriminative power.
In the matching stage between two face models, the cosine similarity between the geometric-feature-based face recognition model obtained above and the existing face model vectors in the database is computed; the cosine similarity is computed in the same way as in the appearance-based method.
4. Fusion
The similarity scores of the appearance-based and the geometric-feature-based face recognition methods are fused using logistic regression, with the following formula:
[Formula figure, corrected under Rule 26 on 04.12.2014; not reproduced]
X1: the distance given by the appearance-based method;
X2: the distance given by the geometric-feature-based method;
[formula figure not reproduced]
β0, β1, β2: the logistic regression coefficients.
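The fusion formula itself sits in a figure that is not reproduced; the standard logistic-regression combination of two scores has the following form (the coefficient values below are illustrative, not from the patent):

```python
import math

def fuse(x1, x2, b0, b1, b2):
    """Logistic-regression fusion of the appearance-based score x1 and the
    geometric-feature-based score x2:
        P = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2)))
    P can then be thresholded to decide the recognition result (step S5)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x1 + b2 * x2)))

# Illustrative coefficients; in practice b0, b1, b2 would be fitted on
# labelled same-person / different-person score pairs.
b0, b1, b2 = -4.0, 5.0, 5.0
same = fuse(0.9, 0.8, b0, b1, b2)   # two high similarities
diff = fuse(0.2, 0.1, b0, b1, b2)   # two low similarities
```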
The invention adopts a face recognition method that fuses the appearance-based and the geometric-feature-based face recognition methods at the similarity level; it can be applied satisfactorily in real-life environments, proposes a more effective method of finding facial feature points, and also proposes a geometric-feature-based face recognition method that is independent of illumination changes and stable under pose changes.
The above are only preferred embodiments of the present invention and are not intended to limit it; those skilled in the art may make various modifications and variations to the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (5)

  1. A face recognition method, characterized in that it comprises the following steps:
    S1: generating a face elastic bunch graph;
    S2: generating an appearance-based face recognition model and computing the cosine similarity between the appearance-based face recognition model and the existing face model vectors in the database;
    S3: generating a geometric-feature-based face recognition model and computing the cosine similarity between the obtained geometric-feature-based face recognition model and the existing face model vectors in the database;
    S4: fusing the similarity scores of step S2 and step S3 using logistic regression;
    S5: deciding the face recognition result based on the result of step S4.
  2. The face recognition method according to claim 1, characterized in that, to generate the face elastic bunch graph, facial feature points are extracted from the detected face region by pattern detection based on Haar features.
  3. The face recognition method according to claim 2, characterized in that, to generate the face elastic bunch graph, four points are first extracted from the detected face region, namely the centers of the left and right eyeballs, the center of the mouth, and the chin point, forming the initial partial face model; in a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed, a two-dimensional affine transformation is generated, and this transformation is applied to the 30 feature points of the template graph to obtain the 30 feature points and their corresponding feature values, yielding the initial global face model; the correct convergence point is sought for all 30 feature points of the initial global face model, and the face elastic bunch graph is generated with these as its feature points.
  4. The face recognition method according to claim 2, characterized in that, to generate the appearance-based face recognition model, Gabor jets are extracted at the 30 feature points of the face elastic bunch graph and the vector obtained by concatenating them serves as the initial appearance-based face model; the magnitudes of the complex Gabor jet coefficients are taken, forming a vector of 40 magnitude elements; PCA and LDA are applied to the initial face model to obtain the appearance-based face recognition model.
  5. The face recognition method according to claim 4, characterized in that, to generate the geometric-feature-based face recognition model, the distances between the extracted facial feature points are computed, and PCA and LDA are applied to feature vectors whose elements are the ratios of the horizontal-axis and vertical-axis distance components, yielding the geometric-feature-based face recognition model.
PCT/CN2014/089652 2014-04-28 2014-10-28 Face recognition method WO2015165227A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410173445.X 2014-04-28
CN201410173445.XA CN103902992B (zh) 2014-04-28 2014-04-28 Face recognition method

Publications (1)

Publication Number Publication Date
WO2015165227A1 true WO2015165227A1 (zh) 2015-11-05

Family

ID=50994304

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/089652 WO2015165227A1 (zh) 2014-04-28 2014-10-28 Face recognition method

Country Status (2)

Country Link
CN (1) CN103902992B (zh)
WO (1) WO2015165227A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2610682C1 * 2016-01-27 2017-02-14 OOO "STILSOFT" Face recognition method

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902992B (zh) 2014-04-28 2017-04-19 珠海易胜电子技术有限公司 Face recognition method
CN105160331A (zh) * 2015-09-22 2015-12-16 镇江锐捷信息科技有限公司 Facial geometric feature recognition method based on hidden Markov models
CN105069448A (zh) * 2015-09-29 2015-11-18 厦门中控生物识别信息技术有限公司 Genuine and fake face recognition method and apparatus
CN105631039B (zh) * 2016-01-15 2019-02-15 北京邮电大学 Picture browsing method
CN109214352A (zh) * 2018-09-26 2019-01-15 珠海横琴现联盛科技发展有限公司 Dynamic face retrieval and recognition method based on 2D-camera three-dimensional imaging technology
CN111783699A (zh) * 2020-07-06 2020-10-16 周书田 Video face recognition method based on efficient decomposed convolutions and a temporal pyramid network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102495999A (zh) * 2011-11-14 2012-06-13 深圳市奔凯安全技术有限公司 Face recognition method
CN103440510A (zh) * 2013-09-02 2013-12-11 大连理工大学 Method for locating feature points in facial images
CN103902992A (zh) * 2014-04-28 2014-07-02 珠海易胜电子技术有限公司 Face recognition method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102495999A (zh) * 2011-11-14 2012-06-13 深圳市奔凯安全技术有限公司 Face recognition method
CN103440510A (zh) * 2013-09-02 2013-12-11 大连理工大学 Method for locating feature points in facial images
CN103902992A (zh) * 2014-04-28 2014-07-02 珠海易胜电子技术有限公司 Face recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FEI, JUNLIN: "Research on Auto Face Recognition System Based on Improving Feature Points Location Algorithm", 31 December 2008 (2008-12-31), pages 39 - 63 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2610682C1 * 2016-01-27 2017-02-14 OOO "STILSOFT" Face recognition method

Also Published As

Publication number Publication date
CN103902992A (zh) 2014-07-02
CN103902992B (zh) 2017-04-19

Similar Documents

Publication Publication Date Title
WO2015165227A1 (zh) Face recognition method
CN107145842B (zh) Face recognition method combining LBP feature maps with convolutional neural networks
CN107358223B (zh) YOLO-based face detection and face alignment method
CN106897675B (zh) Face liveness detection method combining binocular-vision depth features with appearance features
WO2018107979A1 (zh) Multi-pose facial feature point detection method based on cascaded regression
WO2016110005A1 (zh) Multi-modal face recognition apparatus and method based on multi-layer fusion of grayscale and depth information
CN109101865A (zh) Pedestrian re-identification method based on deep learning
WO2017219391A1 (zh) Face recognition system based on three-dimensional data
WO2017133009A1 (zh) Depth-image human joint localization method based on convolutional neural networks
US20150302240A1 (en) Method and device for locating feature points on human face and storage medium
CN106407958B (zh) Facial feature detection method based on a two-layer cascade
US9489561B2 (en) Method and system for estimating fingerprint pose
KR20170000748A (ko) Face recognition method and apparatus
CN104392246B (zh) Single-sample face recognition method based on inter-class and intra-class facial variation dictionaries
Yang et al. Facial expression recognition based on dual-feature fusion and improved random forest classifier
CN111626246B (zh) Face alignment method under mask occlusion
WO2018058419A1 (zh) Construction and localization method for a human joint-point localization model in two-dimensional images
CN109858433B (zh) Method and apparatus for recognizing two-dimensional face images based on a three-dimensional face model
CN105760815A (zh) Heterogeneous face verification method based on second-generation ID card portraits and video portraits
CN111524183A (zh) Target row-and-column localization method based on perspective projection transformation
Du High-precision portrait classification based on mtcnn and its application on similarity judgement
Yi et al. A robust eye localization method for low quality face images
CN109993116B (zh) Pedestrian re-identification method based on mutual learning of human skeletons
Feng et al. Effective venue image retrieval using robust feature extraction and model constrained matching for mobile robot localization
Chou et al. A robust real-time facial alignment system with facial landmarks detection and rectification for multimedia applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14890918

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/04/2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14890918

Country of ref document: EP

Kind code of ref document: A1