CN101320484A - Three-dimensional face recognition method based on fully automatic face localization - Google Patents

Three-dimensional face recognition method based on fully automatic face localization

Info

Publication number
CN101320484A
Authority
CN
China
Prior art keywords
image
face
shape
three-dimensional
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CNA2008101167815A
Other languages
Chinese (zh)
Other versions
CN101320484B (en)
Inventor
丁晓青
方驰
王丽婷
丁镠
刘长松
Current Assignee (the listed assignee may be inaccurate)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN2008101167815A
Publication of CN101320484A
Application granted
Publication of CN101320484B
Legal status: Active (granted)

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a three-dimensional face recognition method based on fully automatic face localization, belonging to the fields of computer vision and pattern recognition. A face virtual-image generation method comprises the steps of: building a two-dimensional face shape model and a local texture model; precisely locating a two-dimensional face image; performing three-dimensional reconstruction of the two-dimensional face image according to the localization result to obtain a three-dimensional face image; and applying an illumination model to the three-dimensional face image to obtain virtual images of varying pose and illumination. The recognition method comprises the steps of: extracting features from the face image to be recognized and compressing them; and recognizing the face according to the compressed features. By generating virtual images through three-dimensional reconstruction of a two-dimensional face image followed by illumination modeling, embodiments of the invention enlarge the sample space of pose and illumination variation of the image, while greatly increasing three-dimensional reconstruction speed, so that face recognition achieves both high efficiency and a high recognition rate.

Description

Three-dimensional face recognition method based on fully automatic face localization
Technical field
The present invention relates to computer vision and pattern recognition, and in particular to a three-dimensional face recognition method based on fully automatic face localization.
Background technology
A face recognition system is an emerging biometric technology with face recognition at its core, and a cutting-edge research topic in the international technical community. Because a face is unique, easy to capture, and requires no cooperation from the subject, face recognition systems have very wide application.
Although face recognition has been studied for decades, it remains to this day a challenging problem in pattern recognition. Existing methods still face a series of unsolved difficulties; for example, when pose, illumination, and expression (PIE: Pose, Illumination, Expression) vary substantially, the recognition rate drops sharply. How to recognize faces reliably under different pose, illumination, and expression conditions remains a focus of current research.
For recognition under pose and illumination variation, conventional methods require training face images captured under sufficiently many different poses and illumination conditions; yet in many situations such images are not easy to obtain.
To achieve face recognition that is independent of pose and ambient illumination, the prior art proposes the following approaches:
The first class comprises pose-invariant feature extraction methods, which address pose variation by extracting features insensitive to pose. The second class comprises solutions based on multi-view face images, for example extending conventional subspace methods to multi-view subspaces. The third class comprises methods based on three-dimensional face models; since Blanz proposed the three-dimensional morphable face model, methods that generate virtual images (Virtual Images) of a face under various poses from a three-dimensional face model have achieved good results on the pose problem.
However, the prior art has many shortcomings. The main drawback of pose-invariant feature extraction is that pose-invariant features are difficult to extract. The main drawback of multi-view solutions is that the pose of a face is hard to calibrate exactly, and erroneous pose estimation degrades recognition performance. Methods based on three-dimensional face models handle the pose problem well but still face many difficulties, such as heavy computation, slow speed, low reconstruction accuracy, and the need for manually located feature points for initialization.
Summary of the invention
To achieve automatic, fast, and accurate face recognition, to overcome the influence of pose and illumination variation during recognition, and to improve computation speed, embodiments of the present invention provide a face virtual-image generation method and a three-dimensional face recognition method based on fully automatic face localization. The technical solutions are as follows:
In one aspect, an embodiment of the invention provides a face virtual-image generation method, comprising:
performing multi-subspace shape modeling on two-dimensional face images in a preset database, obtaining a two-dimensional face shape model;
performing texture modeling on said two-dimensional face images, obtaining a two-dimensional face local texture model;
precisely locating a two-dimensional face image according to said two-dimensional face shape model and local texture model;
performing three-dimensional reconstruction of said two-dimensional face image according to a preset three-dimensional face shape model and the precise localization result, obtaining a three-dimensional face image;
applying an illumination model to said three-dimensional face image, obtaining virtual images of varying pose and illumination.
By building a three-dimensional face shape model and a two-dimensional face shape model, optimizing the fit, precisely locating the two-dimensional face image, reconstructing a three-dimensional face image according to the localization result, and then applying an illumination model to obtain virtual images of varying pose and illumination, the embodiment of the invention enlarges the sample space of pose and illumination variation of the image; at the same time, three-dimensional reconstruction speed is greatly improved, so that image recognition achieves both high efficiency and a high recognition rate.
In another aspect, an embodiment of the invention provides a three-dimensional face recognition method based on fully automatic face localization, comprising:
acquiring a two-dimensional face image to be recognized;
extracting features from said two-dimensional face image;
compressing the extracted features, obtaining compressed features;
classifying said compressed features, obtaining a classification result;
matching said classification result against preset classification results, and recognizing said face image to be recognized according to the matching result.
By performing three-dimensional reconstruction and illumination modeling on a two-dimensional face image, the embodiment of the invention obtains virtual face images under different poses. Thus, even when only one standard face image is available, deformation modeling generates virtual images of varying pose and illumination, enlarging the sample space of pose and illumination variation of the image; by designing a classifier on the virtual images, face image recognition achieves a very high recognition rate.
Description of drawings
The process flow diagram of the method that a kind of people's face virtual images that Fig. 1 provides for the embodiment of the invention 1 generates;
The attitude two-dimension human face shape left that Fig. 2 provides for the embodiment of the invention 1;
The two-dimension human face shape in the front that Fig. 3 provides for the embodiment of the invention 1;
A kind of process flow diagram that Fig. 4 provides for the embodiment of the invention 2 based on the full-automatic three-dimensional face identification method of locating of people's face;
The process flow diagram of a kind of classifier design method that Fig. 5 provides for the embodiment of the invention 2.
Embodiments
To make the purpose, technical solutions, and advantages of the present invention clearer, embodiments of the invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
This embodiment provides a face virtual-image generation method. The method performs multi-subspace shape modeling on the two-dimensional face images in a database, obtaining a two-dimensional face shape model; performs local texture modeling on the two-dimensional face images, obtaining a two-dimensional face local texture model; precisely locates a two-dimensional face image according to the shape model and local texture model; performs three-dimensional reconstruction of the two-dimensional face image according to a preset three-dimensional face shape model and the precise localization result, obtaining a three-dimensional face image; and applies an illumination model to the three-dimensional face image, obtaining virtual images of varying pose and illumination. This enlarges the sample space of pose and illumination variation of the image and overcomes their influence during image recognition, while greatly improving three-dimensional reconstruction speed. As shown in Fig. 1, the embodiment comprises:
101: Build the three-dimensional face shape model from a three-dimensional face database.
The three-dimensional face database of this embodiment contains 200 European three-dimensional face scans; each scan comprises about 100,000 vertices, with known coordinates (x, y, z) and texture colors (R, G, B) for every vertex. Building the three-dimensional face shape model comprises:
101a: Obtain the raw data — the vertex coordinates (x, y, z) and textures (R, G, B) of all three-dimensional faces — from the three-dimensional face database, and quantize the raw data.
Specifically, the raw data can be acquired in several ways, for example with a 3D scanner or by reconstruction from two-dimensional images. In this embodiment a 3D scanner is used; after acquisition, the analog quantities are quantized into digital values.
101b: Preprocess the three-dimensional face data, removing everything other than the face and isolating the three-dimensional face image data.
Specifically, the face image region is separated from the full head-scan data, i.e., the hair, shoulders, and similar parts are removed. Separation first determines a segmentation boundary and then extracts the face-region data from the raw data according to that boundary.
101c: Establish the correspondence between face images according to the separated three-dimensional face image data.
All three-dimensional face images are registered point-to-point, establishing a dense vertex correspondence so that a given vertex index has the same semantic meaning across faces; for example, vertex No. 1000 is the nose tip in every three-dimensional face image.
101d: Build the three-dimensional face shape model, as follows:
1) Arrange the coordinates of all vertices of a preprocessed three-dimensional face image in order as a shape vector:
$S_i = (x_i^1, y_i^1, z_i^1, \ldots, x_i^n, y_i^n, z_i^n)^T$   (1)
where $i$ indexes the face and $n$ is the number of vertices of the model.
2) Perform principal component analysis (PCA) on the shape vectors, obtaining the mean shape vector and the eigenvectors.
PCA is a common unsupervised linear dimensionality-reduction method: it seeks the linear subspace onto which the projected samples have the largest possible variance. PCA is performed here to obtain a more compact parametric representation. Assuming the database contains $N$ three-dimensional face samples, the procedure is as follows:
Compute the mean shape vector of the three-dimensional face image data:
$\bar{s} = \frac{1}{N}\sum_{i=1}^{N} S_i$   (2)
and the covariance matrix:
$C_x = \frac{1}{N}\sum_{i=1}^{N}(S_i - \bar{s})(S_i - \bar{s})^T$
so that
$C_x s_j = \sigma_j s_j, \quad j = 1, 2, \ldots, m_s$   (3)
Decomposing (3) yields the eigenvectors $s_j$.
3) Construct the three-dimensional face shape model from the mean shape vector and the eigenvectors:
$S_{mod} = \bar{s} + \sum_{j=1}^{M_s} \alpha_j s_j$   (4)
where $\alpha_j$ is the $j$-th shape coefficient and $M_s$ is the number of retained shape principal components. By varying the coefficients $\alpha_j$, the shape eigenvectors are linearly combined with different weights, producing three-dimensional faces of different shapes.
Because the number of geometric points differs between three-dimensional faces, a dense point correspondence must be established and the point counts of different faces normalized to the same number by interpolation or similar methods; optical flow or anchor-point marking can be used to establish the correspondence.
In the embodiment of the invention, when the three-dimensional face image data follow a normal distribution, the deformation parameters after the orthogonal transformation of (3) obey:
$P(\vec{\alpha}) \sim \exp\!\Big[-\tfrac{1}{2}\sum_{i=1}^{M_S}\big(\alpha_i^2/\sigma_i\big)\Big], \qquad P(\vec{\beta}) \sim \exp\!\Big[-\tfrac{1}{2}\sum_{i=1}^{M_T}\big(\beta_i^2/\lambda_i\big)\Big]$   (5)
That is, the deformation parameters of the constructed three-dimensional model do not vary arbitrarily but obey this probability distribution, which prevents the generation of distorted faces.
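As an illustration of step 101d, the following sketch builds the PCA shape model of formulas (2)-(4) with NumPy; since the shape dimension 3n far exceeds the number of scans N, it obtains the eigenvectors through the thin SVD of the centered data matrix rather than forming the covariance matrix explicitly. The function names and array layout are assumptions for this sketch, not part of the patent.

```python
import numpy as np

def build_shape_model(S, num_components):
    """PCA shape model from registered 3D faces (formulas (2)-(4)).

    S: (N, 3n) array whose row i is the shape vector
       (x_i^1, y_i^1, z_i^1, ..., x_i^n, y_i^n, z_i^n).
    Returns the mean shape, the leading eigenvectors s_j (columns),
    and their eigenvalues sigma_j.
    """
    s_bar = S.mean(axis=0)                       # mean shape vector, (2)
    X = S - s_bar
    # 3n >> N, so diagonalize C_x of (3) via the thin SVD of X instead
    # of building the (3n x 3n) covariance matrix.
    _, svals, Vt = np.linalg.svd(X, full_matrices=False)
    sigma = svals ** 2 / len(S)                  # eigenvalues of C_x
    return s_bar, Vt[:num_components].T, sigma[:num_components]

def synthesize_shape(s_bar, s_vecs, alpha):
    """S_mod = s_bar + sum_j alpha_j s_j, formula (4)."""
    return s_bar + s_vecs @ alpha
```

Drawing each coefficient $\alpha_j$ from the Gaussian prior of formula (5) keeps the synthesized shapes in the plausible region and avoids distorted faces.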
102: Perform multi-subspace shape modeling on the two-dimensional face images in the database, obtaining the two-dimensional face shape model.
The two-dimensional face database of this embodiment contains 2,000 European and Asian two-dimensional face samples, including texture data (R, G, B) and data covering pose, expression, and illumination variation of the faces. Building the two-dimensional face shape model comprises:
102a: Divide the two-dimensional face images in the database by pose; annotate the feature points of the face images of each pose, obtaining the feature point coordinate values; and use those coordinate values to build the shape vectors of the two-dimensional face images under the corresponding pose.
Specifically, the two-dimensional face images are divided into five poses: left, right, up, down, and frontal. Taking the leftward pose as an example, suppose the database contains $N$ leftward two-dimensional face samples, and 88 feature points (a number other than 88 is also possible) are annotated on every face of this pose. The feature point coordinates (x, y) are taken as raw data and quantized, giving the face shape vector.
Feature points can be annotated in several ways; the common one is manual annotation. This embodiment uses a semi-automatic interactive annotation method which, unlike fully manual annotation, does not require marking every point by hand: the face feature points are calibrated by dragging and similar interactions, and suitable software can be used for this.
The face shape vector is constituted from the 88 feature point coordinates:
$X_i = [x_{i0}, y_{i0}, x_{i1}, y_{i1}, \ldots, x_{ij}, y_{ij}, \ldots, x_{i,87}, y_{i,87}]^T$   (6)
102b: Normalize the shape vectors in center, scale, and direction.
When normalizing a face image, the eyes are usually taken as the reference points. Specifically, center normalization uses:
$\bar{x}_i = \frac{1}{m}\sum_{j=1}^{m} x_{ij}, \quad \bar{y}_i = \frac{1}{m}\sum_{j=1}^{m} y_{ij}, \quad x'_{ij} = x_{ij} - \bar{x}_i, \quad y'_{ij} = y_{ij} - \bar{y}_i, \quad \forall j = 1 \ldots m$   (7)
Scale normalization uses:
$\|S'_i\| = \sqrt{\sum_{j=1}^{m}\big(x'^2_{ij} + y'^2_{ij}\big)}, \quad x''_{ij} = x'_{ij}/\|S'_i\|, \quad y''_{ij} = y'_{ij}/\|S'_i\|, \quad \forall j = 1 \ldots m$   (8)
Direction normalization uses the Procrustes analysis algorithm to eliminate in-plane rotation of the face.
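A minimal sketch of the center and scale normalization of formulas (7)-(8), with an orthogonal-Procrustes rotation for the direction normalization; the reference-shape convention is an assumption of the sketch.

```python
import numpy as np

def normalize_shape(X):
    """Center and scale normalization of one shape, formulas (7)-(8).

    X: (m, 2) array of landmark coordinates (x_ij, y_ij).
    """
    Xc = X - X.mean(axis=0)                # center normalization, (7)
    return Xc / np.sqrt((Xc ** 2).sum())   # divide by ||S'_i||, (8)

def align_direction(X, ref):
    """Direction normalization: rotate X onto a reference shape ref
    (both already centered and scaled) by orthogonal Procrustes."""
    U, _, Vt = np.linalg.svd(X.T @ ref)
    return X @ (U @ Vt)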
102c: Perform principal component analysis on all normalized shape vectors; build the shape model of each pose from the PCA result; and build the two-dimensional face shape model from the shape models of all poses.
PCA of the shape vectors of the leftward-pose two-dimensional face data proceeds as follows:
1) Compute the mean shape vector and covariance matrix of the two-dimensional face data.
Specifically, the mean shape vector is
$\bar{X} = \frac{1}{N}\sum_{i=1}^{N} X_i$   (9)
and the covariance matrix is
$C_x = \frac{1}{N}\sum_{i=1}^{N}(X_i - \bar{X})(X_i - \bar{X})^T$   (10)
2) Build the shape model of the corresponding pose from the PCA result, and the two-dimensional face shape model from the shape models of all poses, as follows:
Obtain the eigenvector matrix $P$ from the mean shape vector and covariance matrix, and build the leftward-pose shape model $X = \bar{X} + Pb$, where $b$ is the PCA shape parameter.
Specifically, as shown in Fig. 2, taking the leftward-pose shape model as an example, different shapes are obtained by setting different shape parameters $b$, giving the shape model a certain range of variation.
Correspondingly, Fig. 3 shows the shape model of the frontal face.
Shape modeling is performed separately for the face images of every pose, obtaining the shape models of all poses; the modeling method is the same as above and is not repeated.
Further, any face shape $X$ can be expressed as $X = T_a(\bar{X} + Pb)$, where $a$ is the geometric parameter, comprising the horizontal and vertical translations $X_t$, $Y_t$, the scale $s$, and the rotation angle $\theta$, and $T_a$ denotes the geometric transformation of the shape:
$a = (X_t, Y_t, s, \theta); \qquad T_{X_t, Y_t, s, \theta}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} X_t \\ Y_t \end{pmatrix} + \begin{pmatrix} s\cos\theta & -s\sin\theta \\ s\sin\theta & s\cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$   (11)
Furthermore, the shape models of all poses together yield the two-dimensional face shape model. For example, let $M_i$, $i = 1, 2, 3, 4, 5$, correspond to the left, right, up, down, and frontal pose models, with $i$ the pose parameter. For each pose model $M_i$, with mean vector $\bar{X}_i$ and PCA eigenvector matrix $P_i$, the combined two-dimensional face shape model is:
$X = T_{a_i}(\bar{X}_i + P_i b_i)$
103: Perform local texture modeling on the two-dimensional face images, obtaining the two-dimensional face local texture model. This comprises:
This embodiment uses a discriminative learning method: it analyzes the difference between the texture around each feature point and the texture around nearby points, solving feature point localization as a discrimination problem, and describes the local texture by combining point-pair comparison features with random forest feature selection.
Specifically, the localization feature proposed by the embodiment of the invention is the point-pair comparison feature, i.e., the comparison of the gray values of any two pixels in the image. Local texture modeling designs one classifier per feature point, so the whole face requires 88 classifiers in total. Taking the left eye corner as an example, two arbitrary points p1 and p2 within a preset range are compared; the preset range can be a 5 × 5 coordinate range. With I(p) denoting the gray value of pixel p, the classifier output can be expressed as:
$h_n = \begin{cases} 1 & \text{if } I(p_1) \ge I(p_2) \\ 0 & \text{otherwise} \end{cases}$   (12)
That is, the weak classifier outputs 1 when I(p1) ≥ I(p2) and 0 otherwise. For an image block of size 32 × 32, there are $C_{1024}^2$ ways to choose two points, so there are about 520,000 weak classifiers in total.
Evaluating a point-pair comparison feature only requires comparing the gray values of two points in the original gray image; no transforms, multiplications, divisions, or square roots are needed, so the feature is stable and fast to compute. Moreover, the geometric position of the selected points is explicit, so for feature point localization this feature performs better than the Gabor, gradient, or Haar features of the prior art.
Because there are very many point-pair comparison features, a feature selection method must be combined with them. This embodiment uses the random forest method, whose basic idea is to integrate many weak classifiers into one strong classifier. A random forest consists of N decision trees (T1, T2, ..., TN); each tree is a decision tree classifier, every node of a tree is a weak classifier, and the decision of the forest is the average of the decisions of all trees. During training, the trees differ only in their training sets, each being a random subset of the full sample set; the training method of every tree is identical, choosing at each node the weak classifier with the best current classification performance. During classification, taking a C-class problem as an example, the forest outputs C confidences, where each confidence $p_n(f(p) = c)$ represents the probability that sample p belongs to class c according to tree classifier Tn; the final decision of the forest is the average over all tree results:
$p\big(f(p) = c\big) = \frac{1}{N}\sum_{n=1}^{N} p_n\big(f(p) = c\big)$   (13)
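The weak classifier of formula (12) and the forest vote of formula (13) might be sketched as below. Enumerating all $C_{1024}^2$ pairs is impractical, so a random subset is sampled, and scikit-learn's RandomForestClassifier stands in for the patent's forest (its predict_proba already averages the per-tree posteriors). Patch size, pair count, and the patch-labeling scheme are assumptions of the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sample_pairs(patch_size=32, num_pairs=2000):
    """Random subset of the ~520,000 possible pixel pairs in a patch."""
    idx = [rng.choice(patch_size * patch_size, size=2, replace=False)
           for _ in range(num_pairs)]
    return [(divmod(a, patch_size), divmod(b, patch_size)) for a, b in idx]

def pair_features(patch, pairs):
    """Binary point-pair comparison features h_n, formula (12)."""
    return np.array([1 if patch[p1] >= patch[p2] else 0 for p1, p2 in pairs],
                    dtype=np.uint8)

# One forest per landmark. Training rows would be pair_features of patches
# centered on the landmark (label 1) or offset from it (label 0), an
# assumed labeling; predict_proba then gives the tree-averaged posterior
# of formula (13).
forest = RandomForestClassifier(n_estimators=20, max_depth=10)
```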
104: Precisely locate the two-dimensional face image according to the two-dimensional face shape model and local texture model.
Specifically, the shape model $X = T_{a_i}(\bar{X}_i + P_i b_i)$ of each two-dimensional face image is optimized, obtaining the optimal pose model $M_i$ together with the optimal geometric parameter $a_i$ and shape parameter $b_i$ under that pose model, and thereby the optimal shape model of the two-dimensional face image; the image is then precisely located according to the optimal shape model, as follows:
The objective function of the traditional parameter optimization algorithm is:
$(\hat{a}, \hat{b}) = \min_{a,b}\|Y - T_a(\bar{X} + Pb)\|^2 = \min_{a,b}\big(Y - T_a(\bar{X} + Pb)\big)^T\big(Y - T_a(\bar{X} + Pb)\big)$   (14)
Adding the pose parameter $i$, the embodiment improves the optimization algorithm; the proposed objective function is:
$(\hat{i}, \hat{a}_i, \hat{b}_i) = \min_{i, a_i, b_i}\big(Y - T_{a_i}(\bar{X}_i + P_i b_i)\big)^T W_i \big(Y - T_{a_i}(\bar{X}_i + P_i b_i)\big) + \sum_{j=1}^{t} b_{ij}^2/\sigma_j^2$   (15)
The proposed objective function (15) differs from the traditional objective function (14) in three respects. First, (15) incorporates the output of each random forest classifier as a weight matrix $W_i$ in the optimization target, i.e., the result obtained by the random forest classifiers of the $i$-th pose model $M_i$. Second, it adds the restriction that the shape parameter must fall in the compact region of the PCA shape parameter space, via the penalty term $\sum_{j=1}^{t} b_{ij}^2/\sigma_j^2$ limiting the PCA shape parameter $b_i$. Third, the two-dimensional shape model itself is optimized, and the two-dimensional face image is precisely located according to the optimal pose model $M_i$. Optimizing this objective function brings the optimized model parameters closer to the expected values.
Further, the execution steps of the proposed model parameter optimization algorithm are as follows:
1) Initialize all pose models $M_i$, $i \in \{1, 2, 3, 4, 5\}$, by locating the eye region in the face images of the different poses, and obtain the corresponding geometric parameters $a_i$ and shape parameters $b_i$.
2) Optimize the chosen feature points: within a preset range around each original feature point of the shape model, select the point with maximal random forest output probability as the new feature point. Specifically, the preset range can be a 5 × 5 coordinate range.
3) Optimize the geometric parameter of the pose:
$\hat{a}_i = \min_{a_i}\big(Y - T_{a_i}(\bar{X}_i + P_i b_i)\big)^T W_i \big(Y - T_{a_i}(\bar{X}_i + P_i b_i)\big)$   (16)
4) Optimize the shape parameter:
$\hat{b}_i = \min_{b_i}\big(Y - T_{\hat{a}_i}(\bar{X}_i + P_i b_i)\big)^T W_i \big(Y - T_{\hat{a}_i}(\bar{X}_i + P_i b_i)\big) + \sum_{j=1}^{t} b_{ij}^2/\sigma_j^2$   (17)
5) If $\|\hat{a}_i - a_i\| + \|\hat{b}_i - b_i\| < \varepsilon$, stop the optimization; otherwise set $a_i = \hat{a}_i$, $b_i = \hat{b}_i$ and return to step 2).
6) Compare the optimal feature point localization results of each pose model, choose the one minimizing (15) as the optimal result, and obtain the optimal pose $i$ with the corresponding $a_i$ and $b_i$.
The optimal face shape model is built from the optimal parameters, achieving precise localization of each two-dimensional face image.
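For a fixed pose transform and fixed weights, the minimization (17) of step 4) has a closed form, derived by setting the gradient to zero; a sketch under stated assumptions (the observed landmarks Y are assumed already mapped back through the inverse pose transform $T_{\hat{a}_i}^{-1}$, and all argument names are hypothetical):

```python
import numpy as np

def update_shape_params(Yp, X_bar, P, W, sigma2):
    """Closed-form minimizer of formula (17) for a fixed pose transform.

    Yp: (2m,) landmarks mapped back through the inverse pose transform;
    X_bar: (2m,) mean shape; P: (2m, t) PCA basis; W: (2m, 2m) diagonal
    confidence matrix built from the random forest outputs; sigma2: (t,)
    PCA variances. Zeroing the gradient of
      (Yp - X_bar - P b)^T W (Yp - X_bar - P b) + sum_j b_j^2 / sigma_j^2
    gives (P^T W P + diag(1/sigma2)) b = P^T W (Yp - X_bar).
    """
    A = P.T @ W @ P + np.diag(1.0 / sigma2)
    return np.linalg.solve(A, P.T @ W @ (Yp - X_bar))
```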
105: Perform three-dimensional reconstruction of the two-dimensional face image according to the three-dimensional face shape model and the precise localization result, obtaining a three-dimensional face image, as follows:
105a: Reconstruct the three-dimensional shape of the two-dimensional face image according to the three-dimensional face shape model and the precise localization result, obtaining the three-dimensional face shape image.
Specifically, the three-dimensional face shape model obtained in 101 is matched to the corresponding two-dimensional face image, obtaining the deformation parameter α; α is optimized, and the three-dimensional shape image of the face is built from the optimal deformation parameter α.
Further, from the face localization result, the feature point coordinate values $x_i$ of the optimal model are substituted into (4), giving:
$S(x_i) = \bar{S}(x_i) + P(x_i)\cdot\alpha^T$   (18)
where
$x_i \in \{(x_1, y_1), \ldots, (x_l, y_l)\}, \quad 1 \le i \le l$   (19)
In this embodiment $l = 88$, so 2l equations are obtained.
From the optimization objective
$\min_{\alpha}\sum_{j=1}^{M}\alpha_j^2/\sigma_j^2$   (20)
the optimization target for solving the deformation parameter becomes:
$\min_{\alpha}\sum_{j=1}^{M}\alpha_j^2/\sigma_j^2 \quad \text{s.t.} \quad S(x_i) = \bar{S}(x_i) + P(x_i)\cdot\alpha^T$   (21)
That is, (20) is minimized subject to the constraint $S(x_i) = \bar{S}(x_i) + P(x_i)\cdot\alpha^T$.
Solving (21) yields the optimal three-dimensional shape parameter α; substituting α into (4) gives the three-dimensional shape image of the face, $S_{mod} = \bar{S} + \sum_{j=1}^{M_s}\alpha_j s_j$.
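The constrained problem (21) — minimum prior energy subject to the 2l landmark equations — can be solved with a Lagrange multiplier; a sketch, where variable names are assumptions and lstsq guards against a rank-deficient constraint system:

```python
import numpy as np

def solve_deformation(A, d, sigma2):
    """Solve formula (21): min sum_j alpha_j^2/sigma_j^2 s.t. A alpha = d.

    A: (2l, M) rows of the PCA basis P(x_i) at the l located landmarks;
    d: (2l,) stacked residuals S(x_i) - S_bar(x_i); sigma2: (M,) PCA
    variances. With Sig = diag(sigma2), the Lagrange condition gives
    alpha = Sig A^T lambda where (A Sig A^T) lambda = d.
    """
    Sig = np.diag(sigma2)
    lam, *_ = np.linalg.lstsq(A @ Sig @ A.T, d, rcond=None)
    return Sig @ A.T @ lam
```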
105b: Apply a three-dimensional geometric transformation to the shape image of the three-dimensional face, obtaining the transformed three-dimensional face shape image, as follows:
The three-dimensional geometric transformation translates, scales, or rotates the feature points of the three-dimensional face shape image in space; in homogeneous coordinates it can be expressed by matrix multiplication:
Translation:
$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{pmatrix}$   (22)
where x, y, z are the three-dimensional coordinates before translation, x', y', z' the coordinates after translation, and $t_x, t_y, t_z$ are the translations along the X, Y, and Z axes.
Scaling:
$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} s_x x \\ s_y y \\ s_z z \\ 1 \end{pmatrix}$   (23)
where $s_x, s_y, s_z$ are the scale factors along the x, y, and z axes respectively.
Rotation about a coordinate axis — in a right-handed coordinate system, rotating by angle θ about an axis through the origin:
About the X axis:
$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = R_X(\theta)\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$   (24)
About the Y axis:
$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = R_Y(\theta)\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$   (25)
About the Z axis:
$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = R_Z(\theta)\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$   (26)
Combining (22)-(26), the three-dimensional geometric transformation can be expressed as:
$[x'\ y'\ z']^T = R(\theta_x, \theta_y, \theta_z)\cdot S(s_x, s_y, s_z)\cdot [x\ y\ z]^T + M(t_x, t_y, t_z)$   (27)
where
$S(s_x, s_y, s_z) = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & s_z \end{pmatrix}$ is the scaling matrix,
$M(t_x, t_y, t_z) = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}$ is the translation vector, and
$R(\theta_x, \theta_y, \theta_z)$ is the rotation matrix:
$R(\theta_x, \theta_y, \theta_z) = R_X(\theta_x)\,R_Y(\theta_y)\,R_Z(\theta_z) = \begin{pmatrix} \cos\theta_y\cos\theta_z & -\cos\theta_y\sin\theta_z & \sin\theta_y \\ \sin\theta_x\sin\theta_y\cos\theta_z + \cos\theta_x\sin\theta_z & -\sin\theta_x\sin\theta_y\sin\theta_z + \cos\theta_x\cos\theta_z & -\sin\theta_x\cos\theta_y \\ -\cos\theta_x\sin\theta_y\cos\theta_z + \sin\theta_x\sin\theta_z & \cos\theta_x\sin\theta_y\sin\theta_z + \sin\theta_x\cos\theta_z & \cos\theta_x\cos\theta_y \end{pmatrix}$
In (27), $[x\ y\ z]^T$ is a vertex coordinate before rotation, $[x'\ y'\ z']^T$ the coordinate after rotation, and $\theta_x, \theta_y, \theta_z$ are the rotation angles about the x, y, and z axes respectively. Applying (27) to the three-dimensional face shape image obtained in 105a yields the transformed three-dimensional face shape image.
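The rotation matrices (24)-(26) and the combined transform (27) translate directly into code; a minimal sketch:

```python
import numpy as np

def rotation_matrix(theta_x, theta_y, theta_z):
    """R(theta_x, theta_y, theta_z) = R_X R_Y R_Z, formulas (24)-(26)."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def transform_vertices(V, angles, scales, t):
    """Formula (27): [x' y' z']^T = R S [x y z]^T + M for each row of V."""
    R = rotation_matrix(*angles)
    S = np.diag(scales)
    return (R @ S @ V.T).T + np.asarray(t)
```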
105c: Texture-map the transformed three-dimensional face shape image, obtaining the texture image of the three-dimensional face, as follows:
1) Obtain the feature point coordinate values on the geometrically transformed three-dimensional face shape image, and apply a projective transformation to the spatial coordinates of the feature points, obtaining their projected coordinates on the two-dimensional face image.
In this embodiment, the projective transformation is an orthographic parallel projection, whose projection direction is parallel to one axis of the viewing coordinate system, i.e., perpendicular to the plane formed by the other two axes. In the viewing coordinate system of an orthographic projection — for example, projecting along the z direction — the projected coordinates of an object are independent of its z value, so dropping the z variable gives the two-dimensional projection of the three-dimensional object. The orthographic projection along the z direction can be expressed as:
$\begin{pmatrix} x_p \\ y_p \\ z_p \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix} = P_{zort}\begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix}$   (28)
For a vertex $[x\ y\ z]^T$ of the three-dimensional model with geometrically transformed coordinates $[x'\ y'\ z']^T$, the orthographic projection model gives its projected coordinates on the image plane as:
$P_{x'} = x' \times (\text{width}/\text{edge}) + \text{width}/2, \qquad P_{y'} = y' \times (\text{height}/\text{edge}) + \text{height}/2$   (29)
where width and height are the width and height of the two-dimensional image and edge is the length of the three-dimensional viewing-volume boundary.
2) Take the pixel value of the two-dimensional face image at the projected coordinates as the texel value of the corresponding point on the three-dimensional face image, obtaining the texture image of the three-dimensional face.
For any point on the three-dimensional face shape image with spatial coordinates $[x\ y\ z]^T$, its projected coordinates $[P_x\ P_y]^T$ on the two-dimensional image plane are obtained from (27) and (29), and the pixel value of the two-dimensional face image at those coordinates becomes the texture of the corresponding point on the three-dimensional face image, giving the texture image of the three-dimensional face.
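A sketch of the projection of formula (29) and the texel lookup of step 2); the nearest-neighbor sampling and the clipping of out-of-frame vertices are assumptions of the sketch.

```python
import numpy as np

def project_vertices(V, width, height, edge):
    """Orthographic projection, formulas (28)-(29): drop z and map the
    remaining x/y coordinates into the 2D image frame."""
    px = V[:, 0] * (width / edge) + width / 2
    py = V[:, 1] * (height / edge) + height / 2
    return np.stack([px, py], axis=1)

def sample_texture(image, proj):
    """Take the 2D image pixel under each projected vertex as that
    vertex's texel value (nearest-neighbor sampling)."""
    cols = np.clip(np.rint(proj[:, 0]).astype(int), 0, image.shape[1] - 1)
    rows = np.clip(np.rint(proj[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[rows, cols]
```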
Through step 105, the three-dimensional shape reconstruction and texture reconstruction of the two-dimensional face image are completed, yielding the reconstructed three-dimensional face image.
106: Apply an illumination model to the three-dimensional face image, obtaining virtual images of varying pose and illumination, as follows:
106a: Specify an illumination model for the three-dimensional face image obtained in 105.
An illumination model is a mathematical model that replaces a complex physical model to simulate how light striking an object's surface is reflected and transmitted into the human visual system, making the object visible. Many illumination models can be specified in embodiments of the invention; taking the Phong model as an example, it has three components: ambient light, diffuse reflection, and specular reflection. The intensity I reflected toward the viewpoint from a point P on the object surface is the sum of the ambient, ideal diffuse, and specular reflected intensities:
$I = I_a K_a + I_p K_d (L \cdot N) + I_p K_s (R \cdot V)^n$   (30)
where $I_a$ is the ambient light intensity, $K_a$ the ambient reflection coefficient of the object, $I_p$ the incident light intensity, $K_d$ the diffuse reflection coefficient of the object ($0 < K_d < 1$), and $K_s$ the specular reflection coefficient of the object. N is the surface normal at point P, L the vector from P toward the light source, V the viewing direction, and R the reflection direction.
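Formula (30) evaluated per vertex might look as follows; the clamping of negative dot products to zero is a standard rendering addition, not stated in the patent.

```python
import numpy as np

def phong_intensity(N, L, V, Ia, Ka, Ip, Kd, Ks, n):
    """Phong model, formula (30): I = Ia*Ka + Ip*Kd*(L.N) + Ip*Ks*(R.V)^n.

    N, L, V: (m, 3) unit surface normals, directions toward the light
    source, and viewing directions; the reflection direction is
    R = 2(L.N)N - L.
    """
    LdotN = np.clip((L * N).sum(axis=1), 0.0, None)   # clamp back-facing
    R = 2.0 * LdotN[:, None] * N - L
    RdotV = np.clip((R * V).sum(axis=1), 0.0, None)
    return Ia * Ka + Ip * Kd * LdotN + Ip * Ks * RdotV ** n
```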
106b: Apply the three-dimensional geometric transformation to the three-dimensional face image according to preset rotation angle values, obtaining pose-varied three-dimensional face images.
After the illumination model is determined, a number of rotation angle triples $(\theta_x, \theta_y, \theta_z)$ representing the three-dimensional pose of the face are chosen, and the three-dimensional face image is geometrically transformed accordingly. The rotation angle values can be chosen in the range [-60°, 60°] in steps of 5° to 10°; the transformation itself follows the three-dimensional geometric transformation procedure in 105b and is not repeated.
106c: Apply a projective transformation to the pose-varied three-dimensional face images according to preset light source parameter values, obtaining the virtual images of varying pose and illumination.
Specifically, light source parameter values are chosen and the geometrically transformed three-dimensional face image is projected following the projective transformation procedure in 105c, which is not repeated. The three-dimensional face is projected onto the image plane, hidden surfaces are removed, and virtual face images with illumination and pose variation are produced.
By building the three-dimensional face shape model, the two-dimensional face shape model, and the two-dimensional face local texture model, the embodiment of the invention precisely locates the two-dimensional face image, reconstructs a three-dimensional face image according to the localization result, and then applies an illumination model to obtain virtual images of varying pose and illumination, thereby enlarging the sample space of pose and illumination variation of the image and overcoming their influence during image recognition. Combining point-pair comparison features with feature selection in the local texture modeling improves computation speed, so that image recognition achieves both high efficiency and a high recognition rate.
Embodiment 2
This embodiment provides a face image recognition method. The method acquires a two-dimensional face image to be recognized; extracts features from the two-dimensional face image; compresses the extracted features, obtaining compressed features; classifies the compressed features, obtaining a classification result; matches the classification result against preset classification results; and recognizes the face image to be recognized according to the matching result. As shown in Fig. 4, the embodiment comprises:
201: Acquire the two-dimensional face image to be recognized, and preprocess it.
Specifically, preprocessing the two-dimensional face image comprises correcting in-plane rotation of the face region and normalizing scale and gray level, usually with the eyes in the image as reference points. The normalization method is identical to that of embodiment 1 and is not repeated.
202: Extract features from the two-dimensional face image.
Specifically, the features extracted from the preprocessed two-dimensional face image can be gray-level features, edge features, wavelet features, Gabor features, and so on.
203: Compress the extracted features, obtaining the compressed features.
Specifically, taking Gabor features as an example, after obtaining the feature vector $X_f$ of length L for the face image, feature compression extracts the discriminative components, improves the feature distribution, and reduces the feature dimensionality, thereby improving the recognition performance of the system, as follows:
The extracted Gabor features are compressed using principal component analysis, linear discriminant analysis (LDA), or a combination of both.
LDA is a common supervised linear dimensionality-reduction method: it seeks the linear subspace in which the projected samples scatter tightly within each class and widely between classes. Taking face images as an example, the procedure is as follows. First, every two-dimensional face image is arranged in row or column order into a column vector $x_i$, $i = 1, 2, \ldots, N$, so that each image corresponds to a sample in a high-dimensional space. Suppose the samples fall into C classes, with $N_i$ samples in class i. Then:
Grand mean: $m = \frac{1}{N}\sum_{i=1}^{N} x_i$
Class means: $m_i = \frac{1}{N_i}\sum_{x_j \in X_i} x_j \quad (i = 1, 2, \ldots, C)$
Within-class scatter matrix: $S_w = \sum_{i=1}^{C}\sum_{x_j \in X_i}(x_j - m_i)(x_j - m_i)^T$
Between-class scatter matrix: $S_b = \sum_{i=1}^{C} N_i (m_i - m)(m_i - m)^T$
The projection matrix of linear discriminant analysis is:
$W_{LDA} = \arg\max_W \frac{|W^T S_b W|}{|W^T S_w W|} = [w_1, w_2, \ldots, w_m]$   (31)
The basis of the LDA subspace is obtained from the generalized eigenvalue decomposition:
$S_b w_i = \lambda_i S_w w_i$   (32)
For the Gabor features extracted from the two-dimensional face image, first train the PCA projection subspace, obtaining the PCA projection matrix; then train the LDA projection subspace on the extracted Gabor features, obtaining the LDA projection matrix $W_{LDA}$; multiply the two projection matrices, obtaining the feature compression matrix; and compress the extracted Gabor features with that matrix, obtaining the compressed features.
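A sketch of the LDA stage of the compression: the scatter matrices of formulas (31)-(32) and the generalized eigenproblem solved with SciPy. The ridge term on $S_w$ is a stability assumption; in the patent's pipeline the input X would already be PCA-projected Gabor features, and the final compression matrix is the product of the PCA and LDA projections.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, num_dims):
    """W_LDA of formulas (31)-(32) via S_b w = lambda S_w w.

    X: (N, d) feature vectors (here: PCA-compressed Gabor features);
    y: (N,) class labels, one class per person.
    """
    d = X.shape[1]
    m = X.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)             # within-class scatter
        Sb += len(Xc) * np.outer(mc - m, mc - m)  # between-class scatter
    Sw += 1e-6 * np.eye(d)                        # ridge for stability
    vals, vecs = eigh(Sb, Sw)          # generalized eigenproblem, (32)
    return vecs[:, np.argsort(vals)[::-1][:num_dims]]

# Full compression matrix: W = W_PCA @ W_LDA (PCA projection applied first).
```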
204: Classify the compressed features, obtaining a classification result; match it against the preset classification results; and recognize the face image to be recognized according to the matching result. This comprises:
204a: Classify the features with a designed classifier. As shown in Fig. 5, the classifier design steps comprise:
1) Generate virtual face images from the known two-dimensional face image database, as follows:
Perform multi-subspace shape modeling on the two-dimensional face images in the database, obtaining the two-dimensional face shape model; perform texture modeling on the two-dimensional face images, obtaining the two-dimensional face local texture model; precisely locate the two-dimensional face images according to the shape model and local texture model; perform three-dimensional reconstruction of the two-dimensional face images according to the preset three-dimensional face shape model and the precise localization results, obtaining three-dimensional face images; apply an illumination model to the three-dimensional face images, obtaining virtual images of varying pose and illumination.
The virtual-image generation method is identical to that of embodiment 1 and is not repeated here.
2) Normalize the virtual images, obtaining normalized virtual images, as follows:
2a) Compute the positions of the feature points in the virtual images from the positions of the feature points of the three-dimensional face image.
2b) Geometrically normalize the obtained virtual images, usually with the eyes in the image as reference points: align the major facial organs to standard positions according to the organ locations, and isolate the face region to avoid background interference. The purpose of face rectification is to bring the major organs of the face to assigned positions, reducing scale, translation, and in-plane rotation differences between images. Rectification can be a two-dimensional affine transformation of the image, comprising translation, scaling, and rotation.
2c) Gray-normalize the geometrically normalized virtual images.
To counteract abnormal image contrast possibly caused by ambient lighting or the imaging device, this embodiment applies gray-level equalization to the geometrically normalized face images, improving their gray-level distribution and enhancing consistency between images. Usable gray-level equalization methods include gray-level histogram equalization, illumination plane correction, and gray mean/variance normalization.
3) Extract features from the normalized virtual images and compress them, obtaining the compressed features.
Specifically, the features extracted from the virtual images can be gray-level features, edge features, wavelet features, Gabor features, and so on.
After extracting the virtual image features, compress them using principal component analysis, linear discriminant analysis, or a combination of both, obtaining the compressed features. The compression method is identical to that in 203 and is not repeated.
4) Design the classifier on the compressed features.
Bayesian decision theory is the theoretical foundation and mainstream approach of classifier design. According to Bayesian decision theory, the feature vector $X_f$ belongs to one of N pattern classes $C = \{c_1, c_2, \ldots, c_N\}$; if the posterior probability that $X_f$ belongs to class $c_j$, $1 \le j \le N$, is $p(c_j \mid X_f)$, then the following decision rule achieves optimal classification in the minimum-error sense:
$c^* = \arg\max_{c_j \in C} p(c_j \mid X_f)$   (33)
where $c^* \in C$ is the classification result. The posterior $p(c_j \mid X_f)$ is commonly expressed through the class prior $P(c_j)$ and the class-conditional probability density $p(X_f \mid c_j)$, turning (33) into:
$c^* = \arg\max_{c_j \in C} P(c_j)\, p(X_f \mid c_j)$   (34)
Assuming equal priors for all face classes, i.e., $P(c_j) = P(c_i)$ for $1 \le i, j \le N$, the maximum a posteriori rule becomes the maximum class-conditional density criterion:
$c^* = \arg\max_{c_j \in C} p(X_f \mid c_j)$   (35)
In practice, both the functional form and the parameters of the class-conditional probability density are usually unknown. To realize Bayesian decision making, one way to design the classifier is to estimate the class-conditional density from training images, i.e., to estimate its functional form and parameters.
Modeling $p(X_f \mid c_j)$ with different methods yields different discriminant functions and corresponding classifiers.
Specifically, in face images the feature vectors of a class usually follow a Gaussian distribution. When the covariance matrices of all classes are equal and the feature components within each class are independent with equal variance, the minimum distance classifier is obtained: $c^* = \arg\min_{c_j \in C}\|X_f - \mu_j\|$, where $\mu_j$ is the mean of class $c_j$.
From the design principle of the classifier, each feature processed by the classifier yields a unique classification result; therefore each face image, through classifier design and training, obtains a classification result corresponding to it.
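Under the equal-covariance Gaussian assumption above, the minimum distance classifier reduces to a nearest-class-mean rule; a minimal sketch, with hypothetical function names:

```python
import numpy as np

def train_class_means(F, y):
    """mu_j of each class: mean compressed feature vector per person."""
    classes = np.unique(y)
    mus = np.stack([F[y == c].mean(axis=0) for c in classes])
    return classes, mus

def classify(x_f, classes, mus):
    """Minimum distance rule: c* = argmin_j ||X_f - mu_j||."""
    return classes[np.argmin(np.linalg.norm(mus - x_f, axis=1))]
```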
204b: Extract and compress features from the face image to be recognized, and feed the compressed features into the classifier, obtaining a classification result. Extract and compress features from all two-dimensional face images in the database, feed the compressed features into the classifier, and take the resulting classification results as the preset classification results.
Match the classification result of the face image to be recognized against the preset classification results, and recognize the face image to be recognized according to the matching result.
From the design principle of the classifier, each face image obtains a corresponding classification result through classifier design and training, so the corresponding face image can be recognized from the classifier output.
In this embodiment, faces of eight poses are recognized: c05 (22.5° left), c37 (45° left), c02 (67.5° left), c29 (22.5° right), c11 (45° right), c14 (67.5° right), c09 (looking down), and c07 (looking up); the face recognition accuracies reached 70%, 94%, 100%, 100%, 95%, 68%, 100%, and 100%, respectively.
By performing three-dimensional reconstruction and illumination modeling on a two-dimensional face image, the embodiment of the invention obtains virtual face images under different poses. Thus, even when only one standard face image is available, deformation modeling generates virtual images of varying pose and illumination, enlarging the sample space of pose and illumination variation of the image; by designing a classifier on the virtual images, face image recognition achieves a very high recognition rate.
The embodiment described above is merely one preferred embodiment of the present invention; ordinary variations and substitutions made by those skilled in the art within the scope of the technical solution of the invention shall all fall within the protection scope of the invention.

Claims (18)

1. A face virtual-image generation method, characterized by comprising:
performing multi-subspace shape modeling on two-dimensional face images in a preset database, obtaining a two-dimensional face shape model;
performing local texture modeling on said two-dimensional face images, obtaining a two-dimensional face local texture model;
precisely locating said two-dimensional face images according to said two-dimensional face shape model and local texture model;
performing three-dimensional reconstruction of said two-dimensional face image according to a preset three-dimensional face shape model and the precise localization result of said two-dimensional face image, obtaining a three-dimensional face image;
applying an illumination model to said three-dimensional face image, obtaining virtual images of varying pose and illumination.
2. The face virtual-image generation method of claim 1, characterized in that performing multi-subspace shape modeling on the two-dimensional face images in the preset database to obtain the two-dimensional face shape model comprises:
dividing the two-dimensional face images in said database by pose;
annotating the feature points of the face images of each pose, obtaining the coordinate values of said feature points;
building the shape vectors of the two-dimensional face images under the corresponding pose from said feature point coordinate values;
normalizing said shape vectors, obtaining normalized shape vectors;
performing principal component analysis on said normalized shape vectors, and building the shape model of the corresponding pose from the principal component analysis result;
building the two-dimensional face shape model from said shape models of all poses.
3. The face virtual-image generation method of claim 1, characterized in that performing local texture modeling on said two-dimensional face images to obtain the two-dimensional face local texture model comprises:
obtaining the feature point coordinate values on said two-dimensional face image;
comparing the gray values of two pixels within a preset range around a feature point of said two-dimensional face image, obtaining a point-pair comparison feature;
selecting among said point-pair comparison features with a feature selection method, obtaining a selection result;
building the two-dimensional face local texture model from said selection result.
4. The face virtual-image generation method of claim 1, characterized in that precisely locating the two-dimensional face image according to said two-dimensional face shape model and local texture model comprises:
optimizing said two-dimensional face shape model with a preset algorithm, obtaining the optimal pose parameter, geometric parameter, and shape parameter;
building the optimal shape model of said two-dimensional face image from said optimal pose parameter, geometric parameter, and shape parameter;
precisely locating said two-dimensional face image with said optimal shape model.
5. The face virtual-image generation method of claim 1, characterized in that performing three-dimensional reconstruction of said two-dimensional face image according to the preset three-dimensional face shape model and the precise localization result of said two-dimensional face image, obtaining the three-dimensional face image, comprises:
performing three-dimensional shape reconstruction of said two-dimensional face image according to the preset three-dimensional face shape model and the precise localization result, obtaining the shape image of the three-dimensional face;
applying the three-dimensional geometric transformation to the shape image of said three-dimensional face, obtaining the transformed three-dimensional face shape image;
texture-mapping said transformed three-dimensional face shape image, obtaining the texture image of the three-dimensional face;
combining said transformed three-dimensional face shape image and said texture image of the three-dimensional face, obtaining said three-dimensional face image.
6. The face virtual image generation method according to claim 5, wherein reconstructing the three-dimensional shape of the two-dimensional face image to obtain the three-dimensional face shape image comprises:
matching the three-dimensional face shape model with the precisely located two-dimensional face image to obtain deformation parameters from the two-dimensional image to the three-dimensional model, and optimizing the deformation parameters;
reconstructing the three-dimensional shape of the two-dimensional face image according to the optimized deformation parameters, to obtain the three-dimensional face shape image.
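A hedged Python sketch of the deformation-parameter fit: a mean three-dimensional shape plus deformation bases is matched to the precisely located two-dimensional feature points by ridge-regularized least squares, assuming an orthographic camera with pose alignment already done; all names and the regularization are illustrative assumptions:

```python
import numpy as np

def fit_deformation(landmarks_2d, mean_3d, bases_3d, lam=0.1):
    """landmarks_2d: (K, 2); mean_3d: (K, 3); bases_3d: (M, K, 3).
    Returns deformation coefficients alpha of length M."""
    P = np.array([[1.0, 0.0, 0.0],      # orthographic projection;
                  [0.0, 1.0, 0.0]])     # pose alignment assumed done
    residual = (landmarks_2d - mean_3d @ P.T).ravel()
    # Each deformation basis contributes one (2K,) column of the system.
    A = np.stack([(basis @ P.T).ravel() for basis in bases_3d], axis=1)
    # Ridge-regularized least squares keeps the deformation moderate.
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                            A.T @ residual)
    return alpha

# The reconstructed three-dimensional face shape is then
#   shape_3d = mean_3d + sum_m alpha[m] * bases_3d[m].
```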
7. The face virtual image generation method according to claim 5, wherein texture-mapping the three-dimensional face shape image to obtain the three-dimensional face texture image comprises:
obtaining the feature point coordinate values on the transformed three-dimensional face shape image, and applying a projective transformation to the spatial coordinates of the feature points to obtain their projected coordinates on the two-dimensional face image;
taking the pixel values of the two-dimensional face image at the projected coordinates as the texture values of the corresponding points on the three-dimensional face image, to obtain the three-dimensional face texture image.
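A minimal Python sketch of this texture-mapping step, assuming a simple scaled-orthographic projective transformation with parameters `scale`, `tx`, `ty`; the patent does not specify the camera model, so this is only an illustrative stand-in:

```python
import numpy as np

def sample_texture(image, points_3d, scale, tx, ty):
    """points_3d: (K, 3) spatial coordinates of the feature points on
    the transformed three-dimensional face shape image."""
    # Projective transformation: 3D spatial coordinates -> 2D image plane.
    u = np.round(scale * points_3d[:, 0] + tx).astype(int)
    v = np.round(scale * points_3d[:, 1] + ty).astype(int)
    u = np.clip(u, 0, image.shape[1] - 1)
    v = np.clip(v, 0, image.shape[0] - 1)
    # The pixel value at each projected coordinate becomes the texture
    # value of the corresponding point on the three-dimensional face.
    return image[v, u]
```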
8. The face virtual image generation method according to claim 1, wherein processing the three-dimensional face image with the illumination model to obtain the virtual images with pose and illumination variations comprises:
applying a three-dimensional geometric transformation to the three-dimensional face image according to preset rotation angle values to obtain pose-varied three-dimensional face images;
applying a projective transformation to the pose-varied three-dimensional face images according to preset light source parameter values to obtain the virtual images with pose and illumination variations.
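A hedged Python sketch of virtual image generation, assuming a yaw rotation as the geometric transformation and Lambertian shading as the illumination model; both choices, and all parameter names, are illustrative simplifications rather than the patent's exact models:

```python
import numpy as np

def virtual_image(points_3d, normals, texture, yaw, light_dir, ambient=0.2):
    """points_3d, normals: (K, 3); texture: (K,) per-point gray values."""
    # Three-dimensional geometric transformation: rotate about the y axis
    # by the preset angle (other rotation axes handled analogously).
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    rotated = points_3d @ R.T
    rotated_normals = normals @ R.T

    # Illumination model: Lambertian shading under the preset light source.
    light = np.asarray(light_dir) / np.linalg.norm(light_dir)
    diffuse = np.clip(rotated_normals @ light, 0.0, None)
    shaded = texture * (ambient + (1.0 - ambient) * diffuse)
    return rotated, shaded   # to be rasterized into a 2D virtual image
```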
9. A three-dimensional face recognition method based on fully automatic face positioning, characterized in that it comprises:
obtaining a two-dimensional face image to be recognized;
extracting features from the two-dimensional face image;
compressing the extracted features to obtain compressed features;
classifying the compressed features to obtain a classification result;
matching the classification result against preset classification results, and recognizing the face image to be recognized according to the matching result.
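A minimal Python sketch of this recognition pipeline, assuming PCA as the feature-compression step and a nearest-neighbor match against the preset gallery results; every name here (`pca_mean`, `pca_proj`, `gallery_feats`) is an illustrative assumption:

```python
import numpy as np

def recognize(face_image, pca_mean, pca_proj, gallery_feats, gallery_ids):
    """pca_proj: (M, D) pre-trained projection; gallery_feats: (G, M)
    preset classification results, one compressed feature per identity."""
    feats = face_image.astype(np.float64).ravel()       # extracted features
    compressed = pca_proj @ (feats - pca_mean)          # compression
    # Classification and matching: nearest neighbor in the gallery.
    dists = np.linalg.norm(gallery_feats - compressed, axis=1)
    best = int(np.argmin(dists))
    return gallery_ids[best], dists[best]
```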
10. The three-dimensional face recognition method based on fully automatic face positioning according to claim 9, wherein obtaining the preset classification results comprises:
performing multi-subspace shape modeling on the two-dimensional face images in a preset database to obtain a two-dimensional face shape model;
performing local texture modeling on the two-dimensional face images to obtain a two-dimensional face local texture model;
precisely locating the two-dimensional face images according to the two-dimensional face shape model and the local texture model;
performing three-dimensional reconstruction on the two-dimensional face images according to a preset three-dimensional face shape model and the precise positioning result, to obtain three-dimensional face images;
processing the three-dimensional face images with an illumination model to obtain virtual images with pose and illumination variations;
classifying the virtual images to obtain classification results, and taking these as the preset classification results.
11. The three-dimensional face recognition method based on fully automatic face positioning according to claim 10, wherein performing multi-subspace shape modeling on the two-dimensional face images in the preset database to obtain the two-dimensional face shape model comprises:
dividing the two-dimensional face images in the database by pose;
calibrating feature points on the face images of each pose to obtain the feature point coordinate values;
using the feature point coordinate values to construct shape vectors of the two-dimensional face images under the corresponding pose;
normalizing the shape vectors to obtain normalized shape vectors;
performing principal component analysis on the normalized shape vectors, and constructing a shape model for the corresponding pose from the principal component analysis result;
constructing the two-dimensional face shape model from the shape models of all poses.
12. The three-dimensional face recognition method based on fully automatic face positioning according to claim 10, wherein performing local texture modeling on the two-dimensional face images to obtain the two-dimensional face local texture model comprises:
comparing the gray levels of pairs of pixels within a preset range around each feature point of the two-dimensional face images to obtain a set of contrast features;
selecting among the contrast features with a feature selection method to obtain a selection result;
constructing the two-dimensional face local texture model from the selection result.
13. The three-dimensional face recognition method based on fully automatic face positioning according to claim 10, wherein precisely locating the two-dimensional face images according to the two-dimensional face shape model and the local texture model comprises:
optimizing the two-dimensional face shape model with a preset algorithm to obtain optimal pose, geometric and shape parameters;
constructing the optimal shape model of the two-dimensional face image from the optimal pose, geometric and shape parameters;
precisely locating the two-dimensional face image with the optimal shape model.
14. The three-dimensional face recognition method based on fully automatic face positioning according to claim 10, wherein performing three-dimensional reconstruction on the two-dimensional face images according to the preset three-dimensional face shape model and the precise positioning result, to obtain the three-dimensional face images, comprises:
reconstructing the three-dimensional shape of the two-dimensional face image according to the preset three-dimensional face shape model and the precise positioning result, to obtain a three-dimensional face shape image;
applying a three-dimensional geometric transformation to the three-dimensional face shape image to obtain a transformed three-dimensional face shape image;
texture-mapping the transformed three-dimensional face shape image to obtain a three-dimensional face texture image;
combining the transformed three-dimensional face shape image with the three-dimensional face texture image to obtain the three-dimensional face image.
15. The three-dimensional face recognition method based on fully automatic face positioning according to claim 14, wherein reconstructing the three-dimensional shape of the two-dimensional face image to obtain the three-dimensional face shape image comprises:
matching the three-dimensional face shape model with the precisely located two-dimensional face image to obtain deformation parameters from the two-dimensional image to the three-dimensional model, and optimizing the deformation parameters;
reconstructing the three-dimensional shape of the two-dimensional face image according to the optimized deformation parameters, to obtain the three-dimensional face shape image.
16. The three-dimensional face recognition method based on fully automatic face positioning according to claim 14, wherein texture-mapping the three-dimensional face shape image to obtain the three-dimensional face texture image comprises:
obtaining the feature point coordinate values on the transformed three-dimensional face shape image, and applying a projective transformation to the spatial coordinates of the feature points to obtain their projected coordinates on the two-dimensional face image;
taking the pixel values of the two-dimensional face image at the projected coordinates as the texture values of the corresponding points on the three-dimensional face image, to obtain the three-dimensional face texture image.
17. The three-dimensional face recognition method based on fully automatic face positioning according to claim 10, wherein processing the three-dimensional face images with the illumination model to obtain the virtual images with pose and illumination variations comprises:
applying a three-dimensional geometric transformation to the three-dimensional face images according to preset rotation angle values to obtain pose-varied three-dimensional face images;
applying a projective transformation to the pose-varied three-dimensional face images according to preset light source parameter values to obtain the virtual images with pose and illumination variations.
18. The three-dimensional face recognition method based on fully automatic face positioning according to claim 10, wherein classifying the virtual images to obtain the classification results comprises:
normalizing the virtual images to obtain normalized virtual images;
extracting features from the normalized virtual images;
compressing the extracted features to obtain compressed features;
classifying the compressed features to obtain the classification results.
CN2008101167815A 2008-07-17 2008-07-17 Three-dimensional human face recognition method based on human face full-automatic positioning Active CN101320484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101167815A CN101320484B (en) 2008-07-17 2008-07-17 Three-dimensional human face recognition method based on human face full-automatic positioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101167815A CN101320484B (en) 2008-07-17 2008-07-17 Three-dimensional human face recognition method based on human face full-automatic positioning

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN2009101433254A Division CN101561875B (en) 2008-07-17 2008-07-17 Method for positioning two-dimensional face images
CN200910143324XA Division CN101561874B (en) 2008-07-17 2008-07-17 Method for recognizing face images

Publications (2)

Publication Number Publication Date
CN101320484A true CN101320484A (en) 2008-12-10
CN101320484B CN101320484B (en) 2012-01-04

Family

ID=40180515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101167815A Active CN101320484B (en) 2008-07-17 2008-07-17 Three-dimensional human face recognition method based on human face full-automatic positioning

Country Status (1)

Country Link
CN (1) CN101320484B (en)

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477625B (en) * 2009-01-07 2013-04-24 北京中星微电子有限公司 Upper half of human body detection method and system
CN101751689B (en) * 2009-09-28 2012-02-22 中国科学院自动化研究所 Three-dimensional facial reconstruction method
CN102542240A (en) * 2010-12-23 2012-07-04 三星电子株式会社 Equipment and method for estimating orientation of human body
CN102831382A (en) * 2011-06-15 2012-12-19 北京三星通信技术研究有限公司 Face tracking apparatus and method
WO2013091370A1 (en) * 2011-12-22 2013-06-27 中国科学院自动化研究所 Human body part detection method based on parallel statistics learning of 3d depth image information
CN103971112B (en) * 2013-02-05 2018-12-07 腾讯科技(深圳)有限公司 Image characteristic extracting method and device
CN103971112A (en) * 2013-02-05 2014-08-06 腾讯科技(深圳)有限公司 Image feature extracting method and device
CN104765739A (en) * 2014-01-06 2015-07-08 南京宜开数据分析技术有限公司 Large-scale face database searching method based on shape space
CN104765739B (en) * 2014-01-06 2018-11-02 南京宜开数据分析技术有限公司 Extensive face database search method based on shape space
CN106462738A (en) * 2014-05-20 2017-02-22 埃西勒国际通用光学公司 Method for constructing a model of the face of a person, method and device for posture analysis using such a model
CN104408420A (en) * 2014-11-26 2015-03-11 苏州福丰科技有限公司 Three-dimensional face recognition method for entry and exit administration
CN104504405A (en) * 2014-12-02 2015-04-08 苏州福丰科技有限公司 Method for recognizing three-dimensional face
CN105006020A (en) * 2015-07-14 2015-10-28 重庆大学 Virtual man face generation method based on 3D model
CN105006020B (en) * 2015-07-14 2017-11-07 重庆大学 A kind of conjecture face generation method based on 3D models
CN105023006A (en) * 2015-08-05 2015-11-04 西安电子科技大学 Face recognition method based on enhanced nonparametric margin maximization criteria
CN105023006B (en) * 2015-08-05 2018-05-04 西安电子科技大学 Face identification method based on enhanced nonparametric maximal margin criterion
CN105654048A (en) * 2015-12-30 2016-06-08 四川川大智胜软件股份有限公司 Multi-visual-angle face comparison method
CN105893984A (en) * 2016-04-29 2016-08-24 北京工业大学 Face projection method for facial makeup based on face features
CN105893984B (en) * 2016-04-29 2018-11-20 北京工业大学 A kind of face projecting method of the types of facial makeup in Beijing operas based on facial characteristics
US10818064B2 (en) 2016-09-21 2020-10-27 Intel Corporation Estimating accurate face shape and texture from an image
WO2018053703A1 (en) * 2016-09-21 2018-03-29 Intel Corporation Estimating accurate face shape and texture from an image
CN107066951B (en) * 2017-03-15 2020-01-14 中国地质大学(武汉) Face spontaneous expression recognition method and system
CN107066951A (en) * 2017-03-15 2017-08-18 中国地质大学(武汉) A kind of recognition methods of spontaneous expression of face and system
CN106951923B (en) * 2017-03-21 2020-06-16 西北工业大学 Robot three-dimensional shape recognition method based on multi-view information fusion
CN106951923A (en) * 2017-03-21 2017-07-14 西北工业大学 A kind of robot three-dimensional shape recognition process based on multi-camera Vision Fusion
CN108961384A (en) * 2017-05-19 2018-12-07 中国科学院苏州纳米技术与纳米仿生研究所 three-dimensional image reconstruction method
CN108961384B (en) * 2017-05-19 2021-11-30 中国科学院苏州纳米技术与纳米仿生研究所 Three-dimensional image reconstruction method
CN107622227A (en) * 2017-08-25 2018-01-23 深圳依偎控股有限公司 A kind of method, terminal device and the readable storage medium storing program for executing of 3D recognitions of face
CN107564049B (en) * 2017-09-08 2019-03-29 北京达佳互联信息技术有限公司 Faceform's method for reconstructing, device and storage medium, computer equipment
CN107564049A (en) * 2017-09-08 2018-01-09 北京达佳互联信息技术有限公司 Faceform's method for reconstructing, device and storage medium, computer equipment
CN107729838A (en) * 2017-10-12 2018-02-23 中科视拓(北京)科技有限公司 A kind of head pose evaluation method based on deep learning
CN107832751A (en) * 2017-12-15 2018-03-23 北京奇虎科技有限公司 Mask method, device and the computing device of human face characteristic point
CN108932468A (en) * 2018-04-27 2018-12-04 衡阳师范学院 One kind being suitable for psychologic face recognition method
CN108932468B (en) * 2018-04-27 2021-10-12 衡阳师范学院 Face recognition method suitable for psychology
CN112330824A (en) * 2018-05-31 2021-02-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110647782A (en) * 2018-06-08 2020-01-03 北京信息科技大学 Three-dimensional face reconstruction and multi-pose face recognition method and device
CN109146962A (en) * 2018-09-07 2019-01-04 百度在线网络技术(北京)有限公司 Detect method, apparatus, storage medium and the terminal device of face's angle
CN109376593A (en) * 2018-09-10 2019-02-22 杭州格像科技有限公司 Man face characteristic point positioning method and system
CN109376593B (en) * 2018-09-10 2020-12-29 杭州格像科技有限公司 Face feature point positioning method and system
US11961325B2 (en) 2018-11-30 2024-04-16 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, computer-readable medium, and electronic device
WO2020108610A1 (en) * 2018-11-30 2020-06-04 腾讯科技(深圳)有限公司 Image processing method, apparatus, computer readable medium and electronic device
CN109409335A (en) * 2018-11-30 2019-03-01 腾讯科技(深圳)有限公司 Image processing method, device, computer-readable medium and electronic equipment
CN109409335B (en) * 2018-11-30 2023-01-20 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer readable medium and electronic equipment
WO2020134925A1 (en) * 2018-12-28 2020-07-02 广州市百果园信息技术有限公司 Illumination detection method and apparatus for facial image, and device and storage medium
US11908236B2 (en) 2018-12-28 2024-02-20 Bigo Technology Pte. Ltd. Illumination detection method and apparatus for face image, and device and storage medium
CN110032927A (en) * 2019-02-27 2019-07-19 视缘(上海)智能科技有限公司 A kind of face identification method
CN110020600A (en) * 2019-03-05 2019-07-16 厦门美图之家科技有限公司 Generate the method for training the data set of face alignment model
CN110020600B (en) * 2019-03-05 2021-04-16 厦门美图之家科技有限公司 Method for generating a data set for training a face alignment model
CN110147721B (en) * 2019-04-11 2023-04-18 创新先进技术有限公司 Three-dimensional face recognition method, model training method and device
CN110147721A (en) * 2019-04-11 2019-08-20 阿里巴巴集团控股有限公司 A kind of three-dimensional face identification method, model training method and device
CN110148468B (en) * 2019-05-09 2021-06-29 北京航空航天大学 Method and device for reconstructing dynamic face image
CN110148468A (en) * 2019-05-09 2019-08-20 北京航空航天大学 The method and device of dynamic human face image reconstruction
CN110298319A (en) * 2019-07-01 2019-10-01 北京字节跳动网络技术有限公司 Image composition method and device
CN111414803A (en) * 2020-02-24 2020-07-14 北京三快在线科技有限公司 Face recognition method and device and electronic equipment
CN111582223A (en) * 2020-05-19 2020-08-25 华普通用技术研究(广州)有限公司 Three-dimensional face recognition method
CN114529445A (en) * 2020-10-30 2022-05-24 北京字跳网络技术有限公司 Method and device for drawing special dressing effect, electronic equipment and storage medium
CN112528902A (en) * 2020-12-17 2021-03-19 四川大学 Video monitoring dynamic face recognition method and device based on 3D face model
CN113313674A (en) * 2021-05-12 2021-08-27 华南理工大学 Ship body rust removal method based on virtual data plane

Also Published As

Publication number Publication date
CN101320484B (en) 2012-01-04

Similar Documents

Publication Publication Date Title
CN101561874B (en) Method for recognizing face images
CN101320484B (en) Three-dimensional human face recognition method based on human face full-automatic positioning
CN101159015B (en) Two-dimensional human face image recognizing method
CN112418074B (en) Coupled posture face recognition method based on self-attention
CN107886529B (en) Point cloud registration method for three-dimensional reconstruction
US10891511B1 (en) Human hairstyle generation method based on multi-feature retrieval and deformation
Ramanathan et al. Face verification across age progression
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
Smith et al. Recovering facial shape using a statistical model of surface normal direction
CN100375108C (en) Automatic positioning method for characteristic point of human faces
US8306257B2 (en) Hierarchical tree AAM
Bongsoo Choy et al. Enriching object detection with 2d-3d registration and continuous viewpoint estimation
CN110751098A (en) Face recognition method for generating confrontation network based on illumination and posture
CN100373395C (en) Human face recognition method based on human face statistics
Smith et al. Facial shape-from-shading and recognition using principal geodesic analysis and robust statistics
CN104794441B (en) Human face characteristic positioning method based on active shape model and POEM texture models under complex background
Aydogdu et al. Comparison of three different CNN architectures for age classification
Breuer et al. Automatic 3D face reconstruction from single images or video
US8311319B2 (en) L1-optimized AAM alignment
Chen et al. Silhouette-based object phenotype recognition using 3D shape priors
Lee et al. Silhouette-based 3d face shape recovery
Chen et al. Unconstrained face verification using fisher vectors computed from frontalized faces
CN111274944A (en) Three-dimensional face reconstruction method based on single image
CN114332136B (en) Face attribute data labeling method, computer equipment and storage medium
Gilani et al. Towards large-scale 3D face recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant