CN102270308B - Facial feature location method based on five sense organs related AAM (Active Appearance Model) - Google Patents


Info

Publication number
CN102270308B
CN102270308B · CN201110205022A · CN 201110205022
Authority
CN
China
Prior art keywords
face
model
texture
aam
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110205022
Other languages
Chinese (zh)
Other versions
CN102270308A (en)
Inventor
赵俭辉
李磊
袁志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN 201110205022 priority Critical patent/CN102270308B/en
Publication of CN102270308A publication Critical patent/CN102270308A/en
Application granted granted Critical
Publication of CN102270308B publication Critical patent/CN102270308B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a facial feature localization method for partially occluded face images in complex scenes. The method comprises the following steps: based on a sample image set, each facial organ is modeled separately and trained to obtain facial-organ-related AAMs (Active Appearance Models); during initial localization of the face region, Haar-feature face detection is used to determine the search regions of the AAMs, and the regions are classified according to their probability of being searched; in the AAM fitting calculation, the fitting error of each facial organ is computed separately based on per-organ occlusion weights, and an energy function then comprehensively evaluates the degree of fit between the model and the image; and the search in the AAM fitting process is optimized by a genetic algorithm. Compared with existing related algorithms, the method locates the facial features of partially occluded images more accurately, and it strengthens the robustness and improves the efficiency of the algorithm while maintaining high accuracy.

Description

A facial feature localization method based on facial-organ-related AAM models
Technical field
The present invention relates to the field of face detection and localization in images, based on digital image processing and pattern recognition, and in particular to a facial feature localization method based on facial-organ-related AAM models.
Background Art
Face recognition technology performs identity verification by analyzing and comparing the visual features of human faces. It is one of the most difficult research topics in biometrics and even in artificial intelligence; the difficulty stems mainly from the characteristics of the human face as a biometric trait. It remains a very active field of computer technology research.
Model-based face recognition methods mainly include the Active Shape Model (ASM) and the Active Appearance Model (AAM). ASM learns a model of shape variation from the shape information of training objects and then uses the trained model to search for the target in an image. Although ASM exploits the shape information of the object, the texture inside the shape region can be exploited further: by statistically modeling the texture within the target shape region one obtains the AAM, and using the AAM greatly improves localization accuracy. AAM-based face recognition consists of two parts, model building and model fitting. As an active appearance model, the AAM is built by combining the texture of the object with its shape. In the model fitting calculation, an energy function is defined as the sum of squared differences between the AAM model instance and the input image; this energy function evaluates the degree of fit, and iterative minimization of the energy function brings the model instance into correspondence with the input image. The final positions of the shape control points then describe the facial features in the current image.
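For reference, the standard AAM formulation (given here as background, not as a quotation from the original text) writes this sum-of-squared-differences energy with the shape parameters p entering through the piecewise-affine warp W(x; p) and the texture parameters λ_i scaling the texture basis:

E(p, \lambda) = \sum_{x \in S_0} \Big[ A_0(x) + \sum_{i=1}^{m} \lambda_i A_i(x) - I\big(W(x; p)\big) \Big]^2

Minimizing E over p and λ is what the fitting iterations described below carry out.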
For the problem of locating feature points in unoccluded face images, many researchers have proposed methods that strengthen algorithm robustness, improve accuracy and raise efficiency. For handling partial occlusion of the face, however, few methods have been proposed, and those that exist are still based on the AAM algorithm, for example the AAM based on PO and the AAM based on ERN. The main problems of current facial feature localization algorithms that handle partial occlusion are: (1) they rely on the result of face region detection, so if face detection is inaccurate or fails, the accuracy of the feature localization algorithm is directly affected; (2) existing algorithms are sensitive to occlusion, and when the occluded area is large the accuracy is low; (3) the traditional model search strategy is inefficient.
Summary of the invention
The present invention solves the prior-art problem that feature localization relies on the result of face region detection, so that an inaccurate or failed detection directly degrades the accuracy of the localization algorithm; it provides a facial feature localization method based on facial-organ-related AAM models in which the search space is divided into first, second and third search regions according to the probability of being searched, thereby strengthening the robustness of the feature localization algorithm.
Another object of the present invention is to solve the prior-art problem of sensitivity to occlusion and low accuracy when the occluded area is large; it provides a facial feature localization method based on facial-organ-related AAM models that handles occlusion errors separately for each facial organ, reducing the sensitivity to occlusion and improving the accuracy of feature localization.
A further object of the present invention is to solve the prior-art problem of inefficient search; it provides a facial feature localization method based on facial-organ-related AAM models in which the model search process is optimized, which not only improves the efficiency of the model search but also avoids, as far as possible, convergence to local optima.
The above technical problems of the present invention are mainly solved by the following technical solution:
A facial feature localization method based on facial-organ-related AAM models, characterized in that it comprises the following steps:
Step 1: based on a sample image set, model each facial organ separately and train to obtain the facial-organ-related AAM model corresponding to each organ;
Step 2: during initial localization of the face region, use Haar-feature face detection to determine the search regions of the above AAM models, and classify the regions according to the probability of being searched;
Step 3: in the AAM fitting calculation, compute the fitting error of each facial organ separately based on the per-organ occlusion weights, and then comprehensively evaluate the degree of fit between the model and the image through an energy function;
Step 4: after completing Step 3, and combining the classification of Step 2, use a genetic algorithm to optimize the search in the AAM fitting process.
In the above facial feature localization method based on facial-organ-related AAM models, Step 1 — modeling each facial organ separately based on the sample image set and training the facial-organ-related AAM models — comprises the following steps:
Step 1.1: choose a face database and manually mark feature points on the faces in the images;
Step 1.2: based on the feature points marked in Step 1.1, perform Delaunay triangulation of the face and train a separate shape model for each facial organ; then apply piecewise linear affine warping over the triangular mesh regions of each organ and train a separate texture model; finally merge the shape and texture models to obtain the AAM models. The facial organs are: left eye and left eyebrow, right eye and right eyebrow, nose, mouth, and face contour.
In the above method, Step 2 — determining the search regions of the AAM models with Haar-feature face detection during initial localization of the face region, and classifying them according to the probability of being searched — comprises the following sub-steps:
Step 2.1: use a cascade of Haar-feature classifiers to detect faces in the image;
Step 2.2: if a face or face-like region is detected, divide the space into three priority levels, i.e. the first, second and third search regions, according to the probability of being searched;
Step 2.3: if no face region is detected, perform the AAM model search over the entire image space.
In the above method, Step 3 — computing the fitting error of each facial organ separately based on the per-organ occlusion weights in the AAM fitting calculation, and then comprehensively evaluating the degree of fit between the model and the image through an energy function — comprises the following sub-steps:
Step 3.1: in the AAM model, compute the texture statistics of each organ region, i.e. the mean texture value of each organ region after model normalization;
Step 3.2: apply the facial-organ-related AAM models to a sample image set of unoccluded faces; after the facial organs of a sample image have been determined, compute the texture statistics of each organ region, i.e. the mean texture value of each organ region after image normalization;
Step 3.3: take the ratio of the image texture mean to the model texture mean as the occlusion weight of an organ; from all iterative fitting results over the unoccluded sample image set, determine the minimum and maximum occlusion weights of each organ region;
Step 3.4: for a non-sample image to be processed, compute the occlusion weight of each organ during the facial-organ-related AAM fitting; if the weight lies between the minimum and maximum weights of that organ, the organ is considered unoccluded; otherwise it is judged occluded, and the organ texture values of the current image are replaced by the corresponding sample-image organ texture mean;
Step 3.5: for misjudgments that may be caused by the limited feature space of the sample image set, filter the noise of the error vector sequence during fitting by mean filtering.
In the above method, Step 4 — using a genetic algorithm to optimize the search in the AAM fitting process — comprises the following sub-steps:
Step 4.1: introduce a genetic algorithm to optimize the search process of the AAM models; select individuals from the first, second and third search regions according to their probabilities to form the population, and adopt a suitable genetic strategy;
Step 4.2: determine the values of the genetic algorithm parameters through a series of experiments on sample images;
Step 4.3: use the genetic algorithm to perform the AAM fitting calculation, considering translation, scaling and rotation of the model during the image search.
In the above method, in Step 1.2, the Delaunay triangulation is performed on the manually marked feature point set and the face region is divided into 5 organs: left eye, right eye, nose, mouth and face contour. The shape S can therefore be expressed as the combination of the facial organs, i.e. S = (S_eyeL, S_eyeR, S_nose, S_mouth, S_outline), where the triangular mesh of each organ region is described as follows:
Triangular mesh of the left eye (including the eye and the eyebrow):
S_{eyeL} = \{ Triangle_{Li} \mid i \in [1, N_L] \}
where Triangle_{Li} is a mesh triangle located in the left-eye region and N_L is the number of triangles in that region;
Triangular mesh of the right eye (including the eye and the eyebrow):
S_{eyeR} = \{ Triangle_{Ri} \mid i \in [1, N_R] \}
where Triangle_{Ri} is a mesh triangle located in the right-eye region and N_R is the number of triangles in that region;
Triangular mesh of the nose:
S_{nose} = \{ Triangle_{Ni} \mid i \in [1, N_N] \}
where Triangle_{Ni} is a mesh triangle located in the nose region and N_N is the number of triangles in that region;
Triangular mesh of the mouth:
S_{mouth} = \{ Triangle_{Mi} \mid i \in [1, N_M] \}
where Triangle_{Mi} is a mesh triangle located in the mouth region and N_M is the number of triangles in that region;
Triangular mesh of the face contour:
S_{outline} = \{ Triangle_{Oi} \mid i \in [1, N_O] \}
where Triangle_{Oi} is a mesh triangle located in the face-contour region and N_O is the number of triangles in that region.
In the above method, in Step 1.2, according to the results of the manual marking and the triangulation, a separate shape model is trained for each facial organ; after normalization and PCA, the shape S can be expressed as:
S = S_0 + \sum_{i=1}^{n} p_i S_i
where S_0 is the mean shape, S_i are the shape feature vectors and p_i are the shape model parameters. Piecewise linear affine warping is then applied over the triangular mesh regions of each organ, and a separate texture model is trained for each organ; after normalization and PCA, the texture model A can be expressed as:
A = A_0 + \sum_{i=1}^{m} \lambda_i A_i
where A_0 is the mean texture, A_i are the texture feature vectors and \lambda_i are the texture model parameters.
Finally, the shape and texture models are merged to obtain the facial-organ-related AAM models.
In the above method, in Step 2.2, according to the detected face, the search region of the AAM models is divided into 3 classes according to the probability of being searched, i.e. the first, second and third search regions, expressed as follows:
First search region: the K-neighborhood (e.g. a 10×10 region) of the center point of the face bounding rectangle; this is the region where fitting between the AAM model and the input image is most likely to be achieved, and its searched probability is p_1;
Second search region: the region outside the K-neighborhood of the rectangle center but inside the face bounding rectangle; its searched probability is p_2;
Third search region: the remaining area of the entire image space outside the face bounding rectangle; its searched probability is p_3.
The relation between the searched probabilities of the three search regions is: p_1 > p_2 > p_3 and p_1 + p_2 + p_3 = 1.
In the above method, in Step 3.1, the concrete operation is as follows. The face region is divided into 5 organs, and the model texture mean of each organ is computed by the formulas below, where A(x) denotes the texture feature vector corresponding to pixel x:
Model texture mean of the left eye (including the eye and the eyebrow):
M_{MeyeL} = \frac{1}{P_L} \sum_{x \in S_{eyeL}} A(x)
where P_L is the number of pixels in the left-eye organ region;
Model texture mean of the right eye (including the eye and the eyebrow):
M_{MeyeR} = \frac{1}{P_R} \sum_{x \in S_{eyeR}} A(x)
where P_R is the number of pixels in the right-eye organ region;
Model texture mean of the nose:
M_{Mnose} = \frac{1}{P_N} \sum_{x \in S_{nose}} A(x)
where P_N is the number of pixels in the nose organ region;
Model texture mean of the mouth:
M_{Mmouth} = \frac{1}{P_M} \sum_{x \in S_{mouth}} A(x)
where P_M is the number of pixels in the mouth organ region;
Model texture mean of the face contour:
M_{Moutline} = \frac{1}{P_O} \sum_{x \in S_{outline}} A(x)
where P_O is the number of pixels in the face-contour organ region.
In Step 3.2, the concrete operation is as follows. The face region is divided into 5 organs, and the image texture mean of each organ is computed by the formulas below, where I(W(x)) denotes the texture feature vector obtained by mapping pixel x from the model to the image through the warp W:
Image texture mean of the left eye (including the eye and the eyebrow):
M_{IeyeL} = \frac{1}{P_L} \sum_{x \in S_{eyeL}} I(W(x))
Image texture mean of the right eye (including the eye and the eyebrow):
M_{IeyeR} = \frac{1}{P_R} \sum_{x \in S_{eyeR}} I(W(x))
Image texture mean of the nose:
M_{Inose} = \frac{1}{P_N} \sum_{x \in S_{nose}} I(W(x))
Image texture mean of the mouth:
M_{Imouth} = \frac{1}{P_M} \sum_{x \in S_{mouth}} I(W(x))
Image texture mean of the face contour:
M_{Ioutline} = \frac{1}{P_O} \sum_{x \in S_{outline}} I(W(x))
In Step 3.5, the concrete operation is as follows. The criterion for judging convergence of the facial-organ-related AAM fitting is whether the Euclidean distance of the error vector between the model and the input image is smaller than a preset threshold. If the error vector is an n-dimensional vector diff, its Euclidean distance sm is:
sm = \sqrt{ \sum_{i=0}^{n-1} diff[i]^2 }
The minimum and maximum occlusion weights of the facial organs are obtained from the unoccluded sample image set; the error vector sequence during fitting is denoised by mean filtering, i.e. the following three steps are inserted before computing the Euclidean distance sm of the error vector:
Step 3.51: compute
mean = \frac{1}{n} \sum_{i=0}^{n-1} diff[i]
where mean is the average of the n-dimensional diff vector;
Step 3.52: compute
r_i = \frac{max - diff[i]}{max - mean}
where max is the maximum value of the n-dimensional diff vector and r_i is the normalized result for each component;
Step 3.53: apply if (diff[i] > mean) then diff[i] = r_i · diff[i].
The present invention therefore has the following advantages: 1. the search space is divided into first, second and third search regions according to the probability of being searched, which strengthens the robustness of the feature localization algorithm; 2. occlusion errors are handled separately for each facial organ, which reduces the sensitivity to occlusion and improves the accuracy of feature localization; 3. the model search process is optimized, which not only improves the efficiency of the model search but also avoids, as far as possible, convergence to local optima.
Description of drawings
Fig. 1 shows the classification of the face detection result according to the probability of being searched in the present invention;
Fig. 2 shows the flow of the localization algorithm of the present invention.
Embodiment
The technical solution of the present invention is described in further detail below through a specific embodiment and with reference to the accompanying drawings.
Embodiment:
1. Training the facial-organ-related AAM models based on sample images
(1) Choose a set of face image samples as the training set (the internationally published standard IMM face database is taken as the example), and manually mark the facial features on the sample images. There are 58 feature points in total: 8 for each of the left and right eyes, 5 for each of the left and right eyebrows, 11 for the nose, 8 for the mouth, and 13 for the face contour.
(2) Perform Delaunay triangulation on the manually marked feature point set, dividing the face region into 5 organs: left eye, right eye, nose, mouth and face contour.
The shape S can therefore be expressed as the combination of the facial organs, i.e. S = (S_eyeL, S_eyeR, S_nose, S_mouth, S_outline), where the triangular mesh of each organ region is described as follows.
1) Triangular mesh of the left eye (including the eye and the eyebrow):
S_{eyeL} = \{ Triangle_{Li} \mid i \in [1, N_L] \}    (1)
where Triangle_{Li} is a mesh triangle located in the left-eye region and N_L is the number of triangles in that region;
2) Triangular mesh of the right eye (including the eye and the eyebrow):
S_{eyeR} = \{ Triangle_{Ri} \mid i \in [1, N_R] \}    (2)
where Triangle_{Ri} is a mesh triangle located in the right-eye region and N_R is the number of triangles in that region;
3) Triangular mesh of the nose:
S_{nose} = \{ Triangle_{Ni} \mid i \in [1, N_N] \}    (3)
where Triangle_{Ni} is a mesh triangle located in the nose region and N_N is the number of triangles in that region;
4) Triangular mesh of the mouth:
S_{mouth} = \{ Triangle_{Mi} \mid i \in [1, N_M] \}    (4)
where Triangle_{Mi} is a mesh triangle located in the mouth region and N_M is the number of triangles in that region;
5) Triangular mesh of the face contour:
S_{outline} = \{ Triangle_{Oi} \mid i \in [1, N_O] \}    (5)
where Triangle_{Oi} is a mesh triangle located in the face-contour region and N_O is the number of triangles in that region.
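As an illustration only (this is not the patent's implementation), the grouping of Delaunay triangles into the per-organ meshes S_eyeL … S_outline of equations (1)-(5) could be sketched as follows. The landmark index ranges are hypothetical placeholders rather than the actual IMM annotation order, and triangles spanning two organs are simply skipped here.

```python
# Sketch: split a Delaunay triangulation of the 58 marked points into per-organ meshes.
import numpy as np
from scipy.spatial import Delaunay

ORGAN_LANDMARKS = {            # hypothetical index ranges for the 58 marked points
    "eyeL":    range(0, 13),   # left eye (8) + left eyebrow (5)
    "eyeR":    range(13, 26),  # right eye (8) + right eyebrow (5)
    "nose":    range(26, 37),  # 11 nose points
    "mouth":   range(37, 45),  # 8 mouth points
    "outline": range(45, 58),  # 13 face-contour points
}

def organ_triangle_meshes(landmarks: np.ndarray) -> dict:
    """Split the Delaunay triangulation of a (58, 2) landmark array into the
    per-organ meshes S_eyeL, S_eyeR, S_nose, S_mouth, S_outline."""
    tri = Delaunay(landmarks)
    meshes = {name: [] for name in ORGAN_LANDMARKS}
    for simplex in tri.simplices:                    # each simplex = 3 landmark indices
        for name, idx in ORGAN_LANDMARKS.items():
            if all(int(v) in idx for v in simplex):  # triangle lies entirely in one organ
                meshes[name].append(simplex)
                break
    return meshes
```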
(3) According to the results of the manual marking and the triangulation, a separate shape model is trained for each facial organ; after normalization and PCA, the shape S can be expressed as:
S = S_0 + \sum_{i=1}^{n} p_i S_i    (6)
where S_0 is the mean shape, S_i are the shape feature vectors and p_i are the shape model parameters.
(4) Piecewise linear affine warping is applied over the triangular mesh regions of each organ, and a separate texture model is trained for each organ; after normalization and PCA, the texture model A can be expressed as:
A = A_0 + \sum_{i=1}^{m} \lambda_i A_i    (7)
where A_0 is the mean texture, A_i are the texture feature vectors and \lambda_i are the texture model parameters.
(5) The shape and texture models are merged to obtain the facial-organ-related AAM models.
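A minimal sketch of how the linear models of equations (6) and (7) could be obtained with PCA is given below; it assumes that the shape samples are already aligned and the texture samples already normalized, and all function names are illustrative rather than part of the patent.

```python
# Sketch: PCA model of the form x ≈ x0 + Σ c_i e_i, used both for the shape model
# S = S0 + Σ p_i S_i and for the texture model A = A0 + Σ λ_i A_i.
import numpy as np

def train_linear_model(samples: np.ndarray, variance_kept: float = 0.95):
    """`samples` is an (N, d) array of aligned landmark vectors or normalized textures."""
    mean = samples.mean(axis=0)                          # S0 / A0
    centered = samples - mean
    _, sing_vals, basis = np.linalg.svd(centered, full_matrices=False)
    var = sing_vals ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    return mean, basis[:k]                               # mean vector + k feature vectors

def project(sample, mean, basis):
    """Model parameters p_i (or λ_i) of one sample."""
    return basis @ (sample - mean)

def reconstruct(params, mean, basis):
    """S = S0 + Σ p_i S_i (or A = A0 + Σ λ_i A_i)."""
    return mean + params @ basis
```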
2. Classifying the initially located face region according to the probability of being searched
(1) Perform the initial localization of the face region using face detection based on Haar features and an AdaBoost cascade classifier.
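A possible realization of this initial detection step with OpenCV's stock Haar/AdaBoost cascade is sketched below; the patent does not prescribe a particular library, and the cascade file is simply the one shipped with opencv-python.

```python
# Sketch: Haar-feature face detection used for the initial localization of the face region.
import cv2

def detect_face_rect(image_bgr):
    """Return the largest detected face rectangle (x, y, w, h), or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                   # caller falls back to whole-image search, see (3) below
    return max(faces, key=lambda r: r[2] * r[3])
```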
(2) According to the detected face (a rectangular bounding box), divide the search region of the AAM models into 3 classes according to the probability of being searched, i.e. the first, second and third search regions, as shown in Fig. 1.
1) First search region: the K-neighborhood (e.g. a 10×10 region) of the center point of the face bounding rectangle; this is the region where fitting between the AAM model and the input image is most likely to be achieved, and its searched probability is p_1;
2) Second search region: the region outside the K-neighborhood of the rectangle center but inside the face bounding rectangle; its searched probability is p_2;
3) Third search region: the remaining area of the entire image space outside the face bounding rectangle; its searched probability is p_3.
The relation between the searched probabilities of the three regions is: p_1 > p_2 > p_3 and p_1 + p_2 + p_3 = 1, for example p_1 = 0.85, p_2 = 0.1, p_3 = 0.05.
(3) If the Haar feature detection does not locate a face region, the entire image space is taken as the search region of the AAM models.
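The three-level classification could be sketched as follows; the probabilities and the K value follow the example figures given above, while the square geometry of the K-neighborhood is an assumption of this sketch.

```python
# Sketch: classify a candidate model-center position into the first, second or third search region.
P1, P2, P3 = 0.85, 0.10, 0.05   # searched probabilities, p1 > p2 > p3 and p1 + p2 + p3 = 1
K = 10                          # K-neighborhood of the rectangle center, e.g. a 10x10 region

def search_region(point, face_rect):
    """Return 1, 2 or 3 for a candidate center `point` = (px, py); `face_rect` = (x, y, w, h)."""
    if face_rect is None:
        return 3                                     # no detection: the whole image is searched
    x, y, w, h = face_rect
    cx, cy = x + w / 2, y + h / 2
    px, py = point
    if abs(px - cx) <= K / 2 and abs(py - cy) <= K / 2:
        return 1                                     # first region: center neighborhood
    if x <= px <= x + w and y <= py <= y + h:
        return 2                                     # second region: inside the face rectangle
    return 3                                         # third region: rest of the image
```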
3. Facial feature localization based on the facial-organ-related AAMs and per-organ occlusion weights
(1) In the AAM model, compute the texture statistics of each organ region, i.e. the mean texture value of each organ region after model normalization.
The face region is divided into 5 organs, and the model texture mean of each organ is computed as follows.
1) Model texture mean of the left eye (including the eye and the eyebrow):
M_{MeyeL} = \frac{1}{P_L} \sum_{x \in S_{eyeL}} A(x)    (8)
where P_L is the number of pixels in the left-eye organ region;
2) Model texture mean of the right eye (including the eye and the eyebrow):
M_{MeyeR} = \frac{1}{P_R} \sum_{x \in S_{eyeR}} A(x)    (9)
where P_R is the number of pixels in the right-eye organ region;
3) Model texture mean of the nose:
M_{Mnose} = \frac{1}{P_N} \sum_{x \in S_{nose}} A(x)    (10)
where P_N is the number of pixels in the nose organ region;
4) Model texture mean of the mouth:
M_{Mmouth} = \frac{1}{P_M} \sum_{x \in S_{mouth}} A(x)    (11)
where P_M is the number of pixels in the mouth organ region;
5) Model texture mean of the face contour:
M_{Moutline} = \frac{1}{P_O} \sum_{x \in S_{outline}} A(x)    (12)
where P_O is the number of pixels in the face-contour organ region.
(2) Apply the facial-organ-related AAM models to a sample image set of unoccluded faces; after the facial organs of a sample image have been determined, compute the texture statistics of each organ region, i.e. the mean texture value of each organ region after image normalization.
The face region is divided into 5 organs, and the image texture mean of each organ is computed as follows.
1) Image texture mean of the left eye (including the eye and the eyebrow):
M_{IeyeL} = \frac{1}{P_L} \sum_{x \in S_{eyeL}} I(W(x))    (13)
2) Image texture mean of the right eye (including the eye and the eyebrow):
M_{IeyeR} = \frac{1}{P_R} \sum_{x \in S_{eyeR}} I(W(x))    (14)
3) Image texture mean of the nose:
M_{Inose} = \frac{1}{P_N} \sum_{x \in S_{nose}} I(W(x))    (15)
4) Image texture mean of the mouth:
M_{Imouth} = \frac{1}{P_M} \sum_{x \in S_{mouth}} I(W(x))    (16)
5) Image texture mean of the face contour:
M_{Ioutline} = \frac{1}{P_O} \sum_{x \in S_{outline}} I(W(x))    (17)
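The per-organ means of equations (8)-(17) amount to averaging one texture array over each organ's pixels; a sketch under an assumed array layout (one value per model pixel, with an index set per organ) is:

```python
# Sketch of equations (8)-(17): per-organ texture means of the model texture A(x)
# and of the image sampled through the warp, I(W(x)). P_L, P_R, ... are simply the
# sizes of the per-organ pixel index sets.
import numpy as np

def organ_texture_means(texture: np.ndarray, organ_pixels: dict) -> dict:
    """M_organ = (1 / P_organ) * sum of the texture values over the organ's pixels."""
    return {name: float(texture[idx].mean()) for name, idx in organ_pixels.items()}

# model_means = organ_texture_means(A_instance, organ_pixels)   # M_MeyeL, M_Mnose, ...
# image_means = organ_texture_means(I_warped,   organ_pixels)   # M_IeyeL, M_Inose, ...
```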
(3) Take the ratio of the image texture mean to the model texture mean as the occlusion weight of an organ; from all iterative fitting results over the unoccluded sample image set, determine the minimum and maximum occlusion weights of each organ region.
Taking the left-eye organ region as an example, for the i-th fitting iteration of the AAM over the unoccluded sample image set, the occlusion weight of the left eye is:
W_{eyeLi} = \frac{M_{IeyeLi}}{M_{MeyeLi}}    (18)
From all occlusion weights obtained over the sample set, the minimum and maximum occlusion weights of each facial organ are determined, i.e. W_{eyeLmax}, W_{eyeLmin}, W_{eyeRmax}, W_{eyeRmin}, W_{nosemax}, W_{nosemin}, W_{mouthmax}, W_{mouthmin}, W_{outlinemax}, W_{outlinemin}.
(4) For a non-sample image to be processed, compute the occlusion weight of each organ during the facial-organ-related AAM fitting; if the weight lies between the minimum and maximum weights of that organ, the organ is considered unoccluded; otherwise it is judged occluded, and the organ texture values of the current image are replaced by the corresponding sample-image organ texture mean.
Again taking the left-eye organ region as an example, the treatment at the i-th iteration of the facial-organ-related AAM on the current image is:
if (M_{IeyeLi} > W_{eyeLmax} · M_{MeyeLi} or M_{IeyeLi} < W_{eyeLmin} · M_{MeyeLi}) then P = M_{IeyeL}    (19)
where P is the pixel value within the organ region.
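Equations (18)-(19) can be sketched as below; the dictionary names and helper arguments are illustrative, and `sample_weights` stands for the occlusion weights collected over all fitting iterations on the unoccluded sample set.

```python
# Sketch of equations (18)-(19): occlusion weights and the per-organ occlusion test.
def weight_bounds(sample_weights: dict) -> dict:
    """(W_min, W_max) per organ, e.g. (W_eyeLmin, W_eyeLmax) for the left eye."""
    return {name: (min(w), max(w)) for name, w in sample_weights.items()}

def apply_occlusion_test(image_means, model_means, bounds, warped, organ_pixels,
                         sample_image_means):
    """If W = M_I / M_M falls outside [W_min, W_max], the organ is judged occluded and
    its pixels in the warped image are replaced by the sample-set organ texture mean."""
    for name, (w_min, w_max) in bounds.items():
        w = image_means[name] / model_means[name]        # equation (18)
        if not (w_min <= w <= w_max):                    # equation (19)
            warped[organ_pixels[name]] = sample_image_means[name]
    return warped
```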
(5) For misjudgments that may be caused by the limited feature space of the sample image set, filter the noise of the error vector sequence during fitting by mean filtering.
The criterion for judging convergence of the facial-organ-related AAM fitting is whether the Euclidean distance of the error vector between the model and the input image is smaller than a preset threshold; if it is, the fitting is considered to have converged. If the error vector is an n-dimensional vector diff, its Euclidean distance sm is:
sm = \sqrt{ \sum_{i=0}^{n-1} diff[i]^2 }    (20)
The minimum and maximum occlusion weights of the facial organs are obtained from the unoccluded sample image set, so when a non-sample image is processed, misjudgments may occur because the features of the input image lie outside the feature space of the sample set, producing noise in the error vector sequence of the fitting iterations. To deal with this, the error vector sequence during fitting is denoised by mean filtering, i.e. the following three steps are inserted before formula (20):
1) mean = \frac{1}{n} \sum_{i=0}^{n-1} diff[i]    (21)
2) r_i = \frac{max - diff[i]}{max - mean}    (22)
3) if (diff[i] > mean) then diff[i] = r_i · diff[i]    (23)
where mean is the average and max the maximum of the n-dimensional diff vector, and r_i is the normalized result for each component.
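A compact reading of steps (20)-(23) is sketched below; a guard against a constant diff vector is added so the division in equation (22) is defined, and that guard is an assumption, not part of the original text.

```python
# Sketch of equations (20)-(23): damp outlier components of the error vector diff
# by mean filtering before computing its Euclidean distance sm.
import numpy as np

def denoised_error_norm(diff: np.ndarray) -> float:
    diff = diff.astype(float).copy()
    mean = diff.mean()                                  # equation (21)
    max_val = diff.max()
    if max_val != mean:                                 # constant vector: nothing to damp
        r = (max_val - diff) / (max_val - mean)         # equation (22), r_i per component
        above = diff > mean
        diff[above] = r[above] * diff[above]            # equation (23)
    return float(np.sqrt(np.sum(diff ** 2)))            # equation (20), Euclidean distance sm
```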
4. Search optimization with a genetic algorithm in the AAM fitting process
(1) Introduce a genetic algorithm to optimize the search process of the AAM models; select individuals from the first, second and third search regions according to their probabilities to form the population, and adopt a suitable genetic strategy.
For the fitting calculation of the facial-organ-related AAM models, the search process is optimized by a genetic algorithm. Individuals are chosen from the first, second and third search regions with probabilities p_1, p_2 and p_3 respectively to form the population, which guarantees that the regions where fitting between model and image is more likely receive a larger search probability. The basic computing unit of the genetic algorithm (i.e. the fitness function of an individual) is the computation of the Euclidean distance of the error vector between the AAM model and the input image.
To embody the idea of selecting the best among the best, direct inheritance, crossover and mutation are used to mimic natural genetic evolution. The individuals of the population are sorted by fitness; the best 20% of individuals are inherited directly into the next generation; 60% of the next generation is obtained by crossover of randomly chosen pairs from that best 20%; the remaining 20% is obtained by mutation, which is equivalent to introducing some new individuals. If P_i and Q_i are two individuals of generation i that take part in crossover, and P_{i+1} and Q_{i+1} are the two corresponding individuals produced in generation i+1, the crossover operation can be expressed as:
P_{i+1} = p · P_i + (1 - p) · Q_i    (24)
Q_{i+1} = (1 - p) · P_i + p · Q_i    (25)
where p is a random floating-point number in the interval [0, 1].
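The 20% / 60% / 20% generation strategy and the crossover of equations (24)-(25) could be sketched as follows; `fitness` is the Euclidean distance of the error vector (lower is better) and `mutate` stands for drawing a fresh individual from the search regions, both left abstract here.

```python
# Sketch: one generation of the genetic strategy described above.
import random

def next_generation(population, fitness, mutate):
    population = sorted(population, key=fitness)        # rank individuals, best first
    n = len(population)
    n_elite, n_cross = n // 5, (3 * n) // 5
    elite = population[:n_elite]                        # best 20%: inherited directly
    children = []
    while len(children) < n_cross:                      # 60%: crossover of random elite pairs
        Pi, Qi = random.sample(elite, 2)
        p = random.random()                             # random float in [0, 1]
        children.append([p * a + (1 - p) * b for a, b in zip(Pi, Qi)])   # equation (24)
        children.append([(1 - p) * a + p * b for a, b in zip(Pi, Qi)])   # equation (25)
    children = children[:n_cross]
    mutants = [mutate() for _ in range(n - n_elite - n_cross)]           # remaining ~20%: new individuals
    return elite + children + mutants
```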
(2) Determine the values of the genetic algorithm parameters through a series of experiments on sample images.
The genetic algorithm parameters are determined by selecting suitable empirical values from a series of experiments on sample images. For example, for facial feature localization in an image of size 320×240, the population size of the genetic algorithm can be set to 120 and the maximum number of iterations to 15.
(3) Use the genetic algorithm to perform the AAM fitting calculation, considering translation, scaling and rotation of the model during the image search.
Once the genetic algorithm has been designed and its parameters determined, the fitting calculation of the facial-organ-related AAM models can be carried out. Translation, scaling and rotation of the AAM model are considered simultaneously during the image search, i.e. the model center coordinates (x, y), the model scale factor s and the model rotation factor r form the genes of an individual in the genetic algorithm.
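A sketch of the gene encoding and of the fitness evaluation used as the basic computing unit; `synthesize_error_vector` is a placeholder for warping the AAM instance to the pose given by the gene and differencing it with the input image.

```python
# Sketch: gene = (x, y, s, r); fitness = Euclidean distance of the model-image error vector.
import numpy as np

def make_individual(x, y, s, r):
    """Model center (x, y), scale factor s, rotation factor r."""
    return [float(x), float(y), float(s), float(r)]

def fitness(individual, synthesize_error_vector):
    diff = synthesize_error_vector(*individual)          # error vector between model and image
    return float(np.sqrt(np.sum(np.square(diff))))       # Euclidean distance sm (lower is better)
```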
Using the genetic algorithm significantly improves the search efficiency. For example, when the image search area is 320×240 pixels, the model scale range is 0.5–2.0× with a scale step of 0.01×, and the model rotation range is ±0.524 radians (about ±30°) with a rotation step of 0.01 radian, the number of evaluations required by an ordinary exhaustive traversal is:
320 × 240 × (2.0 − 0.5) × 100 × (2 × 0.524) × 100 ≈ 1.207 × 10^9 evaluations;
whereas the genetic algorithm, counting only the basic computing unit, requires 120 × 15 = 1.800 × 10^3 evaluations.
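A quick check of the two evaluation counts quoted above (scale and rotation step sizes of 0.01):

```python
exhaustive = 320 * 240 * ((2.0 - 0.5) / 0.01) * ((2 * 0.524) / 0.01)
genetic = 120 * 15
print(f"exhaustive ≈ {exhaustive:.3e}, genetic = {genetic:.3e}")
# exhaustive ≈ 1.207e+09, genetic = 1.800e+03
```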
The flow of the facial feature localization algorithm based on the facial-organ-related AAMs is shown in Fig. 2.
Experimental results show that, with this technical solution, the facial features of partially occluded face images can be located more accurately. Compared with existing related algorithms, the present invention strengthens the robustness of the algorithm and improves its efficiency while maintaining high accuracy.
The specific embodiment described herein merely illustrates the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (6)

1. A facial feature localization method based on facial-organ-related AAM models, characterized in that it comprises the following steps:
Step 1: based on a sample image set, model each facial organ separately and train to obtain the facial-organ-related AAM model corresponding to each organ, comprising the following steps:
Step 1.1: choose a face database and manually mark feature points on the faces in the images;
Step 1.2: based on the feature points marked in Step 1.1, perform Delaunay triangulation of the face and train a separate shape model for each facial organ; then apply piecewise linear affine warping over the triangular mesh regions of each organ and train a separate texture model; finally merge the shape and texture models to obtain the AAM models, the facial organs being the left eye and left eyebrow, the right eye and right eyebrow, the nose, the mouth, and the face contour;
Step 2: during initial localization of the face region, use Haar-feature face detection to determine the search regions of the above AAM models, and classify them according to the probability of being searched, comprising the following sub-steps:
Step 2.1: use a cascade of Haar-feature classifiers to detect faces in the image;
Step 2.2: if a face or face-like region is detected, divide the space into three priority levels, i.e. the first, second and third search regions, according to the probability of being searched;
Step 2.3: if no face region is detected, perform the AAM model search over the entire image space;
Step 3: in the AAM fitting calculation, compute the fitting error of each facial organ separately based on the per-organ occlusion weights, and then comprehensively evaluate the degree of fit between the model and the image through an energy function, comprising the following sub-steps:
Step 3.1: in the AAM model, compute the texture statistics of each organ region, i.e. the mean texture value of each organ region after model normalization, also called the model texture mean;
Step 3.2: apply the facial-organ-related AAM models to a sample image set of unoccluded faces; after the facial organs of a sample image have been determined, compute the texture statistics of each organ region, i.e. the mean texture value of each organ region after image normalization, also called the image texture mean;
Step 3.3: take the ratio of the image texture mean to the model texture mean as the occlusion weight of an organ; from all iterative fitting results over the unoccluded sample image set, determine the minimum and maximum occlusion weights of each organ region;
Step 3.4: for a non-sample image to be processed, compute the occlusion weight of each organ during the facial-organ-related AAM fitting; if the weight lies between the minimum and maximum occlusion weights of that organ, the organ is considered unoccluded; otherwise it is judged occluded, and the organ texture values of the current image are replaced by the corresponding sample-image texture mean;
Step 3.5: for misjudgments that may be caused by the limited feature space of the sample image set, filter the noise of the error vector sequence during fitting by mean filtering;
Step 4: after completing Step 3, and combining the classification of Step 2, use a genetic algorithm on the result of Step 3 to optimize the search in the AAM fitting process.
2. The facial feature localization method based on facial-organ-related AAM models according to claim 1, characterized in that the use of a genetic algorithm to optimize the search of the AAM fitting process in Step 4 comprises the following sub-steps:
Step 4.1: introduce a genetic algorithm with a suitable genetic strategy to optimize the search process of the AAM models, selecting individuals from the first, second and third search regions according to their probabilities to form the population;
Step 4.2: determine the values of the genetic algorithm parameters through a series of experiments on sample images;
Step 4.3: use the genetic algorithm to perform the AAM fitting calculation, considering translation, scaling and rotation of the model during the image search.
3. The facial feature localization method based on facial-organ-related AAM models according to claim 2, characterized in that, in Step 1.2, the Delaunay triangulation is performed on the manually marked feature point set and the face region is divided into 5 organs: left eye, right eye, nose, mouth and face contour, so that the shape S can be expressed as the combination of the facial organs, i.e. S = (S_eyeL, S_eyeR, S_nose, S_mouth, S_outline), where the triangular mesh of each organ region is described as follows:
Triangular mesh of the left eye, including the eye and the eyebrow:
S_{eyeL} = \{ Triangle_{Li} \mid i \in [1, N_L] \}
where Triangle_{Li} is a mesh triangle located in the left-eye region and N_L is the number of triangles in that region;
Triangular mesh of the right eye, including the eye and the eyebrow:
S_{eyeR} = \{ Triangle_{Ri} \mid i \in [1, N_R] \}
where Triangle_{Ri} is a mesh triangle located in the right-eye region and N_R is the number of triangles in that region;
Triangular mesh of the nose:
S_{nose} = \{ Triangle_{Ni} \mid i \in [1, N_N] \}
where Triangle_{Ni} is a mesh triangle located in the nose region and N_N is the number of triangles in that region;
Triangular mesh of the mouth:
S_{mouth} = \{ Triangle_{Mi} \mid i \in [1, N_M] \}
where Triangle_{Mi} is a mesh triangle located in the mouth region and N_M is the number of triangles in that region;
Triangular mesh of the face contour:
S_{outline} = \{ Triangle_{Oi} \mid i \in [1, N_O] \}
where Triangle_{Oi} is a mesh triangle located in the face-contour region and N_O is the number of triangles in that region.
4. The facial feature localization method based on facial-organ-related AAM models according to claim 2, characterized in that, in Step 1.2, according to the results of the manual marking and the triangulation, a separate shape model is trained for each facial organ; after normalization and PCA, the shape S can be expressed as:
S = S_0 + \sum_{i=1}^{n} p_i S_i
where S_0 is the mean shape, S_i are the shape feature vectors and p_i are the shape model parameters; piecewise linear affine warping is then applied over the triangular mesh regions of each organ, and a separate texture model is trained for each organ; after normalization and PCA, the texture model A can be expressed as:
A = A_0 + \sum_{i=1}^{m} \lambda_i A_i
where A_0 is the mean texture, A_i are the texture feature vectors and \lambda_i are the texture model parameters;
finally, the shape and texture models are merged to obtain the facial-organ-related AAM models.
5. The facial feature localization method based on facial-organ-related AAM models according to claim 2, characterized in that, in Step 2.2, according to the detected face, the search region of the AAM models is divided into 3 classes according to the probability of being searched, i.e. the first, second and third search regions, expressed as follows:
First search region: the K-neighborhood of the center point of the face bounding rectangle; this is the region where fitting between the AAM model and the input image is most likely to be achieved, and its searched probability is p_1;
Second search region: the region outside the K-neighborhood of the rectangle center but inside the face bounding rectangle; its searched probability is p_2;
Third search region: the remaining area of the entire image space outside the face bounding rectangle; its searched probability is p_3;
The relation between the searched probabilities of the three search regions is: p_1 > p_2 > p_3 and p_1 + p_2 + p_3 = 1.
6. The facial feature localization method based on facial-organ-related AAM models according to claim 3, characterized in that, in Step 3.1, the concrete operation is as follows: the face region is divided into 5 organs, and the model texture mean of each organ is computed by the formulas below, where A(x) denotes the texture feature vector corresponding to pixel x:
Model texture mean of the left eye, including the eye and the eyebrow:
M_{MeyeL} = \frac{1}{P_L} \sum_{x \in S_{eyeL}} A(x)
where P_L is the number of pixels in the left-eye organ region;
Model texture mean of the right eye, including the eye and the eyebrow:
M_{MeyeR} = \frac{1}{P_R} \sum_{x \in S_{eyeR}} A(x)
where P_R is the number of pixels in the right-eye organ region;
Model texture mean of the nose:
M_{Mnose} = \frac{1}{P_N} \sum_{x \in S_{nose}} A(x)
where P_N is the number of pixels in the nose organ region;
Model texture mean of the mouth:
M_{Mmouth} = \frac{1}{P_M} \sum_{x \in S_{mouth}} A(x)
where P_M is the number of pixels in the mouth organ region;
Model texture mean of the face contour:
M_{Moutline} = \frac{1}{P_O} \sum_{x \in S_{outline}} A(x)
where P_O is the number of pixels in the face-contour organ region;
in Step 3.2, the concrete operation is as follows: the face region is divided into 5 organs, and the image texture mean of each organ is computed by the formulas below, where I(W(x)) denotes the texture feature vector obtained by mapping pixel x from the model to the image through the warp W:
Image texture mean of the left eye, including the eye and the eyebrow:
M_{IeyeL} = \frac{1}{P_L} \sum_{x \in S_{eyeL}} I(W(x))
Image texture mean of the right eye, including the eye and the eyebrow:
M_{IeyeR} = \frac{1}{P_R} \sum_{x \in S_{eyeR}} I(W(x))
Image texture mean of the nose:
M_{Inose} = \frac{1}{P_N} \sum_{x \in S_{nose}} I(W(x))
Image texture mean of the mouth:
M_{Imouth} = \frac{1}{P_M} \sum_{x \in S_{mouth}} I(W(x))
Image texture mean of the face contour:
M_{Ioutline} = \frac{1}{P_O} \sum_{x \in S_{outline}} I(W(x));
in Step 3.5, the concrete operation is as follows: the criterion for judging convergence of the facial-organ-related AAM fitting is whether the Euclidean distance of the error vector between the model and the input image is smaller than a preset threshold; if the error vector is an n-dimensional vector diff, its Euclidean distance sm is:
sm = \sqrt{ \sum_{i=0}^{n-1} diff[i]^2 }
the minimum and maximum occlusion weights of the facial organs are obtained from the unoccluded sample image set; the error vector sequence during fitting is denoised by mean filtering, i.e. the following three steps are inserted before computing the Euclidean distance sm of the error vector:
Step 3.51: compute
mean = \frac{1}{n} \sum_{i=0}^{n-1} diff[i]
where mean is the average of the n-dimensional diff vector;
Step 3.52: compute
r_i = \frac{max - diff[i]}{max - mean}
where max is the maximum value of the n-dimensional diff vector and r_i is the normalized result for each component;
Step 3.53: apply if (diff[i] > mean) then diff[i] = r_i · diff[i].
CN 201110205022 2011-07-21 2011-07-21 Facial feature location method based on five sense organs related AAM (Active Appearance Model) Expired - Fee Related CN102270308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110205022 CN102270308B (en) 2011-07-21 2011-07-21 Facial feature location method based on five sense organs related AAM (Active Appearance Model)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110205022 CN102270308B (en) 2011-07-21 2011-07-21 Facial feature location method based on five sense organs related AAM (Active Appearance Model)

Publications (2)

Publication Number Publication Date
CN102270308A CN102270308A (en) 2011-12-07
CN102270308B true CN102270308B (en) 2013-09-11

Family

ID=45052609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110205022 Expired - Fee Related CN102270308B (en) 2011-07-21 2011-07-21 Facial feature location method based on five sense organs related AAM (Active Appearance Model)

Country Status (1)

Country Link
CN (1) CN102270308B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750546B (en) * 2012-06-07 2014-10-29 中山大学 Face shielding detection method based on structured error code
CN104021384B (en) * 2014-06-30 2018-11-27 深圳中智科创机器人有限公司 A kind of face identification method and device
WO2016179808A1 (en) * 2015-05-13 2016-11-17 Xiaoou Tang An apparatus and a method for face parts and face detection
CN105095881B (en) * 2015-08-21 2023-04-07 小米科技有限责任公司 Face recognition method, face recognition device and terminal
CN108022260B (en) * 2016-11-04 2021-10-12 株式会社理光 Face alignment method and device and electronic equipment
CN107292287B (en) * 2017-07-14 2018-09-21 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
CN107633204B (en) * 2017-08-17 2019-01-29 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN110298225A (en) * 2019-03-28 2019-10-01 电子科技大学 A method of blocking the human face five-sense-organ positioning under environment
CN110334649A (en) * 2019-07-04 2019-10-15 五邑大学 A kind of five dirty situation of artificial vision's intelligence Chinese medicine facial diagnosis examines survey method and device
CN110987189B (en) * 2019-11-21 2021-11-02 北京都是科技有限公司 Method, system and device for detecting temperature of target object
CN112150387B (en) * 2020-09-30 2024-04-26 广州光锥元信息科技有限公司 Method and device for enhancing stereoscopic impression of five sense organs on human images in photo
CN112581488B (en) * 2020-12-30 2022-10-21 郑州大学 Display screen based on micro LED display technology
CN115497615B (en) * 2022-10-24 2023-09-01 北京亿家老小科技有限公司 Remote medical method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7720284B2 (en) * 2006-09-08 2010-05-18 Omron Corporation Method for outlining and aligning a face in face processing of an image
KR101092820B1 (en) * 2009-09-22 2011-12-12 현대자동차주식회사 Lipreading and Voice recognition combination multimodal interface system

Also Published As

Publication number Publication date
CN102270308A (en) 2011-12-07

Similar Documents

Publication Publication Date Title
CN102270308B (en) Facial feature location method based on five sense organs related AAM (Active Appearance Model)
CN106096538B (en) Face identification method and device based on sequencing neural network model
CN104834922B (en) Gesture identification method based on hybrid neural networks
CN105718868B (en) A kind of face detection system and method for multi-pose Face
CN101558431B (en) Face authentication device
CN103268497A (en) Gesture detecting method for human face and application of gesture detecting method in human face identification
CN109522853A (en) Face datection and searching method towards monitor video
CN102799872B (en) Image processing method based on face image characteristics
CN105335719A (en) Living body detection method and device
CN101216882A (en) A method and device for positioning and tracking on corners of the eyes and mouths of human faces
CN106778468A (en) 3D face identification methods and equipment
CN106778474A (en) 3D human body recognition methods and equipment
CN103886589A (en) Goal-oriented automatic high-precision edge extraction method
CN104464079A (en) Multi-currency-type and face value recognition method based on template feature points and topological structures of template feature points
CN103455794A (en) Dynamic gesture recognition method based on frame fusion technology
CN105740779A (en) Method and device for human face in-vivo detection
CN103440510A (en) Method for positioning characteristic points in facial image
CN102509104A (en) Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
CN107808376A (en) A kind of detection method of raising one's hand based on deep learning
Efraty et al. Facial component-landmark detection
Li et al. Robust iris segmentation based on learned boundary detectors
CN106611158A (en) Method and equipment for obtaining human body 3D characteristic information
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN101533466A (en) Image processing method for positioning eyes
CN103324921B (en) A kind of mobile identification method based on interior finger band and mobile identification equipment thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20130911
Termination date: 20150721
EXPY Termination of patent right or utility model