CN1885310A - Human face model training module and method, human face real-time certification system and method - Google Patents
Human face model training module and method, human face real-time certification system and method
- Publication number
- CN1885310A CN1885310A CN 200610012086 CN200610012086A CN1885310A CN 1885310 A CN1885310 A CN 1885310A CN 200610012086 CN200610012086 CN 200610012086 CN 200610012086 A CN200610012086 A CN 200610012086A CN 1885310 A CN1885310 A CN 1885310A
- Authority
- CN
- China
- Prior art keywords
- face
- people
- sample
- image
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to a face model training module and method, and a real-time face authentication system and method. In authentication, a support vector machine (SVM) face model is first trained for each user from face sample images. The system then captures video images input by a camera, searches for and detects faces in the images, and tracks and verifies them; the facial feature points (organ points) of the face are located automatically, and the detected face is preprocessed; the Gabor features of the preprocessed face image are computed; a low-dimensional feature vector is selected from the high-dimensional Gabor features; the selected low-dimensional feature vector is input into the face models for recognition, which returns a similarity score for each face model; and the final face authentication result is output according to these similarity scores. The invention improves the accuracy of face recognition and authentication.
Description
Technical field
The present invention relates to a model training module and method for face images, and to a real-time face authentication system and method; in particular, to a face model training module and method, and a real-time authentication system and method, based on Gabor features and support vector machines.
Background art
Face recognition determines identity by comparing an input face image against templates in a known gallery, and is currently one of the most active research topics in computer vision and pattern recognition. Compared with other biometrics (such as fingerprint, iris, and DNA), identification by face is more direct and natural. Other biometrics require active information acquisition and the user's cooperation, which is inconvenient for the user. Face recognition, by contrast, uses passive information acquisition and requires no user intervention during the whole recognition process, so it is friendlier and more convenient to use. Moreover, the face is the main channel by which people distinguish one another and one of the most important sources of identity information, so face recognition is a more natural modality than fingerprint, retina, or iris recognition, with broad application prospects and practical value.
Face authentication is a branch of face recognition. A face authentication system requires that the computer not only recognize the faces of trained users in the gallery, but also reject faces that do not belong to any user in the gallery. Face authentication algorithms have large application potential in fields such as video surveillance, check-in systems, human-computer interaction, and system login.
Over recent decades, researchers have proposed a great many methods in the face recognition field. The earliest methods were feature-point based, recognizing a face from the structural relations among its key feature points. Next came image-based methods, which extract features from the input face image and either apply statistical recognition methods or match templates between training and test images. Later, 3D-based methods appeared, aimed at recognizing faces under various poses. On the whole, however, the precision and stability of face authentication still need further improvement.
Summary of the invention
The technical problem to be solved by the invention is to provide a face model training module and method, and a real-time face authentication system and method, that realize face model generation and accurate face recognition and authentication.
To solve the above technical problem, the invention provides a real-time face authentication method, comprising the steps of:
(1) training a support vector machine face model for each user to be authenticated from face sample images;
(2) capturing the video images input by a camera, searching for and detecting a frontal face in the image, and continuously tracking and verifying it to ensure that the tracked face remains frontal;
(3) automatically locating the facial feature points (organ points) in the frontal face, and preprocessing the detected face accordingly;
(4) computing the Gabor features of the preprocessed face image, and selecting from the high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector;
(5) inputting the selected low-dimensional feature vector into the face models for recognition, obtaining a similarity score for each face model;
(6) outputting the final face authentication result according to the returned similarity scores.
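The six-step method above can be sketched end to end. Everything in this sketch is illustrative: the function names, the 0-to-1 similarity convention, and the acceptance threshold are assumptions for exposition, not details taken from the patent.

```python
# Hypothetical sketch of the six-step authentication flow described above.
# select_low_dim is step (4); authenticate covers steps (5)-(6).

def select_low_dim(high_dim_features, selected_indices):
    """Step (4): keep only the Gabor dimensions chosen offline by feature selection."""
    return [high_dim_features[i] for i in selected_indices]

def authenticate(frame_scores, threshold=0.5):
    """Steps (5)-(6): pick the best-scoring user model, or reject.

    frame_scores maps user id -> similarity returned by that user's SVM model.
    Returns the accepted user id, or None if no model clears the threshold.
    """
    best_user = max(frame_scores, key=frame_scores.get)
    if frame_scores[best_user] >= threshold:
        return best_user
    return None  # face rejected: not a trained user

print(authenticate({"alice": 0.82, "bob": 0.31}))   # alice
print(authenticate({"alice": 0.20, "bob": 0.10}))   # None (rejected)
```

The rejection branch is what distinguishes authentication from plain recognition: a face that matches no trained model well enough is refused rather than assigned to the nearest user.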
In one embodiment, step (1) comprises:
(1A) for each user to be authenticated, collecting positive-sample and negative-sample face images;
(1B) annotating all sample faces to determine the positions of the facial feature points in each face sample;
(1C) preprocessing all sample images according to the annotation results;
(1D) computing the Gabor features of the preprocessed sample images, and selecting from the high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector;
(1E) using the low-dimensional feature vectors, training a support vector machine for each user to be authenticated, yielding one SVM face model per user.
To solve the above technical problem, the invention also provides a face model training method, comprising the steps of:
(A) for each user to be authenticated, collecting positive-sample and negative-sample face images;
(B) annotating all sample faces to determine the positions of the facial feature points in each face sample;
(C) preprocessing all sample images according to the annotation results;
(D) computing the Gabor features of the preprocessed sample images, and selecting from the high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector;
(E) using the low-dimensional feature vectors, training a support vector machine for each user to be authenticated, yielding one SVM face model per user.
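Step (E) trains one model per user: that user's samples are the positives, and everyone else's samples plus a shared negative pool are the negatives. A minimal sketch of this one-versus-rest loop follows; a simple averaged-update perceptron stands in for the SVM the patent actually uses, and the toy 2-D "feature vectors" are invented.

```python
# One-versus-rest training loop (step (E)) sketched in pure Python.
# A perceptron stands in for the SVM; the 2-D samples are invented.

def train_linear(pos, neg, epochs=50, lr=0.1):
    """Train a linear separator on positive/negative feature vectors."""
    dim = len(pos[0])
    w, b = [0.0] * dim, 0.0
    data = [(x, 1.0) for x in pos] + [(x, -1.0) for x in neg]
    for _ in range(epochs):
        for x, y in data:
            margin = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * margin <= 0:  # misclassified: nudge the hyperplane
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def score(model, x):
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# One model per user: that user's samples are positive, all other users'
# samples plus the shared negative pool are negative.
samples = {"alice": [[1.0, 0.1], [0.9, 0.2]], "bob": [[0.1, 1.0], [0.2, 0.9]]}
negatives = [[0.5, 0.5]]
models = {}
for user, pos in samples.items():
    neg = negatives + [x for u, xs in samples.items() if u != user for x in xs]
    models[user] = train_linear(pos, neg)

print(score(models["alice"], [1.0, 0.0]) > 0)  # True: looks like alice
```

The key point is the construction of `neg`: each user's model must push away not only the generic negative pool but also every other enrolled user, which is what lets the system discriminate between users and reject strangers.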
To solve the above technical problem, the invention further provides a real-time face authentication system, comprising:
a training module, for training a support vector machine face model for each user to be authenticated from face sample images; and
an authentication module, comprising:
a capture-and-tracking unit, for capturing the video images input by a camera, searching for and detecting a frontal face in the image, and continuously tracking and verifying it to ensure that the tracked face remains frontal;
a preprocessing unit, for automatically locating the facial feature points in the frontal face and preprocessing the detected face accordingly;
a feature calculation unit, for computing the Gabor features of the preprocessed face image;
a feature selection unit, for selecting from the computed high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector;
a recognition and authentication unit, for inputting the selected low-dimensional feature vector into the face models, performing recognition, obtaining a similarity score for each face model, and outputting the final face authentication result according to the returned similarity scores.
The training module may comprise:
a sample collection unit, for collecting positive-sample and negative-sample face images for each user to be authenticated;
a preprocessing unit, for preprocessing all sample images according to the annotated facial feature point positions of all sample faces;
a feature calculation unit, for computing the Gabor features of the preprocessed face images;
a feature selection unit, for selecting from the computed high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector;
a model training unit, for using the low-dimensional feature vectors to train a support vector machine for each user to be authenticated, yielding one SVM face model per user.
To solve the above technical problem, the invention also provides a face model training module, comprising:
a sample collection unit, for collecting positive-sample and negative-sample face images for each user to be authenticated;
a preprocessing unit, for preprocessing all sample images according to the annotated facial feature point positions of all sample faces;
a feature calculation unit, for computing the Gabor features of the preprocessed face images;
a feature selection unit, for selecting from the computed high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector;
a model training unit, for using the low-dimensional feature vectors to train a support vector machine for each user to be authenticated, yielding one SVM face model per user.
The face model training module and method proposed by the invention, and the real-time authentication system and method for faces in video sequences, use the face tracking results to combine the recognition results over a period of time. This greatly improves recognition accuracy and allows the faces of non-trained users to be reliably rejected. The algorithm has a degree of robustness to illumination and pose interference, and strong robustness to interference such as expression changes.
Description of drawings
Fig. 1 is a block diagram of the face authentication system according to an embodiment of the invention;
Fig. 2 is a flowchart of face authentication according to an embodiment of the invention;
Fig. 3 is a schematic flowchart of sample training according to an embodiment of the invention;
Fig. 4 is a schematic diagram of face sample annotation and collection according to an embodiment of the invention;
Fig. 5 is a schematic diagram of nonlinear face image rectification according to an embodiment of the invention;
Fig. 6 is a schematic diagram of the nonlinear rectification principle according to an embodiment of the invention;
Fig. 7 is a schematic diagram of face image illumination correction results according to an embodiment of the invention;
Fig. 8 is a flowchart of the AdaBoost-based feature selection method according to an embodiment of the invention;
Fig. 9 is a schematic diagram of the support vector machine optimal separating hyperplane according to an embodiment of the invention;
Fig. 10A and Fig. 10B are schematic diagrams of face authentication output results according to an embodiment of the invention.
Embodiment
In an embodiment of the invention, a face model training module based on Gabor features and support vector machines is first provided, comprising: a sample collection unit, for collecting positive-sample and negative-sample face images for each user to be authenticated; a preprocessing unit, for preprocessing all sample images according to the annotated facial feature point positions of all sample faces; a feature calculation unit, for computing the Gabor features of the preprocessed face images; a feature selection unit, for selecting from the computed high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector; and a model training unit, for using the low-dimensional feature vectors to train a one-versus-rest support vector machine for each user to be authenticated, yielding one one-versus-rest SVM face model per user.
The training method corresponding to this module may comprise: for each user to be authenticated, collecting positive-sample and negative-sample face images; annotating all sample faces to determine the positions of the facial feature points in each face sample; preprocessing all sample images according to the annotation results; computing the Gabor features of the preprocessed sample images, and selecting from the high-dimensional Gabor features the most discriminative subset to form a low-dimensional feature vector; and using the low-dimensional feature vectors to train a one-versus-rest support vector machine for each user to be authenticated, yielding one one-versus-rest SVM face model per user.
Using the face models obtained with the above training module and method, the embodiment of the invention further realizes a complete real-time face authentication process: faces can be detected and tracked in real time in the image sequence captured by the camera, and the target face is recognized and authenticated. In fact, the face authentication system and method provided by the invention do not depend on the above face model training module and method; a face model obtained with any training module and method can be applied in the authentication process of the invention.
Below, taking the face model obtained with the above training module as an example, the real-time face authentication process of the invention is described in detail.
A system according to the embodiment of the invention may comprise two modules, training and authentication. The training module collects the users' face samples and trains on them, obtaining the face models of multiple users. The authentication module combines the detection and tracking results of the faces in the video sequence with the trained face models to judge whether an input face belongs to a trained user; if not, the system outputs a rejection result.
Fig. 1 is a block diagram of the face authentication system according to the embodiment of the invention. The purpose of the training module is to train each user's face recognition model from a large number of face sample images. First a large number of negative-sample faces are obtained — samples guaranteed not to contain any user's face — and then the frontal face data of the users to be authenticated are collected. All positive and negative sample faces are annotated to determine the exact positions of the two eyes and the mouth in each sample. All sample images are then normalized and rectified according to the annotations, aligning the eyes and mouth of every sample to fixed positions. After rectification, illumination correction is applied to the rectified samples against a standard grayscale image, so that the average illumination of each part of a sample image matches the standard image. The multi-scale, multi-orientation Gabor features of all sample images are then computed using the FFT and inverse FFT. Because the Gabor feature dimension of each sample reaches about 80,000, training directly on such high-dimensional features would be very difficult; the embodiment therefore adopts an AdaBoost-based feature selection method that, using the positive and negative sample data, picks out the few thousand most discriminative feature dimensions to form a new low-dimensional Gabor feature vector. After feature selection, a one-versus-rest support vector machine (SVM) algorithm is used to train each user: the positive training features are the selected low-dimensional Gabor feature vectors of that user's samples, while the negative features comprise the feature vectors of all negative samples plus those of the other users. One one-versus-rest SVM model is thus obtained for each user, and face model training is complete.
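The AdaBoost-based selection step described above can be sketched in miniature: each round fits a decision stump on every not-yet-chosen feature dimension, keeps the dimension whose stump has the lowest weighted error, and reweights the samples so that mistakes count more in the next round. The tiny data set and candidate thresholds below are invented for illustration; the real system selects thousands of dimensions out of ~80,000.

```python
import math

# Minimal AdaBoost-style feature selection: each round picks the feature
# dimension whose best threshold stump has the lowest weighted error,
# then reweights the samples. Data and thresholds are invented.

def stump_error(values, labels, weights, thresh):
    """Weighted error of the stump 'predict +1 iff value >= thresh'."""
    return sum(w for v, y, w in zip(values, labels, weights)
               if (1 if v >= thresh else -1) != y)

def select_features(X, y, n_rounds):
    """X: list of samples (lists of feature values); y: +/-1 labels."""
    n, dim = len(X), len(X[0])
    weights = [1.0 / n] * n
    chosen = []
    for _ in range(n_rounds):
        best = None  # (error, feature index, threshold)
        for j in range(dim):
            if j in chosen:
                continue
            col = [x[j] for x in X]
            for t in sorted(set(col)):
                e = stump_error(col, y, weights, t)
                if best is None or e < best[0]:
                    best = (e, j, t)
        err, j, t = best
        chosen.append(j)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        col = [x[j] for x in X]
        # Reweight: samples the chosen stump got wrong gain weight.
        weights = [w * math.exp(-alpha * y_i * (1 if v >= t else -1))
                   for w, y_i, v in zip(weights, y, col)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return chosen

# Feature 0 separates the classes perfectly; feature 1 is noisy.
X = [[0.9, 0.3], [0.8, 0.9], [0.1, 0.8], [0.2, 0.2]]
y = [1, 1, -1, -1]
print(select_features(X, y, 1))  # [0]
```

Selecting dimensions this way ties the dimensionality reduction directly to classification ability, unlike unsupervised reductions such as PCA.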
The face authentication module detects and tracks faces in real time in the input image sequence from the video camera, recognizes them using the face models, and finally outputs the authentication result by combining the recognition data of multiple frames. First, the video images input by the camera are captured in real time, and a frontal face is searched for and detected in the image; the detected face is then continuously tracked and verified at all times to judge whether the tracked face is frontal. If the tracked face is frontal, the authentication module automatically locates the eyes and the mouth in it, obtaining three landmark points. Using these three points, the detected face is preprocessed, including face rectification, face normalization, and illumination correction. After preprocessing, the Gabor features of the face are computed; combined with the results returned by the AdaBoost feature selection algorithm of the training module, the most discriminative Gabor features are picked out of the high-dimensional Gabor features to form a low-dimensional feature vector. The selected feature vector is input into the face models for recognition, returning a similarity score for each face. Finally, the similarity scores of the preceding several frontal-face frames are combined with those of the current frame to output the final face authentication result.
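The Gabor feature computation via FFT and inverse FFT mentioned above amounts to convolving the face window with a bank of complex Gabor kernels in the frequency domain and taking magnitude responses. The sketch below uses NumPy; the kernel parameterization (5 scales, 8 orientations, the frequency and sigma choices) follows common practice and is an assumption, not the patent's exact values.

```python
import numpy as np

# FFT-based Gabor feature maps: convolve the image with each complex
# Gabor kernel via fft2/ifft2 and keep the magnitude response.
# Kernel parameters are conventional choices, assumed for illustration.

def gabor_kernel(size, scale, orientation):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    theta = orientation * np.pi / 8.0
    k = (np.pi / 2.0) / (np.sqrt(2.0) ** scale)   # spatial frequency
    xr = x * np.cos(theta) + y * np.sin(theta)
    sigma = 2.0 * np.pi / k
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(1j * k * xr)          # complex carrier

def gabor_features(image, scales=5, orientations=8, ksize=9):
    """Return stacked magnitude responses, one map per (scale, orientation)."""
    h, w = image.shape
    feats = []
    for s in range(scales):
        for o in range(orientations):
            kern = gabor_kernel(ksize, s, o)
            pad = np.zeros((h, w), dtype=complex)
            pad[:kern.shape[0], :kern.shape[1]] = kern
            # Convolution as a pointwise product in the frequency domain.
            resp = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad))
            feats.append(np.abs(resp))
    return np.stack(feats)  # shape: (scales * orientations, h, w)

face = np.random.rand(48, 44)        # a 48x44 face window, as in the patent
print(gabor_features(face).shape)    # (40, 48, 44)
```

With 40 response maps over a 44×48 window this already yields 40 × 44 × 48 ≈ 84,000 values, which is why the feature selection step is indispensable before SVM training.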
Fig. 2 shows the face authentication flowchart according to the embodiment of the invention. First, a one-versus-rest SVM face model is trained for each user to be authenticated from face sample images (step 201); the video images input by the camera are captured, a frontal face is searched for and detected in the image, and it is continuously tracked and verified to ensure that the tracked face remains frontal (step 202); the facial feature points of the frontal face are located automatically, and the detected face is preprocessed accordingly (step 203); the Gabor features of the preprocessed face image are computed, and the most discriminative subset is selected from the high-dimensional Gabor features to form a low-dimensional feature vector (step 204); the selected low-dimensional feature vector is input into the face models for recognition, returning a similarity score for each face model (step 205); and, combining the similarity scores of several consecutive frames, the final face authentication result is output (step 206).
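The multi-frame decision of step 206 can be sketched as averaging each model's similarity over the last few frontal frames before accepting or rejecting. The window length (5 frames) and the acceptance threshold (0.5) below are invented parameters for illustration.

```python
from collections import deque

# Sketch of step 206: accumulate per-frame similarity scores over the
# last few frontal frames and decide on the per-user average.
# Window length and threshold are invented parameters.

class FrameFusion:
    def __init__(self, window=5, threshold=0.5):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def push(self, frame_scores):
        """frame_scores: dict mapping user id -> this frame's similarity."""
        self.window.append(frame_scores)

    def decide(self):
        """Return the accepted user id, or None to reject."""
        if not self.window:
            return None
        totals = {}
        for scores in self.window:
            for user, s in scores.items():
                totals[user] = totals.get(user, 0.0) + s
        user, total = max(totals.items(), key=lambda kv: kv[1])
        if total / len(self.window) >= self.threshold:
            return user
        return None

fusion = FrameFusion()
for s in [{"alice": 0.7, "bob": 0.2},
          {"alice": 0.6, "bob": 0.3},
          {"alice": 0.8, "bob": 0.1}]:
    fusion.push(s)
print(fusion.decide())  # alice
```

Averaging over a tracked run of frames is what gives the system its robustness: a single noisy frame (a blink, a brief pose change) cannot flip the verdict on its own.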
The training process of step 201 in Fig. 2 can be carried out in many ways. Fig. 3 shows the sample training flow adopted by the embodiment of the invention: first, for each user to be authenticated, positive-sample and negative-sample face images are collected (step 301); all sample faces are annotated to determine the positions of the facial feature points in each face sample (step 302); all sample images are preprocessed according to the annotation results (step 303); the Gabor features of the preprocessed sample images are computed, and the most discriminative subset is selected from the high-dimensional Gabor features to form a low-dimensional feature vector (step 304); and, using the low-dimensional feature vectors, a one-versus-rest SVM is trained for each user, yielding one one-versus-rest SVM face model per user (step 305).
The face tracking of step 202 in Fig. 2 can also be realized in many ways, for example by the method and system for real-time face detection and continuous tracking in video sequences provided in Chinese patent application 200510135668.8. That application trains a hierarchical frontal-face detection model with the AdaBoost training algorithm and searches the input video image across scales to determine the positions of multiple faces in the image; the system then verifies the detected faces over several subsequent frames, and realizes continuous face tracking with a tracking algorithm based on Mean Shift and color histograms, re-verifying as it goes; during tracking, face verification can be performed to confirm the accuracy of the tracking results. That system was tested on multiple faces in many scenes; the results show that its face detection and tracking algorithm can quickly and accurately detect multiple frontal faces under different expressions, different skin tones, and different illumination conditions, can detect faces with -20° to 20° in-depth rotation and -20° to 20° in-plane rotation, and can track faces of any pose in real time, including profile and rotated faces.
The facial feature point localization of step 203 in Fig. 2 can also be realized by many methods, for example the localization method provided in Chinese patent application 200610011673.2. According to an embodiment of the invention, the eye positions can be determined by the following steps: (1) on the basis of the face position information, determine the left-eye and right-eye search regions statistically, and determine the candidate left-eye and right-eye positions; (2) in the left-eye and right-eye search regions, apply the left-eye and right-eye local feature detectors respectively to evaluate all candidate left-eye and right-eye positions, assigning a single-eye similarity score to each candidate position; (3) from all candidate left-eye and right-eye positions, select the top N1 positions with the highest similarity scores as the left-eye and right-eye candidates, form all left-right candidate pairs, and determine an eye-pair region from each candidate pair; (4) apply the eye-pair region detector as a global constraint to evaluate each eye-pair region, assigning an eye-pair similarity score to each candidate pair; (5) select the top M1 candidate pairs with the highest eye-pair similarity scores, and average all the left-eye candidate positions and all the right-eye candidate positions among them respectively, giving the left-eye and right-eye feature point positions.
The mouth position can be determined by the following steps: (1) on the basis of the eye position information, determine the mouth search region statistically, and determine the candidate mouth positions; (2) in the mouth search region, apply the mouth local feature detector to evaluate each candidate mouth position, assigning a local mouth similarity score to each; (3) select the top N2 candidate positions with the highest local similarity scores as the mouth candidates, and for each candidate determine a mouth region using the left-eye feature point, the right-eye feature point, and the mouth candidate as reference; (4) apply the mouth region detector as a global constraint to evaluate each determined mouth region, assigning a global mouth similarity score to each candidate position; (5) select the top M2 candidate positions with the highest global similarity scores and average them, giving the mouth feature point position.
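The eye and mouth localization procedures above share the same coarse-to-fine shape: rank candidates by a local detector score, keep the top N, re-score those with a global (context) detector, keep the top M, and average. That skeleton can be written generically; the toy candidates and scoring functions below are invented.

```python
# Generic two-stage coarse-to-fine landmark selection, as used for both
# eyes and mouth above. Candidates and scorers here are invented toys.

def locate(candidates, local_score, global_score, n_keep, m_keep):
    """candidates: list of (x, y) positions. Returns the averaged point."""
    top_n = sorted(candidates, key=local_score, reverse=True)[:n_keep]
    top_m = sorted(top_n, key=global_score, reverse=True)[:m_keep]
    xs = [p[0] for p in top_m]
    ys = [p[1] for p in top_m]
    return (sum(xs) / len(top_m), sum(ys) / len(top_m))

# Toy example: the true landmark is near (10, 20); both scorers prefer
# candidates close to it, mimicking the local and global detectors.
cands = [(10, 20), (11, 21), (9, 19), (40, 5), (3, 44)]
def near_truth(p):
    return -((p[0] - 10) ** 2 + (p[1] - 20) ** 2)

print(locate(cands, near_truth, near_truth, n_keep=3, m_keep=3))  # (10.0, 20.0)
```

Averaging the surviving candidates, rather than taking the single best, smooths out detector noise — the same motivation as the multi-frame fusion used later in authentication.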
The preprocessing of step 203 in Fig. 2 and of step 303 in Fig. 3 are similar; below, the preprocessing of sample images is taken as the example for further explanation.
Before face recognition, the size, position, and gray level of the input face image must be preprocessed so that different face images have consistent size and gray level. In addition, the face position should be consistent across images: by locating the eyes and mouth, the positions of the eyes and mouth in the input image can be essentially fixed, after which the whole image is warped by an affine transform or nonlinear rectification. Only after such preprocessing will multiple input faces of the same person show a degree of similarity in certain features, and different persons' faces a degree of difference, at which point a statistical model recognition algorithm can be used for model training and recognition.
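The affine alignment just described is fully determined by the three landmarks: mapping the detected eye and mouth points onto fixed canonical positions is six equations in the six affine unknowns, solved exactly. A pure-Python sketch follows; the canonical target coordinates are invented, not the patent's 44×48 template values.

```python
# Affine transform from three landmark correspondences (two eyes, mouth),
# as used for face alignment above. Canonical coordinates are invented.

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, rhs):
    """Cramer's rule for a 3x3 linear system."""
    d = det3(m)
    out = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = rhs[r]
        out.append(det3(mc) / d)
    return out

def affine_from_landmarks(src, dst):
    """src, dst: three (x, y) pairs. Returns (a, b, tx, c, d, ty) such that
    x' = a*x + b*y + tx and y' = c*x + d*y + ty."""
    m = [[x, y, 1.0] for x, y in src]
    a, b, tx = solve3(m, [p[0] for p in dst])
    c, d, ty = solve3(m, [p[1] for p in dst])
    return a, b, tx, c, d, ty

def apply_affine(t, p):
    a, b, tx, c, d, ty = t
    return (a * p[0] + b * p[1] + tx, c * p[0] + d * p[1] + ty)

detected = [(120.0, 80.0), (160.0, 82.0), (141.0, 130.0)]   # eyes, mouth
canonical = [(12.0, 16.0), (32.0, 16.0), (22.0, 38.0)]      # invented targets
t = affine_from_landmarks(detected, canonical)
print(apply_affine(t, detected[0]))  # maps the left eye onto (12.0, 16.0)
```

Inverting this transform and sampling the source image at each destination pixel is then how the normalized face window is actually produced; libraries such as OpenCV provide the same construction ready-made.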
Referring to step 301 in Fig. 3: because the invention realizes face authentication with support vector machines (SVM) (for a description of the SVM algorithm, see Bian Zhaoqi, Zhang Xuegong et al., "Pattern Recognition", Tsinghua University Press, 2000), a large number of negative-sample faces must be collected to improve the accuracy of face authentication. These negative samples should preferably cover faces of different expressions, different skin tones, and different ages as far as possible, including faces with -20° to 20° in-depth rotation, and faces with and without glasses.
The positive samples for face authentication are face samples of the user to be authenticated. In practical use these data are collected automatically by the program, which also preprocesses the user's samples and computes their features automatically.
Referring to step 302 in Fig. 3: for the negative samples, the present invention may calibrate the key feature points of every negative-sample face manually; for each sample face three points are calibrated: the two eye centers and the mouth center. For the positive samples of the user to be authenticated, the coordinates of the three points can be obtained by automatic calibration, as shown in Fig. 4.
Then, according to these calibration points, the present invention geometrically normalizes each face: the major organs of the face image are aligned to standard positions, reducing the differences of scale, translation and in-plane rotation between samples; the face region is then cropped according to the organ positions to form the face sample, so that the sample introduces little background interference and the organ positions of different face samples are as consistent as possible.
To perform geometric normalization and face-region cropping on each face sample, the present invention introduces a standard face image. The scale wd × ht of the face window to be recognized is first set to 44 × 48, i.e. width 44 and height 48. A standard frontal face image is then obtained, in which the two eyes share the same y coordinate and the face is fully symmetric, as shown in 4A of Fig. 4; the three key feature points of this image are calibrated. The position of the square crop region is determined from the eye distance and eye positions in this image. Let the eye distance be r and the midpoint of the line joining the eyes be (x_center, y_center); the width of the crop rectangle is set to 2r, i.e. twice the eye distance. The coordinates (x_left, y_top, x_right, y_bottom) of the crop rectangle are then:

x_left = x_center − r,  x_right = x_center + r,  y_bottom − y_top = 2r · ht/wd,

with the vertical offset of y_top relative to y_center fixed by the eye height in the standard image.
The cropped face region is normalized to 44 × 48 size, as in 4B of Fig. 4, and the coordinates [x_stad(i), y_stad(i)], i = 0, 1, 2, of the three calibration points after normalization are obtained; the first two are the eye center points and the last is the lip center point.
Given any original face sample with its three calibrated feature points [x_label(i), y_label(i)], i = 0, 1, 2, as in 4C of Fig. 4, the most direct cropping method is to compute the affine-transformation coefficients between these three points and the three normalized points of the standard image. Note that no directional stretching of the face is added to the affine formula; only rotation and overall scaling are considered. From the affine coefficients, the point in the original sample corresponding to any point of the cropped image can be computed, and thus the pixel values of all points of the cropped face obtained, as shown in 4D of Fig. 4.
The affine-transformation-based algorithm, however, has an obvious defect. When the face sample carries an expression or the input face is non-frontal, the eye and lip centers of the cropped face obtained in this way can deviate considerably from those of the standard image; in particular, after cropping a posed sample the lip center does not lie on the vertical mid-axis of the image and the eye positions also differ, as shown in Fig. 5, where 5A is the original image with its calibration points and 5B is the cropped image. For faces of the same person under different poses and expressions, the eye and lip positions in the cropped images therefore differ considerably, which to some degree reduces the recognizer's robustness to expression and pose interference.
Embodiments of the invention therefore adopt a non-linear correction method, which moves the three center points of the input face exactly onto the three points of the standard face. First, only the two eye centers are considered: the affine algorithm computes the transform coefficients between the calibration points of the input face and those of the standard face, again allowing only rotation and overall scaling, i.e. the two-point similarity transform

x′ = a·x − b·y + c
y′ = b·x + a·y + d

This system has four unknowns, and the two eye correspondences give four equations with a unique solution (a, b, c, d); 5C in Fig. 5 shows the cropping result obtained with these four coefficients alone. With these affine coefficients, the points of the cropped face corresponding to the three feature points of the input sample are computed; denote them [x_trans(i), y_trans(i)], i = 0, 1, 2. The first two transformed coordinates — the eye positions — coincide exactly with the standard eye positions, but under pose and expression interference the mouth position may still differ considerably. The mouth must therefore be corrected to its standard position.
As shown in Fig. 6, points A and B are the eye centers in the standard image, D is the midpoint of A and B, Cstad is the standard lip center, and C is the lip point after the transform. The non-linear correction proceeds in two steps. First a correction in the y direction makes the y coordinate of the corrected lip point equal to that of Cstad, giving point C′ in Fig. 6. Then a correction in the x direction is performed: D and C′ are joined, and the line DC′ divides the face into left and right halves. Consider a horizontal line with y coordinate y₁, and let its intersection with the line DC′ be E = (x₁, y₁). Since E must be moved to (x_D, y₁), where x_D is the x coordinate of D, the points on either side of (x_D, y₁) are transformed linearly, so that E moves onto the axis DCstad. For a point (x, y₁) with x < x₁, the corrected coordinate is

(x·x_D/x₁, y₁),

and for a point with x ≥ x₁ the corrected coordinate is

[2x_D − x_D(2x_D − x)/(2x_D − x₁), y₁].

Thus if C′ lies to the right of Cstad, the left half of the face is compressed and the right half stretched, and every point on the line DC′ is moved onto the vertical mid-axis DCstad of the face.
After the non-linear correction coefficients are obtained, the corrected face is produced in conjunction with the original image. Let the cropped face image be I, of size 44 × 48. For any point (x, y) of I, its pre-correction coordinate (x′, y′) is obtained from the non-linear correction coefficients, and the corresponding coordinate (x_ori, y_ori) in the original image is then obtained through the affine coefficients.
To suppress noise, the pixel value at (x, y) in the cropped image is set to the average of the pixel values in a neighborhood of the corresponding point (x_ori, y_ori), as shown in 5D of Fig. 5.
In addition, under interference from ambient lighting, the imaging device and other factors, a face image may show abnormal brightness or contrast, strong shadows or highlights; skin color also differs between ethnic groups. The face samples after geometric normalization and correction therefore need gray-level equalization to improve their intensity distribution and enhance consistency between samples. The illumination problem in face recognition has always been both difficult and important. Many illumination-processing algorithms have been proposed over the years, but their performance is generally mediocre and their resistance to various kinds of ambient-light interference is poor. Face-recognition algorithms based on statistical methods need positive face samples for training, yet the illumination of the positive samples is usually rather uniform; even if positive samples under different lighting are added, the training data can cover only a few illumination patterns. Illumination in real scenes is far more complex: for the same face, large illumination differences produce clearly different gray levels, and the computed image features differ accordingly. Moreover, if the input face is unevenly lit — strong in some regions and weak in others — then even whole-image normalization or histogram equalization can hardly yield uniformly lit face data, which greatly reduces the accuracy of face recognition.
The illumination-processing algorithm adopted by the embodiment of the invention proceeds in two steps: the image first undergoes global gray-level normalization, and then local gray-level normalization with reference to the standard image.
Global normalization is fairly simple. Given a standard face image, such as 4B in Fig. 4, compute the mean P_s and variance σ_s of the standard face gray levels, then the mean P and variance σ of the input sample gray levels; any pixel value I(x, y) after normalization is:

I′(x, y) = [I(x, y) − P]·σ_s/σ + P_s
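The global step is a plain mean/variance match; a numpy sketch, treating σ as the standard deviation of the gray levels:

```python
import numpy as np

def global_gray_normalize(img, mean_s, std_s):
    """Shift and scale the whole image so its gray-level mean and
    standard deviation match those of the standard face."""
    img = img.astype(float)
    p, sigma = img.mean(), img.std()
    return (img - p) * (std_s / sigma) + mean_s
```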
Let the pixel value of the standard image at (x, y) be S(x, y), and the value at that point after global gray normalization of the input face be I′(x, y). Because the eye and mouth positions correspond exactly in the two images, the organ positions in the sample differ little from those of the standard face. Each local region of the two images should therefore have approximately consistent gray levels; where they are inconsistent, the input face can be considered unevenly lit and a gray-level correction is needed, so the gray levels of the standard face can be used to correct those of the input face.
Based on this consideration, the embodiment of the invention processes each pixel separately. Consider a point (x, y) and take all pixels in its W × W neighborhood; let the mean gray level of the W × W neighborhood in the input sample be A_I(x, y), and the corresponding mean in the standard sample be A_S(x, y). A_I(x, y) reflects the local brightness around the current point, while A_S(x, y) reflects the local illumination intensity of the standard face. If the two differ greatly, the input face is unevenly lit near the current point and its gray level must be corrected; since the ratio of A_S(x, y) to A_I(x, y) approximates the inverse of the local illumination-intensity ratio, the gray value of the point is simply multiplied by this ratio as the correction result. The new gray value I_r(x, y) after processing is:

I_r(x, y) = I′(x, y)·A_S(x, y)/A_I(x, y)
The choice of W is fairly critical: W cannot be too large, or the gray correction has no effect, and it cannot be too small, or the corrected face image becomes too close to the standard face. Here W is set to 15, which gives the best results. Fig. 7 compares the results before and after illumination processing: 7A is the face image after global gray normalization; 7B is the face image after gray correction according to the embodiment of the invention.
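The local step divides each pixel by the input's W × W neighborhood mean and multiplies by the standard face's; a numpy sketch using an integral image for the box means (the edge-replication padding is an assumption, chosen so border pixels also get a full window):

```python
import numpy as np

def box_mean(img, w):
    """Mean over a w x w neighborhood of every pixel (edges replicated)."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)        # integral image
    h, wd = img.shape
    s = (ii[w:w + h, w:w + wd] - ii[:h, w:w + wd]
         - ii[w:w + h, :wd] + ii[:h, :wd])    # O(1) window sums
    return s / (w * w)

def local_gray_correct(img, std_img, w=15):
    """I_r = I' * A_S / A_I: scale each pixel by the ratio of the
    standard face's local mean to the input's local mean."""
    return img * box_mean(std_img, w) / box_mean(img, w)
```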
The feature-extraction step 204 in Fig. 2 and step 304 in Fig. 3 is a crucial link in face recognition. Commonly used features include gray-level features, edge features, wavelet features and Gabor features. Gabor features provide a multi-scale, multi-orientation fine description of the face image, exhibit outstanding time-frequency localization, and have a strong ability to characterize details and local structure. They have band-pass filtering properties, can partially resist the influence of slowly varying illumination, and can also eliminate some high-frequency noise. Moreover, the impulse response of the two-dimensional Gabor filter closely resembles the response of simple cells in the mammalian visual cortex to image signals, giving it a solid theoretical foundation. The embodiments of the invention therefore select Gabor features when realizing face recognition.
The impulse response of the two-dimensional Gabor filter, in its customary form, is expressed as:

ψ_j(z) = (‖k_j‖²/σ²)·exp(−‖k_j‖²‖z‖²/(2σ²))·[exp(i·k_j·z) − exp(−σ²/2)]

where σ = 2π. Five frequencies ν = 0, …, 4 and eight orientations μ = 0, …, 7 are considered, with k_j = k_ν·exp(i·φ_μ), k_ν = 2^(−(ν+2)/2)·π and φ_μ = μπ/8.
At each point of the face image, 5 frequencies × 8 orientations = 40 Gabor features are computed, by convolving the input image with the impulse response of each frequency and orientation:

G_j(x) = ∫ I_r(x′)·ψ_j(x − x′) dx′

To improve the efficiency of the Gabor computation, this convolution can be accelerated with the FFT: take the FFT of I_r(x′) and of ψ_j(x′) separately, multiply the transformed results, and apply the inverse FFT, which yields the Gabor features of every image point for a given frequency and orientation. The total number of Gabor features is 5 × 8 × 44 × 48 = 84480. This amount of data is very large, and directly training and recognizing with such high-dimensional features with a classification algorithm would be very difficult; feature selection is therefore needed to reduce the dimensionality substantially.
The Gabor feature of each face image has dimension as high as 84480, the total number of training samples exceeds 10,000, and the classifier is trained with a one-versus-rest SVM. An AdaBoost-based feature-selection algorithm, combined with the one-versus-rest classification scheme and the positive and negative sample data, can therefore pick out the few thousand most discriminative dimensions — for example 2000 — from these features. The selected features form a new low-dimensional Gabor feature vector, and after selection a one-versus-rest support vector machine (SVM) is trained for the different users. In this way both the computational cost of the training algorithm and the storage size of the face model are greatly reduced. During authentication, the algorithm only needs to compute the Gabor features of the face, pick out the low-dimensional features according to the existing selection result, and classify the low-dimensional feature vector.
The AdaBoost-based feature-selection method adopted by the embodiment of the invention is briefly introduced below, as shown in Fig. 8:
Step 801: two classes of samples are given; the total sample number is L, with Lp positive samples and Ln negative samples.
Step 802: initialization — the weight of each positive sample is set to 1/(2Lp) and of each negative sample to 1/(2Ln).
First, weights are set for the positive and negative image-sample sets. In one specific embodiment, the negative sample set as a whole is given weight 1/2 and the positive sample set as a whole weight 1/2. In other embodiments the split can differ — for example, weight 2/5 for the negative set and 3/5 for the positive set; that is, the set weights can be chosen as required. Afterwards, a weight is set for each individual sample: in one specific embodiment, each positive sample receives 1/Lp of the positive-set weight and each negative sample 1/Ln of the negative-set weight. Important positive or negative samples may of course be given higher weights.
Step 803: iterate over rounds t = 1, 2, …, T.
Step 804: consider all features not yet selected. Train a weak classifier on each single feature, obtaining the optimal threshold parameters from the weights of the training set so that the weighted error rate over all samples is minimized; each weak classifier and its corresponding feature thus receive an error rate.
The j-th weak classifier h_j(x) uses preset thresholds and the j-th feature G_j(x) of each image sample to judge whether the sample is positive or negative, from which the weighted error rate of the classifier can be counted. Each weak classifier handles exactly one feature and can be expressed as:

h_j(x) = 1, if low_θ_j < G_j(x) < high_θ_j;  h_j(x) = 0, otherwise

where low_θ_j and high_θ_j are the low and high thresholds of weak classifier h_j(x). If the value of the j-th feature G_j(x) of the current image sample is greater than the low threshold and below the high threshold, h_j(x) outputs 1, judging the current sample positive; otherwise h_j(x) outputs 0, judging it negative. The low and high thresholds of h_j(x) are set according to the weights of the image samples.
Concretely, regarding the classification of image samples by a weak classifier: the j-th weak classifier h_j(x) first judges from the j-th feature G_j(x) of the 1st image sample whether that sample is positive or negative, then judges the 2nd sample from its j-th feature G_j(x), and so on, until it judges the L-th image sample from its j-th feature G_j(x).
Step 805: count the error rate of each weak classifier h_j(x); select the predetermined number of weak classifiers with the smallest error rates and take their corresponding features as this round's feature-selection result.

Each weak classifier h_j(x) judges all L image samples as positive or negative, and some of those judgements are wrong: h_j(x) may take a negative sample for positive, or a positive sample for negative. Summing the weights of the image samples it misjudges gives the weighted error rate of h_j(x). Afterwards, the features corresponding to the predetermined number of weak classifiers with the smallest error rates are taken as this round's selection result. In one embodiment the predetermined number is 1; it may also be 2 or 3 and so on, set by the operator according to the actual situation.
Step 806: reduce the weights of the image samples that the selected weak classifier judged correctly, increase the weights of those it judged wrongly, and normalize the updated weights so that the weights of all samples sum to 1. Return to step 803 and enter the next round of iteration, until the set number of rounds is completed and the predetermined number of features has been selected.
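Steps 801–806 can be sketched as a small loop. For brevity this sketch uses a single threshold per weak classifier rather than the low/high pair of the patent, and the β re-weighting of correctly classified samples is one common AdaBoost choice — both are assumptions:

```python
import numpy as np

def adaboost_select(feats, labels, rounds):
    """feats: (L, F) feature matrix; labels in {0, 1}.
    Returns `rounds` selected feature indices, best first."""
    L, F = feats.shape
    # step 802: 1/(2*Lp) per positive sample, 1/(2*Ln) per negative
    w = np.where(labels == 1, 0.5 / (labels == 1).sum(),
                 0.5 / (labels == 0).sum()).astype(float)
    chosen = []
    for _ in range(rounds):                         # step 803
        best = (None, 2.0, None)                    # (feature, error, preds)
        for j in range(F):                          # step 804
            if j in chosen:
                continue
            for thr in feats[:, j]:                 # candidate thresholds
                pred = (feats[:, j] >= thr).astype(int)
                err = w[pred != labels].sum()       # weighted error rate
                if err < best[1]:
                    best = (j, err, pred)
        j, err, pred = best                         # step 805
        chosen.append(j)
        beta = max(err, 1e-6) / max(1.0 - err, 1e-6)
        w = np.where(pred == labels, w * beta, w)   # step 806: shrink correct
        w /= w.sum()                                # normalize to sum 1
    return chosen
```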
The selection method above targets two-class problems. For multi-class problems, the selection method can be designed in accordance with the realization architecture of the pattern-classification algorithm. If the classification algorithm uses a one-versus-rest framework, the feature-selection process is decomposed into several two-class problems, where in each problem one class is a given sample class and the other class comprises all remaining samples. If it uses a one-versus-one framework, the multi-class problem is decomposed into several pairwise two-class problems, one class being one input class and the second class another. Feature selection then has to consider several AdaBoost flows like that of Fig. 8, run synchronously: the error rates returned by the round-t weak classifiers of all AdaBoost modules are accumulated, and the feature with the smallest total error rate is returned as this round's selection result. After each round of selection, the weights are updated according to the current error rate of each AdaBoost module, and the next group of features is selected.
The support vector machine (SVM) of step 201 in Fig. 2 and step 305 in Fig. 3 is a pattern-recognition method developed from statistical learning theory; the algorithm was originally proposed for the optimal separating plane in the linearly separable case. Consider the two-class linearly separable situation of Fig. 9. Let the sample set be (x_i, y_i), i = 1, …, n, x ∈ R^d, y ∈ {+1, −1}, where y_i is the class label of pattern x_i. H: w·x + b = 0 is the separating interface; H1 and H2 are the two planes parallel to H at distance 1/‖w‖ from it, and the distance between them is called the margin. The basic idea of the support vector machine is to find an optimal linear separating plane that makes the margin as large as possible — i.e. ‖w‖ as small as possible — while keeping classification errors on the training set as few as possible. Solving for the optimal separating plane is in fact a quadratic-function extremum problem under inequality constraints, whose optimal solution is:

w = Σ_i α_i·y_i·x_i

where the α_i are weights. For most samples α_i is zero; the few samples with non-zero α_i are exactly the support vectors, i.e. the samples lying on the two planes H1 and H2. The optimal classification function is then

f(x) = sgn(Σ_i α_i·y_i·(x_i·x) + b)

where sgn(·) is the sign function; f(x) = 1 means the input is recognized as a first-class sample, i.e. y = 1, otherwise as a second-class sample. Replacing the dot product of feature vectors in the formula above with an inner-product kernel satisfying the Mercer condition extends the linear SVM to the generalized non-linear SVM:

f(x) = sgn(Σ_i α_i·y_i·K(x_i, x) + b)

Different inner-product functions lead to different SVM algorithms, such as the polynomial kernel, the sigmoid function and the radial basis function (RBF). Compared with the linear SVM, the non-linear SVM extends the optimal separating surface to a non-linear one and can classify many linearly inseparable cases, so classification accuracy also improves. When realizing face recognition we adopt the SVM algorithm based on the RBF kernel:

K(x_i, x) = exp(−‖x − x_i‖²/(2σ²))
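The RBF decision value Σᵢ αᵢ·yᵢ·K(xᵢ, x) + b can be sketched as below; the support vectors, coefficients and γ value are illustrative only, not a trained model:

```python
import numpy as np

def rbf_svm_decision(x, sv, y, alpha, b, gamma=0.5):
    """d(x) = sum_i alpha_i * y_i * exp(-gamma * ||x - sv_i||^2) + b,
    with gamma = 1/(2*sigma^2); the predicted class is sgn(d(x))."""
    k = np.exp(-gamma * np.sum((np.asarray(sv) - np.asarray(x)) ** 2, axis=1))
    return float(np.dot(np.asarray(alpha) * np.asarray(y), k) + b)
```

A positive value classifies the input into the classifier's positive class; the raw value (before taking the sign) is also what the one-versus-rest scheme later compares across classifiers.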
When used for multi-class face recognition, SVMs can be realized in one-versus-one or one-versus-rest form. A one-versus-one SVM trains a classifier for every pair of sample classes, so with N classes, N·(N−1)/2 classifiers must be trained. During recognition the sample is fed into each SVM classifier in turn, and each judgement eliminates one class; if both classes of a given classifier have already been eliminated, that classifier is skipped, and the class remaining after all judgements is the recognition result. The main problem of the one-versus-one classifier is that training considers only the training samples of the two classes concerned, wasting the large amount of negative sample data; such a classifier cannot reject negative samples and therefore cannot be applied to the face-authentication algorithm.
The one-versus-rest SVM algorithm only needs to train one classifier per class: at each training run the positive samples are the training data of that class, while the negative samples comprise the data of the other classes and all the negative sample data. Because this method takes the numerous negative samples into account, the optimal separating interface obtained after training can separate the current class from the other classes more exactly; when realizing automatic authentication of multiple faces, the one-versus-rest SVM algorithm therefore has very good practical value.
The authentication process of the one-versus-rest SVM is also fairly simple: the selected features of the input sample are fed into the N SVM classifiers. If all classifiers reject the input feature vector, the input face is considered dissimilar to every class in the training library, and the algorithm outputs a rejection. If the input feature vector passes exactly one classifier and is rejected by all the others, the class of that classifier is the face-recognition result. The remaining special case is that the input feature vector passes more than one SVM classifier, so the algorithm considers it similar to several classes. From our experimental results this situation is very rare, because during classifier training the samples of each class serve as negative samples for the other classes; it can still occur, however, when faces of different classes are quite similar. We then take a simple method to resolve it, because every one-versus-rest SVM outputs a decision value for each sample:

d(x) = Σ_i α_i·y_i·K(x_i, x) + b

To a certain extent this value reflects how close the input sample is to the corresponding class, and how large its gap from the corresponding negative samples is: the larger the value, the more similar the input sample is to the current class and the more it differs from the other classes. The special case is therefore handled according to the size of the decision values: the values returned by the SVM classifiers that did not reject the input are sorted, and the class corresponding to the largest value is taken as the face-recognition result. Although this is an approximate result, in practice the method works very well.
The similarity computation of steps 205 and 206 in Fig. 2 is described below in connection with the one-versus-rest SVM.
The face-authentication algorithm based on one-versus-rest SVM outputs, for each input face, the decision values J_n(x), n = 0, 1, …, N−1, reflecting the similarity of the current face to each class. The decision values of several frames are combined per class into new values L_n(x), n = 0, 1, …, N−1, which reflect the multi-frame identification information and serve as the recognition similarity of each class. Let the similarity obtained over the first k−1 frames be L_n^{k−1}(x), and let the SVM decision values of the k-th frame be J_n^k(x); the similarity formula for frame k is then:

L_n^k(x) = L_n^{k−1}(x) + J_n^k(x)

That is, the face similarities are accumulated, but the accumulated value is limited between fixed minimum and maximum bounds. When the J_n(x) of some class stays above zero over consecutive frames, the total similarity L_n(x) increases gradually; otherwise it decreases gradually. If at frame k the L_n^k(x) of all classes are below zero, the face is rejected; if one or more L_n^k(x) are above zero, the class with the largest L_n^k(x) is taken as the face-authentication result. Fig. 10A shows one frame of input face, and Fig. 10B shows the output result: each person's L_n^k(x) is displayed as a histogram, and the black line in the middle is the similarity decision threshold, which the present invention sets directly to zero.
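The per-frame accumulation with clipping can be sketched as follows; the clip bounds are assumptions, since the text only states that the accumulated value is limited above and below:

```python
def update_similarity(prev, decisions, lo=-5.0, hi=5.0):
    """L_n^k = clip(L_n^{k-1} + J_n^k, lo, hi), one value per class."""
    return [min(hi, max(lo, p + d)) for p, d in zip(prev, decisions)]

def decide(sim):
    """Class index of the largest positive similarity, or None (reject)."""
    best = max(range(len(sim)), key=lambda n: sim[n])
    return best if sim[best] > 0 else None
```

Clipping keeps the running score responsive: a long run of positive frames cannot build up a score so large that a change of person in front of the camera takes many frames to overturn.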
In summary, the recognition target of the present invention is mainly the frontal face, and the input camera is a video camera. When the present invention realizes face authentication, the method taken is image-based: on the basis of face detection and key-organ localization, a statistical recognition algorithm identifies the input face. The frontal face in the image is detected first, then the positions of the two eye center points and the mouth center point are obtained. Based on these three points the face sample is normalized and illumination-processed, the multi-resolution, multi-orientation Gabor features are extracted, and the processed Gabor features are input into the support vector machine (SVM) for training and recognition. In addition, since the input of the present invention is the image sequence captured by a video camera, the authentication results of multiple face frames can be integrated on the basis of face tracking, improving the precision and stability of face authentication.
Claims (16)
1. A real-time face authentication method, characterized by comprising the steps of:
(1) training with face sample images to obtain one support-vector-machine face model for each user to be authenticated;
(2) acquiring the video images input by a camera, searching for and detecting the frontal face in the images, and continuously tracking and verifying it to ensure that the tracked face is frontal;
(3) automatically calibrating the organ feature points in the frontal face, and preprocessing the detected face accordingly;
(4) computing the Gabor features of the preprocessed face image, and picking out from the high-dimensional Gabor features the most discriminative partial Gabor features to form a low-dimensional feature vector;
(5) inputting the selected low-dimensional feature vector into said face models for face recognition, returning similarity data with respect to each face model;
(6) outputting the final face authentication result according to said returned similarity data.
2. The method of claim 1, characterized in that said step (6) outputs the final face authentication result by combining the similarity values of consecutive image frames.
3, the method for claim 1 is characterized in that, described step (1) comprising:
(1A) at each user that need authenticate, gather positive sample and anti-sample facial image;
(1B) all sample people faces are demarcated, determined the human face characteristic point position in people's face sample;
(1C) according to calibration result, all sample images are carried out pre-service;
(1D) calculating is picked out the strongest part Gabor feature of classification capacity and is formed low dimensional feature vector through the Gabor feature of pretreated sample image from higher-dimension Gabor feature;
(1E) utilize described low dimensional feature vector, adopt support vector machine that the different authentication user is trained, obtain one support vector machine faceform for each user.
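As a non-limiting sketch of steps (1D) and (1E): the selection of the most discriminative Gabor features is shown here with a per-feature Fisher score, and the per-user SVM is trained by subgradient descent on the hinge loss. Both choices are assumptions — the patent does not fix the discriminability criterion or a particular SVM solver:

```python
import numpy as np

def fisher_scores(X, y):
    """Discriminative power of each feature: (mu+ - mu-)^2 / (var+ + var-)."""
    pos, neg = X[y == 1], X[y == -1]
    return (pos.mean(0) - neg.mean(0)) ** 2 / (pos.var(0) + neg.var(0) + 1e-9)

def select_features(X, y, k):
    """Indices of the k most discriminative features (claim step 1D)."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Linear SVM by subgradient descent on the regularized hinge loss
    (one such model is trained per enrolled user, claim step 1E)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1.0                       # samples violating the margin
        gw = lam * w - (y[mask, None] * X[mask]).sum(0) / n
        gb = -y[mask].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

At authentication time, the signed decision value `X @ w + b` plays the role of the per-model similarity score returned in step (5).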
4. The method of claim 1 or 3, wherein said organ feature points comprise eye and mouth feature points.
5. The method of claim 4, wherein said preprocessing step comprises:
(A1) according to the calibrated eye positions, computing affine transformation coefficients between the calibration points of the input face image and those of a standard face image, using an affine transformation algorithm;
(B1) using said affine transformation coefficients and a predetermined scale of the face window to be recognized, obtaining an eye-based cropped face image;
(C1) according to said affine transformation coefficients, computing the points A, B and C in the cropped image that correspond to the eye and mouth feature points in the input image, and from the eye positions A and B determining the eye midpoint D and the standard mouth center point Cstad;
(D1) correcting point C in the y direction so that the y coordinate of the corrected mouth center C′ coincides with the y coordinate of Cstad;
(E1) taking the line DC′ as a dividing line: if C′ lies to the right of Cstad, compressing the face on the left of the dividing line and stretching the face on the right; if C′ lies to the left of Cstad, compressing the face on the right of the dividing line and stretching the face on the left; whereby the points on the line DC′ are corrected onto the vertical central axis DCstad of the face.
6. The method of claim 5, further comprising:
(F1) setting the pixel value at (X, Y) in the corrected cropped image to the average of all pixel values in a neighborhood of the corresponding point (Xort, Yort), wherein said corresponding point (Xort, Yort) is obtained by applying the affine transformation coefficients to the pre-correction coordinates (X′, Y′) of the point (X, Y).
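For illustration of steps (A1)-(B1), the eye-based alignment can be sketched with a two-point similarity transform (rotation, uniform scale, translation — a restricted form of the affine transform recited in the claim), applied by inverse-mapping each output pixel with nearest-neighbor sampling. The canonical eye positions, the 48x48 window size, and the omission of the mouth-based compress/stretch correction of steps (C1)-(E1) are all simplifying assumptions:

```python
import numpy as np

STD_LEFT_EYE, STD_RIGHT_EYE = (12.0, 16.0), (36.0, 16.0)  # assumed canonical positions

def similarity_coeffs(src, dst):
    """Coefficients (a, b, tx, ty) of T(x, y) = (a*x - b*y + tx, b*x + a*y + ty),
    mapping the two source points exactly onto the two destination points."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    dx, dy, du, dv = x2 - x1, y2 - y1, u2 - u1, v2 - v1
    denom = dx * dx + dy * dy
    a = (dx * du + dy * dv) / denom   # scale * cos(rotation)
    b = (dx * dv - dy * du) / denom   # scale * sin(rotation)
    return a, b, u1 - a * x1 + b * y1, v1 - b * x1 - a * y1

def align_face(img, left_eye, right_eye, size=48):
    """Eye-based crop: inverse-map every output pixel into the source image."""
    a, b, tx, ty = similarity_coeffs((left_eye, right_eye),
                                     (STD_LEFT_EYE, STD_RIGHT_EYE))
    s2 = a * a + b * b
    out = np.zeros((size, size), dtype=img.dtype)
    for v in range(size):
        for u in range(size):
            x = (a * (u - tx) + b * (v - ty)) / s2    # inverse of T
            y = (-b * (u - tx) + a * (v - ty)) / s2
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                out[v, u] = img[yi, xi]
    return out
```

By construction the detected eye centers land exactly on the canonical eye positions of the cropped window, which is what makes the subsequent Gabor features comparable across samples.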
7. The method of claim 1 or 3, wherein said preprocessing step comprises:
(A2) determining a standard grayscale face image, and computing the mean P_s and variance σ_s of its gray levels;
(B2) computing the mean P and variance σ of the gray levels of the current input face image, and performing global intensity normalization of the input face image using the formula I′(x, y) = [I(x, y) − P]·σ_s/σ + P_s, wherein I(x, y) is the pixel gray value of the input face image before global normalization and I′(x, y) is the pixel gray value of the input face image after global normalization.
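For illustration only, step (B2) can be sketched as below; the guard against a zero input variance is an added safeguard not recited in the claim:

```python
import numpy as np

def global_gray_normalize(img, std_mean, std_sigma):
    """Global intensity normalization, claim 7 step (B2):
    I'(x, y) = [I(x, y) - P] * sigma_s / sigma + P_s."""
    p, sigma = img.mean(), img.std()
    return (img - p) * std_sigma / max(sigma, 1e-9) + std_mean
```

After this step the input image has the same global mean and standard deviation as the standard face image, removing overall brightness and contrast differences between capture conditions.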
8. The method of claim 7, wherein said preprocessing step further comprises:
(C2) performing local gray-level normalization of the current input face image using the formula I_r(x, y) = I′(x, y)·A_S(x, y)/A_I(x, y), wherein A_I(x, y) is the mean gray level in the neighborhood of pixel (x, y) in the current input image, A_S(x, y) is the mean gray level in the neighborhood of the corresponding pixel (x, y) in said standard face image, and I_r(x, y) is the pixel gray value of the input face image after local normalization.
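Step (C2) can likewise be sketched as follows; the square neighborhood radius and the small constant guarding against division by zero are assumptions, as the claim does not specify the neighborhood shape or size:

```python
import numpy as np

def local_gray_normalize(img, std_img, radius=3):
    """Local gray-level normalization, claim 8 step (C2):
    I_r(x, y) = I'(x, y) * A_S(x, y) / A_I(x, y),
    with A_S, A_I the neighborhood means of the standard and input images."""
    def local_mean(a):
        H, W = a.shape
        out = np.empty((H, W), dtype=float)
        for i in range(H):
            for j in range(W):
                out[i, j] = a[max(0, i - radius):i + radius + 1,
                              max(0, j - radius):j + radius + 1].mean()
        return out
    return img * local_mean(std_img) / (local_mean(img) + 1e-9)
```

Dividing by the local mean of the input and multiplying by that of the standard image flattens uneven illumination (side-lighting, shadows) that the global normalization of claim 7 cannot correct.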
9. The method of claim 2, wherein the similarity score of said consecutive frames in step (6) is computed by a formula in which L_k^n(x) is the similarity of each face model at frame k; J_n(x), n = 0, 1, …, N−1, is the similarity judgment value returned by each face model for the current face image; and L_{k−1}^n(x), n = 0, 1, …, N−1, is the similarity over the preceding k−1 frames.
10. The method of claim 9, wherein, if L_k^n(x) is less than zero for all face models at frame k, the face is rejected; and if one or more L_k^n(x) are greater than zero, the face model corresponding to the largest L_k^n(x) is taken as the face authentication result.
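The decision rule of claims 9-10 can be sketched as follows. The source does not reproduce the exact accumulation formula for L_k^n(x), so a plain running sum of the per-frame judgment values J_n(x) is used here as an assumed stand-in:

```python
def fuse_and_decide(frame_scores, names):
    """Multi-frame face authentication decision (claims 9-10).

    frame_scores: one list of per-model similarity values J_n(x) per frame.
    Accumulates L_k^n(x) as a running sum (an assumption), then rejects
    when every accumulated score is negative, otherwise returns the model
    with the largest accumulated score.
    """
    totals = [sum(scores) for scores in zip(*frame_scores)]  # L_k^n over all frames
    best = max(range(len(totals)), key=lambda n: totals[n])
    return None if totals[best] < 0 else names[best]
```

Fusing several frames in this way is what lets the system tolerate an occasional bad frame (blink, motion blur) that a single-frame decision would misclassify.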
11. A face model training method, comprising the steps of:
(A) collecting positive-sample and negative-sample face images for each user to be authenticated;
(B) calibrating all sample faces to determine the facial feature point positions in each face sample;
(C) preprocessing all sample images according to the calibration results;
(D) computing the Gabor features of the preprocessed sample images, and selecting from the high-dimensional Gabor features those with the strongest discriminative power to form a low-dimensional feature vector;
(E) using said low-dimensional feature vectors, training a support vector machine for each user to be authenticated, so that one support vector machine face model is obtained per user.
12. The method of claim 11, wherein step (C) comprises:
(C1) determining a standard grayscale face image, and computing the mean P_s and variance σ_s of its gray levels;
(C2) computing the mean P and variance σ of the gray levels of the current input face image, and performing global intensity normalization of the input face image using the formula I′(x, y) = [I(x, y) − P]·σ_s/σ + P_s, wherein I(x, y) is the pixel gray value of the input face image before global normalization and I′(x, y) is the pixel gray value of the input face image after global normalization.
13. A real-time face authentication system, comprising:
a training module, for training on face sample images to obtain one support vector machine face model for each user to be authenticated; and
an authentication module, comprising:
an acquisition and tracking unit, for acquiring the video images input by a camera, searching for and detecting a frontal face in the images, and continuously tracking and verifying it to ensure that the tracked face remains frontal;
a preprocessing unit, for automatically locating organ feature points in the frontal face and preprocessing the detected face accordingly;
a feature calculation unit, for computing the Gabor features of the preprocessed face image;
a feature selection unit, for selecting, from the computed high-dimensional Gabor features, those with the strongest discriminative power to form a low-dimensional feature vector; and
a recognition and authentication unit, for inputting the selected low-dimensional feature vector into said face models, performing face recognition, returning a similarity score for each face model, and outputting the final face authentication result according to the returned similarity scores.
14. The system of claim 13, wherein said recognition and authentication unit combines the similarity scores of consecutive frames to output the final face authentication result.
15. The system of claim 13, wherein said training module comprises:
a sample collection unit, for collecting positive-sample and negative-sample face images for each user to be authenticated;
a preprocessing unit, for preprocessing all sample images according to the calibration of the facial feature point positions in all sample faces;
a feature calculation unit, for computing the Gabor features of the preprocessed face images;
a feature selection unit, for selecting, from the computed high-dimensional Gabor features, those with the strongest discriminative power to form a low-dimensional feature vector; and
a model training unit, for using said low-dimensional feature vectors to train a support vector machine for each user to be authenticated, so that one support vector machine face model is obtained per user.
16. A face model training module, comprising:
a sample collection unit, for collecting positive-sample and negative-sample face images for each user to be authenticated;
a preprocessing unit, for preprocessing all sample images according to the calibration of the facial feature point positions in all sample faces;
a feature calculation unit, for computing the Gabor features of the preprocessed face images;
a feature selection unit, for selecting, from the computed high-dimensional Gabor features, those with the strongest discriminative power to form a low-dimensional feature vector; and
a model training unit, for using said low-dimensional feature vectors to train a support vector machine for each user to be authenticated, so that one support vector machine face model is obtained per user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100120865A CN100458831C (en) | 2006-06-01 | 2006-06-01 | Human face model training module and method, human face real-time certification system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1885310A true CN1885310A (en) | 2006-12-27 |
CN100458831C CN100458831C (en) | 2009-02-04 |
Family
ID=37583458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100120865A Active CN100458831C (en) | 2006-06-01 | 2006-06-01 | Human face model training module and method, human face real-time certification system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100458831C (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6826300B2 (en) * | 2001-05-31 | 2004-11-30 | George Mason University | Feature based classification |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100589117C (en) * | 2007-04-18 | 2010-02-10 | 中国科学院自动化研究所 | Gender recognition method based on gait |
CN101114909B (en) * | 2007-08-17 | 2011-02-16 | 上海博康智能信息技术有限公司 | Full-automatic video identification authentication system and method |
CN101216973B (en) * | 2007-12-27 | 2011-08-17 | 北京中星微电子有限公司 | An ATM monitoring method, system, and monitoring device |
CN101216884B (en) * | 2007-12-29 | 2012-04-18 | 北京中星微电子有限公司 | A method and system for face authentication |
CN102257513A (en) * | 2008-12-19 | 2011-11-23 | 坦德伯格电信公司 | Method for speeding up face detection |
CN102257513B (en) * | 2008-12-19 | 2013-11-06 | 思科***国际公司 | Method for speeding up face detection |
CN101739555A (en) * | 2009-12-01 | 2010-06-16 | 北京中星微电子有限公司 | Method and system for detecting false face, and method and system for training false face model |
CN101739555B (en) * | 2009-12-01 | 2014-11-26 | 北京中星微电子有限公司 | Method and system for detecting false face, and method and system for training false face model |
CN101866416A (en) * | 2010-06-18 | 2010-10-20 | 山东大学 | Fingerprint image segmentation method based on transductive learning |
CN101887524A (en) * | 2010-07-06 | 2010-11-17 | 湖南创合制造有限公司 | Pedestrian detection method based on video monitoring |
CN101887524B (en) * | 2010-07-06 | 2012-07-04 | 湖南创合制造有限公司 | Pedestrian detection method based on video monitoring |
CN102009879A (en) * | 2010-11-18 | 2011-04-13 | 无锡中星微电子有限公司 | Elevator automatic keying control system and method, face model training system and method |
CN102682309B (en) * | 2011-03-14 | 2014-11-19 | 汉王科技股份有限公司 | Face feature registering method and device based on template learning |
CN102682309A (en) * | 2011-03-14 | 2012-09-19 | 汉王科技股份有限公司 | Face feature registering method and device based on template learning |
CN102194107A (en) * | 2011-05-13 | 2011-09-21 | 华南理工大学 | Smiling face recognition method for reducing dimension by using improved linear discriminant analysis |
CN102194107B (en) * | 2011-05-13 | 2013-03-20 | 华南理工大学 | Smiling face recognition method for reducing dimension by using improved linear discriminant analysis |
CN102501648A (en) * | 2011-10-11 | 2012-06-20 | 陕西科技大学 | Official seal use management method and device |
CN103310190B (en) * | 2012-05-16 | 2016-04-13 | 清华大学 | Based on the facial image sample collection optimization method of isomery active vision network |
CN103310190A (en) * | 2012-05-16 | 2013-09-18 | 清华大学 | Facial image sample acquiring and optimizing method based on heterogeneous active vision network |
US9235782B1 (en) | 2012-12-24 | 2016-01-12 | Google Inc. | Searching images and identifying images with similar facial features |
CN103914676A (en) * | 2012-12-30 | 2014-07-09 | 杭州朗和科技有限公司 | Method and apparatus for use in face recognition |
CN103914676B (en) * | 2012-12-30 | 2017-08-25 | 杭州朗和科技有限公司 | A kind of method and apparatus used in recognition of face |
CN103793718B (en) * | 2013-12-11 | 2017-01-18 | 台州学院 | Deep study-based facial expression recognition method |
CN105447823A (en) * | 2014-08-07 | 2016-03-30 | 联想(北京)有限公司 | Image processing method and electronic device |
CN105447823B (en) * | 2014-08-07 | 2019-07-26 | 联想(北京)有限公司 | A kind of image processing method and a kind of electronic equipment |
CN105825163A (en) * | 2015-01-09 | 2016-08-03 | 杭州海康威视数字技术股份有限公司 | Retrieval system and method of face image |
CN104866821B (en) * | 2015-05-04 | 2018-09-14 | 南京大学 | Video object tracking based on machine learning |
CN104866821A (en) * | 2015-05-04 | 2015-08-26 | 南京大学 | Video object tracking method based on machine learning |
CN105654056A (en) * | 2015-12-31 | 2016-06-08 | 中国科学院深圳先进技术研究院 | Human face identifying method and device |
CN105740758A (en) * | 2015-12-31 | 2016-07-06 | 上海极链网络科技有限公司 | Internet video face recognition method based on deep learning |
CN107239763A (en) * | 2017-06-06 | 2017-10-10 | 合肥创旗信息科技有限公司 | Check class attendance system based on recognition of face |
CN107657216A (en) * | 2017-09-11 | 2018-02-02 | 安徽慧视金瞳科技有限公司 | 1 to the 1 face feature vector comparison method based on interference characteristic vector data collection |
CN108875791A (en) * | 2018-05-25 | 2018-11-23 | 国网山西省电力公司电力科学研究院 | Transmission line of electricity external force based on radar destroys target identification method and system |
CN109934948B (en) * | 2019-01-10 | 2022-03-08 | 宿迁学院 | Novel intelligent sign-in device and working method thereof |
CN109934948A (en) * | 2019-01-10 | 2019-06-25 | 宿迁学院 | A kind of novel intelligent is registered device and its working method |
CN109918998A (en) * | 2019-01-22 | 2019-06-21 | 东南大学 | A kind of big Method of pose-varied face |
CN109918998B (en) * | 2019-01-22 | 2023-08-01 | 东南大学 | Large-gesture face recognition method |
CN111062995A (en) * | 2019-11-28 | 2020-04-24 | 重庆中星微人工智能芯片技术有限公司 | Method and device for generating face image, electronic equipment and computer readable medium |
CN111062995B (en) * | 2019-11-28 | 2024-02-23 | 重庆中星微人工智能芯片技术有限公司 | Method, apparatus, electronic device and computer readable medium for generating face image |
CN111476189A (en) * | 2020-04-14 | 2020-07-31 | 北京爱笔科技有限公司 | Identity recognition method and related device |
CN111476189B (en) * | 2020-04-14 | 2023-10-13 | 北京爱笔科技有限公司 | Identity recognition method and related device |
CN111310743A (en) * | 2020-05-11 | 2020-06-19 | 腾讯科技(深圳)有限公司 | Face recognition method and device, electronic equipment and readable storage medium |
CN111310743B (en) * | 2020-05-11 | 2020-08-25 | 腾讯科技(深圳)有限公司 | Face recognition method and device, electronic equipment and readable storage medium |
CN112232310A (en) * | 2020-12-09 | 2021-01-15 | 中影年年(北京)文化传媒有限公司 | Face recognition system and method for expression capture |
CN112860931A (en) * | 2021-01-18 | 2021-05-28 | 广东便捷神科技股份有限公司 | Construction method of face recognition library, face payment method and system |
CN112860931B (en) * | 2021-01-18 | 2023-11-03 | 广东便捷神科技股份有限公司 | Construction method of face recognition library |
CN113553971A (en) * | 2021-07-29 | 2021-10-26 | 青岛以萨数据技术有限公司 | Method and device for extracting optimal frame of face sequence and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN100458831C (en) | 2009-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1885310A (en) | Human face model training module and method, human face real-time certification system and method | |
CN1862487A (en) | Screen protection method and apparatus based on human face identification | |
CN101339607A (en) | Human face recognition method and system, human face recognition model training method and system | |
US11263432B2 (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
CN102708361B (en) | Human face collecting method at a distance | |
CN1794264A (en) | Method and system of real time detecting and continuous tracing human face in video frequency sequence | |
CN102426649B (en) | Simple steel seal digital automatic identification method with high accuracy rate | |
CN1822024A (en) | Positioning method for human face characteristic point | |
CN101930543B (en) | Method for adjusting eye image in self-photographed video | |
CN106529414A (en) | Method for realizing result authentication through image comparison | |
CN101615292B (en) | Accurate positioning method for human eye on the basis of gray gradation information | |
CN107438854A (en) | The system and method that the image captured using mobile device performs the user authentication based on fingerprint | |
CN1152340C (en) | Fingerprint image enhancement method based on knowledge | |
CN101059836A (en) | Human eye positioning and human eye state recognition method | |
He et al. | Real-time human face detection in color image | |
AU2017370720A1 (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
CN105389593A (en) | Image object recognition method based on SURF | |
CN1506903A (en) | Automatic fingerprint distinguishing system and method based on template learning | |
CN1801181A (en) | Robot capable of automatically recognizing face and vehicle license plate | |
CN106056064A (en) | Face recognition method and face recognition device | |
CN1977286A (en) | Object recognition method and apparatus therefor | |
CN103577838A (en) | Face recognition method and device | |
CN1794265A (en) | Method and device for distinguishing face expression based on video frequency | |
CN102842032A (en) | Method for recognizing pornography images on mobile Internet based on multi-mode combinational strategy | |
CN104021382A (en) | Eye image collection method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C41 | Transfer of patent application or patent right or utility model | ||
TR01 | Transfer of patent right |
Effective date of registration: 20160516 Address after: 519031 Guangdong city of Zhuhai province Hengqin Baohua Road No. 6, room 105 -478 Patentee after: GUANGDONG ZHONGXING ELECTRONICS CO., LTD. Address before: 100083, Haidian District, Xueyuan Road, Beijing No. 35, Nanjing Ning building, 15 Floor Patentee before: Beijing Vimicro Corporation |