CN101763507B - Face recognition method and face recognition system - Google Patents

Face recognition method and face recognition system

Info

Publication number: CN101763507B (application CN201010034359A)
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN101763507A (Chinese, zh)
Inventor: 邱建华
Current Assignee: Athena Eyes Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY DEVELOPMENT Co Ltd
Application filed by BEIJING ATHENA EYES SCIENCE & TECHNOLOGY DEVELOPMENT Co Ltd
Priority: CN 201010034359, published as CN101763507A, granted as CN101763507B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a face recognition method. The method divides a preprocessed face sample image of fixed size into a plurality of mutually overlapping sub-regions of different sizes, extracts texture features from the sub-regions, selects effective texture features according to preset rules, obtains projection feature values of the effective texture features, and performs face recognition according to the projection feature values of the sub-regions. The invention improves both the processing speed and the recognition accuracy of face recognition.

Description

Face recognition method and face recognition system
Technical field
The present invention relates to pattern recognition technology, and in particular to a face recognition method and a face recognition system.
Background technology
Identity authentication based on biometric features plays an increasingly important role in social life. Among the many biometric authentication methods, recognition and authentication based on facial features have received wide attention because they are non-invasive, low in cost, and unobtrusive, and require no special cooperation from the subject, so they have broad application prospects.
By function, face recognition can be divided into two classes: face identification and face verification. Face identification compares one or more face images of an undetermined identity against all face images stored in a database, to determine which database image, if any, belongs to the same person. Face verification compares one or more face images of a claimed identity against the database images registered under that identity, to determine whether the two come from the same person.
A face recognition method generally requires two stages. The first is the training stage: face images of known identities are used to select the features that work best for recognition and to obtain the parameters of the face model. The second is the application stage: the selected features and model parameters are used to judge face images of unknown identity and determine who they belong to.
The prior art usually adopts methods based on global features or subspaces, treating the face region as a whole and using some feature of the entire face as the descriptive feature. The defect of such methods is that they are easily affected by local illumination, expression changes, or worn items such as glasses, and are also easily disturbed by local noise.
In addition, the prior art also commonly uses multi-scale, multi-orientation Gabor features to describe the face region. These methods must compute the Gabor features of many scales and orientations in one pass, which is very time-consuming and slows down face recognition. In the application stage, every comparison likewise requires computing and comparing all of these features; for a face identification system with too many faces in the database, the system becomes too slow.
The prior art also includes methods that reduce the dimensionality of the extracted features with PCA, determine projection directions with LDA, and classify with a nearest-neighbor rule. However, these methods either select features randomly or simply discard part of the features, losing effective ones; moreover, PCA is a data-representation method, not one designed for classification problems. In brief, the features retained after PCA dimensionality reduction are not the most effective features for classification.
Therefore, a technical problem urgently to be solved by those skilled in the art is how to propose a faster face recognition algorithm with better recognition performance, so as to solve the problems of slow processing and poor accuracy in the prior art.
Summary of the invention
The technical problem to be solved by the present invention is to provide a face recognition method and system, so as to improve the processing speed and recognition accuracy of face recognition.
To solve the above technical problem, an embodiment of the invention discloses a face recognition method, comprising:
dividing a face sample image into a plurality of mutually overlapping sub-regions of different sizes, the face sample image being a preprocessed face image of fixed size;
extracting texture features of the sub-regions;
selecting effective texture features from the texture features according to a preset rule, and obtaining projection feature values of the effective texture features, wherein the step of selecting the effective texture features according to the preset rule comprises: for each sub-region, applying a joint boosting feature-selection algorithm to the texture features extracted from that sub-region to pick out T features as the effective texture features of the current sub-region, T being a positive integer;
performing face recognition according to the projection feature values of the sub-regions, wherein this step further comprises: calculating the distance values of the projection feature values of corresponding sub-regions of each pair of compared face sample images; summing the distance values of the projection feature values of the sub-regions of the current pair of compared face sample images to obtain the face distance value; and judging from the face distance value whether the compared face sample images depict the same person;
wherein the distance value of the projection feature values of a sub-region is calculated as follows: for the two face images being compared, search for sub-regions near the position of the current sub-region, calculate the distance values of the projection feature values between each pair of neighboring sub-regions, and take the minimum of these distance values as the distance value of the projection feature values of the current sub-region.
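The comparison scheme just described (minimum over nearby sub-regions, then a sum over all sub-regions) can be sketched in plain Python. The function names are illustrative, and Euclidean distance is an assumed choice of metric, since the claim does not fix one:

```python
# Sketch of the sub-region comparison strategy: for each sub-region of
# face A, take the minimum distance to the projected features of nearby
# sub-regions of face B, then sum the minima into one face distance.

def feature_distance(u, v):
    """Euclidean distance between two projected feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def subregion_distance(feat_a, nearby_feats_b):
    """Minimum distance from one sub-region of A to its neighbors in B."""
    return min(feature_distance(feat_a, f) for f in nearby_feats_b)

def face_distance(feats_a, neighbors_b):
    """Sum of per-sub-region minimum distances (the face distance value)."""
    return sum(subregion_distance(fa, nb)
               for fa, nb in zip(feats_a, neighbors_b))
```

A threshold on the resulting face distance would then decide whether the two images depict the same person.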
Preferably, the sub-regions are rectangular, the texture feature is the local Gabor binary pattern (LGBP) histogram feature, and the step of extracting the texture features of a sub-region further comprises:
generating a Gabor filter bank of m scales and n orientations, m and n being positive integers;
filtering the face sample image with the Gabor filter of scale m and orientation n centered at pixel position (j, i), to obtain the Gabor feature value of each pixel;
in the Gabor feature image of scale m and orientation n formed by the Gabor feature values, letting the brightness at pixel coordinate (j, i) be G(j, i); taking a circle of radius R centered at point (j, i), and, starting from the point on the circle that lies on the positive horizontal axis, sampling P points at equal intervals on the circle, where R can take several scales and P several values; and letting the Gabor feature values of the sampled points be G_1, G_2, ..., G_P;
calculating the local Gabor binary pattern (LGBP) feature of point (j, i) by the following formula:

LGBP(j, i) = \sum_{q=1}^{P} B(j, i, q) \cdot 2^{q-1}

where B(j, i, q) = 1 if G_q \ge G(j, i), and 0 otherwise;
performing histogram computation from the LGBP features by the following formula:

H_l = \sum_{t \le j \le b,\ l \le i \le r} I(LGBP(j, i) = l), \quad l = 0, \ldots, N - 1,

where I(A) = 1 if A is true and 0 otherwise; N = 2^P, P being the number of points sampled at equal intervals on the circle; and l, t, r, and b are respectively the left-edge abscissa, top-edge ordinate, right-edge abscissa, and bottom-edge ordinate of the rectangle on the face sample image.
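As a rough illustration of the two formulas above, the following sketch computes the LGBP code of one pixel of a Gabor magnitude image and the code histogram over a rectangle. The nearest-neighbor rounding of the circle samples and the function names are assumptions made for illustration, not details taken from the patent:

```python
import math

def lgbp_code(G, j, i, R=1, P=8):
    """LGBP code at pixel (j, i) of a Gabor magnitude image G (list of
    rows).  P points are sampled at equal angles on a circle of radius R
    (nearest-neighbor rounding, an illustrative choice); sample q
    contributes bit 2**(q-1) when it is >= the center value."""
    center = G[j][i]
    code = 0
    for q in range(1, P + 1):
        theta = 2 * math.pi * (q - 1) / P  # start on the positive x-axis
        jj = j + int(round(R * math.sin(theta)))
        ii = i + int(round(R * math.cos(theta)))
        if G[jj][ii] >= center:
            code += 2 ** (q - 1)
    return code

def lgbp_histogram(G, t, b, l, r, P=8):
    """Histogram H_l over the rectangle t<=j<=b, l<=i<=r, N = 2**P bins."""
    H = [0] * (2 ** P)
    for j in range(t, b + 1):
        for i in range(l, r + 1):
            H[lgbp_code(G, j, i, P=P)] += 1
    return H
```

With R = 1 and P = 8 this reduces to the familiar 8-neighbor LBP operator applied to a Gabor magnitude image.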
Preferably, the step of selecting the effective texture features of each sub-region with the joint boosting feature-selection algorithm further comprises:
constructing a sample set composed of the sub-region texture features and assigning each sample an initial weight, the initial weight being determined by a classification factor indicating whether the sample belongs to the same face;
training on the samples with the initial weights through the following sub-steps to pick out T features:
Sub-step S1: normalize the weights of all samples of a given face;
Sub-step S2: determine the weak features in the samples, construct a weak classifier for each weak feature from the normalized weights in the manner of a look-up table (LUT), and obtain the error cost of each weak classifier;
Sub-step S3: take the weak feature whose weak classifier has the minimum error cost in this round of training as the t-th selected feature, t = 1, 2, ..., T;
Sub-step S4: update the sample weights based on the t-th selected feature;
Sub-step S5: judge whether T features have been selected; if not, return to sub-step S1; if so, end the training.
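Sub-steps S1-S5 follow the familiar AdaBoost loop. The sketch below substitutes simple threshold stumps for the LUT weak classifiers described above (an assumed simplification) and selects T feature indices from weighted, binary-labeled samples:

```python
import math

def select_features(X, y, T):
    """AdaBoost-style selection of T feature indices.
    X: list of samples, each a list of feature values; y: labels +1/-1.
    Decision stumps stand in for the LUT weak learners of the text."""
    n = len(X)
    w = [1.0 / n] * n                                   # initial weights
    chosen = []
    for _ in range(T):
        s = sum(w)
        w = [wi / s for wi in w]                        # sub-step S1
        best = None                                     # (err, feat, thr, pol)
        for f in range(len(X[0])):                      # sub-step S2
            for thr in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if (pol if x[f] >= thr else -pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol)
        err, f, thr, pol = best                         # sub-step S3
        chosen.append(f)
        alpha = 0.5 * math.log((1 - err + 1e-10) / (err + 1e-10))
        w = [wi * math.exp(-alpha * yi *                # sub-step S4
                           (pol if x[f] >= thr else -pol))
             for wi, x, yi in zip(w, X, y)]
    return chosen                                       # sub-step S5 done
```

Each round re-normalizes the weights, scores every candidate weak classifier under the current weights, keeps the best feature, and up-weights the samples that feature misclassifies.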
Preferably, the step of obtaining the projection feature values of the effective texture features comprises:
obtaining a linear transformation of the effective texture features by linear discriminant analysis;
projecting the effective texture features of each sub-region through the linear transformation, to obtain the corresponding projection feature values.
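A minimal two-class sketch of this projection step, assuming Fisher's linear discriminant (the patent does not spell out its exact LDA variant, so the scatter-matrix formulation and names below are illustrative):

```python
import numpy as np

def fisher_direction(Xa, Xb):
    """Two-class Fisher linear discriminant: w ~ Sw^{-1} (mu_a - mu_b),
    where Sw is the within-class scatter (lightly regularized)."""
    mu_a, mu_b = Xa.mean(axis=0), Xb.mean(axis=0)
    Sw = (np.cov(Xa.T, bias=True) * len(Xa)
          + np.cov(Xb.T, bias=True) * len(Xb))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_a - mu_b)
    return w / np.linalg.norm(w)

def project(X, w):
    """Projection feature values along the discriminant direction."""
    return X @ w
```

Projecting each sub-region's selected texture features through such a direction yields the scalar projection feature values used in the later comparison step.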
Preferably, the step of performing face recognition according to the projection feature values of the sub-regions further comprises:
generating positive sample features and negative sample features, a positive sample feature being the distance value of the projection feature values of corresponding sub-regions of two face images of the same person, and a negative sample feature being the distance value of the projection feature values of corresponding sub-regions of two face images of different people;
training on the positive and negative sample features with the Real AdaBoost algorithm to obtain a strong classifier serving as the face-model classifier, the strong classifier containing a confidence parameter for judging whether a face sample image to be recognized belongs to the face model;
inputting the face sample image to be recognized into the strong classifier, and judging from the confidence parameter whether it belongs to the same face.
An embodiment of the invention also discloses a face recognition system, comprising:
a sub-region dividing module, for dividing a face sample image into a plurality of mutually overlapping sub-regions of different sizes, the face sample image being a preprocessed face image of fixed size;
a feature extraction module, for extracting the texture features of the sub-regions;
an effective-feature selection module, for selecting effective texture features from the texture features according to a preset rule; the effective-feature selection module further comprises a joint-boosting feature-selection sub-module, for applying, for each sub-region, a joint boosting feature-selection algorithm to the texture features extracted from that sub-region to pick out T features as the effective texture features of the current sub-region, T being a positive integer;
a projective transformation module, for obtaining the projection feature values of the effective texture features;
a recognition processing module, for performing face recognition according to the projection feature values of the sub-regions; the recognition processing module further comprises:
a neighboring-sub-region search sub-module, for searching, in the two face images being compared, for sub-regions near the position of the current sub-region, and calculating the distance values of the projection feature values between each pair of neighboring sub-regions;
a summed-distance calculation sub-module, for summing the distance values of the projection feature values of the sub-regions of the current pair of compared face sample images to obtain the face distance value;
a recognition judgment sub-module, for judging from the face distance value whether the current pair of compared face sample images depicts the same person;
a projection-feature distance determination sub-module, for taking the minimum of the distance values of the projection feature values between the neighboring sub-regions as the distance value of the projection feature values of the current sub-region;
a comparison-distance calculation sub-module, for calculating the distance values of the projection feature values of corresponding sub-regions of each pair of compared face sample images.
Preferably, the sub-regions are rectangular, the texture feature is the local Gabor binary pattern histogram feature, and the feature extraction module further comprises:
a Gabor feature calculation sub-module, for generating a Gabor filter bank of m scales and n orientations, m and n being positive integers, and filtering the face sample image with the Gabor filter of scale m and orientation n centered at pixel position (j, i), to obtain the Gabor feature value of each pixel;
an LGBP feature calculation sub-module, for taking, in the Gabor feature image of scale m and orientation n formed by the Gabor feature values, the brightness at pixel coordinate (j, i) as G(j, i); taking a circle of radius R centered at point (j, i), and, starting from the point on the circle that lies on the positive horizontal axis, sampling P points at equal intervals on the circle, where R can take several scales and P several values; letting the Gabor feature values of the sampled points be G_1, G_2, ..., G_P; and
calculating the local Gabor binary pattern (LGBP) feature of point (j, i) by the following formula:

LGBP(j, i) = \sum_{q=1}^{P} B(j, i, q) \cdot 2^{q-1}

where B(j, i, q) = 1 if G_q \ge G(j, i), and 0 otherwise;
a histogram calculation sub-module, for performing histogram computation from the LGBP features by the following formula:

H_l = \sum_{t \le j \le b,\ l \le i \le r} I(LGBP(j, i) = l), \quad l = 0, \ldots, N - 1,

where I(A) = 1 if A is true and 0 otherwise; N = 2^P, P being the number of points sampled at equal intervals on the circle; and l, t, r, and b are respectively the left-edge abscissa, top-edge ordinate, right-edge abscissa, and bottom-edge ordinate of the rectangle on the face sample image.
Preferably, the projective transformation module further comprises:
a linear transformation sub-module, for obtaining a linear transformation of the effective texture features by linear discriminant analysis;
a projection feature obtaining sub-module, for projecting the effective texture features of each sub-region through the linear transformation, to obtain the corresponding projection feature values.
Preferably, the recognition processing module further comprises:
a sample feature generation sub-module, for generating positive sample features and negative sample features, a positive sample feature being the distance value of the projection feature values of corresponding sub-regions of two face images of the same person, and a negative sample feature being the distance value of the projection feature values of corresponding sub-regions of two face images of different people;
a Real AdaBoost sub-module, for training on the positive and negative sample features with the Real AdaBoost algorithm to obtain a strong classifier serving as the face-model classifier, the strong classifier containing a confidence parameter for judging whether a face sample image to be recognized belongs to the face model;
a confidence judgment sub-module, for inputting the face sample image to be recognized into the strong classifier and judging from the confidence parameter whether it belongs to the same face.
Preferably, the system further comprises:
a neighboring-sub-region search module, for searching, in the two face images being compared, for sub-regions near the position of the current sub-region, and calculating the distance values of the projection feature values between each pair of neighboring sub-regions;
a projection-feature distance determination module, for taking the minimum of the distance values of the projection feature values between the neighboring sub-regions as the distance value of the projection feature values of the current sub-region.
Compared with the prior art, the present invention has the following advantages:
First, the invention divides the face into a plurality of mutually overlapping sub-regions of different sizes and represents each sub-region with texture features that have strong descriptive power for detail. This region-based face representation overcomes, on the one hand, the sensitivity of global representations to local variation; on the other hand, by using features over local neighborhoods, it also overcomes the sensitivity of per-pixel features to feature-point localization and face alignment. Moreover, in a preferred embodiment of the invention, the extracted sub-region texture feature is the local Gabor binary pattern histogram feature, which is more robust to the errors caused by feature-point localization and face alignment, and each of whose features also has stronger descriptive power.
Second, for the extracted sub-region texture features, the invention further applies a feature-selection algorithm to pick the features most effective for the classifier, and uses linear discriminant analysis to obtain the projection directions most effective for classification, so that training requires very little memory and the amount of computation is reduced.
Third, the invention takes the projection feature values of the sub-regions as features and selects the parts of the sub-regions most effective for classification to construct the final classifier. This classifier can reject most image pairs using only a few features, thereby effectively raising processing speed, particularly comparison speed on large databases.
In addition, in actual use, for each corresponding sub-region of a face in the database, the system can also search near its position to obtain several sub-regions, find the database sub-region whose projection features have the minimum distance to those of the corresponding sub-region of the current face, take that minimum as the projection feature value of the current sub-region, and feed it into the subsequent integrated classifier for calculation. Because of errors in face localization algorithms, faces often cannot be aligned exactly; the method of the invention reduces the impact of alignment error, improves processing speed, and achieves better results.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of an embodiment of a face recognition method of the present invention;
Fig. 2 is a structural block diagram of an embodiment of a face recognition system of the present invention.
Embodiment
To make the above objects, features, and advantages of the present invention more apparent, the invention is described in further detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, a flow chart of an embodiment of a face recognition method of the present invention is shown; the method may specifically comprise the following steps:
Step 101: divide a face sample image into a plurality of mutually overlapping sub-regions of different sizes, the face sample image being a preprocessed face image of fixed size;
Step 102: extract the texture features of the sub-regions;
Step 103: select effective texture features from the texture features according to a preset rule, and obtain the projection feature values of the effective texture features;
Step 104: perform face recognition according to the projection feature values of the sub-regions.
It should be noted that the embodiments of the invention focus on the feature extraction and classifier construction of face recognition. For front-end image capture, face detection and tracking, facial feature-point localization, face-region extraction, and face-image preprocessing, prior-art techniques suffice. For example, one procedure for preprocessing a face image is:
Sub-step A1: capture face images of the user to be authenticated; the face images may include positive sample images and negative sample images;
Sub-step A2: mark the positions of the facial feature points in the face images;
In practice, the positions of feature points such as the eyes, nose, and mouth of the face can be obtained from all sample images by manual marking or by an automatic localization method.
Sub-step A3: normalize the size and gray level of the face images according to the positions of the feature points;
Sub-step A4: extract an image of a preset size from each normalized face image as the effective face image.
It can be understood that a face sample image in the embodiments of the invention refers to a face image after face detection, feature-point localization, and alignment normalization, whose size is fixed at a set value; for example, with the width and height of the face image denoted FW and FH respectively, one may take, without limitation, FW = 64 and FH = 80.
For the sake of clarity, each step of the present embodiment is described in detail below.
1. Explanation of step 101:
To guarantee face recognition accuracy, the invention adopts a face-image feature extraction method that makes full use of the detail and region information of the face: the face sample image is divided into mutually overlapping sub-regions of different sizes, and the face is represented by the features on all sub-regions. This sub-region-based representation overcomes, on the one hand, the sensitivity of global representations to local variation; at the same time, by using features over local neighborhoods, it also overcomes the sensitivity of per-pixel features to feature-point localization and face alignment.
In a specific implementation, the sub-regions are preferably rectangular: a series of rectangles of different sizes is selected, and the features on each sub-region are computed separately. To represent the face well, a sub-region must not be too small, so that it differs from a per-pixel feature and can tolerate some alignment error; and the sub-region sizes must cover a certain range, so that there are both relatively local features and near-global ones. In one concrete example, the spacing between neighboring sub-regions of the same scale may be 1 pixel, or more than 1 pixel. Understandably, the smaller the spacing, the more sub-regions are obtained and the stronger the representation, but the slower the processing; the larger the spacing, the fewer the sub-regions and the weaker the representation, but the faster the processing. As an example, the aspect ratio of the sub-regions may be 1:1, 1:2, or 2:1, and the sub-region width may be 16, 20, 24, 28, 32, and so on. Suppose N_Block sub-regions are obtained in total, denoted R_i, i = 1, 2, ..., N_Block.
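The sub-region enumeration described above can be sketched as follows. The widths and aspect ratios are the example values from the text; the 4-pixel spacing is an arbitrary illustrative choice within the "1 pixel or more" range:

```python
def make_subregions(fw=64, fh=80, widths=(16, 20, 24, 28, 32),
                    ratios=((1, 1), (1, 2), (2, 1)), step=4):
    """Enumerate overlapping rectangles (x, y, w, h) over an FW x FH
    face image.  `step` is the spacing between neighboring rectangles
    of the same size."""
    regions = []
    for w0 in widths:
        for rw, rh in ratios:
            w, h = w0 * rw, w0 * rh
            if w > fw or h > fh:       # skip sizes that do not fit
                continue
            for y in range(0, fh - h + 1, step):
                for x in range(0, fw - w + 1, step):
                    regions.append((x, y, w, h))
    return regions
```

Shrinking `step` toward 1 pixel trades speed for representational power, exactly as the paragraph above describes.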
2. Explanation of step 102:
Many choices exist for the facial features extracted on the sub-regions. Embodiments of the invention preferably extract texture features. Texture generally refers to the pattern of gray-level variation among the pixels of an image as perceived by a viewer, and it is a basic and important image characteristic. The more effective texture descriptors at present include the Gabor features of the pixels on a sub-region, and the LBP (Local Binary Pattern) features on a sub-region.
Gabor features are obtained by two-dimensional convolution of the image with Gabor filters of a series of scales and orientations. The Gabor filter was first proposed by Daugman, who pointed out that the Gabor wavelet kernel has the same characteristics as the two-dimensional receptive fields of simple cells in the visual cortex; in the spatial domain, a Gabor filter can be regarded as a sinusoidal plane wave modulated by a Gaussian function, and it captures local structural information with selectivity for spatial frequency (scale), spatial position, and orientation. Gabor features have been applied very successfully to image representation, and many face recognition methods that achieve good results adopt them as the face representation. The LBP feature encodes the magnitude relationships between a pixel and its neighborhood pixels to obtain the texture feature of an image region; LBP has also achieved good results in texture recognition and face recognition applications.
However, Gabor features are easily affected by illumination variation, their dimensionality is very high, and their range of values is very large; moreover, raw Gabor features can only be used per pixel, which makes it inconvenient to use the statistics of a local region as a feature. Therefore, a preferred embodiment of the present invention proposes the local Gabor binary pattern histogram feature as the preferred sub-region texture feature.
The so-called local Gabor binary pattern starts from the m × n Gabor magnitude images obtained above, m*n images in total, each of the same size as the face sample image, namely FW × FH; the value of each pixel of each image is the Gabor feature at that pixel position for that orientation and scale, Gabor(m, n, j, i). In a specific implementation, the Gabor features can be computed quickly by converting the convolution into a frequency-domain product: for the Gabor feature of a given scale and orientation, first compute the spectrum of the image by FFT, multiply it by the spectrum of the Gabor kernel of the corresponding scale, and apply the IFFT to obtain the corresponding spatial-domain signal, which is the Gabor image of the current scale and orientation. Evidently, the Gabor images of different scales and orientations can be computed independently of one another.
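The FFT-based filtering described here can be sketched with NumPy. The Gabor kernel parametrization below is an illustrative one (Gaussian-modulated complex sinusoid), not the patent's exact formula, and the frequency-domain product implements circular convolution:

```python
import numpy as np

def gabor_kernel(size, scale, theta):
    """A simple spatial Gabor kernel: Gaussian envelope times a complex
    sinusoid along orientation theta (illustrative parametrization)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    sigma = scale
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.exp(1j * 2 * np.pi * xr / (4 * sigma)))

def gabor_magnitude_fft(img, kernel):
    """Filter via the frequency domain: FFT the image, multiply by the
    FFT of the zero-padded kernel, inverse-FFT, take the magnitude."""
    fh, fw = img.shape
    K = np.fft.fft2(kernel, s=(fh, fw))
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * K))
```

One such magnitude image per (scale, orientation) pair then feeds the LGBP computation; as the text notes, the m*n images are independent and could be computed in parallel.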
In general, in a preferred embodiment of the present invention, step 102 may comprise the following sub-steps:
Sub-step B1: obtain the Gabor features of the face sample image at m scales and n orientations, m and n being positive integers;
Sub-step B2: compute the local binary pattern feature of each pixel of the Gabor feature images, namely the local Gabor binary pattern (LGBP) feature;
Sub-step B3: perform histogram computation on the LGBP features, to obtain the LGBP histogram feature of each sub-region.
That is, the present embodiment preferably uses the LGBP histogram on a sub-region as the face representation feature on that sub-region. Compared with the LGBP feature itself, the LGBP histogram feature is more robust to the errors caused by face-alignment steps such as feature-point localization, and, because it exploits regional neighborhood information to some extent, each of its features also has stronger descriptive power.
To help those skilled in the art better understand the present invention, a concrete example of extracting the LGBP histogram feature of a subregion is given below:
All the Gabor filters at m scales and n directions form what is called the global filter bank, denoted G(m × n); in practice one may take m = 5, n = 8. This yields a feature vector of m·n·FW·FH components. Filtering the face sample image with the Gabor filter at scale m and direction n, centered at pixel position (j, i), gives the Gabor feature denoted Gabor(m, n, j, i).
For the Gabor image at direction n and scale m, the Gabor feature at position (j, i) is abbreviated G(j, i). Its LBP image is further obtained by the following processing:
Suppose the brightness of the point with pixel coordinates (j, i) in the Gabor feature image is G(j, i). Take a circle of radius R centered at the point (j, i), and, starting from the point on the circle in the positive direction of the horizontal coordinate axis, take P equally spaced points on the circle; R may take several scales and P several values.
Suppose the Gabor feature values of these points are G_1, G_2, ..., G_P. The LGBP feature of the point (j, i) is then defined by the following formula:
LGBP(j, i) = Σ_{1≤q≤P} B(j, i, q) · 2^(q−1)
wherein B(j, i, q) = 1 if G_q ≥ G(j, i), and 0 otherwise;
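The bit-packing defined by the two formulas above can be sketched as follows; nearest-neighbor sampling of the circle points (rather than interpolation) and the wrap-around handling of border pixels are simplifying assumptions of this example:

```python
import numpy as np

def lgbp_image(gabor_mag, radius=1, n_points=8):
    """LGBP code of every pixel of one Gabor magnitude image:
    B(j, i, q) = 1 if the q-th circle sample >= the center value,
    and the bits are packed as the sum of B(j, i, q) * 2**(q-1).
    Border pixels use wrapped neighbors, a simplification."""
    h, w = gabor_mag.shape
    codes = np.zeros((h, w), dtype=np.int32)
    for q in range(n_points):
        angle = 2.0 * np.pi * q / n_points       # q-th equally spaced point
        dj = int(round(radius * np.sin(angle)))  # row offset of the sample
        di = int(round(radius * np.cos(angle)))  # column offset of the sample
        shifted = np.roll(np.roll(gabor_mag, -dj, axis=0), -di, axis=1)
        codes += (shifted >= gabor_mag).astype(np.int32) << q
    return codes
```

With n_points = 8 the codes lie in [0, 255], matching the N = 2^P = 256 histogram bins described next.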
For a rectangular area on the face sample image, the LGBP histogram feature is further computed from the LGBP features as follows:
Suppose the rectangular area is R(l1, t, r, b), whose four edge coordinates on the face image are respectively the left-edge abscissa l1, top-edge ordinate t, right-edge abscissa r, and bottom-edge ordinate b. The LGBP histogram feature is then computed by the following formula:
H_{l2} = Σ_{t≤j≤b, l1≤i≤r} I(LGBP(j, i) == l2), l2 = 0, ..., N−1,
wherein I(A) = 1 if A is true and 0 if A is false, and l2 ranges over the values of the feature image; N is the number of possible LGBP values, namely 2^P (the maximum value being 2^P − 1), where P is the number of equally spaced points taken on the circle. In a concrete example, when P = 8, N is 256.
It can be seen that, if the face sample image is represented by the LGBP histograms over all subregions, the feature length of each subregion is N.
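Given an LGBP code image, the histogram formula above amounts to a bounded count over the rectangle; a minimal sketch (the function name and the inclusive-edge convention are assumptions):

```python
import numpy as np

def lgbp_histogram(codes, l1, t, r, b, n_bins=256):
    """H_{l2} = number of pixels in the rectangle with left edge l1,
    top edge t, right edge r, bottom edge b (edges inclusive) whose
    LGBP code equals l2, for l2 = 0, ..., n_bins - 1."""
    region = codes[t:b + 1, l1:r + 1]
    return np.bincount(region.ravel(), minlength=n_bins)
```

Concatenating such N-length histograms over all subregions gives the representation whose per-subregion feature length is N.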
Of course, the above subregion division method and texture feature extraction method are only examples; those skilled in the art may adopt any corresponding method according to actual conditions, and the present invention is not limited in this respect.
Three, for the explanation of step 103:
For the current subregion, the feature vector is N-dimensional. However, not every dimension of this N-dimensional vector is effective for classification, and computing all of them would require a large amount of computation. To reduce computation and save memory resources, a better method is to process the vector with feature selection or dimensionality reduction, obtaining the most effective U of the N dimensions and using these U dimensions to build the final classification feature of the subregion. A simple method is PCA dimensionality reduction, but the features obtained by PCA are not the most effective for classification. Therefore, to meet the memory requirement, in an embodiment of the present invention it is preferred that:
for all subregions, from the texture features extracted for each subregion, a joint boosting feature selection algorithm is used to pick out T features as the effective texture features of the current subregion, T being a positive integer.
The core idea of this algorithm is to use the intra-class/inter-class method to reduce face recognition from a multi-class problem to a set of two-class problems.
To help those skilled in the art better understand the present invention, a concrete example of the joint boosting feature selection algorithm used in an embodiment of the present invention is given below; it may comprise the following substeps:
Substep C1: construct the sample set composed of the subregion texture features, and assign an initial weight to each sample;
For example, suppose there are C people in the training samples, c = 1, 2, ..., C, so there are C two-class problems in total. The total number of samples is N, of which person c has n_c samples. Each sample is denoted x_i and carries a label y_i indicating which person it belongs to. The sample weights must then be initialized separately for the C two-class classification problems:
w_{1,i}^c = 1/(2·n_c) if y_i = c, and w_{1,i}^c = 1/(2·(N − n_c)) otherwise;
the superscript c of w indicates the sample weight value under the c-th two-class problem.
Substep C2: start training according to the following procedure, to pick out T features in total, t = 1, 2, ..., T;
C21: normalize the weight values; that is, the weights must be normalized for each two-class problem c. For example, for the c-th two-class problem the normalization is:
w_{t,i}^c = w_{t,i}^c / Σ_{i=1}^{N} w_{t,i}^c
C22: suppose the total number of features is M, j = 1, 2, ..., M; in each two-class problem c, train a weak classifier h_j^c for each weak feature f_j.
The weak classifier may be constructed in the look-up-table (LUT) manner. Under the c-th two-class problem, the construction of the weak classifier and the computation of its error cost parameter cost are as follows:
The LUT divides the value range of the feature into K intervals, each sample falling into exactly one interval k ∈ {1, ..., K}.
The positive-sample LUT is:
h_j^{c,+}(k) = (1 / W^{c,+}) · Σ_{i: y_i = c, f_j(x_i) ∈ interval k} w_{t,i}^c
The negative-sample LUT is:
h_j^{c,−}(k) = (1 / W^{c,−}) · Σ_{i: y_i ≠ c, f_j(x_i) ∈ interval k} w_{t,i}^c
wherein W^{c,+} and W^{c,−} are respectively the sums of the weight values of the positive and negative samples under the c-th two-class problem.
The weak classifier obtained is:
h_j^c(x) = +1 if h_j^{c,+}(k(x)) ≥ h_j^{c,−}(k(x)), and −1 otherwise, where k(x) is the interval that f_j(x) falls into.
The cost of the weak classifier is: R_j^c = Σ_{k=1}^{K} min(h_j^{c,+}(k), h_j^{c,−}(k));
C23: take the feature j whose total cost Σ_c R_j^c over all two-class problems is minimum as the t-th selected feature f_t;
C24: update the sample weight values by the following formula:
w_{t+1,i}^c = w_{t,i}^c · exp(−μ_i^c · f_t^c(x_i | w_t^c));
wherein μ_i^c = +1 when y_i = c, and μ_i^c = −1 otherwise.
C25: if T features have been picked out, training ends; otherwise, return to step C21 to continue the feature selection.
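Substeps C1 to C25 can be sketched as follows. This is a simplified, illustrative rendering of joint boosting: the number of LUT intervals, the assumption that feature values lie in [0, 1), and the exclusion of already selected features are choices made for the example, not details fixed by the text:

```python
import numpy as np

def joint_boost_select(features, labels, n_select, n_bins=16):
    """Sketch of joint boosting feature selection. For every person c
    a weight vector over samples is kept for the two-class problem
    "person c vs. the rest" (substep C1); each round every feature
    column is scored by its LUT cost summed over all problems
    (C21-C22), the cheapest feature is picked (C23), and the weights
    are updated (C24). Assumes feature values in [0, 1)."""
    n_samples, n_feats = features.shape
    classes = np.unique(labels)
    w = {}
    for c in classes:                       # balanced initial weights
        pos = labels == c
        w[c] = np.where(pos, 1.0 / (2 * pos.sum()), 1.0 / (2 * (~pos).sum()))
    bins = np.clip((features * n_bins).astype(int), 0, n_bins - 1)
    selected = []
    for _ in range(n_select):
        cost = np.zeros(n_feats)
        for c in classes:
            wc = w[c] / w[c].sum()          # C21: normalize weights
            pos = labels == c
            for j in range(n_feats):        # C22: LUT cost of feature j
                hp = np.bincount(bins[pos, j], weights=wc[pos], minlength=n_bins)
                hn = np.bincount(bins[~pos, j], weights=wc[~pos], minlength=n_bins)
                cost[j] += np.minimum(hp / max(hp.sum(), 1e-12),
                                      hn / max(hn.sum(), 1e-12)).sum()
        cost[selected] = np.inf             # do not pick a feature twice
        best = int(np.argmin(cost))         # C23: cheapest feature overall
        selected.append(best)
        for c in classes:                   # C24: reweight the samples
            pos = labels == c
            hp = np.bincount(bins[pos, best], weights=w[c][pos], minlength=n_bins)
            hn = np.bincount(bins[~pos, best], weights=w[c][~pos], minlength=n_bins)
            pred = np.where(hp[bins[:, best]] >= hn[bins[:, best]], 1.0, -1.0)
            mu = np.where(pos, 1.0, -1.0)
            w[c] *= np.exp(-mu * pred)
    return selected
```

A column that separates one person from the rest gets a near-zero summed cost and is therefore picked early.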
Of course, the above feature selection algorithm is only an example; those skilled in the art may adopt any feature selection algorithm of the prior art according to actual conditions, and the present invention is not limited in this respect.
On the T features most effective for classification selected as above on each subregion, a projection matrix can further be obtained by the multi-class linear discriminant analysis method. This projection matrix is composed of several projection vectors, and projecting the effective texture features onto it yields several projection feature values. For the multi-class linear discriminant analysis method, those skilled in the art may refer to the prior art; it is not detailed here.
For each subregion, the projection feature values of the effective texture features satisfy the requirements that the feature distances of the same person are small, the feature distances of different people are large, and the two are easy to distinguish.
Of course, the above computation method of the projection feature values is only an example; those skilled in the art may adopt any corresponding method according to actual conditions, and the present invention is not limited in this respect.
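Since the text defers the multi-class LDA details to the prior art, the following is only a generic Fisher LDA sketch under standard assumptions (within-class scatter Sw, between-class scatter Sb, leading eigenvectors of Sw^{-1} Sb; the small ridge added to Sw is an illustrative regularization):

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Generic multi-class Fisher LDA: build the within-class scatter
    Sw and between-class scatter Sb, then keep the leading
    eigenvectors of Sw^{-1} Sb as the projection matrix."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    Sw += 1e-6 * np.eye(d)                     # ridge keeps Sw invertible
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_components]].real  # shape (d, n_components)

def projection_values(W, x):
    """Projection feature values of one effective-feature vector."""
    return x @ W
```

Projecting each subregion's T selected features onto such a matrix yields projection feature values whose within-person distances are small and between-person distances are large, as required above.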
Four, for the explanation of step 104:
Using the projection feature values of all subregions as features, a classifier selection and construction algorithm can be used to build the face model classifier. This classifier is composed of the parts of the subregions selected as most effective for classification. Using this classifier for face recognition, most images can be rejected with fewer features, which improves the comparison speed on large-scale databases.
In a preferred embodiment of the present invention, step 104 may comprise the following substeps:
Substep D1: compute the distance value of the projection feature values of the corresponding subregions of each group of compared face sample images;
Substep D2: compute the sum of the distance values of the projection feature values of the subregions in the currently compared face sample images as the face comparison distance value;
Substep D3: judge, according to the face comparison distance value, whether the currently compared face sample images depict the same person.
It should be noted that, in practice, a comparison group usually consists of two compared faces.
As another preferred embodiment of the present invention, step 104 may comprise the following substeps:
Substep E1: compute the distance value of the projection feature values of the corresponding subregions of each group of compared face sample images;
Substep E2: judge, according to the distance values of the projection feature values between the subregions of the compared faces, whether the currently compared face sample images depict the same person.
As yet another preferred embodiment of the present invention, step 104 may comprise the following substeps:
Substep F1: generate positive sample features and negative sample features; a positive sample feature is the distance value of the projection feature values of corresponding subregions of two face images belonging to the same person, and a negative sample feature is the distance value of the projection feature values of corresponding subregions of two face images belonging to different people;
Substep F2: train on the positive and negative sample features with the Real-AdaBoost algorithm, obtaining a strong classifier as the face model classifier, the strong classifier containing a confidence parameter for judging whether a face sample image to be recognized belongs to the face model;
Substep F3: input the face sample image to be recognized into the strong classifier, and judge by the confidence parameter whether it belongs to the same person's face.
To help those skilled in the art better understand the present invention, a concrete example of training the strong classifier with the Real-AdaBoost algorithm is given below:
F11: given the global feature training set L = {(x_i, y_i)}, i = 1, ..., n, where y_i ∈ {+1, −1} is the class the sample belongs to and x_i ∈ X is the distance value of the projection feature values of the corresponding subregion;
F12: initialize the weight of each element in the global feature training set as: D_1(i) = 1/n, i = 1, ..., n;
F13: obtain T weak classifiers through T iterations, t = 1, ..., T; the process of the t-th iteration is:
1) on the current distribution D_t, use the weak classifier construction and selection algorithm to obtain the best weak classifier of this round, h_t: X → ℝ;
2) update the sample weights as D_{t+1}(i) = D_t(i) · exp(−y_i · h_t(x_i)) / Z_t;
wherein Z_t = Σ_i D_t(i) · exp(−y_i · h_t(x_i)) is the normalization factor;
F14: output the final strong classifier: H(x) = sign(Σ_{t=1}^{T} h_t(x) − b).
In each iteration round, the weak classifier construction algorithm may use the domain-partitioning weak hypotheses method to build weak classifiers, and the one minimizing the upper bound of the classification error is chosen as this round's output. To help those skilled in the art better understand the present invention, an example of the weak classifier construction algorithm is given below:
On the distribution D_t, construct the corresponding weak classifier for each feature in the candidate feature space as follows:
1) divide the sample space into n different intervals X_1, ..., X_n, with X_1 ∪ X_2 ∪ ... ∪ X_n = X and X_i ∩ X_j = ∅ for i ≠ j;
2) on the distribution D_t, compute for each interval the weighted class masses:
W_l^j = Σ_{i: x_i ∈ X_j, y_i = l} D_t(i), where l = ±1;
3) for each x ∈ X_j, set the output of the corresponding weak classifier to:
h(x) = (1/2) · ln((W_{+1}^j + ε) / (W_{−1}^j + ε)),
wherein ε << 1/(2N);
4) compute Z = 2 · Σ_j sqrt(W_{+1}^j · W_{−1}^j);
5) from all constructed weak classifiers, select the h that minimizes Z as the weak classifier finally output by this round.
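Steps F11 to F14 together with the domain-partitioning construction can be sketched on a one-dimensional feature (the subregion distance value). The equal-width binning, the epsilon value, and the threshold b = 0 are assumptions of this example:

```python
import numpy as np

def real_adaboost_train(x, y, n_rounds, n_bins=8, eps=1e-3):
    """Real-AdaBoost with domain-partitioning weak hypotheses on a
    1-D feature. Each round: partition the feature range into bins,
    compute the weighted class masses W+ and W- per bin, set
    h(bin) = 0.5*ln((W+ + eps)/(W- + eps)), then reweight (F13)."""
    n = len(x)
    D = np.full(n, 1.0 / n)                     # F12: uniform weights
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    bins = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    luts = []
    for _ in range(n_rounds):
        wp = np.bincount(bins, weights=D * (y == 1), minlength=n_bins)
        wn = np.bincount(bins, weights=D * (y == -1), minlength=n_bins)
        lut = 0.5 * np.log((wp + eps) / (wn + eps))
        luts.append(lut)
        D *= np.exp(-y * lut[bins])             # weight update
        D /= D.sum()                            # Z_t normalization
    def strong(x_new, b=0.0):
        """F14: sign(sum_t h_t(x) - b); the sum is the confidence."""
        bn = np.clip(np.searchsorted(edges, x_new, side="right") - 1,
                     0, n_bins - 1)
        score = sum(lut[bn] for lut in luts)
        return np.sign(score - b), score
    return strong
```

Here the returned score plays the role of the confidence parameter described below: large positive values indicate the sample pair belongs to the same person.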
Through the above training process, each subregion obtains a classifier H(x) = sign(Σ_{t=1}^{T} h_t(x) − b). Define conf(x) = Σ_{t=1}^{T} h_t(x) − b as the classification confidence parameter of the region classifier; the value of this confidence measures well the degree to which a sample belongs to the model, given the current features of the sample subregion.
In a preferred embodiment of the present invention, the number of features selected for each subregion may be taken as any number in [1, T], with the corresponding LDA linear transformation computed accordingly. The T projection features of the subregion are each used as features, and Real-AdaBoost is used to select among them the projection features most effective for classification as the final ones of the current subregion. The benefit over the former method is that the value of T has less influence on the final result, and since more classification-effective features are chosen, the amount of computation can be further reduced.
As a concrete application in face recognition, in an embodiment of the present invention, the distance value of the projection feature values of a subregion may also be computed by the following steps:
for the two face images participating in the comparison, search for the other subregions near the position of the current subregion, and compute the distance values of the projection feature values between each pair of neighboring-position subregions;
take the minimum of these distance values between neighboring-position subregions as the distance value of the projection feature values of the current subregion. Because of the errors of the face localization algorithm, faces often cannot be aligned precisely; the above method reduces the impact of alignment errors and improves the processing result.
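The neighborhood-minimum distance just described can be sketched as follows, assuming the subregions are laid out on a regular grid and Euclidean distance is used (the grid layout and the distance metric are illustrative assumptions):

```python
import numpy as np

def subregion_distance(proj_a, proj_b, r, c, radius=1):
    """Shift-tolerant distance of subregion (r, c): compare image A's
    projection vector at (r, c) with image B's vectors at the same and
    neighboring grid positions, keeping the minimum distance."""
    rows, cols, _ = proj_b.shape
    best = np.inf
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                d = float(np.linalg.norm(proj_a[r, c] - proj_b[rr, cc]))
                best = min(best, d)
    return best

def face_distance(proj_a, proj_b, radius=1):
    """Face comparison distance: the sum over all subregions of the
    per-subregion minimum distances (substeps D1 and D2)."""
    rows, cols, _ = proj_a.shape
    return sum(subregion_distance(proj_a, proj_b, r, c, radius)
               for r in range(rows) for c in range(cols))
```

With radius = 0 this degenerates to the plain corresponding-subregion distance; a small positive radius absorbs localization error of roughly one subregion step.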
It should be noted that, for simplicity of description, the method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions, since according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
Referring to FIG. 2, which shows the structural diagram of an embodiment of a face recognition system of the present invention, the system may comprise the following modules:
a subregion division module 201, configured to divide a face sample image into a plurality of mutually overlapping subregions of different sizes, the face sample image being a preprocessed face image of fixed size;
a feature extraction module 202, configured to extract the texture features of the subregions;
an effective feature selection module 203, configured to select effective texture features from the texture features according to a preset rule;
a projective transformation module 204, configured to obtain the projection feature values of the effective texture features;
a recognition processing module 205, configured to perform face recognition according to the projection feature values of the subregions.
In a preferred embodiment of the present invention, the subregions may be rectangular areas and the texture features may be local Gabor binary pattern histogram features; in this case, the feature extraction module 202 may further comprise the following submodules:
a Gabor feature computation submodule, configured to generate the Gabor filter bank consisting of m scales and n directions, m and n being positive integers, and to filter the face sample image with the Gabor filter at scale m and direction n, centered at pixel position (j, i), obtaining the Gabor feature value of each pixel;
an LGBP feature computation submodule, configured to, in the Gabor feature images of the m scales and n directions characterized by the Gabor feature values, set the brightness of the point with pixel coordinates (j, i) to G(j, i), take a circle of radius R centered at the point (j, i), and, starting from the point on the circle in the positive direction of the horizontal coordinate axis, take P equally spaced points on the circle, R taking several scales and P several values; set the Gabor feature values of these points to G_1, G_2, ..., G_P;
and to compute the local Gabor binary pattern LGBP feature of the point (j, i) by the following formula:
LGBP(j, i) = Σ_{1≤q≤P} B(j, i, q) · 2^(q−1)
wherein B(j, i, q) = 1 if G_q ≥ G(j, i), and 0 otherwise;
a histogram computation submodule, configured to perform histogram computation on the LGBP features by the following formula:
H_{l2} = Σ_{t≤j≤b, l1≤i≤r} I(LGBP(j, i) == l2), l2 = 0, ..., N−1,
wherein I(A) = 1 if A is true and 0 if A is false; N is 2^P, P being the number of equally spaced points taken on the circle; l1 is the left-edge abscissa on the face sample image, t the top-edge ordinate, r the right-edge abscissa, b the bottom-edge ordinate, and l2 ranges over the values of the feature image.
In a preferred embodiment of the present invention, the effective feature selection module 203 may further comprise the following submodule:
a joint boosting feature selection invocation submodule, configured to, for each subregion, pick out from its extracted texture features T features as the effective texture features of the current subregion using the joint boosting feature selection algorithm, T being a positive integer.
In a concrete example, the joint boosting feature selection algorithm is mainly used to perform the following processing:
constructing the sample set composed of the subregion texture features, and assigning an initial weight to each sample, the initial weight being determined by the classification factor of whether the samples depict the same face;
training on the samples according to the initial weights through the following substeps, to pick out T features:
Substep S1: normalize the weight values for all samples of a given face;
Substep S2: determine the weak features in the samples, construct a corresponding weak classifier for each weak feature in the look-up-table (LUT) manner according to the normalized weight values, and obtain the error cost parameter of each weak classifier;
Substep S3: take the weak feature corresponding to the weak classifier with the minimum error cost parameter in this round of training as the t-th selected feature, where t = 1, 2, ..., T;
Substep S4: update the sample weight values based on the t-th selected feature;
Substep S5: judge whether T features have been picked out; if not, return to substep S1; if so, end the training.
In a preferred embodiment of the present invention, the projective transformation module 204 may further comprise the following submodules:
a linear transformation submodule, configured to obtain the linear transformation of the effective texture features with the linear discriminant analysis method;
a projection feature obtaining submodule, configured to project the effective texture features of each subregion onto the linear transformation to obtain the corresponding projection feature values.
In a preferred embodiment of the present invention, the recognition processing module 205 may further comprise the following submodules:
a summed distance computation submodule, configured to compute the sum of the distance values of the projection feature values of the subregions in the currently compared face sample images as the face comparison distance value;
a recognition decision submodule, configured to judge, according to the face comparison distance value, whether the currently compared face sample images depict the same person.
As another preferred embodiment of the present invention, the recognition processing module 205 may further comprise the following submodules:
a comparison distance computation submodule, configured to compute the distance value of the projection feature values of the corresponding subregions of each group of compared face sample images;
a comparison decision submodule, configured to judge, according to the distance values of the projection feature values between the subregions of the compared faces, whether the currently compared face sample images depict the same person.
As yet another preferred embodiment of the present invention, the recognition processing module 205 may further comprise the following submodules:
a sample feature generation submodule, configured to generate positive sample features and negative sample features, a positive sample feature being the distance value of the projection feature values of corresponding subregions of two face images belonging to the same person, and a negative sample feature being the distance value of the projection feature values of corresponding subregions of two face images belonging to different people; a Real-AdaBoost invocation submodule, configured to train on the positive and negative sample features with the Real-AdaBoost algorithm, obtaining a strong classifier as the face model classifier, the strong classifier containing a confidence parameter for judging whether a face sample image to be recognized belongs to the face model;
a confidence decision submodule, configured to input the face sample image to be recognized into the strong classifier and judge by the confidence parameter whether it belongs to the same person's face.
In the concrete use of face recognition, the embodiment of the present invention may also comprise the following modules to compute the distance value of the projection feature values of a subregion:
a neighboring subregion search module, configured to, for the two face images participating in the comparison, search for the other subregions near the position of the current subregion and compute the distance values of the projection feature values between each pair of neighboring-position subregions;
a projection feature distance determination module, configured to take the minimum of the distance values of the projection feature values between the neighboring-position subregions as the distance value of the projection feature values of the current subregion.
As for the system embodiment, since it is basically similar to the method embodiment shown in FIG. 1, its description is relatively simple, and for the relevant parts reference may be made to the description of the method embodiment.
The present invention can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments comprising any of the above systems or devices, and so on.
The present invention can be described in the general context of computer-executable instructions executed by a computer, for example program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present invention can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media including storage devices.
A face recognition method and a face recognition system provided by the present invention have been described in detail above. Specific examples are used herein to set forth the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for a person of ordinary skill in the art, there will be changes in the specific embodiments and application scope according to the idea of the present invention. In summary, this description should not be construed as limiting the present invention.

Claims (9)

1. A face recognition method, characterized by comprising:
dividing a face sample image into a plurality of mutually overlapping subregions of different sizes, the face sample image being a preprocessed face image of fixed size;
extracting texture features of the subregions;
selecting effective texture features from the texture features according to a preset rule, and obtaining projection feature values of the effective texture features; the step of selecting effective texture features according to the preset rule comprising: for all subregions, from the texture features extracted for each subregion, using a joint boosting feature selection algorithm to pick out T features as the effective texture features of the current subregion, T being a positive integer;
performing face recognition according to the projection feature values of the subregions; the step of performing face recognition according to the projection feature values of the subregions further comprising: computing the distance value of the projection feature values of the corresponding subregions of each group of compared face sample images; computing the sum of the distance values of the projection feature values of the subregions in the currently compared face sample images as the face comparison distance value; and judging, according to the face comparison distance value, whether the currently compared face sample images depict the same person;
wherein the distance value of the projection feature values of a subregion is computed by the following steps: for the two face images participating in the comparison, searching for the other subregions near the position of the current subregion, and computing the distance values of the projection feature values between each pair of neighboring-position subregions; and taking the minimum of the distance values of the projection feature values between the neighboring-position subregions as the distance value of the projection feature values of the current subregion.
2. the method for claim 1 is characterized in that, described subregion is the rectangular area, and described textural characteristics is local Gabor binary pattern histogram feature, and the step of described extraction subregion textural characteristics further comprises:
Generate m yardstick and n the Gabor bank of filters that direction consists of, described m, n are positive integer;
The employing yardstick is that m, direction are the Gabor wave filter of n, is that people's face sample image of (j, i) carries out filtering to the pixel center position, obtains the Gabor eigenwert of each pixel;
In Gabor characteristic image that characterized by described Gabor eigenwert, a m yardstick, a n direction, the setting pixel coordinate is (j, the brightness of some correspondence i) is G (j, i), centered by point (j, i), get radius and be the circle of R, and take the coordinate axis transverse axis be in forward the circle on point as starting point, uniformly-spaced get P point on circle, described R gets a plurality of yardsticks, and P gets a plurality of values; The Gabor eigenwert that each point is set is G 1, G 2... G P
Calculate the local Gabor binary pattern LGBP feature of described point (j, i) by following formula:
Figure FDA00001630229700021
Wherein,
Figure FDA00001630229700022
performing histogram computation on the LGBP features by the following formula:
H_{l2} = Σ_{t≤j≤b, l≤i≤r} I(LGBP(j, i) == l2), l2 = 0, ..., N−1,
wherein I(A) = 1 if A is true and 0 if A is false; N is 2^P, P being the number of equally spaced points taken on the circle; l is the left-edge abscissa on the face sample image, t the top-edge ordinate, r the right-edge abscissa, and b the bottom-edge ordinate.
3. the method for claim 1 is characterized in that, described employing associating lifting feature is selected algorithm and selected the step of the effective textural characteristics of all subregion and further comprise:
The sample set that structure is comprised of the subregion textural characteristics, and distribute initial weight for each sample, described initial weight is determined by the sample classification factor that whether is same people's face;
According to described initial weight by following substep to the sample training, to pick out T feature;
Substep S1, carry out the normalization of weighted value for all samples of a certain people's face;
Substep S2, determine the weak feature in the sample, construct corresponding Weak Classifier for feature a little less than each according to the mode of LUT look-up table according to described normalized weighted value, and obtain the error cost parameter of each Weak Classifier;
Substep S3, this is taken turns the corresponding weak feature of the Weak Classifier of error cost parameter minimum in the training as t the feature of picking out, wherein, t=1,2 ..., T;
Substep S4, based on the t that the picks out feature weighted value of new samples more;
Substep S5, judge whether to pick out T feature, if not, then return described substep S1; If then finish training.
4. The method of claim 1, wherein the step of obtaining the projection feature values of the effective texture features comprises:
obtaining a linear transformation of the effective texture features by linear discriminant analysis;
projecting the effective texture features of each subregion onto the linear transformation to obtain the corresponding projection feature values.
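A minimal two-class Fisher LDA sketch of this projection step. The claim's LDA is presumably multi-class; the two-class form keeps the example short, and the small regularization term added to the scatter matrix is an implementation convenience, not part of the claim:

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA: return the projection vector w maximizing
    between-class over within-class scatter."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

def project(features, w):
    """Projection feature values: project each feature vector onto w."""
    return features @ w
```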
5. The method of claim 1, wherein the step of performing face recognition according to the projection feature values of each subregion further comprises:
generating positive sample features and negative sample features, the positive sample features being the distance values between the projection feature values of corresponding subregions of two face images belonging to the same person, and the negative sample features being the distance values between the projection feature values of corresponding subregions of two face images belonging to different people;
training on the positive sample features and negative sample features by the Real-AdaBoost algorithm to obtain a strong classifier serving as the face-model classifier, the strong classifier containing a confidence parameter for judging whether a face sample image to be recognized belongs to the face model;
inputting the face sample image to be recognized into the strong classifier, and judging whether the two images belong to the same face according to the confidence parameter.
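In Real-AdaBoost the strong classifier outputs a sum of real-valued weak-classifier confidences, which matches the claimed confidence parameter. A sketch of the verification step, assuming `luts` and `bin_edges` come from a training phase not shown here (all names are hypothetical):

```python
import numpy as np

def real_adaboost_confidence(distances, luts, bin_edges):
    """Strong-classifier confidence for one pair of faces.
    distances: per-subregion distance values between the two images'
    projection features. Each weak classifier is a LUT mapping a binned
    distance to a real-valued confidence (Real-AdaBoost style)."""
    conf = 0.0
    for d, lut, edges in zip(distances, luts, bin_edges):
        b = np.searchsorted(edges, d)   # which bin the distance falls in
        b = min(b, len(lut) - 1)
        conf += lut[b]
    return conf

def same_face(distances, luts, bin_edges, threshold=0.0):
    """Judge 'same person' when the aggregated confidence exceeds
    the decision threshold."""
    return real_adaboost_confidence(distances, luts, bin_edges) > threshold
```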
6. A face recognition system, characterized by comprising:
a subregion dividing module, configured to divide a face sample image into a plurality of mutually overlapping subregions of different sizes, the face sample image being a preprocessed face image of fixed size;
a feature extraction module, configured to extract the texture features of the subregions;
an effective-feature selection module, configured to select effective texture features from the texture features according to a preset rule; the effective-feature selection module further comprising: a joint-boosting feature selection submodule, configured to, for each subregion, select T features from the texture features extracted from it by the joint boosting feature selection algorithm as the effective texture features of the current subregion, T being a positive integer;
a projection transformation module, configured to obtain the projection feature values of the effective texture features;
a recognition processing module, configured to perform face recognition according to the projection feature values of each subregion; the recognition processing module further comprising:
an adjacent-subregion search submodule, configured to search for other subregions near the position of the current subregion, and to calculate the distance values between the projection feature values of subregions at adjacent positions in the two face images being compared;
a summation distance calculation submodule, configured to calculate the sum of the distance values of the projection feature values of the plurality of subregions in the face sample images currently being compared, the sum being the face comparison distance value;
a recognition decision submodule, configured to judge, according to the face comparison distance value, whether the face sample images currently being compared belong to the same person;
a projection-feature distance determination submodule, configured to take the minimum of the distance values of the projection feature values between the subregions at adjacent positions as the distance value of the projection feature value of the current subregion;
a comparison distance calculation submodule, configured to calculate the distance value of the projection feature values of corresponding subregions in each group of compared face sample images.
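The three distance submodules compose as: per-subregion minimum over adjacent positions, then a sum over subregions. A sketch under the assumption of Euclidean distance between projection feature vectors (the claims do not fix the metric, and the data layout here is hypothetical):

```python
import numpy as np

def compare_faces(feats_a, feats_b, neighbors):
    """Face-to-face distance: for each subregion of image A, take the
    minimum projection-feature distance over nearby positions in
    image B, then sum these minima over all subregions.
    feats_a/feats_b: dict position -> projection feature vector.
    neighbors: dict position -> list of adjacent positions."""
    total = 0.0
    for pos, fa in feats_a.items():
        # distance to each adjacent-position subregion in image B
        dists = [np.linalg.norm(fa - feats_b[q]) for q in neighbors[pos]]
        total += min(dists)   # robust to small alignment shifts
    return total
```

Taking the minimum over neighboring positions makes the comparison tolerant to small misalignments between the two face images; the final same/different decision thresholds the summed distance.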
7. The system of claim 6, wherein the subregions are rectangular regions, the texture features are local Gabor binary pattern histogram features, and the feature extraction module further comprises:
a Gabor feature calculation submodule, configured to generate a Gabor filter bank of m scales and n directions, m and n being positive integers, and to filter the face sample image, taking each pixel position (j, i) as the center, with the Gabor filters of each scale and direction, obtaining the Gabor feature value of each pixel;
an LGBP feature calculation submodule, configured to, in the Gabor feature image of a given scale and direction characterized by the Gabor feature values, set the brightness corresponding to the point at pixel coordinate (j, i) to G(j, i), take a circle of radius R centered on the point (j, i), and, starting from the point on the circle in the positive direction of the horizontal coordinate axis, take P points at equal intervals on the circle, R taking a plurality of scales and P taking a plurality of values, the Gabor feature values of the P points being G_1, G_2, …, G_P; and to calculate the local Gabor binary pattern (LGBP) feature of the point (j, i) by the following formula:
LGBP(j, i) = Σ_{p=1}^{P} s(G_p − G(j, i)) · 2^(p−1)
wherein s(x) = 1 when x ≥ 0, and s(x) = 0 otherwise;
a histogram calculation submodule, configured to carry out histogram calculation according to the local Gabor binary pattern LGBP features by the following formula:
H(n) = Σ_{j=t}^{b} Σ_{i=l}^{r} I{LGBP(j, i) = n},  n = 0, 1, …, N
N is 2^P − 1, P being the number of points taken at equal intervals on the circle; l is the abscissa of the left edge of the face sample image, t is the ordinate of the top edge, r is the abscissa of the right edge, and b is the ordinate of the bottom edge.
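The m-scale, n-direction Gabor bank can be sketched in NumPy. The kernel size, sigma and wavelength schedules below are arbitrary illustrative choices, not values from the patent:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """One Gabor kernel (real part) at orientation theta and
    wavelength lam; the bank varies lam over m scales and
    theta over n directions."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def gabor_bank(m, n, ksize=15):
    """m scales x n directions, as in the claim; parameter
    schedules are illustrative only."""
    return [gabor_kernel(ksize, sigma=2.0 + s, theta=np.pi * d / n,
                         lam=4.0 * (s + 1))
            for s in range(m) for d in range(n)]
```

Convolving the face image with each kernel yields the m × n Gabor feature images on which the LGBP codes are computed.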
8. The system of claim 6, wherein the projection transformation module further comprises:
a linear transformation submodule, configured to obtain a linear transformation of the effective texture features by linear discriminant analysis;
a projection-feature obtaining submodule, configured to project the effective texture features of each subregion onto the linear transformation to obtain the corresponding projection feature values.
9. The system of claim 6, wherein the recognition processing module further comprises:
a sample-feature generation submodule, configured to generate positive sample features and negative sample features, the positive sample features being the distance values between the projection feature values of corresponding subregions of two face images belonging to the same person, and the negative sample features being the distance values between the projection feature values of corresponding subregions of two face images belonging to different people;
a Real-AdaBoost submodule, configured to train on the positive sample features and negative sample features by the Real-AdaBoost algorithm to obtain a strong classifier serving as the face-model classifier, the strong classifier containing a confidence parameter for judging whether a face sample image to be recognized belongs to the face model;
a confidence judgment submodule, configured to input the face sample image to be recognized into the strong classifier and judge whether the two images belong to the same face according to the confidence parameter.
CN 201010034359 2010-01-20 2010-01-20 Face recognition method and face recognition system Active CN101763507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010034359 CN101763507B (en) 2010-01-20 2010-01-20 Face recognition method and face recognition system


Publications (2)

Publication Number Publication Date
CN101763507A (en) 2010-06-30
CN101763507B (en) 2013-03-06

Family

ID=42494664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010034359 Active CN101763507B (en) 2010-01-20 2010-01-20 Face recognition method and face recognition system

Country Status (1)

Country Link
CN (1) CN101763507B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108205666A (en) * 2018-01-21 2018-06-26 山东理工大学 A kind of face identification method based on depth converging network

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436637B (en) * 2010-09-29 2013-08-21 中国科学院计算技术研究所 Method and system for automatically segmenting hairs in head images
CN102436636B (en) * 2010-09-29 2013-09-25 中国科学院计算技术研究所 Method and system for segmenting hair automatically
CN102479320A (en) * 2010-11-25 2012-05-30 康佳集团股份有限公司 Face recognition method and device as well as mobile terminal
CN102184404B (en) * 2011-04-29 2012-11-28 汉王科技股份有限公司 Method and device for acquiring palm region in palm image
CN103226774A (en) * 2013-04-25 2013-07-31 中国科学技术大学 Information exchange system
CN103646234B (en) * 2013-11-15 2017-08-25 天津天地伟业数码科技有限公司 Face identification method based on LGBPH features
CN103810490B (en) * 2014-02-14 2017-11-17 海信集团有限公司 A kind of method and apparatus for the attribute for determining facial image
CN104851084B (en) * 2014-02-17 2019-04-02 腾讯科技(深圳)有限公司 A kind of acquisition methods and device of LBP feature
CN104463091B (en) * 2014-09-11 2018-04-06 上海大学 A kind of facial image recognition method based on image LGBP feature subvectors
CN105678208B (en) * 2015-04-21 2019-03-08 深圳Tcl数字技术有限公司 Extract the method and device of face texture
CN104866969A (en) * 2015-05-25 2015-08-26 百度在线网络技术(北京)有限公司 Personal credit data processing method and device
US9633250B2 (en) * 2015-09-21 2017-04-25 Mitsubishi Electric Research Laboratories, Inc. Method for estimating locations of facial landmarks in an image of a face using globally aligned regression
CN105469080B (en) * 2016-01-07 2018-09-25 东华大学 A kind of facial expression recognizing method
CN106529546B (en) * 2016-09-29 2018-05-29 深圳云天励飞技术有限公司 A kind of image-recognizing method and device
CN107437068B (en) * 2017-07-13 2020-11-20 江苏大学 Pig individual identification method based on Gabor direction histogram and pig body hair mode
CN108596250B (en) * 2018-04-24 2019-05-14 深圳大学 Characteristics of image coding method, terminal device and computer readable storage medium
CN110210307B (en) * 2019-04-30 2023-11-28 ***股份有限公司 Face sample library deployment method, face-recognition-based service processing method and device
CN113239839B (en) * 2021-05-24 2022-03-11 电子科技大学成都学院 Expression recognition method based on DCA face feature fusion
CN115984948B (en) * 2023-03-20 2023-05-26 广东广新信息产业股份有限公司 Face recognition method applied to temperature sensing and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1790374A (en) * 2004-12-14 2006-06-21 中国科学院计算技术研究所 Face recognition method based on template matching
CN100410963C (en) * 2006-12-27 2008-08-13 中山大学 Two-dimensional linear discrimination human face analysis identificating method based on interblock correlation
CN101447021A (en) * 2008-12-30 2009-06-03 爱德威软件开发(上海)有限公司 Face fast recognition system and recognition method thereof
CN100557624C (en) * 2008-05-23 2009-11-04 清华大学 Face identification method based on the multicomponent and multiple characteristics fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Wenchao et al. Face description and recognition based on histogram sequence of local Gabor patterns. Journal of Software, 2006, vol. 17, no. 12, pp. 2508-2517. *
Zhu Wenjia. Research on key technologies of pedestrian detection based on machine learning. China Master's Theses Full-text Database, Information Science and Technology, 2008, no. 6, pp. 25-28. *


Also Published As

Publication number Publication date
CN101763507A (en) 2010-06-30

Similar Documents

Publication Publication Date Title
CN101763507B (en) Face recognition method and face recognition system
CN103679158B (en) Face authentication method and device
Khan et al. Optimized Gabor features for mass classification in mammography
Liu et al. Learning the spherical harmonic features for 3-D face recognition
CN100426314C (en) Feature classification based multiple classifiers combined people face recognition method
CN104778457B (en) Video face identification method based on multi-instance learning
CN109902590A (en) Pedestrian's recognition methods again of depth multiple view characteristic distance study
CN105138972A (en) Face authentication method and device
KR101433472B1 (en) Apparatus, method and computer readable recording medium for detecting, recognizing and tracking an object based on a situation recognition
He et al. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network
Barpanda et al. Iris feature extraction through wavelet mel-frequency cepstrum coefficients
CN101558431A (en) Face authentication device
Trabelsi et al. Hand vein recognition system with circular difference and statistical directional patterns based on an artificial neural network
CN101996308A (en) Human face identification method and system and human face model training method and system
Zou et al. Chronological classification of ancient paintings using appearance and shape features
Samma et al. Face sketch recognition using a hybrid optimization model
Ramya et al. Certain investigation on iris image recognition using hybrid approach of Fourier transform and Bernstein polynomials
Alphonse et al. A novel maximum and minimum response-based Gabor (MMRG) feature extraction method for facial expression recognition
CN102129557A (en) Method for identifying human face based on LDA subspace learning
Agarwal et al. An efficient back propagation neural network based face recognition system using haar wavelet transform and PCA
Pathak et al. Multimodal eye biometric system based on contour based E-CNN and multi algorithmic feature extraction using SVBF matching
Sarangi et al. An evaluation of ear biometric system based on enhanced Jaya algorithm and SURF descriptors
Tallapragada et al. Iris recognition based on combined feature of GLCM and wavelet transform
Liu et al. Iris double recognition based on modified evolutionary neural network
Omaia et al. 2D-DCT distance based face recognition using a reduced number of coefficients

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP03 Change of name, title or address

Address after: 100097 Beijing city Haidian District landianchang Road No. 2 Building No. 2 hospital unit 1 (A) 7E

Patentee after: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

Address before: 100097 Beijing city Haidian District landianchang Road No. 2 Jin Yuan Times Business Center Building 2, block B, 5D

Patentee before: Beijing Athena Eyes Technology Development Co.,Ltd.

CP02 Change in the address of a patent holder

Address after: 100097 Beijing Haidian District Kunming Hunan Road 51 C block two floor 207.

Patentee after: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

Address before: 100097 7E 1, unit 2, building 2, East Road, 2 indigo factory, Haidian District, Beijing.

Patentee before: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

CP03 Change of name, title or address

Address after: 410205 14 Changsha Zhongdian Software Park Phase I, 39 Jianshan Road, Changsha High-tech Development Zone, Yuelu District, Changsha City, Hunan Province

Patentee after: Wisdom Eye Technology Co.,Ltd.

Address before: 100097 2nd Floor 207, Block C, 51 Hunan Road, Kunming, Haidian District, Beijing

Patentee before: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Face recognition method and face recognition system

Effective date of registration: 20221205

Granted publication date: 20130306

Pledgee: Agricultural Bank of China Limited Hunan Xiangjiang New Area Branch

Pledgor: Wisdom Eye Technology Co.,Ltd.

Registration number: Y2022430000107

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20231220

Granted publication date: 20130306

Pledgee: Agricultural Bank of China Limited Hunan Xiangjiang New Area Branch

Pledgor: Wisdom Eye Technology Co.,Ltd.

Registration number: Y2022430000107

CP03 Change of name, title or address

Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000

Patentee after: Wisdom Eye Technology Co.,Ltd.

Address before: 410205 building 14, phase I, Changsha Zhongdian Software Park, No. 39, Jianshan Road, Changsha high tech Development Zone, Yuelu District, Changsha City, Hunan Province

Patentee before: Wisdom Eye Technology Co.,Ltd.