CN105117688B - Face identification method based on Texture Feature Fusion and SVM - Google Patents

Face identification method based on Texture Feature Fusion and SVM

Info

Publication number
CN105117688B
CN105117688B CN201510454967.1A
Authority
CN
China
Prior art keywords
face
feature
classification
ulnbh
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510454967.1A
Other languages
Chinese (zh)
Other versions
CN105117688A (en)
Inventor
邵艳清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing College of Electronic Engineering
Original Assignee
Chongqing College of Electronic Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing College of Electronic Engineering filed Critical Chongqing College of Electronic Engineering
Priority to CN201510454967.1A priority Critical patent/CN105117688B/en
Publication of CN105117688A publication Critical patent/CN105117688A/en
Application granted granted Critical
Publication of CN105117688B publication Critical patent/CN105117688B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a face recognition method based on texture feature fusion and SVM, belonging to the field of image processing. First, uniform LBP operators extract the texture features of the multi-scale, multi-directional high-frequency subbands produced by the NSCT transform; the uniform-pattern LBP feature information of each high-frequency subband is then counted into histograms and the histograms are combined, giving a face texture feature, ULNBH, that joins the advantages of the LBP operator and NSCT. Because ULNBH lacks low-frequency information, the complementary character of Gabor features is exploited by fusing the ULNBH features and Gabor features at the feature level, yielding a fused feature with more complete face texture information. In the face recognition stage, principal component analysis reduces the dimensionality of the high-dimensional feature vectors, and an SVM then recognizes the reduced fused features. The fused feature is more robust to variations in illumination and pose.

Description

Face identification method based on Texture Feature Fusion and SVM
Technical field
The present invention relates to face recognition methods in the field of image processing, and in particular to a face recognition method based on texture feature fusion and SVM.
Background technology
In recent years, with the rapid development of Internet technology, information technology has had an enormous impact on every field of society. As information technology becomes ubiquitous, security issues touch every aspect of people's lives, and identity authentication is a key component of information security. Traditional identity authentication relies mainly on passwords and smart cards, which carry the security risk of being stolen or lost. Face recognition, a form of biometric identification, is a complex technique that combines computer technology, imaging science, statistics and physiology to accurately verify and identify people from certain stable features of the human body. Thanks to its particular advantages, face recognition has become one of the most widely applied biometric technologies: like other biometric techniques it is hard to counterfeit and cannot be lost, and it is also non-intrusive and user-friendly. In the financial security field, face recognition is applied in cases such as mobile payment, where it provides real-time security for financial transactions and supports later auditing; in the public security field it is widely used in border inspection, intelligent video surveillance, access control systems and the like; in the national security field it is used in surveillance and protection systems for important areas and for identifying and tracking criminals and terrorists; and in the MMS (Multimedia Message Service) field it enables more personalized services tailored to user needs, raising service quality. Face recognition technology therefore has very important practical and commercial value, and carrying out further extensive and in-depth research on it is of profound significance.
The rapid development of computer science, image processing and related fields has greatly advanced face recognition research. In practice, however, face image acquisition and processing are often subject to various kinds of interference, which strongly affect recognition performance. In particular, during image capture the face is often disturbed by illumination changes and pose variations, which make the extracted facial features incomplete or contaminated with excessive redundant information and severely degrade the final recognition result. Researchers have proposed many targeted solutions to the illumination or pose variation problems in face recognition; these reduce the influence of a single interfering factor, but when both factors are present at once the recognition performance still leaves room for improvement. Further research on face recognition systems under illumination interference and pose variation is therefore of great significance.
Invention content
The present invention aims to solve at least the technical problems existing in the prior art, and in particular innovatively proposes a face recognition method based on texture feature fusion and SVM suitable for face recognition.
The present invention discloses a face recognition method suitable for image recognition, comprising the following steps:
Step 1: the images are normalized to obtain face pictures of the same size;
Step 2: ULNBH feature vectors and Gabor feature vectors are extracted from the face database samples; the Gabor wavelet feature vectors and the ULNBH feature vectors are fused with the serial fusion method to obtain the fused feature G-ULNBH, and the G-ULNBH fused features are normalized;
Step 3: the fused face feature vectors are divided into two classes, training samples and test samples, and the PCA algorithm reduces the dimensionality of the face feature matrices of the training and test samples, giving the reduced feature matrices;
Step 4: the format of the reduced feature matrices is adjusted; the face images are assigned to classes 1 to W according to their category, where W is the total number of face classes, and each face class is given a label corresponding to its own class;
Step 5: the feature vectors and labels of the face training and test samples are fed into an SVM classifier, the kernel function and its parameters are selected, and faces are finally recognized from the classification results of the classifier. An overview of the whole pipeline is sketched below.
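To make the five steps concrete, the following Python sketch outlines the pipeline under stated assumptions: extract_ulnbh and extract_gabor are hypothetical helpers standing in for the feature extractors described later, normalization is applied per feature dimension, and the parameter values (95% retained variance, C = 10) are illustrative choices rather than values fixed by the patent.

```python
# A minimal sketch of the overall pipeline (steps 1-5), assuming hypothetical
# helpers extract_ulnbh() and extract_gabor() that return 1-D feature vectors
# for a normalized face image.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def g_ulnbh_pipeline(train_imgs, train_labels, test_imgs,
                     extract_ulnbh, extract_gabor, n_components=0.95):
    def fuse(img):
        # Step 2: serial (concatenation) fusion of Gabor and ULNBH features
        return np.concatenate([extract_gabor(img), extract_ulnbh(img)])

    X_train = np.array([fuse(im) for im in train_imgs])
    X_test = np.array([fuse(im) for im in test_imgs])

    # Normalize each feature dimension to [-1, 1] using training-set min/max
    mn, mx = X_train.min(axis=0), X_train.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)
    X_train = 2 * (X_train - mn) / span - 1
    X_test = 2 * (X_test - mn) / span - 1

    # Step 3: PCA dimensionality reduction (keep e.g. 95% of the variance)
    pca = PCA(n_components=n_components).fit(X_train)
    X_train, X_test = pca.transform(X_train), pca.transform(X_test)

    # Steps 4-5: class labels 1..W and an RBF-kernel SVM classifier
    clf = SVC(kernel='rbf', C=10.0, gamma='scale', decision_function_shape='ovo')
    clf.fit(X_train, train_labels)
    return clf.predict(X_test)
```

scikit-learn's SVC performs one-versus-one voting internally, which matches the multi-class strategy described later in the detailed description.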
In the face recognition method suitable for image recognition, preferably, step 1 includes:
An NSCT decomposition with w scales and h directions is applied to the training sample images, giving one low-frequency subimage A_0 and multiple high-frequency subimages {A_{1,1}, A_{1,2}, ..., A_{w,1}, ..., A_{w,h}} over different scales and directions. The high-frequency subimage A_{w,h} describes the face texture information at scale w and direction h, and each high-frequency subimage has size M × N.
In the face recognition method suitable for image recognition, preferably, between step 1 and step 2 the method includes:
A. The local neighborhood texture information of each high-frequency subimage is extracted with the uniform LBP operator, giving the local neighborhood texture features of each high-frequency subimage of the NSCT transform; the uniform LBP feature matrix of the high-frequency subimage at scale w and direction h of the NSCT decomposition is denoted C_{w,h};
B. Histogram statistics are computed on C_{w,h}, giving the statistical information of each subband; the statistics of the high-frequency subbands of the same person are concatenated to obtain the feature information ULNBH of the face image. ULNBH can extract the texture features of relatively large-scale structures and has multi-scale, multi-directional, translation-invariant and rotation-invariant properties.
In the face recognition method suitable for image recognition, preferably, A includes:
The uniform local binary pattern ULBP used is

LBP^{riu2}_{P,R}(x_c, y_c) = Σ_{k=0}^{P−1} s(g_k − g_c)  if U(LBP_{P,R}) ≤ 2, and P + 1 otherwise,

where s(x) = 1 for x ≥ 0, s(x) = 0 for x < 0, and U(LBP_{P,R}) is the number of 0/1 transitions in the circular pattern of neighbor codes. (x_c, y_c) is the center pixel of the LBP; the basic LBP value at (x_c, y_c) in the subband coefficients is obtained by binary-coding the coefficients in the 3 × 3 neighborhood centered on the point (x_c, y_c); g_k is the k-th neighborhood coefficient value of subband C_{u,v} taken clockwise around the center position (x_c, y_c), and g_c is the subband coefficient value at the center position (x_c, y_c). The uniform local binary pattern value is computed for C_{u,v}.
In the face recognition method suitable for image recognition, preferably, step 2 includes:
The two-dimensional Gabor wavelet kernel function is used to extract the two-dimensional Gabor wavelet texture features of the face image I(x, y); this amounts to filtering I(x, y) with the Gabor kernels at every scale and direction, i.e. convolving it with them:
W_{u,v}(x, y) = I(x, y) ∗ ψ_{u,v}(x, y)
Applying the FFT and inverse FFT to the above formula improves computation speed:
W_{u,v}(x, y) = F^{−1}( F(I(x, y)) · F(ψ_{u,v}(x, y)) )
W_{u,v}(x, y) denotes the texture feature vector at scale v and direction u; the amplitude of W_{u,v} captures the variation of local image energy, and W_{u,v} contains the responses of the real and imaginary parts of the Gabor kernel, so the amplitude of W_{u,v} is usually taken as the feature representation. After the transform W_{u,v}(x, y) = F^{−1}(F(I(x, y)) · F(ψ_{u,v}(x, y))), I(x, y) yields u × v corresponding feature vectors; merging these features, the Gabor texture features of I(x, y) are expressed as f_Gabor = (W_{1,1}, W_{1,2}, ..., W_{u,1}, ..., W_{u,v}).
In the face recognition method suitable for image recognition, preferably, step 2 includes:
For the Gabor feature vectors and ULNBH feature vectors extracted from the face images, the serial feature fusion method is selected for feature fusion. Let the obtained Gabor feature vector be Φ_1 and the ULNBH feature vector be Φ_2; the serial fused feature G-ULNBH is then denoted γ = (Φ_1, Φ_2), i.e. the concatenation of the two vectors.
In the face recognition method suitable for image recognition, preferably, step 3 includes:
Let the joint feature matrix be F. F is centered, i.e. the column mean is subtracted from each feature value in F; the covariance matrix S of F is computed, and its eigenvalues λ_i and corresponding eigenvectors μ_i are solved for, where i = 1, 2, ..., s and s is the number of eigenvalues of S. The eigenvalues λ_i are sorted in descending order, and the cumulative contribution rate η of the first t eigenvalues (t < s) is computed according to the formula η = (Σ_{i=1}^{t} λ_i) / (Σ_{i=1}^{s} λ_i). The minimum t for which η is greater than or equal to the parameter threshold Z is found, the eigenvectors corresponding to the first t eigenvalues form the projection matrix P = (μ_1, μ_2, ..., μ_t), and F is projected onto P: F × P is the low-dimensional joint feature matrix F′ formed from F by PCA dimensionality reduction.
In the face recognition method suitable for image recognition, preferably, step 4 includes:
The optimal classification function is
f(x) = sgn( Σ_{i=1}^{m} α_i y_i K(x_i, x) + b )
where K(x_i, x) is the kernel function, sgn(·) is the sign function, m is the number of training samples, x_i is the i-th training sample with class label y_i, α_i is the Lagrange coefficient and b is the threshold; the key to designing a support vector machine is the kernel function and its parameters.
In the face recognition method suitable for image recognition, preferably, step 5 includes:
Cross-validation in the toolbox is selected to choose C and g, ensuring that suitable radial basis kernel parameters are selected; the pair of C and g with the best performance index is finally selected as the support vector machine parameters, so as to guarantee the best classification results.
In the face recognition method suitable for image recognition, preferably, the method further includes:
(1) Training stage: according to the mapping between sample features and sample labels, the sample class labels in the original training-sample class vector are converted into SVM two-class model class labels, and the low-dimensional joint feature matrices of the training samples after PCA dimensionality reduction are used for classification training with SVM; in total k(k−1)/2 two-class face recognizers need to be trained, where k is the number of face classes;
(2) Test stage: the class of a test sample is decided by voting. For a test sample, first, the two-class face recognizers obtained in the training stage classify the test sample in turn, and votes are cast according to the recognition results; then the total votes of each class are counted, and the class with the most votes is the class of the sample. If two or more classes receive the same largest number of votes, the class with the smallest class label is selected as the result.
In conclusion by adopting the above-described technical solution, the beneficial effects of the invention are as follows:
The present invention proposes a kind of face identification method based on Texture Feature Fusion and SVM, belongs to image processing field; The method that NSCT converts the textural characteristics of multiple dimensioned, multi-direction high-frequency sub-band is extracted with uniform LBP operators, then statistics is every The uniform pattern LBP characteristic informations of a high-frequency sub-band simultaneously combine them, and then have obtained a kind of new textural characteristics (Histogram of Uniform Local NSCT Binary Pattern, ULNBH), it is high that this feature realizes each scale The expression of frequency texture feature information and dimensionality reduction.ULNBH fully combines the advantages of LBP operators and NSCT, but still lacks low frequency letter Breath, therefore the characteristic proposition of Gabor characteristic is combined to merge ULNBH features and Gabor characteristic in characteristic layer, to obtain A kind of fusion feature that face texture feature information is more complete (G-ULNBH features).In order to further examine G-ULNBH features Validity, select support vector machines (Support Vector Machine, SVM) as grader to G-ULNBH features into The verification of row recognition of face.Principal component analysis (Principal Components Analysis, PCA) method pair is used first The feature vector of higher-dimension carries out dimensionality reduction, and then the fusion feature after dimensionality reduction is identified using SVM.The fusion feature is to illumination It is stronger with the robustness of attitudes vibration.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is the flow chart of the face recognition algorithm of the present invention based on texture feature fusion and SVM.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.
In the description of the present invention, it should be understood that the orientation or positional relationships indicated by terms such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the invention, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it should be noted that the terms "mounted", "connected" and "coupled" are to be understood broadly: a connection may, for example, be mechanical or electrical, may be an internal connection between two elements, and may be direct or indirect through an intermediary; those of ordinary skill in the art can understand the specific meanings of the above terms according to the specific circumstances.
As shown in Fig. 1, the present invention discloses a face recognition method suitable for image recognition, characterized in that it includes the following steps.
In order to combine local feature analysis with the advantages of multi-scale, multi-directional feature extraction algorithms, the present invention proposes, on the basis of the NSCT transform, to extract local neighborhood relationship features from each high-frequency subband of the NSCT transform using the uniform pattern of the LBP operator.
Besides anisotropy, multi-resolution analysis and time-frequency locality, the NSCT transform also has advantages such as translation invariance. The local binary operator can extract texture features from each scale and direction of the high-frequency subimages after the NSCT transform; it clearly expresses the texture of each representative region in the high-frequency subimages, weakens the features of the flat regions of little research value in each high-frequency subband, and highlights the information of the edge regions. Uniform-pattern LBP can reduce the dimensionality of the feature data while still expressing the information effectively, which benefits subsequent feature recognition, so the uniform-pattern LBP is selected here.
The detailed process for constructing the new face texture feature is as follows:
Step 1: geometric normalization is first applied to the images, giving face images of size M × N; an NSCT decomposition with w scales and h directions is then applied to each image, giving one low-frequency subimage A_0 and high-frequency subimages {A_{1,1}, A_{1,2}, ..., A_{w,1}, ..., A_{w,h}} over different scales and directions. The high-frequency subimage A_{w,h} describes the face texture information at scale w and direction h, and each high-frequency subimage keeps the size M × N;
Step 2: the local neighborhood texture information of each high-frequency subimage is extracted with the uniform LBP operator, giving the local neighborhood texture features of each high-frequency subimage of the NSCT transform; the uniform LBP feature matrix of the high-frequency subimage at scale w and direction h of the NSCT decomposition is denoted C_{w,h};
Step 3: histogram statistics are computed on C_{w,h}, giving the statistical information of that subband; the statistics of each subband are obtained in the same way, and the statistics of the high-frequency subbands of the same person are concatenated to obtain the feature information of the face image, the Histogram of Uniform Local NSCT Binary Pattern (ULNBH). A sketch of this construction is given below.
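As an illustration of steps 1-3, the sketch below builds the ULNBH vector from the high-frequency subimages. No standard Python NSCT implementation is assumed here, so nsct_high_subbands is a hypothetical placeholder returning the high-frequency subimages {A_{1,1}, ..., A_{w,h}}; the uniform LBP comes from scikit-image.

```python
# A sketch of the ULNBH construction, assuming a hypothetical
# nsct_high_subbands(img, scales, directions) that returns the list of
# high-frequency subimages of the NSCT decomposition.
import numpy as np
from skimage.feature import local_binary_pattern

def ulnbh_feature(img, nsct_high_subbands, scales=3, directions=8, P=8, R=1):
    hist_parts = []
    for subband in nsct_high_subbands(img, scales, directions):
        # Uniform (riu2) LBP of the subband coefficients: labels 0 .. P+1
        codes = local_binary_pattern(subband, P, R, method='uniform')
        hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
        hist_parts.append(hist)          # P+2 = 10 bins per subband when P = 8
    # Concatenate the per-subband histograms into the ULNBH feature vector
    return np.concatenate(hist_parts)
```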
Compared with the original LBP, the uniform-pattern LBP compresses the feature dimensionality to a certain extent. The rotation-invariant LBP operator has gray-level shift invariance and rotation invariance and is well suited to identifying texture information, but because it lacks direction information, which is very important for face recognition, the rotation-invariant LBP operator is rarely used in the face recognition field; the present invention therefore selects the uniform local binary pattern (Uniform Local Binary Patterns, ULBP).
The ULBP is computed as follows:

LBP^{riu2}_{P,R}(x_c, y_c) = Σ_{k=0}^{P−1} s(g_k − g_c)  if U(LBP_{P,R}) ≤ 2, and P + 1 otherwise,

where s(x) = 1 for x ≥ 0, s(x) = 0 for x < 0, and U(LBP_{P,R}) is the number of 0/1 transitions in the circular pattern of neighbor codes.
From the formula above it can be seen that for R = 1 and P = 8 the set of LBP patterns has only 10 distinct values; this greatly lowers the dimensionality of the LBP pattern set while also facilitating subsequent data analysis.
(x_c, y_c) is the center pixel of the LBP; the basic LBP value at (x_c, y_c) in the subband coefficients is obtained by binary-coding the coefficients in the 3 × 3 neighborhood centered on the point (x_c, y_c); g_k is the k-th neighborhood coefficient value of subband C_{u,v} taken clockwise around the center position (x_c, y_c), and g_c is the subband coefficient value at the center position (x_c, y_c). The uniform local binary pattern value is computed for C_{u,v}; the subscript c in x_c is an abbreviation of "center".
The superscript riu is the acronym of rotation invariant uniform: LBP^{ri} denotes the rotation-invariant LBP pattern, LBP^{riu} denotes the rotation-invariant uniform LBP pattern, and LBP^{riu2} is the combination of the LBP^{ri} pattern and the LBP^{riu} pattern.
This gives the uniform local binary pattern of the high-frequency coefficient subband C_{u,v} at scale v and direction u; a per-pixel sketch of this computation is given below.
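For clarity, the sketch below computes the LBP^{riu2}_{8,1} value of a single coefficient from its 3 × 3 neighborhood exactly as the formula above describes; it is an illustrative reimplementation, not code taken from the patent.

```python
# Per-pixel uniform (riu2) LBP for P = 8, R = 1, following the formula above.
import numpy as np

def ulbp_riu2_at(C, yc, xc, P=8):
    gc = C[yc, xc]
    # Eight neighbors taken clockwise around (xc, yc) in the 3x3 window
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    bits = [1 if C[yc + dy, xc + dx] >= gc else 0 for dy, dx in offsets]
    # U: number of 0/1 transitions in the circular bit pattern
    u = sum(bits[k] != bits[(k + 1) % P] for k in range(P))
    return sum(bits) if u <= 2 else P + 1
```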
ULNBH can extract the texture features of relatively large-scale structures and has multi-scale, multi-directional, translation-invariant and rotation-invariant properties. However, because it is obtained from the high-frequency subbands of the NSCT transform, the ULNBH feature values still lack the low-frequency information of the face image.
Two-dimensional Gabor wavelets, on the other hand, extract well the spatial frequency, spatial position and orientation of the representative local structure information in an image and are highly suitable for describing image texture. The two-dimensional Gabor wavelet kernel function is expressed as:

ψ_{u,v}(z) = (||k_{u,v}||² / σ²) · exp(−||k_{u,v}||² ||z||² / (2σ²)) · [exp(i k_{u,v} · z) − exp(−σ² / 2)]   (3)

Among the parameters in the above equation, z = (x, y) is the spatial position, σ is the scale factor of the Gaussian envelope, and u and v determine the direction and scale of the Gabor filter. k_{u,v} is the plane wave vector, k_{u,v} = k_v e^{iφ_u}, which generates the different wavelet functions; k_v = k_max / f^v defines the scale of k_{u,v}, where k_max is the maximum frequency and f is the spacing factor of the kernel functions in the frequency domain; φ_u = πu/8 describes the direction selectivity of the Gabor wavelet, and i is the imaginary unit.
From the form of k_{u,v} it can be seen that the frequency support of the two-dimensional Gabor filter is a circular region of radius k_v; it covers the horizontal and vertical high-frequency regions of the face image spectrum, but its coverage of the diagonal high-frequency regions is smaller.
Extracting the two-dimensional Gabor wavelet texture features of the face image I(x, y) with formula (3) can be viewed as filtering I(x, y) with the Gabor kernels at every scale and direction, i.e. convolving it with them:

W_{u,v}(x, y) = I(x, y) ∗ ψ_{u,v}(x, y)   (4)

Applying the FFT and inverse FFT to formula (4) improves computation speed:

W_{u,v}(x, y) = F^{−1}( F(I(x, y)) · F(ψ_{u,v}(x, y)) )   (5)

W_{u,v}(x, y) denotes the texture feature vector at scale v and direction u; the amplitude of W_{u,v} captures the variation of local image energy, and W_{u,v} contains the responses of the real and imaginary parts of the Gabor kernel, so the amplitude of W_{u,v} is usually taken as the feature representation. After the transform of formula (5), I(x, y) yields u × v corresponding feature vectors; merging these features, the Gabor texture features of I(x, y) are expressed as:

f_Gabor = (W_{1,1}, W_{1,2}, ..., W_{u,1}, ..., W_{u,v})   (6).

A sketch of this FFT-based filtering is given below.
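The sketch below implements formulas (3)-(5) with NumPy FFTs: the Gabor kernel ψ_{u,v} is sampled on the image grid, image and kernel are transformed, multiplied, and inverse-transformed, and the magnitude is kept as the feature. The parameter values (k_max = π/2, f = √2, σ = 2π, 8 directions, 5 scales) are common choices in the literature, not values fixed by the patent.

```python
# FFT-based Gabor filtering, formulas (3)-(5): W = F^-1( F(I) * F(psi) ).
import numpy as np

def gabor_kernel(shape, u, v, kmax=np.pi / 2, f=np.sqrt(2), sigma=2 * np.pi):
    rows, cols = shape
    y = np.arange(rows)[:, None] - rows // 2
    x = np.arange(cols)[None, :] - cols // 2
    kv, phi_u = kmax / f ** v, np.pi * u / 8
    kx, ky = kv * np.cos(phi_u), kv * np.sin(phi_u)
    k2, z2 = kv ** 2, x ** 2 + y ** 2
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * z2 / (2 * sigma ** 2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

def gabor_magnitude(img, u, v):
    psi = gabor_kernel(img.shape, u, v)
    # Center the kernel at the origin for circular convolution via the FFT
    W = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psi)))
    return np.abs(W)  # amplitude of W_{u,v}, used as the feature

def gabor_features(img, n_dirs=8, n_scales=5):
    # f_Gabor = (W_{1,1}, ..., W_{u,v}) flattened into one vector; in practice
    # the magnitude maps are often downsampled before concatenation
    return np.concatenate([gabor_magnitude(img, u, v).ravel()
                           for u in range(n_dirs) for v in range(n_scales)])
```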
Step 4: for the Gabor feature vectors and ULNBH feature vectors extracted from the face images, the present invention selects the serial feature fusion method for feature fusion. Let the obtained Gabor feature vector be Φ_1 and the ULNBH feature vector be Φ_2; the serial fused feature, abbreviated as the G-ULNBH feature, is denoted γ and obtained as γ = (Φ_1, Φ_2), i.e. the concatenation of the two vectors.
The G-ULNBH feature contains both the low-frequency and the high-frequency information of the face image, expresses the texture features of the face more fully, and has multi-scale, multi-directional and translation-invariant properties, so its resistance to interference from illumination and face pose changes is stronger.
The fused face feature data are preprocessed to eliminate redundancy and noise in the original features, which benefits the training and classification operations of the classifier. The preprocessing consists of feature normalization and feature dimensionality reduction.
The fused face feature data are normalized according to formula (7):

Y = 2 · (X − min) / (max − min) − 1   (7)

In formula (7), Y is the feature data after normalization and X is the original feature data; min denotes the minimum value in the sample data and max the maximum value. This preprocessing normalizes the values of each sample into the interval [−1, 1]. A minimal sketch is given below.
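A minimal sketch of formula (7); whether min and max are taken per feature dimension or over the whole sample is not fixed by the text, so the per-dimension variant is assumed here.

```python
# Formula (7): map feature values into [-1, 1] using min-max normalization.
import numpy as np

def normalize_features(X):
    mn, mx = X.min(axis=0), X.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)   # avoid division by zero
    return 2 * (X - mn) / span - 1
```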
Step 5: since the dimensionality of the resulting G-ULNBH feature matrix is still rather large, the PCA method is chosen for dimensionality reduction.
Because the dimensionality of the joint feature matrix is too large, in order to reduce the computation of face training and recognition, the present invention applies a feature dimensionality reduction method to the high-dimensional joint face features and computes features with stronger discriminative power to replace the original feature vectors. PCA is a linear dimensionality reduction method with good stability that can raise computational efficiency while keeping the recognition rate as far as possible, which is why the present invention chooses the PCA method.
The PCA dimensionality reduction of the face features proceeds as follows. Let the joint feature matrix be F. F is centered, i.e. the column mean is subtracted from each feature value in F; the covariance matrix S of F is computed, and its eigenvalues λ_i and corresponding eigenvectors μ_i are solved for (i = 1, 2, ..., s, where s is the number of eigenvalues of S). The eigenvalues λ_i are sorted in descending order, and the cumulative contribution rate η of the first t eigenvalues (t < s) is computed according to formula (10):

η = (Σ_{i=1}^{t} λ_i) / (Σ_{i=1}^{s} λ_i)   (10)

The minimum t for which η is greater than or equal to the parameter threshold Z is found, the eigenvectors corresponding to the first t eigenvalues form the projection matrix P = (μ_1, μ_2, ..., μ_t), and finally F is projected onto P: F × P is the low-dimensional joint feature matrix F′ formed from F by PCA dimensionality reduction.
The features after dimensionality reduction retain the original features as far as possible while effectively eliminating the redundancy and noise in them, which not only speeds up classification processing but also improves the recognition rate to a certain extent. A sketch of this reduction is given below.
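The sketch below follows the PCA procedure just described (centering, covariance, eigen-decomposition, cumulative contribution rate η ≥ Z, projection); it is an illustrative NumPy reimplementation with Z = 0.95 as an assumed threshold.

```python
# PCA dimensionality reduction: keep the first t eigenvectors whose cumulative
# contribution rate eta reaches the threshold Z.
import numpy as np

def pca_reduce(F, Z=0.95):
    Fc = F - F.mean(axis=0)                      # center each column
    S = np.cov(Fc, rowvar=False)                 # covariance matrix of F
    eigvals, eigvecs = np.linalg.eigh(S)         # ascending order
    order = np.argsort(eigvals)[::-1]            # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    eta = np.cumsum(eigvals) / eigvals.sum()     # cumulative contribution rate
    t = int(np.searchsorted(eta, Z) + 1)         # smallest t with eta >= Z
    P = eigvecs[:, :t]                           # projection matrix (mu_1..mu_t)
    # Project the centered data; the text writes F x P, and centering only
    # shifts each projected component by a constant offset
    return Fc @ P, P
```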
Step 6: because of the strong advantages that support vector machines show in classification, the SVM is selected as the classifier to carry out face recognition verification on the reduced fused texture feature vectors.
The basic idea of the support vector machine is as follows: the input vectors are first mapped into a higher-dimensional space by a nonlinear transformation, and an optimal separating hyperplane is established in that space by choosing an appropriate inner-product function. Mapping the vectors into the high-dimensional space changes only the inner-product computation, so the algorithmic complexity does not grow with the increase of dimension.
For a two-class problem the optimal hyperplane can be constructed directly; in essence this is a quadratic programming problem under constraints, and the optimal classification function is

f(x) = sgn( Σ_{i=1}^{m} α_i y_i K(x_i, x) + b )

where K(x_i, x) is a kernel function, sgn(·) is the sign function, x_i is the i-th training sample with class label y_i, x is the test sample, m is the number of training samples, α_i is the Lagrange coefficient and b is the threshold. The key to designing a support vector machine is the kernel function and its parameters; the present invention selects the radial basis kernel function. A sketch of evaluating this decision function is given below.
Face recognition is a multi-class problem; the present invention realizes SVM multi-class face recognition with the "one-versus-one" strategy. The detailed process is as follows. (1) Training stage: using the one-versus-one strategy, according to the mapping between sample features and sample labels, the sample class labels in the original training-sample class vector are converted into SVM two-class model class labels, and the low-dimensional joint feature matrices of the training samples after PCA dimensionality reduction are used for classification training with SVM; in total k(k−1)/2 two-class face recognizers need to be trained, where k is the number of face classes. (2) Test stage: the class of a test sample is decided by voting. For a test sample, first, the two-class face recognizers obtained in the training stage classify the test sample in turn, and votes are cast according to the recognition results; then the total votes of each class are counted, and the class with the most votes is the class of the sample. If two or more classes receive the same largest number of votes, the class with the smallest class label is selected as the result. Following this principle, every sample in the test set is classified, and the classification result of each test sample is obtained. A sketch of this voting scheme is given below.
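The sketch below implements the one-versus-one training and voting just described, using scikit-learn's binary SVC for the pairwise classifiers; it is an illustrative reimplementation (scikit-learn's own multi-class SVC performs the same one-versus-one voting internally), and the tie-break picks the smallest class label as stated in the text. X and y are assumed to be NumPy arrays.

```python
# One-versus-one SVM face recognition with majority voting, as described above.
import itertools
import numpy as np
from sklearn.svm import SVC

def train_ovo(X, y, C=10.0, gamma='scale'):
    classifiers = {}
    for a, b in itertools.combinations(sorted(set(y)), 2):  # k(k-1)/2 pairs
        mask = (y == a) | (y == b)
        clf = SVC(kernel='rbf', C=C, gamma=gamma)
        clf.fit(X[mask], y[mask])
        classifiers[(a, b)] = clf
    return classifiers

def predict_ovo(classifiers, x, classes):
    votes = {c: 0 for c in classes}
    for clf in classifiers.values():
        votes[clf.predict(x.reshape(1, -1))[0]] += 1
    best = max(votes.values())
    # Tie-break: the smallest class label among the top-voted classes
    return min(c for c, v in votes.items() if v == best)
```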
Step 7: the support vector machines are trained with the G-ULNBH face features of the training samples, and the trained support vector machines then classify the G-ULNBH face features of the test samples.
The above technical solution has the following beneficial effects. The method of uniform LBP combined with local histogram statistics extracts the neighborhood features of an image well, effectively expresses both random and periodic textures, is invariant to gray-level shifts and insensitive to face pose changes, and the fused G-ULNBH features achieve a higher recognition rate. The ULNBH features compensate to a certain extent for the high-frequency texture information of the Gabor wavelet features, and the fused G-ULNBH face features contain the face texture information of both high and low frequencies together with multi-scale, multi-directional and translation-invariant properties, so they show good robustness to illumination and face pose variation.
In the face recognition method applicable to image recognition, preferably, steps 1 and 2 include:
An NSCT decomposition is applied, giving one low-frequency subimage A_0 and high-frequency subimages over different scales and directions; the local neighborhood texture information of each high-frequency subimage is then extracted with the uniform LBP operator. Uniform LBP patterns are only a small fraction of all LBP outputs, but they can describe the vast majority of the texture information and have strong discriminative power.
In the face recognition method applicable to image recognition, preferably, step 3 includes:
Computing histogram statistics on C_{w,h} gives the histogram information of the face image, which is convenient for face feature recognition.
In the face recognition method applicable to image recognition, preferably, step 4 includes:
For the extracted Gabor features and ULNBH features of the face images, the present invention selects the serial feature fusion method for feature fusion; the serially fused face feature thus contains rich high-frequency face texture information and low-frequency face texture information at the same time.
In the face recognition method applicable to image recognition, preferably, step 5 includes:
Because the dimensionality of the resulting Gabor feature matrix is too large, in order to reduce the computation of face training and recognition and, at the same time, the interference that redundant information in the fused feature brings to feature recognition, the present invention reduces the dimensionality of the feature vector data and extracts features with stronger discriminative power to replace the original feature vectors. The present invention chooses the PCA method for the reduction; the PCA algorithm is a linear dimensionality reduction method with good stability that can raise computational efficiency while keeping the recognition rate as far as possible.
In the face recognition method applicable to image recognition, preferably, steps 6 and 7 include:
Support vector machines have good learning ability and accurate classification ability, have a great influence on sample learning and classification, and are two-class classifiers with strong generalization and learning ability; they show unique advantages in solving small-sample, nonlinear and high-dimensional recognition problems. For the setting and optimization of the penalty factor C and the kernel parameter g, cross-validation in the toolbox is selected to choose C and g, which ensures that suitable radial basis kernel parameters are selected; the pair of C and g with the best performance index is finally chosen as the support vector machine parameters, so as to guarantee the best classification results. A sketch of such a parameter search is given below.
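A minimal sketch of the cross-validated search for C and g over an RBF-kernel SVM, using scikit-learn's GridSearchCV in place of the unspecified toolbox mentioned in the text; the candidate grid values and 5-fold scheme are assumptions.

```python
# Cross-validated selection of the penalty factor C and RBF parameter g (gamma).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def select_svm_params(X_train, y_train):
    grid = {'C': 2.0 ** np.arange(-5, 11, 2),       # candidate penalty factors
            'gamma': 2.0 ** np.arange(-13, 3, 2)}   # candidate kernel widths g
    search = GridSearchCV(SVC(kernel='rbf'), grid, cv=5)  # 5-fold cross-validation
    search.fit(X_train, y_train)
    return search.best_params_['C'], search.best_params_['gamma']
```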
Therefore, on the basis of classical face recognition algorithms, the present invention proposes a face recognition method based on texture feature fusion and SVM which, by accurately extracting the high- and low-frequency information of face images, effectively enriches the face feature information; selecting a support vector machine for face recognition can effectively improve the recognition result, which is of great significance for the face recognition field.
After applying the NSCT transform to the face, the present invention applies the uniform LBP transform to the high-frequency subbands, obtaining the ULNBH features on the basis of the NSCT transform. However, the high-frequency subbands of the NSCT transform contain almost none of the useful low-frequency information, while the low-frequency region of the face still contains information that can be used for recognition; the present invention therefore combines the ULNBH features obtained after the NSCT transform with Gabor wavelet transform features, trains an SVM classifier on the joint face features, and uses it to classify the detected faces. The fusion algorithm was tested on the ORL and Yale face databases respectively, and the test results show that it improves the face recognition rate well under both illumination variation and pose variation.
The English abbreviations used are:
SVM: Support Vector Machine;
LBP: Local Binary Patterns;
NSCT: Nonsubsampled Contourlet Transform;
ULBP: Uniform Local Binary Patterns;
ULNBH: Histogram of Uniform Local NSCT Binary Pattern;
PCA: Principal Components Analysis.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic use of these terms does not necessarily refer to the same embodiment or example, and the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the invention is defined by the claims and their equivalents.

Claims (8)

1. A face recognition method suitable for image recognition, characterized in that it comprises the following steps:
Step 1: the images are normalized to obtain face pictures of the same size;
Step 1 includes:
An NSCT decomposition with w scales and h directions is applied to the training sample images, giving one low-frequency subimage A_0 and high-frequency subimages {A_{1,1}, A_{1,2}, ..., A_{w,1}, ..., A_{w,h}} over different scales and directions; the high-frequency subimage A_{w,h} describes the face texture information at scale w and direction h, and each high-frequency subimage has size M × N;
Step 2: ULNBH feature vectors and Gabor feature vectors are extracted from the face database samples; the Gabor wavelet feature vectors and the ULNBH feature vectors are fused with the serial fusion method to obtain the fused feature G-ULNBH, and the G-ULNBH fused features are normalized;
Between step 1 and step 2 the method includes:
A. The local neighborhood texture information of each high-frequency subimage is extracted with the uniform LBP operator, giving the local neighborhood texture features of each high-frequency subimage of the NSCT transform; the uniform LBP feature matrix of the high-frequency subimage at scale w and direction h of the NSCT decomposition is denoted C_{w,h};
B. Histogram statistics are computed on C_{w,h}, giving the statistical information of each subband; the statistics of the high-frequency subbands of the same person are concatenated to obtain the feature information ULNBH of the face image; ULNBH can extract the texture features of relatively large-scale structures and has multi-scale, multi-directional, translation-invariant and rotation-invariant properties;
Step 3: the fused face feature vectors are divided into two classes, training samples and test samples, and the PCA algorithm reduces the dimensionality of the face feature matrices of the training and test samples, giving the reduced feature matrices;
Step 4: the format of the reduced feature matrices is adjusted; the face images are assigned to classes 1 to W according to their category, where W is the total number of face classes, and each face class is given a label corresponding to its own class;
Step 5: the feature vectors and labels of the face training and test samples are fed into an SVM classifier, the kernel function and its parameters are selected, and faces are finally recognized from the classification results of the classifier.
2. The face recognition method suitable for image recognition according to claim 1, characterized in that A includes:
The uniform local binary pattern ULBP used is

LBP^{riu2}_{P,R}(x_c, y_c) = Σ_{k=0}^{P−1} s(g_k − g_c)  if U(LBP_{P,R}) ≤ 2, and P + 1 otherwise,

where s(x) = 1 for x ≥ 0, s(x) = 0 for x < 0, and U(LBP_{P,R}) is the number of 0/1 transitions in the circular pattern of neighbor codes;
(x_c, y_c) is the center pixel of the LBP; the basic LBP value at (x_c, y_c) in the subband coefficients is obtained by binary-coding the coefficients in the 3 × 3 neighborhood centered on the point (x_c, y_c); g_k is the k-th neighborhood coefficient value of subband C_{u,v} taken clockwise around the center position (x_c, y_c), and g_c is the subband coefficient value at the center position (x_c, y_c); the uniform local binary pattern value is computed for C_{u,v}.
3. The face recognition method suitable for image recognition according to claim 1, characterized in that step 2 includes:
The two-dimensional Gabor wavelet kernel function is used to extract the two-dimensional Gabor wavelet texture features of the face image I(x, y), which amounts to filtering I(x, y) with the Gabor kernels at every scale and direction, i.e. convolving it with them:
W_{u,v}(x, y) = I(x, y) ∗ ψ_{u,v}(x, y)
Applying the FFT and inverse FFT to the above formula improves computation speed:
W_{u,v}(x, y) = F^{−1}( F(I(x, y)) · F(ψ_{u,v}(x, y)) )
W_{u,v}(x, y) denotes the texture feature vector at scale v and direction u; the amplitude of W_{u,v} captures the variation of local image energy, and W_{u,v} contains the responses of the real and imaginary parts of the Gabor kernel, so the amplitude of W_{u,v} is usually taken as the feature representation; after the transform W_{u,v}(x, y) = F^{−1}(F(I(x, y)) · F(ψ_{u,v}(x, y))), I(x, y) yields u × v corresponding feature vectors, and merging these features, the Gabor texture features of I(x, y) are expressed as f_Gabor = (W_{1,1}, W_{1,2}, ..., W_{u,1}, ..., W_{u,v}).
4. The face recognition method suitable for image recognition according to claim 1, characterized in that step 2 includes:
For the Gabor feature vectors and ULNBH feature vectors extracted from the face images, the serial feature fusion method is selected for feature fusion; let the obtained Gabor feature vector be Φ_1 and the ULNBH feature vector be Φ_2, then the serial fused feature G-ULNBH is denoted γ = (Φ_1, Φ_2), i.e. the concatenation of the two vectors.
5. The face recognition method suitable for image recognition according to claim 1, characterized in that step 3 includes:
Let the joint feature matrix be F; F is centered, i.e. the column mean is subtracted from each feature value in F; the covariance matrix S of F is computed, and its eigenvalues λ_i and corresponding eigenvectors μ_i are solved for, where i = 1, 2, ..., s and s is the number of eigenvalues of S; the eigenvalues λ_i are sorted in descending order, and the cumulative contribution rate η of the first t eigenvalues (t < s) is computed according to the formula η = (Σ_{i=1}^{t} λ_i) / (Σ_{i=1}^{s} λ_i); the minimum t for which η is greater than or equal to the parameter threshold Z is found, the eigenvectors corresponding to the first t eigenvalues form the projection matrix P = (μ_1, μ_2, ..., μ_t), and F is projected onto P: F × P is the low-dimensional joint feature matrix F′ formed from F by PCA dimensionality reduction.
6. The face recognition method suitable for image recognition according to claim 1, characterized in that step 4 includes:
The optimal classification function is
f(x) = sgn( Σ_{i=1}^{m} α_i y_i K(x_i, x) + b )
where K(x_i, x) is the kernel function, sgn(·) is the sign function, m is the number of training samples, x_i is the i-th training sample with class label y_i, α_i is the Lagrange coefficient and b is the threshold; the key to designing a support vector machine is the kernel function and its parameters.
7. The face recognition method suitable for image recognition according to claim 1, characterized in that step 5 includes:
Cross-validation in the toolbox is selected to choose C and g, ensuring that suitable radial basis kernel parameters are selected; the pair of C and g with the best performance index is finally selected as the support vector machine parameters, so as to guarantee the best classification results.
8. The face recognition method suitable for image recognition according to claim 7, characterized in that the method further includes:
(1) Training stage: according to the mapping between sample features and sample labels, the sample class labels in the original training-sample class vector are converted into SVM two-class model class labels, and the low-dimensional joint feature matrices of the training samples after PCA dimensionality reduction are used for classification training with SVM; in total k(k−1)/2 two-class face recognizers need to be trained, where k is the number of face classes;
(2) Test stage: the class of a test sample is decided by voting; for a test sample, first, the two-class face recognizers obtained in the training stage classify the test sample in turn, and votes are cast according to the recognition results; then the total votes of each class are counted, and the class with the most votes is the class of the sample; if two or more classes receive the same largest number of votes, the class with the smallest class label is selected as the result.
CN201510454967.1A 2015-07-29 2015-07-29 Face identification method based on Texture Feature Fusion and SVM Expired - Fee Related CN105117688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510454967.1A CN105117688B (en) 2015-07-29 2015-07-29 Face identification method based on Texture Feature Fusion and SVM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510454967.1A CN105117688B (en) 2015-07-29 2015-07-29 Face identification method based on Texture Feature Fusion and SVM

Publications (2)

Publication Number Publication Date
CN105117688A CN105117688A (en) 2015-12-02
CN105117688B true CN105117688B (en) 2018-08-28

Family

ID=54665672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510454967.1A Expired - Fee Related CN105117688B (en) 2015-07-29 2015-07-29 Face identification method based on Texture Feature Fusion and SVM

Country Status (1)

Country Link
CN (1) CN105117688B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105691367B (en) * 2016-01-25 2018-08-07 大连楼兰科技股份有限公司 Based on image and the united bus active brake method of heartbeat inspecting and system
CN107103266B (en) * 2016-02-23 2019-08-20 中国科学院声学研究所 The training of two-dimension human face fraud detection classifier and face fraud detection method
CN105844291A (en) * 2016-03-18 2016-08-10 常州大学 Characteristic fusion method based on kernel typical correlation analysis
CN106022254A (en) * 2016-05-17 2016-10-12 上海民实文化传媒有限公司 Image recognition technology
CN106056059B (en) * 2016-05-20 2019-02-12 合肥工业大学 The face identification method of multi-direction SLGS feature description and performance cloud Weighted Fusion
CN106503718B (en) * 2016-09-20 2019-11-22 南京邮电大学 A kind of local binary patterns Image Description Methods based on wave filter group
CN107871101A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN106778487A (en) * 2016-11-19 2017-05-31 南宁市浩发科技有限公司 A kind of 2DPCA face identification methods
CN106651935B (en) * 2016-11-29 2019-06-14 河南科技大学 A kind of texture image representation method based on multi-scale sampling
CN106980873B (en) * 2017-03-09 2020-07-07 南京理工大学 Koi screening method and device based on deep learning
CN107392142B (en) * 2017-07-19 2020-11-13 广东工业大学 Method and device for identifying true and false face
CN108549868A (en) * 2018-04-12 2018-09-18 中国矿业大学 A kind of pedestrian detection method
CN108898052A (en) * 2018-05-23 2018-11-27 上海理工大学 The detection method and equipment of man-made features in remote sensing images
CN109002770B (en) * 2018-06-25 2021-03-16 电子科技大学 Face recognition method under low-resolution condition
CN109094491B (en) * 2018-06-29 2021-02-05 深圳市元征科技股份有限公司 Vehicle component adjusting method, device and system and terminal equipment
CN108932501B (en) * 2018-07-13 2021-09-10 江苏大学 Face recognition method based on multi-core association integration dimension reduction
CN109034256B (en) * 2018-08-02 2021-03-30 燕山大学 LTP and HOG feature fused breast tumor detection system and method
CN109409212A (en) * 2018-09-12 2019-03-01 中国人民解放军国防科技大学 Face recognition method based on cascade BGP
CN109271972A (en) * 2018-11-05 2019-01-25 常熟理工学院 Intelligent image identifying system and method based on natural language understanding and image graphics
CN109785286B (en) * 2018-12-12 2021-04-30 中国科学院深圳先进技术研究院 Image restoration detection method based on texture feature fusion
CN109711305A (en) * 2018-12-19 2019-05-03 浙江工商大学 Merge the face identification method of a variety of component characterizations
CN110084259B (en) * 2019-01-10 2022-09-20 谢飞 Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow characteristics
CN110119691B (en) * 2019-04-19 2021-07-20 华南理工大学 Portrait positioning method based on local two-dimensional mode and invariant moment search
CN110084220A (en) * 2019-05-08 2019-08-02 重庆邮电大学 A kind of vehicle-mounted fatigue detection method based on multiple dimensioned binary mode
CN110532907B (en) * 2019-08-14 2022-01-21 中国科学院自动化研究所 Traditional Chinese medicine human body constitution classification method based on face image and tongue image bimodal feature extraction
CN110309839B (en) * 2019-08-27 2019-12-03 北京金山数字娱乐科技有限公司 A kind of method and device of iamge description
CN111241960B (en) * 2020-01-06 2023-05-30 佛山科学技术学院 Face recognition method and system based on wiener filtering and PCA
CN111308985B (en) * 2020-02-18 2021-03-26 北京航空航天大学 Performance degradation evaluation method for control assembly of airplane environmental control system based on NSCT and DM
CN113688861A (en) * 2021-07-06 2021-11-23 清华大学 Low-dimensional feature small sample multi-classification method and device based on machine learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024141A (en) * 2010-06-29 2011-04-20 上海大学 Face recognition method based on Gabor wavelet transform and local binary pattern (LBP) optimization
CN104408440A (en) * 2014-12-10 2015-03-11 重庆邮电大学 Identification method for human facial expression based on two-step dimensionality reduction and parallel feature fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9311564B2 (en) * 2012-10-05 2016-04-12 Carnegie Mellon University Face age-estimation and methods, systems, and software therefor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024141A (en) * 2010-06-29 2011-04-20 上海大学 Face recognition method based on Gabor wavelet transform and local binary pattern (LBP) optimization
CN104408440A (en) * 2014-12-10 2015-03-11 重庆邮电大学 Identification method for human facial expression based on two-step dimensionality reduction and parallel feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
融合NSCT和自适应平滑的光照不变量提取算法 (Illumination-invariant feature extraction algorithm fusing NSCT and adaptive smoothing); 唐朝伟 et al.; Journal of Computer-Aided Design & Computer Graphics; 30 Nov 2014; Vol. 26, No. 11; pp. 2070-2078 *

Also Published As

Publication number Publication date
CN105117688A (en) 2015-12-02

Similar Documents

Publication Publication Date Title
CN105117688B (en) Face identification method based on Texture Feature Fusion and SVM
CN101667246B (en) Human face recognition method based on nuclear sparse expression
Zhang et al. Spectral clustering ensemble applied to SAR image segmentation
CN106096652B (en) Classification of Polarimetric SAR Image method based on sparse coding and small echo self-encoding encoder
Bileschi StreetScenes: Towards scene understanding in still images
CN101866421B (en) Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding
CN101763507B (en) Face recognition method and face recognition system
CN102521575B (en) Iris identification method based on multidirectional Gabor and Adaboost
CN104778457A (en) Video face identification algorithm on basis of multi-instance learning
CN102855468B (en) A kind of single sample face recognition method in photograph identification
CN102324038B (en) Plant species identification method based on digital image
CN109902590A (en) Pedestrian's recognition methods again of depth multiple view characteristic distance study
CN102332084B (en) Identity identification method based on palm print and human face feature extraction
CN105138970A (en) Spatial information-based polarization SAR image classification method
CN104331706A (en) Polarization SAR image classification based on RBM and SVM
CN104036289A (en) Hyperspectral image classification method based on spatial and spectral features and sparse representation
CN101916369B (en) Face recognition method based on kernel nearest subspace
CN107330457B (en) A kind of Classification of Polarimetric SAR Image method based on multi-feature fusion
Bian et al. Combining weighted linear project analysis with orientation diffusion for fingerprint orientation field reconstruction
CN101819629B (en) Supervising tensor manifold learning-based palmprint identification system and method
Radha et al. Neural network based face recognition using RBFN classifier
Song et al. Fingerprint indexing based on pyramid deep convolutional feature
Shi et al. Face recognition algorithm based on self-adaptive blocking local binary pattern
CN102682306A (en) Wavelet pyramid polarization texture primitive feature extracting method for synthetic aperture radar (SAR) images
Varish A modified similarity measurement for image retrieval scheme using fusion of color, texture and shape moments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180828

Termination date: 20190729