CN104063715A - Face classification method based on nearest neighbor feature lines - Google Patents
- Publication number: CN104063715A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a face classification method based on nearest neighbor feature lines. Building on nearest-neighbor theory, the method defines a novel weight index, derives decision criteria from that index, and improves the simplified feature-line method, producing a face classifier suited to illumination and pose variation with lower computational complexity, shorter recognition time, and better robustness than comparable classifiers. The classifier first applies principal component analysis to extract features from the training-database images, constructs the training-database matrix, extracts features from the sample image, and constructs the test-sample vector. Weight coefficients are then computed and the decision criteria are applied to them, yielding the simplified nearest-neighbor feature-line classifier. Experimental results under several conditions indicate that, in the same hardware environment, the classifier has lower computational complexity and better robustness than other classifiers.
Description
Technical field
The present invention relates to a face classification method based on nearest feature lines, and in particular to a method that uses computer technology, digital image processing, pattern recognition, and related techniques to classify and discriminate human faces automatically. It belongs to the field of biometric recognition, specifically facial feature extraction and identification.
Background technology
1. Face recognition technology
Face recognition has become an important research direction in biometrics; its key technologies are the extraction of feature vectors and the realization of classification methods. Researchers have proposed a large number of face recognition methods. Among feature-extraction techniques, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are popular; PCA is an unsupervised algorithm that obtains the principal components by solving for the eigenvalues of the covariance matrix of multiple variables. Among classification techniques, the k-Nearest Neighbor (KNN) method, nearest-neighbor subspace methods, the Support Vector Machine (SVM), and Sparse Representation-based Classification (SRC) are popular.
2. Principal component analysis
Principal component analysis is also known as the KL (Karhunen-Loeve) transform. The generating matrix Σ of the KL transform can be the total scatter matrix S_t of the training samples, or the between-class scatter matrix S_b of the training samples, etc.; the scatter matrix is generated from the training set.
The total scatter matrix can be expressed as
S_t = Σ_{i=1..n} (x_i − μ)(x_i − μ)^T
where μ is the mean vector of the training samples. If the total scatter matrix S_t is taken as the generating matrix Σ, and the centered samples are collected as X = [x_1 − μ, x_2 − μ, ..., x_n − μ], then Σ can be written as
Σ = XX^T ∈ R^{m×m}
If the between-class scatter matrix S_b is taken as the generating matrix Σ of the KL transform, that is
S_b = Σ_{i=1..c} P(ω_i)(μ_i − μ)(μ_i − μ)^T
where c is the number of pattern classes in the training set and μ_i is the mean vector of the pattern samples of class i, then, writing X = [√P(ω_1)(μ_1 − μ), ..., √P(ω_c)(μ_c − μ)], the generating matrix is
Σ = XX^T ∈ R^{n×n}
Finally, compute the eigenvalues and eigenvectors of the generating matrix and construct the subspace; project the training images and the test image into the feature space, where each projected image corresponds to a point in the subspace. Classification can then be carried out using pattern-recognition theory.
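As a minimal sketch of the construction above (the function name `kl_basis` and the internal centering step are illustrative assumptions, not the patent's exact procedure), the generating matrix Σ = XX^T and its leading eigenvectors can be computed as:

```python
import numpy as np

def kl_basis(X, num_components):
    """Eigen-decomposition of the generating matrix Sigma = X X^T,
    where the columns of X are training vectors (centered internally)."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center the samples
    sigma = Xc @ Xc.T                        # total scatter / generating matrix
    vals, vecs = np.linalg.eigh(sigma)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]           # re-sort descending
    return vecs[:, order[:num_components]]   # top eigenvectors span the subspace

# A sample x is then projected into the subspace as:
# coords = kl_basis(X, d).T @ (x - X.mean(axis=1))
```

The returned columns are orthonormal, so projection is a single matrix product.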
3. Nearest feature line method
Suppose there are L pattern classes, and that class k has N_k samples x_1^k, ..., x_{N_k}^k. For a test sample y and any two samples x_i^k, x_j^k among the N_k samples, define the distance from y to the feature line x_i^k x_j^k as
d(y, x_i^k x_j^k) = || y − p_{ij}^k ||
where p_{ij}^k denotes the projection (foot point) of y onto the line through x_i^k and x_j^k:
p_{ij}^k = x_i^k + t (x_j^k − x_i^k), with t = ((y − x_i^k) · (x_j^k − x_i^k)) / || x_j^k − x_i^k ||^2.
By computing the distance from y to every feature line, the classification result is obtained as follows: if
d(y, x_{i*}^c x_{j*}^c) = min over 1 ≤ k ≤ L, i < j of d(y, x_i^k x_j^k),
then y belongs to class c.
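The background method above can be sketched in NumPy as follows; the function names `feature_line_distance` and `nfl_classify` are illustrative assumptions, and the exhaustive pairwise search mirrors the original (non-simplified) NFL rule:

```python
import numpy as np

def feature_line_distance(y, xi, xj):
    """Distance from query y to the feature line through samples xi and xj.
    The foot point p = xi + t*(xj - xi) is the projection of y onto the line."""
    direction = xj - xi
    t = np.dot(y - xi, direction) / np.dot(direction, direction)
    p = xi + t * direction                   # projection (foot point) of y
    return np.linalg.norm(y - p)

def nfl_classify(y, classes):
    """classes: dict mapping label -> list of sample vectors.
    Returns the label whose feature line lies closest to y."""
    best_label, best_dist = None, np.inf
    for label, samples in classes.items():
        for a in range(len(samples)):
            for b in range(a + 1, len(samples)):
                d = feature_line_distance(y, samples[a], samples[b])
                if d < best_dist:
                    best_label, best_dist = label, d
    return best_label
```

The pairwise loop over every class is what makes classical NFL slow; the simplified method described later prunes most of it away.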
Summary of the invention
Purpose of the invention: to overcome the deficiencies of the prior art, the invention provides a face recognition method based on nearest feature lines that achieves a higher recognition rate and better robustness while maintaining the same computational complexity as existing classifiers.
Technical scheme: to achieve the above purpose, the invention adopts the following technical solution:
A face discrimination method based on nearest feature lines, comprising the steps of:
(1) Build the training library: use the PCA method to extract the eigen-features of the samples, use the extracted features as training data to obtain the basis vectors of the feature subspace, and project the samples onto the feature subspace according to the basis vectors to obtain the coordinates of the samples in the feature subspace. Build the training library matrix A = [A_1, A_2, ..., A_k] ∈ R^(m×n), where m is the dimension of each sample after PCA sampling, n is the total number of samples in the training library, k is the total number of sample classes in the training library, and A_i is the set of training pictures of class i;
(2) Project the picture under test onto said feature subspace to obtain its coordinates x in the feature subspace;
(3) Compute the weight coefficients w_j and make a preliminary judgment, comprising the following steps:
(31) Define the error function
ε(w) = || x − Σ_{j=1..n} w_j x_j ||^2, subject to Σ_{j=1..n} w_j = 1,
where x_j is a sample of the training library, 1 ≤ j ≤ n, j a natural number;
(32) The local covariance matrix is C_{lm} = (x − x_l)^T (x − x_m); the weight coefficients are computed as
w_k = Σ_l (C^{-1})_{kl} / Σ_{l,m} (C^{-1})_{lm},
where 1 ≤ k ≤ n, k a natural number; 1 ≤ l ≤ n, 1 ≤ m ≤ n, l and m natural numbers;
(33) From said w_j, compute the weight coefficient δ_i(x) of each sample class in the training library;
(34) Compute the weight-vector discriminant index W(x);
(35) Design the threshold τ ∈ (0, 1) of the weight-vector discriminant index;
(36) Compare the weight discriminant index W(x) with the threshold τ: if W(x) > τ, directly output the class whose weight coefficient has the largest norm as the classification result;
(4) If W(x) ≤ τ, proceed as follows:
(41) Revise the training library: pick out the X sample classes with the largest weights and re-establish the training library matrix A' = [A_max1, A_max2, ..., A_maxX] ∈ R^(m×n);
(42) Compute the simplified feature lines: any two images x_i, x_j in the revised training library determine a feature line x_i x_j in the feature space;
(43) Compute the distance from the coordinates of the picture under test in the feature subspace to each said feature line;
(44) Classify according to said distances, where N_k is the number of samples in each class, 1 ≤ k ≤ n, and k_c is the number of samples of class C.
Beneficial effects: the face discrimination method based on nearest feature lines provided by the invention takes nearest-feature-line theory as its foundation, defines a new weight coefficient W(x), and, using criteria and a threshold-calculation method based on W(x), builds a face classifier suited to illumination and pose variation. Compared with other classifiers, its computational complexity is similar while its recognition rate is higher and its robustness better. Under illumination variation and multi-pose faces, the recognition success rate of the method exceeds 98%. The method reaches a high recognition rate with many kinds of feature data (such as PCA, LDA, and random sampling) and still does so when the sample feature dimension is small; this property reduces the sampling requirement and the data storage space, thereby lowering the cost of face recognition and making the method better suited to resource-constrained hardware environments (such as battery power and small storage capacity). With noise and occlusion below 50%, the method retains a good recognition rate; compared with classical algorithms such as NFL and KNN, it has a better recognition success rate and better robustness, adapts well to face recognition in harsh environments, and offers a substantial improvement in computational complexity.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the classification effect when W(x) = 0.47;
Fig. 3 shows the classification effect when W(x) = 0.98;
Fig. 4 shows the influence of the weight discriminant index on the recognition success rate;
Fig. 5 compares this algorithm with the NFL algorithm under superimposed salt-and-pepper noise;
Fig. 6 compares this algorithm with the NNL algorithm under superimposed salt-and-pepper noise;
Fig. 7 compares this algorithm with the KNN algorithm under superimposed salt-and-pepper noise;
Fig. 8 compares this algorithm with the NFL algorithm under superimposed block occlusion;
Fig. 9 compares this algorithm with the NNL algorithm under superimposed block occlusion;
Fig. 10 compares this algorithm with the KNN algorithm under superimposed block occlusion.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, a face discrimination method based on nearest feature lines comprises the steps of:
(1) Build the training library: use the PCA method to extract the eigen-features of the samples, use the extracted features as training data to obtain the basis vectors of the feature subspace, and project the samples onto the feature subspace according to the basis vectors to obtain the coordinates of the samples in the feature subspace. Build the training library matrix A = [A_1, A_2, ..., A_k] ∈ R^(m×n), where m is the dimension of each sample after PCA sampling, n is the total number of samples in the training library, k is the total number of sample classes in the training library, and A_i is the set of training pictures of class i;
(2) Project the picture under test onto the feature subspace to obtain its coordinates x in the feature subspace;
(3) Compute the weight coefficients w_j and make a preliminary judgment, comprising the following steps:
(31) Define the error function
ε(w) = || x − Σ_{j=1..n} w_j x_j ||^2, subject to Σ_{j=1..n} w_j = 1,
where x_j is a sample of the training library, 1 ≤ j ≤ n, j a natural number;
(32) The local covariance matrix is C_{lm} = (x − x_l)^T (x − x_m); the weight coefficients are computed as
w_k = Σ_l (C^{-1})_{kl} / Σ_{l,m} (C^{-1})_{lm},
where 1 ≤ k ≤ n, k a natural number; 1 ≤ l ≤ n, 1 ≤ m ≤ n, l and m natural numbers;
(33) Compute the weight coefficient δ_i(x) corresponding to each class, i.e. the portion of the weight coefficients w_j that corresponds to each sample class of the training library;
(34) Compute the weight discriminant index
W(x) = (k · max_i ||δ_i(x)||_2 / ||x||_2 − 1) / (k − 1),
where k is the total number of sample classes;
(35) Judge according to the value of the weight discriminant index W(x):
If W(x) = 1, then max_i ||δ_i(x)||_2 / ||x||_2 = 1, indicating that the weight coefficients are essentially concentrated in a single class;
If W(x) = 0, the weight norms are equal across all classes, indicating that the weight coefficients are spread almost evenly over every class;
Therefore a threshold τ ∈ (0, 1) of the weight discriminant index can be designed to represent the distribution of the weight coefficients;
(36) Compare the weight discriminant index W(x) with the threshold τ:
If W(x) > τ, the weight coefficients are relatively concentrated and the classification effect is good; the sample class with the smallest residual can be output directly as the classification result;
If W(x) ≤ τ, the weight coefficients are poorly distributed and the classification effect is bad; the scope of the training library needs to be narrowed and classification performed again;
(4) If W(x) ≤ τ, proceed as follows:
(41) Revise the training library: pick out the X sample classes with the largest weights and re-establish the training library matrix A' = [A_max1, A_max2, ..., A_maxX] ∈ R^(m×n);
(42) Compute the simplified feature lines: any two images x_i, x_j in the revised training library determine a feature line x_i x_j in the feature space;
(43) Compute the distance from the coordinates of the picture under test in the feature subspace to each said feature line;
(44) Classify according to said distances, where N_k is the number of samples in each class, 1 ≤ k ≤ n, and k_c is the number of samples of class C.
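The weight computation of steps (31)-(34) can be sketched in NumPy. This is a minimal sketch under stated assumptions: the constrained least-squares solution w = C^{-1}1 / (1^T C^{-1} 1), a small regularization term the patent does not specify, the norm of the full coefficient vector w in the denominator of W(x), and illustrative function names (`reconstruction_weights`, `weight_index`):

```python
import numpy as np

def reconstruction_weights(x, X):
    """Solve min ||x - sum_j w_j x_j||^2 s.t. sum_j w_j = 1, using the
    local covariance matrix C[l, m] = (x - x_l) . (x - x_m).
    X holds the training samples as columns."""
    diffs = x[None, :] - X.T                       # row j is x - x_j
    C = diffs @ diffs.T                            # local covariance matrix
    C += np.eye(C.shape[0]) * 1e-8 * np.trace(C)   # regularization (assumption)
    w = np.linalg.solve(C, np.ones(C.shape[0]))    # proportional to C^{-1} 1
    return w / w.sum()                             # enforce sum_j w_j = 1

def weight_index(w, labels):
    """Concentration index W(x): close to 1 if all weight lies in one class,
    close to 0 if spread evenly over the k classes (SCI-style formula)."""
    classes = sorted(set(labels))
    k = len(classes)
    labels = np.asarray(labels)
    norms = [np.linalg.norm(w[labels == c]) for c in classes]
    return (k * max(norms) / np.linalg.norm(w) - 1.0) / (k - 1.0)
```

When the test point is an affine combination of one class's samples, the weights concentrate there and the index approaches 1, triggering the direct-output branch of step (36).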
Some detailed issues in the specific implementation of the invention are described below.
1. The test databases selected are the ORL, UMIST, and Extended YaleB face databases; all three contain face images whose main variations are direction and angle.
2. The PCA method is used to extract features; experiments show that, relative to random sampling, PCA achieves a higher success rate.
1) Read in the face database and normalize it. For each person in the library, select a number of images to form the training set; the rest form the test set. If a normalized image is N×M, concatenating its pixel rows yields an N·M-dimensional vector, which can be regarded as a point in an N·M-dimensional space; via the KL transform, the image can be described in a low-dimensional subspace.
2) Suppose the face database contains N face images, represented as vectors X_1, X_2, ..., X_N. The mean face image is
μ = (1/N) Σ_{i=1..N} X_i
from which the difference image of each image is obtained as X_i' = X_i − μ.
3) Compute the covariance matrix C and the eigenvalues λ_k of C with their corresponding eigenvectors μ_k. The direct computation is expensive; to reduce it, assemble the difference images into a matrix X' = [X_1', X_2', ..., X_N'], so that the covariance matrix can be written
C = (1/N) X'(X')^T.
According to linear-algebra theory, the problem of computing the eigenvalues λ_j and corresponding eigenvectors Φ_j of X'(X')^T is converted into computing the eigenvalues λ_j and corresponding eigenvectors Φ_j' of (X')^T X'; once Φ_j' is obtained, Φ_j can be recovered as Φ_j = X' Φ_j'. The eigenvalues λ_k of the matrix C then follow from the SVD theorem.
4) Project the training images onto the feature subspace: project the difference images of all N face images in the database onto this space to obtain their projection vectors Y_1, Y_2, ..., Y_N:
(Y_i)^T = [y_{1i}, y_{2i}, ..., y_{Mi}], i = 1, 2, ..., N
y_{ji} = (u_j)^T X_i', j = 1, 2, ..., M
where u_j is an eigenvector of the basis and X_i' is the difference image of training image i.
Construct the training matrix A = [Y_1, Y_2, ..., Y_N], with the image vectors arranged in class order.
3. Classification. First, project the given test picture into the feature subspace to obtain its coordinate vector x.
1) Compute the weight coefficients and make a preliminary judgment. Define the error function
ε(w) = || x − Σ_{j=1..n} w_j x_j ||^2, subject to Σ_{j=1..n} w_j = 1,
where x_j is a sample of the training library, n samples in total, 1 ≤ j ≤ n, j a natural number;
2) The local covariance matrix is C_{lm} = (x − x_l)^T (x − x_m); the weight coefficients are computed as
w_k = Σ_l (C^{-1})_{kl} / Σ_{l,m} (C^{-1})_{lm},
where 1 ≤ k ≤ n, k a natural number; 1 ≤ l ≤ n, 1 ≤ m ≤ n, l and m natural numbers;
3) Compute the weight coefficient δ_i(x) corresponding to each class, i.e. the portion of the weight coefficients w that corresponds to each sample class of the training library, and compute the weight discriminant index W(x).
4. Design a threshold τ ∈ (0, 1) of the weight discriminant index to represent the distribution of the weight coefficients.
5. Design
W(x) = (k · max_i ||δ_i(x)||_2 / ||x||_2 − 1) / (k − 1),
where k is the total number of sample classes in the training library:
If W(x) > τ, the weight coefficients are relatively concentrated; directly output the class whose weight coefficient has the largest norm as the classification result;
If W(x) ≤ τ, the weight coefficients are poorly distributed; the scope of the training library needs to be narrowed and classification performed again, as follows.
6. A suitable choice of τ can effectively improve the recognition success rate, as shown in Fig. 4:
1) Pick out the X classes with the largest weights and re-establish a new, smaller training library matrix A' = [A_max1, A_max2, A_max3]. In most cases the correct class is contained in the revised, smaller training library, so the identification range is narrowed; X is given by the formula in the description.
2) Compute the simplified feature lines: any two images x_i, x_j in the revised training library determine a feature line x_i x_j in the feature space.
3) Compute the distance from the coordinates of the picture under test in the feature subspace to each said feature line.
4) Classify according to said distances, where N_k is the number of samples in each class, 1 ≤ k ≤ n, and k_c is the number of samples of class C.
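The narrowing-and-reclassifying procedure of steps 1)-4) above can be sketched as follows. This is a minimal sketch: the names `simplified_nfl` and `line_distance` are illustrative, the per-class weight norms are assumed to be given, and X is passed as a parameter rather than computed by the patent's (unreproduced) formula:

```python
import numpy as np

def line_distance(y, a, b):
    """Distance from y to the feature line through samples a and b."""
    d = b - a
    t = np.dot(y - a, d) / np.dot(d, d)
    return np.linalg.norm(y - (a + t * d))

def simplified_nfl(y, class_weights, library, top_x=3):
    """Keep the top_x classes with the largest weight norms, then classify
    y by the nearest feature line among them only.
    class_weights: dict label -> ||delta_i(x)||; library: label -> samples."""
    kept = sorted(class_weights, key=class_weights.get, reverse=True)[:top_x]
    best_label, best_dist = None, np.inf
    for label in kept:                        # revised training library A'
        samples = library[label]
        for i in range(len(samples)):
            for j in range(i + 1, len(samples)):
                d = line_distance(y, samples[i], samples[j])
                if d < best_dist:
                    best_label, best_dist = label, d
    return best_label
```

Restricting the pairwise search to the X candidate classes is what removes most of classical NFL's cost while keeping the correct class in play in most cases.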
The experimental results of the invention are described in detail below:
1. The databases adopted in the experiments are the international ORL, UMIST, and Extended YaleB face databases. The ORL library contains 40 volunteers with 10 pictures each at 92×112 pixels, 400 pictures in total; for each person, 5 images are selected for the training library and the other 5 serve as test images. The UMIST library contains 20 volunteers; 18 images per person are used at 92×112 pixels, of which 3 serve as training images and the rest as test images. The Extended YaleB library contains 38 volunteers; 58 images per person are used at 168×192 pixels.
2. Experiment 1: Fig. 4 shows the effect of different values of W(x) on the test success rate and time; this experiment is carried out on the UMIST library. The abscissa is the value of W(x), from 0 to 1; the left ordinate is the recognition success rate and the right ordinate is the time. The experiment shows that as W(x) increases, once W(x) > 0.5 the algorithm achieves a higher success rate than the NFL algorithm, while the test time also increases somewhat.
3. Experiment 2: Figs. 5-10 show experiments with noise superimposed on the test images, carried out on the ORL face database with randomface, eigenface, and fisherface selected as feature-extraction modes. Figs. 5-7 show superimposed salt-and-pepper noise and Figs. 8-10 superimposed random block occlusion. The abscissa is the percentage of the image occupied by noise; the ordinate is the recognition success rate. The experiments show that under superimposed noise the present algorithm has a better recognition effect than the NFL, KNN, and NNL algorithms; in particular, with eigenface feature extraction, the algorithm gains nearly 20% in recognition success rate at a 50% noise proportion.
4. Experiment 3: Table 1 is the algorithm-complexity test, carried out on the Extended YaleB library with the three feature-extraction modes randomface, eigenface, and fisherface. The present algorithm has an algorithmic complexity close to that of classical sorting algorithms such as KNN and NNL, yet its recognition success rate and robustness are both better than those of the classical algorithms. Compared with other algorithms such as NFL, NFP, and SVM, its test time is significantly reduced.
Table 1 test operation time (s)

| | Randomfaces | Eigenfaces | Fisherfaces |
|---|---|---|---|
| SFL | 110.88 | 143.42 | 188.85 |
| KNN | 106.24 | 112.81 | 135.06 |
| NNL | 107.95 | 133.76 | 175.51 |
| NFL | 598.23 | 639.03 | 789.35 |
| NFP | 7890.06 | 9180.31 | 9862.89 |
| SVM | 1824.08 | 483.81 | 565.25 |
The above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the invention, and such improvements and modifications should also be considered within the protection scope of the invention.
Claims (1)
1. A face discrimination method based on nearest feature lines, characterized in that it comprises the steps of:
(1) building the training library: using the PCA method to extract the eigen-features of the samples, using the extracted features as training data to obtain the basis vectors of the feature subspace, and projecting the samples onto the feature subspace according to said basis vectors to obtain the coordinates of the samples in the feature subspace; building the training library matrix A = [A_1, A_2, ..., A_k] ∈ R^(m×n), where m is the dimension of each sample after PCA sampling, n is the total number of samples in the training library, k is the total number of sample classes in the training library, and A_i is the set of training pictures of class i;
(2) projecting the picture under test onto said feature subspace to obtain its coordinates x in the feature subspace;
(3) computing the weight coefficients w_j and making a preliminary judgment, comprising the steps of:
(31) defining the error function
ε(w) = || x − Σ_{j=1..n} w_j x_j ||^2, subject to Σ_{j=1..n} w_j = 1,
where x_j is a sample of the training library, 1 ≤ j ≤ n, j a natural number;
(32) computing the weight coefficients from the local covariance matrix C_{lm} = (x − x_l)^T (x − x_m) as
w_k = Σ_l (C^{-1})_{kl} / Σ_{l,m} (C^{-1})_{lm},
where 1 ≤ k ≤ n, k a natural number; 1 ≤ l ≤ n, 1 ≤ m ≤ n, l and m natural numbers;
(33) computing from said w_j the weight coefficient δ_i(x) of each sample class in the training library;
(34) computing the weight-vector discriminant index W(x);
(35) designing the threshold τ ∈ (0, 1) of the weight-vector discriminant index;
(36) comparing the weight discriminant index W(x) with the threshold τ: if W(x) > τ, directly outputting the class whose weight coefficient has the largest norm as the classification result;
(4) if W(x) ≤ τ, proceeding as follows:
(41) revising the training library: picking out the X sample classes with the largest weights and re-establishing the training library matrix A' = [A_max1, A_max2, ..., A_maxX] ∈ R^(m×n);
(42) computing the simplified feature lines: any two images x_i, x_j in the revised training library determine a feature line x_i x_j in the feature space;
(43) computing the distance from the coordinates of the picture under test in the feature subspace to each said feature line;
(44) classifying according to said distances, where N_k is the number of samples in each class, 1 ≤ k ≤ n, and k_c is the number of samples of class C.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410307765.XA CN104063715B (en) | 2014-06-30 | 2014-06-30 | A kind of face classification method based on the nearest feature line |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104063715A true CN104063715A (en) | 2014-09-24 |
CN104063715B CN104063715B (en) | 2017-05-31 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109800723A (en) * | 2019-01-25 | 2019-05-24 | 山东超越数控电子股份有限公司 | A kind of recognition of face and the computer booting system and method for staying card is logged in violation of rules and regulations |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102163279A (en) * | 2011-04-08 | 2011-08-24 | 南京邮电大学 | Color human face identification method based on nearest feature classifier |
CN102402784A (en) * | 2011-12-16 | 2012-04-04 | 武汉大学 | Human face image super-resolution method based on nearest feature line manifold learning |
CN103345621A (en) * | 2013-07-09 | 2013-10-09 | 东南大学 | Face classification method based on sparse concentration index |
Non-Patent Citations (2)
Title |
---|
STAN Z. LI et al.: "Face Recognition Using the Nearest Feature Line Method", IEEE Transactions on Neural Networks *
SHEN Jie et al.: "A Supervised Locally Linear Embedding Algorithm for Face Recognition and Its Improvement" (in Chinese), Computer Applications and Software (《计算机应用与软件》) *