CN103020589B - A kind of single training image per person method - Google Patents
- Authority
- CN
- China
- Prior art keywords
- sample
- training
- training sample
- face
- photo
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a single-sample face recognition method comprising the following steps: 1) input face sub-feature training sample material; 2) construct training samples; 3) extract P sub-features from each training sample; 4) for any given training sample, compute the differences between its two images with the P sub-feature metric modules and construct the P-dimensional sample feature data vector v of the sample; if the two photos in the training sample show the same person, the response value of v is r=1, otherwise r=0; 5) obtain the training result data set of machine learning from the vectors of step 4); 6) input the two face photos to be compared and carry out recognition. By building many training sample sets for face sub-features in advance, the invention achieves the ability to recognize face sub-features and, combined with sub-feature fusion, achieves face recognition from a single training sample.
Description
Technical field
The present invention relates to a single-sample face recognition method, and belongs to the technical field of face recognition technology (Face Recognition Technique, FRT).
Background technology
Face recognition is one of the most representative and most challenging research directions in the field of biometrics. It refers to identifying one or more faces in a static or dynamic scene, using image processing and/or pattern recognition techniques, against a known library of face samples.
Face recognition faces two situations: one in which training samples are plentiful, and another in which they are scarce. Many face recognition methods find it difficult to achieve good recognition results when samples are scarce, as in applications such as identity-card or passport verification, where only one face image per person is available for training the recognition system.
For applications where training samples are scarce, the single-training-sample problem has been posed. It refers to the setting in which the face feature library stores only one photo per enrolled person, so that when it must be judged whether a person under test matches an enrolled person, the only option is to compare the photo captured on the spot with the single photo stored in the face feature library.
Several kinds of methods are generally used for the single-training-sample face recognition problem:
Method one: convert the single sample in the face feature library into multiple samples by a mapping algorithm, then train a classifier with a learning algorithm, thereby turning the problem into a many-training-sample face recognition problem. The drawback is that converting a single sample into multiple samples distorts the sample content, so the training effect falls well below that of a genuine many-training-sample face recognition algorithm.
Method two: build a three-dimensional model of the face from the single face sample, converting the two-dimensional image recognition problem into a three-dimensional model recognition problem. Such methods are still immature; an accurate three-dimensional model cannot yet be built from a two-dimensional image.
Method three: divide the face region into several regions of equal size, and train a classifier for each region using a large number of face photo samples unrelated to the task. During actual recognition, each region of the image to be identified is classified separately, and the comparison result is decided from the per-region similarities between the two images. Such methods have some effect, but the recognition rate is low and they lack practical value.
Summary of the invention
The present invention proposes a new single-sample face recognition method. By building many training sample sets for face sub-features in advance, it achieves the ability to recognize face sub-features and, combined with sub-feature fusion, achieves face recognition from a single training sample.
The present invention is by the following technical solutions:
A single-sample face recognition method comprising the following steps:
1) input face sub-feature training sample material: prepare a group of face photos of capacity M = m[1] + m[2] + … + m[N], where N is the number of people who participated in shooting the samples and m[i] (1 ≤ i ≤ N, m[i] ≥ 1) is the total number of photos of the i-th person under the given different shooting conditions;
2) construct training samples: pair the M training materials two by two, producing M × M training samples of paired face photos;
3) extract P sub-features from each training sample, and from the differences between the corresponding sub-features of the two photos in each training sample obtain the P sub-feature metric modules of each training sample;
4) for any given training sample, compute the differences between its two images according to the P sub-feature metric modules and construct the P-dimensional sample feature data vector v of the sample; if the two photos in the training sample show the same person, the response value of v is r = 1, otherwise r = 0;
5) from the M × M training vectors and corresponding response values of step 4), obtain the training result data set of machine learning by a machine learning method;
6) input the two face photos to be compared, call the P sub-feature metric modules to compute the P distances in the topological sense of each feature space, and form the vector v′ to be tested; according to the machine learning algorithm and training result data set of step 5), predict the value r′ corresponding to v′; when r′ = 1, judge that the two photos correspond to the same person; when r′ = 0, judge that they correspond to different people.
According to the above single-sample face recognition method of the present invention, appropriate face sub-feature training samples are used to construct the sub-feature metric modules and thereby generate the P-dimensional sample data vector v, and a machine learning algorithm forms the training result data set. In step 6), according to the machine learning algorithm used and the training result data set, the single stored training photo of a person is compared against the input photo to be identified. This approach substantially raises the recognition rate and gives the single-sample face recognition method industrial application prospects. In step 6), the difference of a sub-feature refers to the distance between the two sub-feature vectors in the topological space in which they live.
Preferably, the above single-sample face recognition method further includes, before step 2), a step of scale standardization of the sample material: unify the average pupil coordinates of the person across all photos and the interpupillary distance within each photo, and normalize the photos to the same size.
The above single-sample face recognition method further includes, after the scale standardization of the sample material, a step of converting the sample material to grayscale.
Preferably, the above single-sample face recognition method further includes a step of brightness standardization of the obtained grayscale photos, which reduces the computational load of subsequent steps.
Further, to reduce the computational load of subsequent steps, in the above single-sample face recognition method the brightness standardization first performs face detection and crops the face region, then standardizes the average brightness and contrast of the face.
Preferably, in the above single-sample face recognition method, the standard for the average face brightness is 127 and the contrast standard is a brightness standard deviation of 32, which gives good discrimination.
In the above single-sample face recognition method, the normalized photo size in step 2) is 240 × 320 pixels with an interpupillary distance of 64 pixels, which keeps the computational load relatively small while still satisfying recognition.
In the above single-sample face recognition method, for an RGB colour photo, the conversion to a grayscale image reads the brightness values of the 3 channels of each pixel and applies Y = ((R*299) + (G*587) + (B*114)) / 1000.
In the above single-sample face recognition method, the number of sub-features is no fewer than 6 and no more than 38; a suitable number of sub-features is selected to match the processing and storage capacity of the equipment.
In the above single-sample face recognition method, the machine learning method is selected from artificial neural network algorithms, support vector machine algorithms, Bayesian classification algorithms and decision tree algorithms.
Detailed description of the invention
Existing single-sample face recognition methods generally achieve low recognition rates, mostly around 65%, and therefore have no market prospects. The inventors believe that only a recognition rate above 90% has industrial application value.
According to the present invention, a single-sample face recognition method achieves single-sample face recognition by effectively fusing many sub-recognition features. The concrete steps are described below in outline form:
1. Obtain sample material: its capacity is M = m[1] + m[2] + … + m[N], where N is the number of people who participated in shooting the training samples, and m[i] (1 ≤ i ≤ N, m[i] ≥ 1) is the number of photos of the i-th person under different shooting conditions (illumination, pose, expression, and so on). The larger this quantity, the better the final result, but the computational load also grows accordingly.
2. Scale standardization of the sample material, to facilitate subsequent processing: the collected portrait photos are normalized in size according to a unified standard.
2-1. Following 2, uniformly scale, rotate, translate and crop the sample material so that the photo size is uniformly 240 × 320, the mean ordinate of the two pupils is 160, the mean abscissa of the pupils is 120, and the interpupillary distance is 64 pixels. Which of scaling, rotation and translation are applied depends on the original image: for example, if the angle is already correct, no rotation is needed.
Note: in image processing, rows and columns are indexed by pixel, and the coordinates correspond to pixel positions.
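The alignment of step 2-1 reduces to a similarity transform that maps the two detected pupil centres onto the canonical positions (88, 160) and (152, 160) of the 240 × 320 frame (pupils level at y = 160, centred at x = 120, 64 pixels apart). A minimal sketch, assuming the pupil centres are already detected; the function names are illustrative, not part of the original disclosure:

```python
import math

# Canonical pupil positions in the 240x320 frame described in step 2-1.
LEFT_TARGET = (88.0, 160.0)
RIGHT_TARGET = (152.0, 160.0)

def alignment_transform(left_pupil, right_pupil):
    """Return (scale, angle_rad, tx, ty) of the similarity transform that
    maps the detected pupil centres onto the canonical positions.
    A point p maps to: scale * R(angle) @ p + (tx, ty)."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    dist = math.hypot(dx, dy)
    scale = 64.0 / dist                 # canonical interpupillary distance
    angle = -math.atan2(dy, dx)         # rotate the eye line to horizontal
    c, s = math.cos(angle), math.sin(angle)
    # Apply scale + rotation to the left pupil, then translate onto its target.
    lx = scale * (c * left_pupil[0] - s * left_pupil[1])
    ly = scale * (s * left_pupil[0] + c * left_pupil[1])
    return scale, angle, LEFT_TARGET[0] - lx, LEFT_TARGET[1] - ly

def apply_transform(p, t):
    """Map a point p = (x, y) through the transform t."""
    scale, angle, tx, ty = t
    c, s = math.cos(angle), math.sin(angle)
    return (scale * (c * p[0] - s * p[1]) + tx,
            scale * (s * p[0] + c * p[1]) + ty)
```

With pupils already level at 64 px spacing, e.g. (100, 200) and (164, 200), the transform degenerates to a pure translation; a tilted eye line additionally produces the rotation that levels it.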
3. Grayscale conversion of the sample material: the RGB colour image is converted to a grayscale image.
3-1. Following 3, the formula Y = ((R*299) + (G*587) + (B*114)) / 1000 can be used to convert the RGB colour image to a grayscale image.
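The conversion of 3-1 can be sketched per pixel as follows; truncating integer division is an assumption (it keeps Y in 0–255 for 8-bit channel values):

```python
def rgb_to_gray(r, g, b):
    """Integer luma per the weighting used above:
    Y = ((R*299) + (G*587) + (B*114)) / 1000, with truncating division."""
    return ((r * 299) + (g * 587) + (b * 114)) // 1000
```

For example, pure white (255, 255, 255) maps to 255 and pure red (255, 0, 0) maps to 76.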
4. Brightness standardization: standardize the average brightness and contrast of the face.
4-1. Following 4, set the average face brightness of the photo to 127 and the brightness standard deviation to 32.
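Step 4-1 amounts to a linear rescaling of the grayscale values so that the image has mean 127 and standard deviation 32. A sketch with NumPy, assuming the input is already the cropped face region; the clipping to [0, 255] is an assumption:

```python
import numpy as np

def standardize_brightness(gray, target_mean=127.0, target_std=32.0):
    """Linearly rescale a grayscale image so that its mean brightness is
    target_mean and its standard deviation is target_std, clipped to [0, 255]."""
    g = np.asarray(gray, dtype=np.float64)
    mean, std = g.mean(), g.std()
    if std == 0:
        return np.full_like(g, target_mean)  # flat image: set to target mean
    out = (g - mean) / std * target_std + target_mean
    return np.clip(out, 0.0, 255.0)
```

As long as no values are clipped, the output has exactly the target mean and standard deviation regardless of the input's original brightness and contrast.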
5. Construct training samples: pair the M training materials two by two, producing M × M face photo pairs; these pairs are the training samples.
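The pairing of step 5 can be sketched as follows; representing each training material as a (person_id, photo) tuple is an assumption made for illustration:

```python
from itertools import product

def build_training_pairs(materials):
    """materials: list of (person_id, photo) tuples, length M.
    Returns all M x M ordered pairs, each labelled with the response value
    r = 1 if both photos show the same person, else r = 0 (cf. step 7)."""
    pairs = []
    for (id_a, photo_a), (id_b, photo_b) in product(materials, materials):
        r = 1 if id_a == id_b else 0
        pairs.append(((photo_a, photo_b), r))
    return pairs
```

With N people of m[i] photos each, the positive pairs number m[1]² + … + m[N]² out of M × M, so most pairs are negatives.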
6. From the M × M training samples, construct P (P ≥ 1) sub-feature metric modules; each sub-feature metric module can compute the difference between the corresponding features of the two photos in a training sample.
Seven sub-feature metric modules that passed verification are listed below as examples; in the same way, as many as 38 sub-feature metric modules can be built.
6-1. Following 6, one implementation of a sub-feature metric module computes the difference of the chin ordinates of the faces in the two photos of a sample.
6-2. Following 6, one implementation of a sub-feature metric module computes the difference of the face widths in the two photos of a sample.
6-3. Following 6, one implementation of a sub-feature metric module computes the difference of the lower-lip ordinates of the faces in the two photos of a sample.
6-4. Following 6, one implementation of a sub-feature metric module computes the area (pixel count) of the differing part of the eyebrow regions of the faces in the two photos of a sample.
6-5. Following 6, one implementation of a sub-feature metric module computes the sex difference of the faces in the two photos of a sample: the same sex gives a difference of 0, different sexes give 1.
6-6. Following 6, one implementation of a sub-feature metric module computes the difference of the mouth widths of the faces in the two photos of a sample.
6-7. Following 6, one implementation of a sub-feature metric module computes the sum of the coordinate distances between corresponding nodes of the ASM skeleton models of the faces in the two photos of a sample.
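The modules 6-1 to 6-7 share one shape: each takes the per-photo measurements of the two photos and returns a non-negative difference, and the modules together produce the sample vector v of step 7. A sketch over a subset of them, assuming the per-photo measurements (here plain dicts with illustrative keys) come from a separate landmark/attribute detector that is outside this sketch:

```python
def chin_y_diff(a, b):      return abs(a["chin_y"] - b["chin_y"])          # 6-1
def face_width_diff(a, b):  return abs(a["face_width"] - b["face_width"])  # 6-2
def lip_y_diff(a, b):       return abs(a["lower_lip_y"] - b["lower_lip_y"])  # 6-3
def sex_diff(a, b):         return 0 if a["sex"] == b["sex"] else 1        # 6-5
def mouth_width_diff(a, b): return abs(a["mouth_w"] - b["mouth_w"])        # 6-6

METRIC_MODULES = [chin_y_diff, face_width_diff, lip_y_diff,
                  sex_diff, mouth_width_diff]

def feature_vector(a, b):
    """P-dimensional sample feature vector v: one entry per metric module."""
    return [m(a, b) for m in METRIC_MODULES]
```

Adding a module (e.g. the ASM node-distance sum of 6-7) only appends one more entry to v, which is what lets the method scale from 7 up to 38 sub-features.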
7. For any given training sample, construct a P-dimensional sample feature data vector v from the differences of the two images computed by the P sub-feature metric modules. When the two photos in the training sample show the same person, the response value corresponding to vector v is r = 1, otherwise r = 0.
8. From the M × M training samples, M × M training vectors and corresponding response values are obtained, and by means of a machine learning algorithm the machine learning training result data set is obtained.
8-1. Following 8, the machine learning algorithm can be an artificial neural network algorithm.
8-2. Following 8, the machine learning algorithm can be a support vector machine algorithm.
8-3. Following 8, the machine learning algorithm can be a Bayesian classification algorithm.
8-4. Following 8, the machine learning algorithm can be a decision tree algorithm.
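As one of the options 8-1 to 8-4, a Bayesian classifier over the P-dimensional difference vectors can be sketched as a minimal Gaussian naive Bayes; the fitted means, variances and priors play the role of the training result data set of step 8. This simplified model is an assumption for illustration, not the patent's exact classifier:

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes over P-dimensional difference vectors;
    the fitted parameters are the 'training result data set'."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)[:, None, :]   # shape (n, 1, P)
        # Per-class Gaussian log-likelihood, summed over the P features.
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)
                          + (X - self.mu) ** 2 / self.var)
        scores = log_lik.sum(axis=2) + np.log(self.prior)
        return self.classes[scores.argmax(axis=1)]
```

Training on the M × M vectors with responses r ∈ {0, 1} then reduces step 9 to a single call of predict on the vector v′ to be tested.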
9. Construct the sample to be tested: given the two face photos to be compared, call the P sub-feature metric modules to compute P differences and form the vector v′ to be tested. According to the machine learning algorithm and training result data set of 8, predict the value r′ corresponding to v′. When r′ = 1, judge that the two photos correspond to the same person; when r′ = 0, judge that they correspond to different people.
The machine learning algorithms above are all in common use and are not described in detail here.
Verification shows that the recognition rate of the above recognition method is 92.5–96%.
One embodiment:
1. Create the sample material: create sample material of capacity M = N × 10 = 200 × 10 = 2000, where N = 200 is the number of people who participated in shooting the training samples, with 10 photos per person.
2. Uniformly scale, rotate, translate and crop the sample material so that the photo size is uniformly 240 × 320, the mean ordinate of the two pupils is 160, the mean abscissa of the pupils is 120, and the interpupillary distance is 64 pixels.
3. Grayscale conversion of the sample material: with the formula Y = ((R*299) + (G*587) + (B*114)) / 1000, the RGB colour image is converted to a grayscale image.
4. Brightness standardization: set the average face brightness of the photo to 127 and the brightness standard deviation to 32.
5. Construct training samples: pair the M = 2000 training materials two by two, producing M × M = 4,000,000 face photo pairs; these pairs are the training samples.
6. From the M × M = 4,000,000 training samples, construct P = 12 sub-feature metric modules; each sub-feature metric module can compute the difference between the corresponding features of the two photos in a training sample. The 12 sub-feature modules measure the following features respectively:
(1) eyebrow density;
(2) eyebrow width;
(3) nostril ordinate;
(4) nostril spacing;
(5) mouth-centre ordinate;
(6) upper-lip ordinate;
(7) an ASM model with 68 nodes;
(8) the distribution area of the eyebrows;
(9) the binarized shape of the eyes;
(10) the shape type of the mouth (classified by a clustering algorithm);
(11) the shape type of the nose (classified by a clustering algorithm);
(12) sex.
7. For any given training sample, construct a P = 12 dimensional sample feature data vector v from the differences of the two images computed by the P = 12 sub-feature metric modules. When the two photos in the sample show the same person, the response value corresponding to vector v is r = 1, otherwise r = 0.
8. From the M × M = 4,000,000 training samples, 4,000,000 training vectors and corresponding response values are obtained, and the machine learning training result data set can be obtained by means of a Bayes classifier.
9. Construct the sample to be tested: given the two face photos to be compared, call the P = 12 sub-feature metric modules to compute 12 differences and form the 12-dimensional vector v′ to be tested. According to the machine learning algorithm and training result data set of 8, predict the value r′ corresponding to v′. When r′ = 1, judge that the two photos correspond to the same person; when r′ = 0, judge that they correspond to different people.
Verification shows that the recognition rate of this method is 95%.
Claims (7)
1. A single-sample face recognition method, characterised by comprising the following steps:
1) input face sub-feature training sample material: prepare a group of face photos of capacity M = m[1] + m[2] + … + m[N], where N is the number of people who participated in shooting the samples and m[i] (1 ≤ i ≤ N, m[i] ≥ 1) is the total number of photos of the i-th person under the given different shooting conditions;
2) construct training samples: pair the M training materials two by two, producing M × M training samples of paired face photos;
3) extract P sub-features from each training sample, and from the differences between the corresponding sub-features of the two photos in each training sample obtain the P sub-feature metric modules of each training sample;
4) for any given training sample, compute the differences between its two images according to the P sub-feature metric modules and construct the P-dimensional sample feature data vector v of the sample; if the two photos in the training sample show the same person, the response value of v is r = 1, otherwise r = 0;
wherein P = 12 and the matched sub-features are: eyebrow density, eyebrow width, nostril ordinate, nostril spacing, mouth-centre ordinate, upper-lip ordinate, an ASM model with 68 nodes, the distribution area of the eyebrows, the binarized shape of the eyes, the shape type of the mouth and the shape type of the nose (each classified by a clustering algorithm), and sex;
5) from the M × M training vectors and corresponding response values of step 4), obtain the training result data set of machine learning by a machine learning method;
6) input the two face photos to be compared, call the P sub-feature metric modules to compute the P distances in the topological sense of each feature space, and form the vector v′ to be tested; according to the machine learning algorithm and training result data set of step 5), predict the value r′ corresponding to v′; when r′ = 1, judge that the two photos correspond to the same person; when r′ = 0, judge that they correspond to different people;
for an RGB colour photo, conversion to a grayscale image reads the brightness values of the 3 channels of each pixel and applies Y = ((R*299) + (G*587) + (B*114)) / 1000;
the machine learning method is selected from artificial neural network algorithms, support vector machine algorithms, Bayesian classification algorithms and decision tree algorithms.
2. The single-sample face recognition method according to claim 1, characterised by further including, before step 2), a step of scale standardization of the sample material: unify the average pupil coordinates of the person across all photos and the interpupillary distance within each photo, and normalize the photos to the same size.
3. The single-sample face recognition method according to claim 2, characterised by further including, after the scale standardization of the sample material, a step of converting the sample material to grayscale.
4. The single-sample face recognition method according to claim 3, characterised by further including a step of brightness standardization of the obtained grayscale photos.
5. The single-sample face recognition method according to claim 4, characterised in that the brightness standardization first performs face detection and crops the face region, then standardizes the average brightness and contrast of the face.
6. The single-sample face recognition method according to claim 5, characterised in that the standard for the average face brightness is 127 and the contrast standard is a brightness standard deviation of 32.
7. The single-sample face recognition method according to claim 2, characterised in that the normalized photo size is 240 × 320 pixels with an interpupillary distance of 64 pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210464629.2A CN103020589B (en) | 2012-11-19 | 2012-11-19 | A kind of single training image per person method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103020589A CN103020589A (en) | 2013-04-03 |
CN103020589B true CN103020589B (en) | 2017-01-04 |
Family
ID=47969180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210464629.2A Active CN103020589B (en) | 2012-11-19 | 2012-11-19 | A kind of single training image per person method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103020589B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927560B (en) * | 2014-04-29 | 2017-03-29 | 苏州大学 | A kind of feature selection approach and device |
WO2015180101A1 (en) * | 2014-05-29 | 2015-12-03 | Beijing Kuangshi Technology Co., Ltd. | Compact face representation |
CN106056074A (en) * | 2016-05-27 | 2016-10-26 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | Single training sample face identification method based on area sparse |
CN106407966B (en) * | 2016-11-28 | 2019-10-18 | 南京理工大学 | A kind of face identification method applied to attendance |
CN108038948B (en) * | 2017-12-26 | 2020-12-08 | 杭州数梦工场科技有限公司 | Passenger identity verification method and device and computer readable storage medium |
US11630995B2 (en) * | 2018-06-19 | 2023-04-18 | Siemens Healthcare Gmbh | Characterization of amount of training for an input to a machine-learned network |
CN110008934B (en) * | 2019-04-19 | 2023-03-24 | 上海天诚比集科技有限公司 | Face recognition method |
CN110967678A (en) * | 2019-12-20 | 2020-04-07 | 安徽博微长安电子有限公司 | Data fusion algorithm and system for multiband radar target identification |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101089874A (en) * | 2006-06-12 | 2007-12-19 | 华为技术有限公司 | Identify recognising method for remote human face image |
CN102194131A (en) * | 2011-06-01 | 2011-09-21 | 华南理工大学 | Fast human face recognition method based on geometric proportion characteristic of five sense organs |
Non-Patent Citations (2)
Title |
---|
Learning a Similarity Metric Discriminatively, with Application to Face Verification; Sumit Chopra et al.; IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005); 2005-06-25; pp. 1-8 *
Face Recognition Based on Principal Component Analysis; Li Wenge; China Master's Theses Full-text Database, Information Science and Technology; 2009-05-15; pp. 23-31, 49-50 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C56 | Change in the name or address of the patentee | ||
CP01 | Change in the name or title of a patent holder |
Address after: Shun High-tech Zone of Ji'nan City, Shandong Province, 250101 China, West Road No. 699
Patentee after: SYNTHESIS ELECTRONIC TECHNOLOGY CO., LTD.
Address before: Shun High-tech Zone of Ji'nan City, Shandong Province, 250101 China, West Road No. 699
Patentee before: Shandong Synthesis Electronic Technology Co., Ltd.