CN106874877A - An unconstrained face verification method combining local and global features - Google Patents
- Publication number
- CN106874877A CN106874877A CN201710090721.XA CN201710090721A CN106874877A CN 106874877 A CN106874877 A CN 106874877A CN 201710090721 A CN201710090721 A CN 201710090721A CN 106874877 A CN106874877 A CN 106874877A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The present invention provides an unconstrained face verification method combining local and global features. First, a face sample library is assembled in which each person has multiple face photos covering different poses, environments and times, and the 68 feature points of each face are extracted. Five kinds of features are then extracted from the facial feature points, namely the local features of the face (facial contour, eyes, mouth and nose) and the global feature of the face, and the 5 features are mapped into a kernel space. Each of the 5 features is then trained separately on the training set with the cascade Bayes method, yielding 5 groups of models. In the face verification stage, the face features of the input pictures are extracted first, the trained models then compute the similarity of each of the 5 feature pairs, and the mean of the 5 similarities is finally taken as the overall similarity, from which it is judged whether the two pictures show the same person. The present invention jointly considers the local and global features of the face, and solves the problem of face verification in unconstrained outdoor environments.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to an unconstrained face verification method combining local and global features.
Background technology
Face recognition has two directions: face verification and face identification. Face verification judges whether two faces belong to the same person, while face identification finds, in a face database, the identity corresponding to a given face. Because face pictures taken in unconstrained outdoor environments are affected by illumination, pose, age, clothing and other factors, face verification in such environments is very difficult.
Many methods have been proposed in recent years to improve face verification in unconstrained environments. They can roughly be divided into two classes: feature-based methods and metric-learning methods. Feature-based methods aim at extracting robust, discriminative features, so that the face features of different people differ as much as possible; classical face descriptors include SIFT, LBP, PEM and Fisherfaces. Metric-learning methods instead learn a distance metric from labeled sample data such that, under the learned metric, the distance between faces of the same person is smaller and the distance between faces of different people is larger; classical distance-metric-learning algorithms include LDML, CSML, PCCA and PMML.
At present, several face verification methods have been disclosed. Chinese invention patents "Face verification method based on intra-class and inter-class distances" (201310589074.9) and "A cross-age face verification method based on feature learning" (201510270145.8) both extract global features from the whole face, and are therefore easily disturbed by local attributes of the person being verified, such as hats, glasses and expression. Chinese invention patent "Face verification method and device based on multi-pose recognition" (201410795404.4) requires at least two face photos to be collected, so the method fails when the sample library contains only one face photo per person. Chinese invention patent "An unconstrained-environment face verification method based on block-wise deep neural networks" (201310664180.9) divides the face into multiple regions and extracts features block by block, but its regions are partitioned arbitrarily, without taking the distribution of the facial organs into account.
Content of the invention
The technical problem to be solved by the present invention is to provide an unconstrained face verification method combining local and global features. The method jointly considers the facial-contour feature, eye feature, mouth feature and nose feature of the face, which are local features, together with the global feature of the face, uses a cascade Bayesian model to score the similarity of each feature pair of the two faces, and finally takes the mean of the similarities, so that poor recognition caused by external factors such as local clothing and accessories can be effectively avoided.
In order to solve the above technical problem, an embodiment of the present invention provides an unconstrained face verification method combining local and global features, comprising the following steps:
Step 1: assemble a face training sample set containing the face pictures of 10,000 people, each person having at least 15 face photos taken under different poses, environments and times;
Step 2: detect the face region in each sample-set picture, extract its 68 feature points, and then perform face alignment and normalization;
Step 3: from the 68 feature points obtained in step 2, extract the facial-contour feature, eye feature, mouth feature and nose feature of the face, together with the global feature of the face;
Step 4: project each of the 5 features obtained in step 3 into an easily separable nonlinear space using an RBF kernel function;
Step 5: train on each of the 5 features obtained in step 4 with the cascade Bayesian algorithm, obtaining 5 groups of covariance matrices A and G;
Step 6: in the face verification stage, for the two input pictures, obtain the 5 features of each face photo by the method of step 3 and project them into the nonlinear space, giving the 5 features of the two people, denoted x_i and y_i (i = 1, ..., 5);
Step 7: using the matrices A and G obtained in step 5, compute the similarity of each of the 5 feature pairs from step 6, the computing formula being r_i(x_i, y_i) = x_i^T A_i x_i + y_i^T A_i y_i - 2 x_i^T G_i y_i;
Step 8: take the mean of the 5 similarities r_i from step 7 as the final similarity value, and compare it with a threshold to judge whether the two people are the same person.
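As a minimal, non-authoritative sketch of steps 7 and 8 above (assuming the five kernel-mapped feature vectors and the five trained (A, G) matrix pairs are already available; the threshold is an assumed free parameter, not a value given by the patent):

```python
import numpy as np

def joint_bayes_similarity(x, y, A, G):
    """Step-7 similarity: r(x, y) = x^T A x + y^T A y - 2 x^T G y."""
    return float(x @ A @ x + y @ A @ y - 2.0 * (x @ G @ y))

def verify(feats1, feats2, models, threshold):
    """Step 8: average the five per-feature similarities and compare
    the mean against a threshold. feats1/feats2 are the five feature
    vectors of the two faces; models holds the five (A, G) pairs."""
    scores = [joint_bayes_similarity(x, y, A, G)
              for x, y, (A, G) in zip(feats1, feats2, models)]
    mean_sim = float(np.mean(scores))
    return mean_sim >= threshold, mean_sim
```

For identical inputs x = y the score of each pair is x^T(2A - 2G)x, which vanishes whenever A = G; this gives a quick sanity check on the wiring.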
Wherein, the specific steps for extracting the 68 feature points of the face in step 2 are:
Step 2-1: for an input picture, detect the face with an Adaboost-based face detection algorithm; if no face is detected, return and continue with the next picture; if a face is detected, go to step 2-2;
Step 2-2: feed the face-region picture obtained in step 2-1 into the facial-feature-point detection module to obtain the 68 feature points of the face;
Step 2-3: align the face according to the feature points obtained in step 2-2;
Step 2-4: normalize the face, eliminating the influence of illumination.
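The photometric normalisation of step 2-4 is not specified in detail; a common choice, assumed here purely for illustration, is to rescale each grayscale face crop to zero mean and unit variance:

```python
import numpy as np

def normalize_illumination(gray):
    """Zero-mean, unit-variance rescaling of a grayscale face crop,
    one simple way to reduce global illumination differences."""
    g = np.asarray(gray, dtype=float)
    std = g.std()
    return (g - g.mean()) / std if std > 0 else g - g.mean()
```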
Wherein, the specific steps for extracting the face features in step 3 are:
Step 3-1: from the 68 feature points obtained in step 2, choose a point as reference point P, set up polar coordinates with P as the origin, and divide the polar plane into 36 sectors at 10 degree intervals; for each of the remaining 67 feature points, compute the angle between the horizontal positive direction and the line joining the point to the reference point, giving the angular histogram with respect to P; following this procedure, choose 6 points around the eyes, nose and mouth as reference points to obtain 6 histograms, which together serve as the facial-contour feature of the face;
Step 3-2: from the 68 feature points obtained in step 2, locate the eye region, compute the dense LBP features of the eye region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the eye feature;
Step 3-3: from the 68 feature points obtained in step 2, locate the mouth region, compute the dense LBP features of the mouth region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the mouth feature;
Step 3-4: from the 68 feature points obtained in step 2, locate the nose region, compute the dense LBP features of the nose region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the nose feature;
Step 3-5: from the 68 feature points obtained in step 2, locate the overall face region, compute the dense LBP features of the overall face region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the global feature of the face.
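The angular histogram of step 3-1 can be sketched as follows; landmark coordinates are passed in as plain 2-D points and the reference point P is given explicitly:

```python
import numpy as np

def angular_histogram(points, ref, bins=36):
    """Histogram (36 bins of 10 degrees) of the angles between the
    horizontal positive direction and the lines joining each landmark
    to the reference point, as described in step 3-1."""
    pts = np.asarray(points, dtype=float)
    d = pts - np.asarray(ref, dtype=float)
    # arctan2 gives (-180, 180]; wrap into [0, 360) before binning
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 360.0))
    return hist
```

Concatenating the histograms of the 6 chosen reference points then gives the facial-contour feature.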
Wherein, the detailed process of the cascade Bayesian training in step 5 is:
Step 5-1: from the facial-contour features of the faces obtained in step 3, compute the covariance matrix S_mu of the identity (between-person) component and the covariance matrix S_eps of the within-person variation;
Step 5-2: from the S_mu and S_eps computed in step 5-1, compute A and G, where A = (S_mu + S_eps)^-1 - (F + G), G = -(2 S_mu + S_eps)^-1 S_mu S_eps^-1, and F = S_eps^-1;
Step 5-3: repeat steps 5-1 and 5-2 for the eye feature, mouth feature, nose feature and global face feature obtained in step 3, finally obtaining the metric matrices A_i and G_i (i = 1, ..., 5).
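Assuming the cascade Bayes training follows the standard joint-Bayesian formulation (an assumption on our part, though consistent with the covariance matrices and the metric matrices A and G above), step 5-2 can be sketched as:

```python
import numpy as np

def joint_bayes_metrics(S_mu, S_eps):
    """Derive the metric matrices A and G of step 5-2 from the identity
    covariance S_mu and the within-person covariance S_eps:
        F = S_eps^-1
        G = -(2 S_mu + S_eps)^-1 S_mu S_eps^-1
        A = (S_mu + S_eps)^-1 - (F + G)
    """
    F = np.linalg.inv(S_eps)
    G = -np.linalg.inv(2.0 * S_mu + S_eps) @ S_mu @ F
    A = np.linalg.inv(S_mu + S_eps) - (F + G)
    return A, G
```

In the degenerate case S_mu = 0 (no identity variation at all) both A and G vanish, so every pair scores 0, which is a quick sanity check on the formulas.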
Wherein, step 6 comprises the following specific steps:
Step 6-1: read verification picture 1 and picture 2 and run face detection on each; if the two faces are not both detected, report that no face was detected and terminate the match; otherwise go to step 6-2;
Step 6-2: detect the 68 feature points of face 1 and face 2; if they are not detected, report that the match failed and terminate the match; otherwise go to step 6-3;
Step 6-3: extract the 5 features of face 1 and face 2 by the method described in step 3.
The above technical solution of the present invention has the following beneficial effects: the present invention jointly considers the facial-contour feature, eye feature, mouth feature and nose feature of the face, which are local features, together with the global feature of the face, uses a cascade Bayesian model to score the similarity of each feature pair of the two faces, and finally takes the mean of the similarities, so that poor recognition caused by external factors such as local clothing can be effectively avoided.
Brief description of the drawings
Fig. 1 is a schematic diagram of the distribution of the 68 facial feature points in the present invention;
Fig. 2 is a block diagram of the working principle of the present invention;
Fig. 3 is a flowchart of steps 6 to 8 of the present invention;
Fig. 4 is a schematic diagram of the face data set in the embodiment of the present invention;
Fig. 5 is a schematic diagram of the distribution of the 68 feature points of a face.
Specific embodiment
To make the technical problem to be solved, the technical solution and the advantages of the present invention clearer, they are described in detail below with reference to the accompanying drawings and a specific embodiment.
As shown in Fig. 1 and Fig. 3, an unconstrained face verification method combining local and global features comprises the following steps:
Step 1: assemble a face training sample set containing the face photos of 10,000 people, each person having at least 15 face photos taken under different poses, environments and times;
Step 2: detect the faces in the sample-set photos and extract the 68 feature points of each face, as shown in Fig. 1, then perform face alignment and normalization.
Wherein, the specific steps for extracting the 68 feature points of the face are:
Step 2-1: for an input picture, detect the face with an Adaboost-based face detection algorithm; if no face is detected, return and continue with the next picture; if a face is detected, go to step 2-2;
Step 2-2: feed the face-region picture obtained in step 2-1 into the facial-feature-point detection module to obtain the 68 feature points of the face;
Step 2-3: align the face according to the feature points obtained in step 2-2;
Step 2-4: normalize the face, eliminating the influence of illumination.
Step 3: from the 68 feature points obtained in step 2, extract the facial-contour feature, eye feature, mouth feature and nose feature of the face, together with the global feature of the face.
Wherein, the specific steps for extracting the face features are:
Step 3-1: from the 68 feature points obtained in step 2, choose a point as reference point P, set up polar coordinates with P as the origin, and divide the polar plane into 36 sectors at 10 degree intervals; for each of the remaining 67 feature points, compute the angle between the horizontal positive direction and the line joining the point to the reference point, giving the angular histogram with respect to P; following this procedure, choose 6 points around the eyes, nose and mouth as reference points to obtain 6 histograms, which together serve as the facial-contour feature of the face;
Step 3-2: from the 68 feature points obtained in step 2, locate the eye region, compute the dense LBP features of the eye region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the eye feature;
Step 3-3: from the 68 feature points obtained in step 2, locate the mouth region, compute the dense LBP features of the mouth region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the mouth feature;
Step 3-4: from the 68 feature points obtained in step 2, locate the nose region, compute the dense LBP features of the nose region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the nose feature;
Step 3-5: from the 68 feature points obtained in step 2, locate the overall face region, compute the dense LBP features of the overall face region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the global feature of the face.
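As an illustration of the dense-LBP building block used in steps 3-2 to 3-5 (the exact sampling grid and the WPCA step of the patent are not reproduced here), the basic 8-neighbour LBP code of a single 3x3 patch is:

```python
import numpy as np

def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch: each of the 8
    neighbours, taken clockwise from the top-left corner, contributes
    one bit that is set when the neighbour >= the centre value."""
    p = np.asarray(patch)
    center = p[1, 1]
    neighbors = [p[0, 0], p[0, 1], p[0, 2], p[1, 2],
                 p[2, 2], p[2, 1], p[2, 0], p[1, 0]]
    return sum(1 << i for i, v in enumerate(neighbors) if v >= center)
```

The dense LBP feature of a region is the concatenation of histograms of such codes over a grid of cells; WPCA then whitens and truncates that vector.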
Step 4: project each of the 5 features obtained in step 3 into an easily separable nonlinear space using an RBF kernel function.
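One common way to realise the RBF projection of step 4 is an empirical kernel map, representing a feature by its RBF similarities to a set of anchor features; the anchors and the gamma value below are illustrative assumptions, not values given by the patent:

```python
import numpy as np

def rbf_kernel_map(x, anchors, gamma=0.5):
    """Empirical RBF kernel map: k(x, a_j) = exp(-gamma * ||x - a_j||^2)
    for each anchor a_j, giving the nonlinear-space representation of x."""
    a = np.asarray(anchors, dtype=float)
    d2 = np.sum((a - np.asarray(x, dtype=float)) ** 2, axis=1)
    return np.exp(-gamma * d2)
```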
Step 5: train on each of the 5 features obtained in step 4 with the cascade Bayesian algorithm, obtaining 5 groups of covariance matrices A and G.
Wherein, the detailed process of the cascade Bayesian training is:
Step 5-1: from the facial-contour features obtained in step 3-1, compute the covariance matrix S_mu of the identity (between-person) component and the covariance matrix S_eps of the within-person variation;
Step 5-2: from S_mu and S_eps, compute A and G, where A = (S_mu + S_eps)^-1 - (F + G), G = -(2 S_mu + S_eps)^-1 S_mu S_eps^-1, and F = S_eps^-1;
Step 5-3: repeat steps 5-1 and 5-2 for the eye feature, mouth feature, nose feature and global face feature obtained in steps 3-2 to 3-5, finally obtaining the metric matrices A_i and G_i (i = 1, ..., 5).
Step 6: in the face verification stage, for the two input pictures, detect the faces, extract the face descriptor features, and project them into the nonlinear space, giving the 5 features of the two people, denoted x_i and y_i (i = 1, ..., 5); the specific steps are as follows:
Step 6-1: read verification picture 1 and picture 2 and run face detection on each; if the two faces are not both detected, report that no face was detected and terminate the match; otherwise go to step 6-2;
Step 6-2: detect the 68 feature points of face 1 and face 2; if they are not detected, report that the match failed and terminate the match; otherwise go to step 6-3;
Step 6-3: extract the 5 features of face 1 and face 2 by the method described in step 3.
Step 7: using the matrices A and G obtained in step 5, compute the similarity of each of the 5 feature pairs from step 6, the computing formula being r_i(x_i, y_i) = x_i^T A_i x_i + y_i^T A_i y_i - 2 x_i^T G_i y_i;
Step 8: take the mean of the 5 similarities r_i from step 7 as the final similarity value, and compare it with a threshold to judge whether the two people are the same person.
The present invention jointly considers the facial-contour feature, eye feature, mouth feature and nose feature of the face, which are local features, together with the global feature of the face, uses a cascade Bayesian model to score the similarity of each feature pair of the two faces, and finally takes the mean of the similarities, so that poor recognition caused by external factors such as local clothing can be effectively avoided.
Embodiment:
The model training stage comprises steps 1 to 5. The face data set is shown in Fig. 4: 10,000 people, each with at least 15 face pictures taken at different times.
Step 2 detects the facial feature points; taking Fig. 5 as an example, the 68 feature points of the face are detected.
Step 3 extracts the 5 groups of local and global features;
Step 4 maps the 5 groups of features into the kernel space using RBF kernel functions;
Step 5 trains with the cascade Bayesian algorithm to obtain the metric matrices A_i and G_i (i = 1, ..., 5); the model training stage then ends.
Using the trained model, i.e. the metric matrices A_i and G_i, steps 6 to 8 can judge whether two face pictures show the same person. The technique can be applied in many systems, for example:
(1) Sign-in and roll-call systems: only one face picture per person needs to be stored in the system; the input face picture is verified one by one against the faces stored in the system, recognizing the person and completing the roll call and sign-in.
(2) Offender face retrieval: the identity-card portrait of each person is stored in the public-security system; the input face picture of a suspect is compared one by one with the face pictures stored in the system, and the results are sorted by similarity value to return the closest candidates.
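The retrieval in use-case (2) amounts to sorting gallery identities by their similarity value against the probe face; a minimal sketch with made-up identities and scores:

```python
def rank_candidates(scores, top_k=3):
    """scores maps identity -> similarity value; return the top_k
    identities in order of decreasing similarity."""
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```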
The above is a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, several improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (5)
1. An unconstrained face verification method combining local and global features, characterized in that it comprises the following steps:
Step 1: assemble a face training sample set, the sample set being a library of face pictures of 10,000 different people, each person having at least 15 face photos taken under different poses, environments and times;
Step 2: detect the face region in each sample picture, extract the 68 feature points of the face, and then perform face alignment and normalization;
Step 3: from the 68 feature points obtained in step 2, extract the facial-contour feature, eye feature, mouth feature and nose feature of the face, together with the global feature of the face;
Step 4: project each of the 5 features obtained in step 3 into an easily separable nonlinear space using an RBF kernel function;
Step 5: train on each of the 5 features obtained in step 4 with the cascade Bayesian algorithm, obtaining 5 groups of covariance matrices A and G;
Step 6: in the face verification stage, for the two input face pictures, extract the 5 descriptor features of the face in each picture by the method of step 3 and project them into the nonlinear space, giving the 5 features of the two people, denoted x_i and y_i (i = 1, ..., 5);
Step 7: using the matrices A and G obtained in step 5, compute the similarity of each of the 5 feature pairs from step 6, the computing formula being r_i(x_i, y_i) = x_i^T A_i x_i + y_i^T A_i y_i - 2 x_i^T G_i y_i;
Step 8: take the mean of the 5 similarities r_i from step 7 as the final similarity value, and compare it with a threshold to judge whether the two people are the same person.
2. The cascade-Bayes-based unconstrained face verification method according to claim 1, characterized in that the specific steps for extracting the 68 feature points of the face in step 2 are:
Step 2-1: for an input picture, detect the face with an Adaboost-based face detection algorithm; if no face is detected, return and continue with the next picture; if a face is detected, go to step 2-2;
Step 2-2: feed the face-region picture obtained in step 2-1 into the facial-feature-point detection module to obtain the 68 feature points of the face;
Step 2-3: align the face according to the feature points obtained in step 2-2, eliminating the influence of pose;
Step 2-4: normalize the face, eliminating the influence of illumination.
3. The cascade-Bayes-based unconstrained face verification method according to claim 1, characterized in that the specific steps for extracting the face features in step 3 are:
Step 3-1: from the 68 feature points obtained in step 2, choose a point as reference point P, set up polar coordinates with P as the origin, and divide the polar plane into 36 sectors at 10 degree intervals; for each of the remaining 67 feature points, compute the angle between the horizontal positive direction and the line joining the point to the reference point, giving the angular histogram with respect to P; following this procedure, choose 6 points around the eyes, nose and mouth as reference points to obtain 6 histograms, which together serve as the facial-contour feature of the face;
Step 3-2: from the 68 feature points obtained in step 2, locate the eye region, compute the dense LBP features of the eye region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the eye feature;
Step 3-3: from the 68 feature points obtained in step 2, locate the mouth region, compute the dense LBP features of the mouth region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the mouth feature;
Step 3-4: from the 68 feature points obtained in step 2, locate the nose region, compute the dense LBP features of the nose region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the nose feature;
Step 3-5: from the 68 feature points obtained in step 2, locate the overall face region, compute the dense LBP features of the overall face region, and apply a WPCA transform to eliminate redundancy and extract the principal components; take the result as the global feature of the face.
4. The cascade-Bayes-based unconstrained face verification method according to claim 1, characterized in that the detailed process of the cascade Bayesian training in step 5 is:
Step 5-1: from the facial-contour features of the faces obtained in step 3, compute the covariance matrix S_mu of the identity (between-person) component and the covariance matrix S_eps of the within-person variation;
Step 5-2: from the S_mu and S_eps computed in step 5-1, compute A and G, where A = (S_mu + S_eps)^-1 - (F + G), G = -(2 S_mu + S_eps)^-1 S_mu S_eps^-1, and F = S_eps^-1;
Step 5-3: repeat steps 5-1 and 5-2 for the eye feature, mouth feature, nose feature and global face feature obtained in step 3, finally obtaining the metric matrices A_i and G_i (i = 1, ..., 5).
5. The cascade-Bayes-based unconstrained face verification method according to claim 1, characterized in that step 6 comprises the following specific steps:
Step 6-1: read verification picture 1 and picture 2 and run face detection on each; if the two faces are not both detected, report that no face was detected and terminate the match; otherwise go to step 6-2;
Step 6-2: detect the 68 feature points of face 1 and face 2; if they are not detected, report that the match failed and terminate the match; otherwise go to step 6-3;
Step 6-3: extract the 5 features of face 1 and face 2 by the method described in step 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710090721.XA CN106874877A (en) | 2017-02-20 | 2017-02-20 | An unconstrained face verification method combining local and global features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106874877A true CN106874877A (en) | 2017-06-20 |
Family
ID=59167314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710090721.XA Withdrawn CN106874877A (en) | 2017-02-20 | 2017-02-20 | A kind of combination is local and global characteristics without constraint face verification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106874877A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107644208A (en) * | 2017-09-21 | 2018-01-30 | 百度在线网络技术(北京)有限公司 | Method for detecting human face and device |
CN107944401A (en) * | 2017-11-29 | 2018-04-20 | 合肥寰景信息技术有限公司 | The embedded device for tracking and analyzing with multiple faces dynamic |
CN108229444A (en) * | 2018-02-09 | 2018-06-29 | 天津师范大学 | A kind of pedestrian's recognition methods again based on whole and local depth characteristic fusion |
CN108563997A (en) * | 2018-03-16 | 2018-09-21 | 新智认知数据服务有限公司 | It is a kind of establish Face datection model, recognition of face method and apparatus |
CN108764334A (en) * | 2018-05-28 | 2018-11-06 | 北京达佳互联信息技术有限公司 | Facial image face value judgment method, device, computer equipment and storage medium |
CN108829900A (en) * | 2018-07-31 | 2018-11-16 | 成都视观天下科技有限公司 | A kind of Research on face image retrieval based on deep learning, device and terminal |
CN111960203A (en) * | 2020-08-13 | 2020-11-20 | 安徽迅立达电梯有限公司 | Intelligent induction system for opening and closing elevator door |
WO2023029702A1 (en) * | 2021-09-06 | 2023-03-09 | 京东科技信息技术有限公司 | Method and apparatus for verifying image |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049736A (en) * | 2011-10-17 | 2013-04-17 | 天津市亚安科技股份有限公司 | Face identification method based on maximum stable extremum area |
CN103440510A (en) * | 2013-09-02 | 2013-12-11 | 大连理工大学 | Method for positioning characteristic points in facial image |
CN105138968A (en) * | 2015-08-05 | 2015-12-09 | 北京天诚盛业科技有限公司 | Face authentication method and device |
CN105719248A (en) * | 2016-01-14 | 2016-06-29 | 深圳市商汤科技有限公司 | Real-time human face deforming method and system |
CN106228142A (en) * | 2016-07-29 | 2016-12-14 | 西安电子科技大学 | Face verification method based on convolutional neural networks and Bayesian decision |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20170620 |