CN109766814A - Face verification method and system - Google Patents
- Publication number
- CN109766814A (application CN201811652844.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- picture
- adjusted
- information
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a face verification method and system. The method comprises: detecting a picture with the MTCNN algorithm to obtain the face information in the picture; adjusting the face information of the picture according to a predetermined standard; extracting a feature vector from the adjusted picture using ResNet-34; and comparing the extracted feature vector with the data in a database to determine the identity of the face in the picture. The face verification method of the invention improves the accuracy of face verification.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face verification method and a face verification system.
Background technique
Face verification based on face recognition technology takes an input facial image or video stream and first determines whether it contains faces. If faces exist, it further gives the position and size of each face and the positions of the major facial organs. Based on this information, it then extracts the identity features contained in each face and compares them against a database of known faces, thereby achieving identification or identity authentication.
Face recognition is a high-precision, easy-to-use, highly stable and hard-to-counterfeit biometric technology with extremely broad market prospects. It is in wide demand across industries and departments such as public security, national defense, customs, transportation, finance, social security, healthcare and other fields of civil security control. Taking financial scenarios as an example, face recognition can be applied to remote account opening, face-scan payment and similar scenarios. Other applications include, but are not limited to: 1) rapidly searching the public security database for a match based on a suspect's photograph or facial features, which greatly improves the accuracy and efficiency of criminal investigation and case solving; 2) machine-performed identity verification at checkpoints such as ports, airports and classified facilities, enabling automated, intelligent management; 3) video surveillance in many public places; 4) access control; 5) expression analysis; 6) intelligent toys, household robots, and virtual game characters with lifelike faces.
Many countries are currently carrying out research on face recognition. The field includes the following research directions: face recognition based on geometric features, face recognition based on template matching, eigenface methods based on the Karhunen-Loeve transform, methods based on hidden Markov models, neural-network recognition methods, elastic graph matching based on dynamic link structures, and methods that use motion and color information to recognize faces in dynamic image sequences.
Although face recognition has broad application prospects, it still lags fingerprint and retina recognition considerably in both recognition rate and anti-counterfeiting. The factors that affect recognition performance fall mainly into the following categories: 1) uncertainty in the image acquisition process (for example, the direction and intensity of lighting); 2) the diversity of facial patterns (for example, beards, glasses, hairstyles); 3) the uncertainty of facial plastic deformation (for example, expressions); 4) the breadth of domain knowledge involved (for example, psychology, medicine, pattern recognition, image processing, mathematics).
Precisely because all of these problems exist in the face recognition process, the situation in actual detection and recognition becomes far more complicated when these factors are superimposed, making the accuracy of face verification difficult to improve.
Summary of the invention
In view of the above problems in the related art, the present invention proposes a face verification method and a face verification system that can improve the accuracy of face verification.
The technical scheme of the present invention is realized as follows:
According to one aspect of the invention, a face verification method is provided, comprising:
detecting a picture with the MTCNN algorithm to obtain the face information in the picture;
adjusting the face information of the picture according to a predetermined standard;
extracting a feature vector from the adjusted picture using ResNet-34;
comparing the extracted feature vector with the data in a database to determine the identity of the face in the picture.
According to an embodiment of the invention, before the picture is detected, the method further includes: collecting pictures of users' faces in a business scenario, and adding the collected face pictures to a training sample set.
According to an embodiment of the invention, the face information includes the face size and the face position, and adjusting the face information of the picture includes: normalizing the face size so that the picture is converted to a predetermined standard size; and adjusting the face position to an upright frontal face.
According to an embodiment of the invention, the face information of the picture is adjusted using the Dlib library and the OpenCV library.
According to another aspect of the invention, a face verification system is provided, comprising:
a detection module for detecting a picture with the MTCNN algorithm and obtaining the face information in the picture;
an adjustment module for adjusting the face information of the picture according to a predetermined standard;
a feature extraction module for extracting a feature vector from the adjusted picture using ResNet-34;
a comparison module for comparing the extracted feature vector with the data in a database to determine the identity of the face in the picture.
According to an embodiment of the invention, the face verification system further includes a picture collection module for collecting pictures of users' faces in a business scenario and adding the collected face pictures to a training sample set.
According to an embodiment of the invention, the adjustment module includes: a size adjustment submodule for normalizing the face size so that the picture is converted to a predetermined standard size; and a position adjustment submodule for adjusting the face position to an upright frontal face.
According to an embodiment of the invention, the adjustment module adjusts the face information of the picture using the Dlib library and the OpenCV library.
The above technical scheme of the invention combines two neural network structures, MTCNN for face detection and ResNet-34 for face feature extraction, to achieve higher accuracy.
Brief description of the drawings
In order to explain the embodiments of the invention or the technical schemes of the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a face verification method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a face verification method according to another embodiment of the present invention;
Fig. 3 is a flowchart of feature extraction and face recognition according to a specific embodiment of the present invention;
Fig. 4 is a block diagram of a face verification system according to an embodiment of the present invention.
Detailed description of the embodiments
The technical schemes in the embodiments of the invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention, without creative effort, fall within the scope protected by the invention.
As shown in Fig. 1, a face verification method according to an embodiment of the invention includes the following steps:
S102: the picture is detected with the MTCNN (Multi-task Cascaded Convolutional Networks) algorithm to obtain the face information in the picture.
Face information is found in a static image or video, for example the position, size and number of the faces present. The purpose of this step is to check whether the inspected picture contains a face at all, and to prepare for the processing steps that follow. The MTCNN algorithm can be implemented efficiently in TensorFlow. MTCNN cascades three independent convolutional neural network structures and is based on deep learning, so during detection it is more robust to lighting changes, angle changes and facial expression changes in the original picture, and can better remedy the low recall of existing face detection techniques.
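The patent does not spell out how the cascade scans the picture, but in the standard MTCNN formulation the first stage (P-Net) runs over an image pyramid so that faces of different sizes map onto its fixed 12x12 receptive field. The sketch below is illustrative, not from the patent; the 0.709 scale factor and 20-pixel minimum face size are conventional defaults.

```python
def mtcnn_scales(width, height, min_face_size=20, factor=0.709, cell_size=12):
    """Compute the image-pyramid scales that P-Net is run at.

    P-Net's receptive field is cell_size x cell_size pixels, so scaling
    the image by cell_size / min_face_size makes a minimum-size face fill
    exactly one cell. Further scales shrink the image by `factor` until
    the shorter side would fall below one cell.
    """
    scales = []
    m = cell_size / min_face_size          # initial scale
    min_side = min(width, height) * m      # shorter side after that scale
    while min_side >= cell_size:
        scales.append(m)
        m *= factor
        min_side *= factor
    return scales

# Example: a 640x480 picture with a 20-pixel minimum face size.
scales = mtcnn_scales(640, 480)
```

Each scale produces one resized copy of the picture; P-Net's candidate boxes from all scales are merged before the second and third networks refine them.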
S104: the face information of the picture is adjusted according to a predetermined standard.
S106: a feature vector is extracted from the adjusted picture using ResNet-34 (a 34-layer deep residual neural network).
S108: the extracted feature vector is compared with the data in the database to determine the identity of the face in the picture.
The above technical scheme of the invention combines two neural network structures, MTCNN for face detection and ResNet-34 for face feature extraction, to achieve higher accuracy.
As shown in Fig. 2, in one embodiment the method may further include the following steps before step S102:
S202: pictures of users' faces are collected in a business scenario;
S204: the collected face pictures are added to a training sample set.
That is, the picture samples are user face pictures collected in actual business scenarios. These pictures can be captured with a mobile phone camera and then cropped and compressed by a front-end SDK. In one specific embodiment, the training sample set contains 308,127 users and 1,249,665 pictures, i.e. an average of 4.1 samples per user. Training the model on a large number of samples captured in actual business scenarios in this way achieves higher accuracy.
In step S104, the face information includes the face size and the face position, and adjusting the face information of the picture specifically includes the following steps:
S1042: the face size is normalized so that the picture is converted to a predetermined standard size.
S1044: the face position is adjusted to an upright frontal face.
After the face size and face position have been obtained by the face detection step S102, and before face registration or face recognition is carried out, the face normalization step S104 is needed to standardize the face, which improves the quality of the registered features and the recognition rate. On the one hand, the detected faces vary in size; if features were extracted directly, their dimensionality would differ and the later comparison work could not be performed, so the face size must be normalized, for example by converting the picture to a standard size through interpolation or sampling. On the other hand, the detected face region may be tilted or deflected; if a tilted or deflected face is adjusted to an upright frontal face before the next recognition step, accuracy improves greatly. Moreover, after the processing of step S104, the subsequent feature extraction algorithm is highly robust to the lighting and angle of the original picture, better resolving the drop in face recognition accuracy under different lighting and shooting angles.
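The in-plane tilt mentioned above is typically removed by rotating the picture so that the two eye centers lie on a horizontal line. The patent does not give this computation; the following is a minimal sketch under the assumption that eye landmark coordinates are available from a detector such as MTCNN or Dlib.

```python
import math

def roll_angle(left_eye, right_eye):
    """In-plane rotation angle (degrees) that makes the eye line horizontal.

    left_eye / right_eye are (x, y) pixel coordinates, with y growing
    downwards as in image coordinates.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(p, center, degrees):
    """Rotate point p about center by `degrees` (maps landmarks into the
    upright picture after the tilt is removed)."""
    a = math.radians(degrees)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))

# A face whose right eye sits 20 px lower than the left is tilted by
# atan2(20, 100) degrees; rotating by -angle about the eye midpoint
# levels the eyes.
angle = roll_angle((100, 100), (200, 120))
leveled = rotate_point((200, 120), (150, 110), -angle)
```

In practice the same rotation is applied to the whole picture, e.g. with OpenCV's cv2.getRotationMatrix2D and cv2.warpAffine.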
In one embodiment, the Dlib library and the OpenCV library can be used to adjust the face information of the picture.
In step S106, feature extraction is the process of converting a face picture into a feature vector of fixed dimensionality; the resulting feature vector is used to complete the subsequent comparison and identification tasks. Feature extraction is the core step of the whole face recognition pipeline: it determines the final recognition result and directly affects the recognition rate. In addition, the complexity of the feature itself, such as its dimensionality, must also be considered; the extracted feature should be as simple as possible.
In one embodiment, a convolutional neural network based on the ResNet-34 structure is trained with the Cosine Face loss function to extract face features. Traditional methods mostly extract facial feature information in manually engineered ways, such as geometric features or template matching, and are easily affected by the many varying factors of real environments. Deep learning, by contrast, imitates the human nervous system, passing the raw data through multiple levels of nonlinear processing to obtain high-order features with greater discriminative power that better describe the essence of the image. Using deep learning for face feature extraction overcomes well the diversity of facial patterns and the uncertainty of facial plastic deformation.
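For reference, the large-margin cosine (CosFace) loss mentioned above is usually written as follows. This formulation comes from the CosFace literature, not from the patent itself: s is a scale factor, m the cosine margin, y_i the label of sample i, and theta_j the angle between the feature x_i and the weight vector W_j of class j.

```latex
L = -\frac{1}{N}\sum_{i=1}^{N}
    \log \frac{e^{s(\cos\theta_{y_i} - m)}}
              {e^{s(\cos\theta_{y_i} - m)} + \sum_{j \neq y_i} e^{s\cos\theta_j}},
\qquad
\cos\theta_j = \frac{W_j^{\top} x_i}{\lVert W_j \rVert\,\lVert x_i \rVert}
```

Subtracting the margin m from the target-class cosine forces features of the same identity to cluster more tightly on the unit hypersphere, which is what makes the extracted vectors suitable for the cosine comparison used at verification time.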
Through the three steps of face detection, face normalization and feature extraction described above (steps S102, S104 and S106), the following three problems of the prior art can be solved well: 1) uncertainty in the image acquisition process (for example, the direction and intensity of lighting); 2) the diversity of facial patterns (for example, beards, glasses, hairstyles); 3) the uncertainty of facial plastic deformation (for example, expressions). Thanks to the application of increasingly mature deep learning techniques to face detection and face feature extraction, high-quality face features can be obtained. The uncertainty of the image acquisition process is effectively resolved: faces under different lighting conditions, shooting conditions and pose angles can be recognized effectively, improving the accuracy of face comparison and recognition.
In step S108, verification or identification is the final step of face recognition: based on the result of face feature extraction, the inspected face picture is compared with the data in the database to determine the identity of the inspected face. The accuracy of the result is closely related to the quality of the extracted features.
Depending on the application scenario, the task divides into verification and identification. Face comparison, i.e. verification ("are you who you claim to be"), is a 1:1 comparison between a picture and an existing picture in the database; the similarity of the two determines whether they show the same person. Face recognition, i.e. identification ("who are you"), means that, with the identity of the inspected portrait unknown, the picture is compared with all pictures in the database to find the best match and thereby determine the person's identity. Fig. 3 shows the flow of feature extraction and face recognition according to a specific embodiment of the invention. It should be appreciated that the detailed flow of feature extraction and face recognition is configured appropriately according to the actual application.
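Both tasks reduce to the same feature-vector comparison. The sketch below is illustrative: the use of cosine similarity and the 0.5 threshold are assumptions for the example, not values given by the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.5):
    """1:1 verification: does the probe match the single enrolled vector?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery):
    """1:N identification: return the id of the best-matching gallery entry."""
    return max(gallery, key=lambda pid: cosine_similarity(probe, gallery[pid]))

# Toy gallery of enrolled feature vectors keyed by identity.
gallery = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
probe = [0.9, 0.1, 0.0]
best = identify(probe, gallery)
```

The verification threshold trades off the false accept rate against the true accept rate, which is exactly what the FAR/TAR indicators below measure.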
The key technical indicators measuring the accuracy of a face recognition system are the false accept rate (FAR) and the true accept rate (TAR). The false accept rate is the probability that the system wrongly accepts a face comparison request initiated by an impostor. The true accept rate is the probability that the system correctly accepts a face comparison request initiated by the genuine person.
The inventors tested on face pictures from actual business scenarios and obtained the following results:
Face comparison scenario (1:1) (5,000 people, 2 tests each):
- face feature extraction success rate: 99.9%;
- TAR at FAR = 1/1,000: 99.7%;
- TAR at FAR = 1/10,000: 99.7%;
- TAR at FAR = 1/100,000: 99.5%.
Face recognition scenario (1:N) (N = 500 people, 1 registration picture and 1 test picture each):
- TAR at FAR = 1/1,000: 100%;
- TAR at FAR = 1/10,000: 100%;
- TAR at FAR = 1/100,000: 100%.
Compared with the prior art, the accuracy of the invention at each level of false accept rate is substantially improved, demonstrating that the face features obtained by the new face detection, face normalization and feature extraction method are of excellent quality.
As shown in Fig. 4, an embodiment of the invention also provides a face verification system comprising:
a detection module 42 for detecting a picture with the MTCNN algorithm and obtaining the face information in the picture;
an adjustment module 44 for adjusting the face information of the picture according to a predetermined standard;
a feature extraction module 46 for extracting a feature vector from the adjusted picture using ResNet-34;
a comparison module 48 for comparing the extracted feature vector with the data in the database and determining the identity of the face in the picture.
According to an embodiment of the invention, the face verification system further includes a picture collection module for collecting pictures of users' faces in a business scenario and adding the collected face pictures to a training sample set.
According to an embodiment of the invention, the adjustment module 44 may include a size adjustment submodule for normalizing the face size so that the picture is converted to a predetermined standard size, and a position adjustment submodule for adjusting the face position to an upright frontal face.
According to an embodiment of the invention, the adjustment module 44 adjusts the face information of the picture using the Dlib library and the OpenCV library.
The foregoing are merely preferred embodiments of the invention and are not intended to limit it. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included in the protection scope of the invention.
Claims (8)
1. A face verification method, characterized by comprising:
detecting a picture with the Multi-task Cascaded Convolutional Networks (MTCNN) algorithm to obtain the face information in the picture;
adjusting the face information of the picture according to a predetermined standard;
extracting a feature vector from the adjusted picture using a 34-layer deep residual neural network (ResNet-34);
comparing the extracted feature vector with the data in a database to determine the identity information of the face in the picture.
2. The face verification method according to claim 1, characterized in that, before the picture is detected, the method further comprises:
collecting pictures of users' faces in a business scenario;
adding the collected face pictures to a training sample set.
3. The face verification method according to claim 1, characterized in that the face information comprises a face size and a face position, and adjusting the face information of the picture comprises:
normalizing the face size so that the picture is converted to a predetermined standard size;
adjusting the face position to an upright frontal face.
4. The face verification method according to claim 1, characterized in that the adjustment is performed on the face information of the picture using the Dlib library and the OpenCV library.
5. A face verification system, characterized by comprising:
a detection module for detecting a picture with the Multi-task Cascaded Convolutional Networks (MTCNN) algorithm and obtaining the face information in the picture;
an adjustment module for adjusting the face information of the picture according to a predetermined standard;
a feature extraction module for extracting a feature vector from the adjusted picture using a 34-layer deep residual neural network (ResNet-34);
a comparison module for comparing the extracted feature vector with the data in a database to determine the identity information of the face in the picture.
6. The face verification system according to claim 5, characterized by further comprising:
a picture collection module for collecting pictures of users' faces in a business scenario and adding the collected face pictures to a training sample set.
7. The face verification system according to claim 5, characterized in that the adjustment module comprises:
a size adjustment submodule for normalizing the face size so that the picture is converted to a predetermined standard size;
a position adjustment submodule for adjusting the face position to an upright frontal face.
8. The face verification system according to claim 5, characterized in that the adjustment module performs the adjustment on the face information of the picture using the Dlib library and the OpenCV library.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811652844.9A CN109766814A (en) | 2018-12-29 | 2018-12-29 | Face verification method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109766814A true CN109766814A (en) | 2019-05-17 |
Family
ID=66453298
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811652844.9A Pending CN109766814A (en) | 2018-12-29 | 2018-12-29 | Face verification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109766814A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458097A (en) * | 2019-08-09 | 2019-11-15 | 软通动力信息技术有限公司 | A kind of face picture recognition methods, device, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107506717A (en) * | 2017-08-17 | 2017-12-22 | Face recognition method based on deep transfer learning in unconstrained scenes |
CN109101871A (en) * | 2018-08-07 | 2018-12-28 | Living-body detection device based on depth and near-infrared information, detection method and application thereof |
Non-Patent Citations (2)
Title |
---|
KAIPENG ZHANG: "Joint Face Detection and Alignment Using Multitask...", IEEE Signal Processing Letters * |
ZHU Chaoping et al.: "Face detection and recognition in surveillance video based on YOLO2 and ResNet algorithms", Journal of Chongqing University of Technology * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190517 |