CN111611568A - Face voiceprint rechecking terminal and identity authentication method thereof - Google Patents
- Publication number
- CN111611568A (application number CN202010431690.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- voiceprint
- user
- registered
- unique identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
Abstract
The invention discloses a face voiceprint rechecking terminal and an identity authentication method thereof, wherein the method comprises the following steps: step S1, collecting voice data of the current user, extracting voiceprint features from the voice data, and comparing the extracted voiceprint features with the voiceprint model of a registered user enrolled in advance at the terminal or a server; if the comparison passes, proceeding to step S2; step S2, when voiceprint recognition passes, obtaining the unique identification code of the current user from a pre-established voiceprint database; step S3, capturing a face image of the current user and extracting face features for face recognition; if face recognition passes, proceeding to step S4; step S4, when face recognition passes, obtaining the unique identification code of the current user from a pre-established face database; and step S5, comparing the unique identification code obtained in step S2 with that obtained in step S4, and determining the identity authentication result from the comparison.
Description
Technical Field
The invention relates to the technical field of identity recognition and authentication, and in particular to a face voiceprint rechecking terminal and an identity authentication method thereof.
Background
With the rapid development of the mobile internet and the popularization of handheld terminal devices such as smartphones and tablet computers, internet security problems have become increasingly prominent. At present, neither hardware digital certificates nor bank dynamic tokens go beyond managing a trusted terminal; neither can verify the identity of the user operating it.
Biometric technology identifies an individual by human physiological or behavioral characteristics. The biometric traits currently used for identification include voice, fingerprint, face, and iris. Since microphones and cameras are already present in existing mobile terminals, authenticating identity through voice or face recognition is the most convenient and economical solution.
Face recognition is a biometric technology that performs identity recognition automatically from the facial features of a person (e.g., statistical or geometric features). A camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and related application operations are then performed on the detected face images. The techniques involved include image acquisition, feature localization, and identity verification and lookup. In short, facial features, such as the height of the eyebrows or the corners of the mouth, are extracted from the picture and compared with the stored features, and the result is output.
A person's voice carries information of multiple dimensions, such as the speaking content, the speaking tone, and the voice characteristics. A voiceprint is a general term for a speech model, built from the speech features contained in an utterance, that can identify and mark a speaker; voiceprint recognition is the technology of distinguishing different speakers by their voice characteristics, and differences in vocal tract structure determine the uniqueness of a voiceprint. Voiceprint recognition mainly comprises two modules: a voiceprint registration module and a voiceprint authentication module. Voiceprint registration models a user's voice samples with a preselected model to generate the user's voiceprint model; when the user requests identity verification, the corresponding voiceprint model is used to authenticate the request voice.
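As a minimal sketch of the two modules (all function names, the fixed-length embedding, and the 0.8 threshold are illustrative assumptions, not the patent's implementation), registration can store an enrolled voiceprint embedding and authentication can score a new embedding against it:

```python
import math

# Hypothetical sketch of the two voiceprint modules: registration stores a
# fixed-length voiceprint embedding; authentication scores a new embedding
# against the stored model with cosine similarity and applies a threshold.

voiceprint_models = {}  # user_id -> enrolled embedding

def enroll(user_id, embedding):
    """Voiceprint registration: store the user's voiceprint model."""
    voiceprint_models[user_id] = embedding

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(user_id, embedding, threshold=0.8):
    """Voiceprint authentication: compare against the enrolled model."""
    model = voiceprint_models.get(user_id)
    if model is None:
        return False
    return cosine_similarity(model, embedding) > threshold

enroll("alice", [0.9, 0.1, 0.4])
print(verify("alice", [0.88, 0.12, 0.41]))  # close to the enrolled model -> True
print(verify("alice", [-0.5, 0.9, -0.2]))   # dissimilar voice -> False
```

A production system would replace the toy embeddings with features derived from the speech signal, as the description below explains.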
However, current biometric identification techniques have some drawbacks:
1. Face recognition can be defeated by external attack means, for example: photo attack, in which an attacker impersonates the user with a printed high-definition photo; video attack, in which a recorded facial video of the user is played in front of the camera; and three-dimensional mask attack, in which a mask made of ordinary plastic or cardboard is used to impersonate the user;
2. Voiceprint recognition can be defeated by recording the user's voice with a recording device and replaying the recording during the attack.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention aims to provide a face voiceprint rechecking terminal and an identity authentication method thereof, so as to improve the accuracy and security of identity recognition by combining voiceprint recognition and face recognition technologies.
To achieve the above and other objects, the present invention provides a face voiceprint rechecking terminal, comprising:
a voiceprint recognition verification unit, used for collecting voice data of the current user, extracting voiceprint features from the voice data, and comparing the extracted voiceprint features with the voiceprint model of a registered user enrolled in advance at the terminal or a server; if the comparison passes, processing proceeds to the first matching unit;
a first matching unit, configured to obtain the unique identification code of the current user from a pre-established voiceprint database when voiceprint recognition passes;
a face capture verification unit, used for capturing a face image of the current user and extracting face features for face recognition; when face recognition passes, processing proceeds to the second matching unit;
a second matching unit, configured to obtain the unique identification code of the current user from a pre-established face database when face recognition passes;
and a comparison unit, used for comparing the unique identification codes of the user obtained by the first matching unit and the second matching unit and determining the identity authentication result from the comparison.
Preferably, the voiceprint recognition verification unit further comprises:
the voiceprint feature extraction module is used for acquiring voice data of a current user and extracting voiceprint features;
and a voiceprint recognition authentication module, used for calculating the similarity between the voiceprint features extracted by the voiceprint feature extraction module and the voiceprint model of a registered user enrolled in advance at the terminal or a server, and judging whether the voiceprint similarity is greater than a preset threshold; if it is, processing proceeds to the first matching unit.
Preferably, the face capture verification unit includes:
the face capturing module is used for acquiring a face image of a current user, determining a face area according to the face image and calibrating a face key point in the determined face area;
and the face identification authentication module is used for calculating the similarity between the face of the user and a face model of a pre-stored registered user according to the key point characteristics, judging whether the face similarity is greater than a preset threshold value, if the face similarity is greater than the threshold value, passing the face authentication, and entering the second matching unit.
Preferably, the terminal further includes:
the face registration unit is used for acquiring a face image sample of a registered user and establishing a face model of the registered user according to the face image sample;
the voice print registration unit is used for collecting voice data of a registered user, extracting voice print characteristics from the voice data and establishing a voice print model of the registered user;
a face database establishing unit, used for establishing a face database in which a mapping relation between a unique identification code and the corresponding face features is established for each registered user;
a voiceprint database establishing unit, used for establishing a voiceprint database in which a mapping relation between a unique identification code and the corresponding voiceprint features is established for each registered user.
In order to achieve the above object, the present invention further provides an identity authentication method for a face voiceprint rechecking terminal, comprising the following steps:
step S1, collecting voice data of the current user, extracting voiceprint features from the voice data, and comparing the extracted voiceprint features with the voiceprint model of a registered user enrolled in advance at the terminal or a server; if the comparison passes, proceeding to step S2;
step S2, when voiceprint recognition passes, obtaining the unique identification code of the current user from a pre-established voiceprint database;
step S3, capturing the face image of the current user, extracting the face characteristics to perform face recognition, and entering step S4 if the face recognition is passed;
step S4, when the face recognition is passed, the unique identification code of the current user is obtained from the face database established in advance;
and step S5, comparing the unique identification code of the user obtained in the step S2 with the unique identification code of the user obtained in the step S4, and determining an identity authentication result according to the comparison result.
Preferably, in step S5, if the comparison result is that the two codes are consistent, the current authentication passes; if they are not consistent, the current authentication fails.
Preferably, the step S1 further includes:
s100, collecting voice data of a current user and extracting voiceprint characteristics;
step S101, calculating the similarity between the voiceprint features extracted by the voiceprint feature extraction module and the voiceprint model of the registered user registered in advance at the terminal or the server, judging whether the voiceprint similarity is greater than a preset threshold, if so, entering step S2, otherwise, prompting the voiceprint verification failure.
Preferably, the step S3 further includes:
step S300, acquiring a face image of a current user, determining a face area according to the face image, and calibrating a face key point in the determined face area;
step S301, calculating the similarity between the face feature of the current user and the face model of the pre-stored registered user according to the key point feature, judging whether the face similarity is larger than a preset threshold, if the face similarity is larger than the threshold, the face authentication is passed, and step S4 is entered, otherwise, the face recognition verification is prompted to fail.
Preferably, before step S1, the method further includes the following steps:
step S01, collecting face image samples of the registered users, and establishing face models of the registered users according to the face image samples;
step S02, collecting voice data of the registered user, extracting voiceprint characteristics of the voice data, and establishing a voiceprint model of the registered user;
step S03, establishing a face database, and establishing a corresponding relation of a unique identification code and a face feature corresponding to the unique identification code for each registered user in the face database;
step S04, establishing a voiceprint database, and establishing a corresponding relation of the unique identification code and the corresponding voiceprint characteristic for each registered user in the voiceprint database.
Compared with the prior art, the face voiceprint rechecking terminal and its identity authentication method perform voiceprint recognition on collected user voice data and, after voiceprint recognition passes, obtain the unique identification code of the current user from a pre-established voiceprint database. Face recognition is then performed on a collected face image and, when it passes, the unique identification code of the current user is obtained from a pre-established face database. Finally, the unique identification code obtained from voiceprint recognition is compared with the one obtained from face recognition, and the identity authentication result is determined from the comparison, thereby realizing an identity authentication mode with double verification of face and voiceprint.
Drawings
FIG. 1 is a schematic structural diagram of a face voiceprint rechecking terminal according to the present invention;
FIG. 2 is a flowchart illustrating the steps of an identity authentication method of a face voiceprint double-check terminal according to the present invention;
fig. 3 is a flowchart of the identity authentication of the face voiceprint rechecking terminal in the embodiment of the present invention.
Detailed Description
Other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein, which describes the embodiments of the invention with reference to specific examples and the accompanying drawings. The invention may also be practiced or applied through other, different embodiments, and the details herein may be modified in various respects without departing from the spirit and scope of the invention.
Fig. 1 is a schematic structural diagram of a face voiceprint rechecking terminal according to the present invention. As shown in fig. 1, the present invention provides a face voiceprint rechecking terminal, which includes:
the voiceprint recognition and verification unit 101 is configured to collect voice data of a current user, extract voiceprint features from the voice data, compare the extracted voiceprint features with a voiceprint model of a registered user registered in advance at a terminal or a server, enter the first matching unit 102 if the extracted voiceprint features are compared with the voiceprint model of the registered user, and prompt that the user identity verification fails if the extracted voiceprint features are not compared with the voiceprint model of the registered user registered in advance at the terminal or the server.
Specifically, the voiceprint recognition verification unit 101 further includes:
and the voiceprint feature extraction module is used for acquiring the voice data of the current user and extracting the voiceprint features. Specifically, voice data is converted into a short-time frequency spectrum feature sequence, the posterior probability of each frame frequency spectrum feature on each Gaussian component of a global background model is calculated, a Gaussian mixture model of a user is obtained by utilizing the maximum posterior probability criterion for self-adaptive training, and the mean values of the Gaussian components in the Gaussian mixture model are spliced to form a high-dimensional vector, so that the voiceprint features are extracted. Since the extraction of the voiceprint features in the present invention is performed by the prior art, it is not described herein.
The voiceprint recognition authentication module is used for calculating the similarity between the extracted voiceprint features and the voiceprint model of a registered user enrolled in advance at the terminal or server, and judging whether the voiceprint similarity is greater than a preset threshold; if it is, voiceprint authentication passes and processing proceeds to the first matching unit 102; otherwise the module prompts that voiceprint authentication has failed.
The first matching unit 102 is configured to obtain the unique identifier of the current user from the voiceprint database when the voiceprint identification passes.
That is, the present invention establishes a voiceprint database in advance; in this voiceprint database, a correspondence between a unique identification code and the voiceprint features of the corresponding registered user is established for each registered user. After voiceprint recognition passes, the first matching unit 102 obtains the corresponding unique identification code from the voiceprint database according to the voiceprint features obtained during recognition.
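That lookup can be sketched as follows; the nearest-neighbour search and all names are illustrative assumptions (the patent only specifies that the matched features map back to one unique identification code):

```python
# Illustrative lookup for the first matching unit: after voiceprint recognition
# passes, the matched voiceprint features are mapped back to the registered
# user's unique identification code via nearest-neighbour search.

voiceprint_db = {
    "ID-001": [0.9, 0.1, 0.4],
    "ID-002": [0.2, 0.8, 0.5],
}

def lookup_unique_id(matched_features):
    """Return the unique ID whose enrolled features are closest (squared Euclidean)."""
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(v, matched_features))
    return min(voiceprint_db, key=lambda uid: dist(voiceprint_db[uid]))

print(lookup_unique_id([0.88, 0.12, 0.41]))  # -> ID-001
```

The second matching unit 104 would perform the symmetric lookup against the face database.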
The face capture verification unit 103 is used for capturing a face image of the current user and extracting face features for face recognition; if face recognition passes, processing proceeds to the second matching unit 104; otherwise the unit prompts that face recognition verification has failed.
Specifically, the face capture verification unit 103 further includes:
and the face capturing module is used for acquiring a face image of the current user, determining a face area according to the face image and calibrating the key points of the face in the determined face area.
In an embodiment of the present invention, when an authentication request from a user is received, a video capture device, such as a camera, captures a face image of the user, processes the face image to determine a face region, and identifies facial key points in the face region.
And the face recognition authentication module is used for calculating the similarity between the face of the user and a face model of a pre-stored registered user according to the key point characteristics, judging whether the face similarity is greater than a preset threshold value, if the face similarity is greater than the threshold value, the face authentication is passed, and the second matching unit 104 is entered, otherwise, the face recognition authentication is prompted to fail.
And the second matching unit 104 is configured to acquire the unique identification code of the current user from the face database when the face recognition is passed.
That is, the present invention also establishes a face database in advance, in which a correspondence between a unique identification code and a face feature of a registered user corresponding thereto is established for each registered user. After the face recognition is passed, the second matching unit 104 obtains a corresponding unique identification code from the face database according to the face features obtained by the face recognition.
A comparing unit 105, configured to compare the unique identification codes of the users obtained by the first matching unit 102 and the second matching unit 104, and determine an identity authentication result according to the comparison result.
Specifically, if the comparison result of the comparison unit 105 is that the two codes are identical, the current authentication passes, that is, the double authentication of voiceprint and face recognition passes; if they are not identical, the unit prompts that the current authentication has failed.
Preferably, the present invention provides a face voiceprint rechecking terminal, further comprising:
and the face registration unit is used for acquiring a face image sample of the registered user and establishing a face model of the registered user according to the face image sample.
Similarly, after the face image sample is collected, it is processed to locate the face region, and the face key points are calibrated in that region, thereby establishing the face model. In an embodiment of the invention, to obtain a more accurate face model, a plurality of face image samples of the registered user may be collected and the face model established from all of them.
And the voiceprint registration unit is used for acquiring the voice data of the registered user, extracting voiceprint characteristics from the voice data and establishing a voiceprint model of the registered user. The specific voiceprint feature extraction is as described above and will not be described herein.
The face database establishing unit is used for establishing a face database in which a correspondence between a unique identification code and the corresponding face features is established for each registered user.
And the voiceprint database establishing unit is used for establishing a voiceprint database, and establishing a corresponding relation of the unique identification code and the corresponding voiceprint characteristic for each registered user in the voiceprint database.
Fig. 2 is a flowchart illustrating steps of an identity authentication method of a face voiceprint double check terminal according to the present invention. As shown in fig. 2, the identity authentication method of a face voiceprint double check terminal of the present invention includes the following steps:
and S1, acquiring voice data of the current user, extracting voiceprint features from the voice data, comparing the extracted voiceprint features with voiceprint models of registered users registered in advance at a terminal or a server, entering S2 if the extracted voiceprint features are compared with the voiceprint models of the registered users registered in advance at the terminal or the server, and prompting that the user identity authentication fails if the extracted voiceprint features are not compared with the voiceprint models of the registered users registered in advance at the server.
Specifically, step S1 further includes:
and S100, collecting voice data of the current user and extracting voiceprint features. Specifically, voice data is converted into a short-time frequency spectrum feature sequence, the posterior probability of each frame frequency spectrum feature on each Gaussian component of a global background model is calculated, a Gaussian mixture model of a user is obtained by utilizing the maximum posterior probability criterion for self-adaptive training, and the mean values of the Gaussian components in the Gaussian mixture model are spliced to form a high-dimensional vector, so that the voiceprint features are extracted. Since the extraction of the voiceprint features in the present invention is performed by the prior art, it is not described herein.
Step S101: calculate the similarity between the extracted voiceprint features and the voiceprint model of the registered user enrolled in advance at the terminal or server, and judge whether the voiceprint similarity is greater than a preset threshold; if it is, voiceprint authentication passes and the method proceeds to step S2; otherwise, prompt that voiceprint authentication has failed.
Step S2, when the voiceprint identification is passed, the unique identification code of the current user is obtained from the voiceprint database.
That is, the present invention establishes a voiceprint database in advance; in this voiceprint database, a correspondence between a unique identification code and the voiceprint features of the corresponding registered user is established for each registered user. When voiceprint recognition passes, the corresponding unique identification code is obtained from the voiceprint database according to the voiceprint features obtained during recognition.
And step S3, capturing the face image of the current user, extracting face features to perform face recognition, entering step S4 if the face recognition is passed, otherwise, prompting that the face recognition verification fails.
Specifically, step S3 further includes:
step S300, collecting a face image of a current user, determining a face area according to the face image, and calibrating a face key point in the determined face area.
In an embodiment of the present invention, when an authentication request from a user is received, a video capture device, such as a camera, captures a face image of the user, processes the face image to determine a face region, and identifies facial key points in the face region.
Step S301, calculating the similarity between the face of the user and a face model of a pre-stored registered user according to the key point characteristics, judging whether the face similarity is larger than a preset threshold, if so, passing the face authentication, and entering step S4, otherwise, prompting the failure of face recognition and verification.
And step S4, when the face recognition is passed, acquiring the unique identification code of the current user from the face database.
That is, the present invention also establishes a face database in advance, in which a correspondence between a unique identification code and a face feature of a registered user corresponding thereto is established for each registered user. And when the face recognition is passed, obtaining a corresponding unique identification code in the face database according to the face characteristics obtained by the face recognition.
And step S5, comparing the unique identification code of the user obtained in the step S2 with the unique identification code of the user obtained in the step S4, and determining an identity authentication result according to the comparison result.
Specifically, if the comparison result in step S5 is that the two codes are identical, the current authentication passes, that is, the double authentication of voiceprint and face recognition passes; if they are not identical, a failure of the current authentication is prompted.
Preferably, before step S1, the method further includes the following steps:
and step S01, collecting face image samples of the registered users, and establishing face models of the registered users according to the face image samples.
Similarly, after the face image sample is collected, it is processed to locate the face region, and the face key points are calibrated in that region, thereby establishing the face model. In an embodiment of the invention, to obtain a more accurate face model, a plurality of face image samples of the registered user may be collected and the face model established from all of them.
And step S02, collecting the voice data of the registered user, extracting the voiceprint characteristics of the voice data, and establishing the voiceprint model of the registered user. The specific voiceprint feature extraction is as described above and will not be described herein.
Step S03, a face database is established, and a correspondence between the unique identification code and the face feature corresponding thereto is established for each registered user in the face database.
Step S04, establishing a voiceprint database, and establishing a corresponding relation of the unique identification code and the corresponding voiceprint characteristic for each registered user in the voiceprint database.
Examples
As shown in fig. 3, in the embodiment of the present invention, an identity authentication and verification process of the face voiceprint double-check terminal is as follows:
Step 1: the user approaches the face voiceprint rechecking terminal, which starts the voiceprint acquisition process: voice data of the current user is collected through a microphone, voiceprint features are extracted from the voice data, and the features are compared with the pre-registered voiceprint model for voiceprint verification.
Step 2: if voiceprint verification passes, unique identification code A is obtained from the voiceprint database.
Step 3: the face feature acquisition process starts: a face region image of the current user is captured through a camera, face features are extracted from it, and the features are compared with the pre-registered face model for face identity verification.
Step 4: if face verification passes, unique identification code B is obtained from the face database.
Step 5: unique identification codes A and B are compared. If they are consistent, both the voiceprint and face verifications have passed and the identity is verified; otherwise the terminal prompts that identity verification has failed.
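The five steps above can be condensed into a single cross-check function. This is a hypothetical sketch: the databases are plain dicts mapping unique identification codes to enrolled models, and the `voice_match`/`face_match` callables stand in for the similarity-versus-threshold tests of the claims; none of these names come from the patent.

```python
def dual_check(voice_feats, face_feats, voice_db, face_db, voice_match, face_match):
    """Steps 1-5: voiceprint lookup yields code A, face lookup yields
    code B; identity is verified only when A and B are consistent."""
    # Steps 1-2: voiceprint verification -> unique identification code A
    code_a = next((uid for uid, m in voice_db.items()
                   if voice_match(voice_feats, m)), None)
    if code_a is None:
        return False, "voiceprint verification failed"
    # Steps 3-4: face verification -> unique identification code B
    code_b = next((uid for uid, m in face_db.items()
                   if face_match(face_feats, m)), None)
    if code_b is None:
        return False, "face verification failed"
    # Step 5: the two codes must agree
    if code_a == code_b:
        return True, "identity verified"
    return False, "identity verification failed"
```

Note that the final equality test is what distinguishes this scheme from running two independent biometric checks: both modalities must resolve to the same enrolled user.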
In summary, the face voiceprint rechecking terminal and its identity authentication method first perform voiceprint recognition on collected voice data and, when it passes, obtain the current user's unique identification code from the pre-established voiceprint database. They then perform face recognition on a collected face image and, when it passes, obtain the current user's unique identification code from the pre-established face database. Finally, the code obtained through voiceprint recognition is compared with the code obtained through face recognition, and the identity authentication result is determined from this comparison, realizing an identity authentication mode with double verification of face and voiceprint.
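Both recognition stages reduce to the same similarity-versus-threshold decision described in claims 2 and 3. Below is a minimal sketch using cosine similarity; the patent does not specify the similarity measure or the threshold value, so both are assumptions made here for illustration.

```python
import numpy as np

def similarity_passes(query, enrolled, threshold=0.7):
    """Return (passed, score): cosine similarity between a query feature
    vector and an enrolled model vector, compared to a preset threshold.
    Both the cosine measure and the 0.7 default are illustrative choices,
    not values taken from the patent."""
    q = np.asarray(query, dtype=float)
    e = np.asarray(enrolled, dtype=float)
    score = float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-10))
    return score > threshold, score
```

In practice the threshold trades off false acceptances against false rejections and would be tuned per modality.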
The foregoing embodiments merely illustrate the principles and utilities of the present invention and are not intended to limit it. Those skilled in the art can modify or vary the above embodiments without departing from the spirit and scope of the invention. Therefore, the protection scope of the invention should be determined by the appended claims.
Claims (9)
1. A face voiceprint rechecking terminal, comprising:
a voiceprint recognition verification unit, configured to acquire voice data of a current user, extract voiceprint features from the voice data, compare the extracted voiceprint features with a voiceprint model of a registered user registered in advance at a terminal or a server, and enter the first matching unit if the comparison passes;
a first matching unit, configured to obtain the unique identification code of the current user from a pre-established voiceprint database when voiceprint identification passes;
a face capturing verification unit, configured to capture a face image of the current user, extract face features for face recognition, and enter the second matching unit when face recognition passes;
a second matching unit, configured to acquire the unique identification code of the current user from a pre-established face database when face recognition passes;
and a comparison unit, configured to compare the unique identification codes obtained by the first matching unit and the second matching unit and to determine an identity authentication result from the comparison result.
2. The face voiceprint rechecking terminal as claimed in claim 1, wherein the voiceprint recognition verification unit further comprises:
a voiceprint feature extraction module, configured to acquire voice data of the current user and extract voiceprint features;
and a voiceprint recognition and authentication module, configured to calculate the similarity between the voiceprint features extracted by the voiceprint feature extraction module and a voiceprint model of a registered user registered in advance at the terminal or server, judge whether the voiceprint similarity is greater than a preset threshold, and enter the first matching unit if it is.
3. The terminal of claim 2, wherein the face capturing verification unit comprises:
a face capturing module, configured to acquire a face image of the current user, determine a face region from the face image, and calibrate face key points within the determined region;
and a face identification authentication module, configured to calculate, from the key-point features, the similarity between the user's face and a pre-stored face model of a registered user, judge whether the face similarity is greater than a preset threshold, and, if it is, pass face authentication and enter the second matching unit.
4. The terminal of claim 3, wherein the terminal further comprises:
a face registration unit, configured to acquire face image samples of a registered user and establish a face model of the registered user from the samples;
a voiceprint registration unit, configured to collect voice data of a registered user, extract voiceprint features from the voice data, and establish a voiceprint model of the registered user;
a face database establishing unit, configured to establish a face database and store in it, for each registered user, a mapping between the unique identification code and the corresponding face features;
and a voiceprint database establishing unit, configured to establish a voiceprint database and store in it, for each registered user, a mapping between the unique identification code and the corresponding voiceprint features.
5. An identity authentication method for a face voiceprint rechecking terminal, comprising the following steps:
Step S1, collecting voice data of the current user, extracting voiceprint features from the voice data, comparing the extracted voiceprint features with a voiceprint model of a registered user registered in advance at a terminal or a server, and entering step S2 if the comparison passes;
Step S2, when voiceprint identification passes, obtaining the unique identification code of the current user from the pre-established voiceprint database;
Step S3, capturing a face image of the current user, extracting face features for face recognition, and entering step S4 if face recognition passes;
Step S4, when face recognition passes, obtaining the unique identification code of the current user from the pre-established face database;
and Step S5, comparing the unique identification code obtained in step S2 with the unique identification code obtained in step S4, and determining an identity authentication result from the comparison result.
6. The identity authentication method of the face voiceprint rechecking terminal as claimed in claim 5, wherein in step S5, if the comparison result is that the two codes are consistent, the current authentication passes, and if they are inconsistent, the current authentication fails.
7. The identity authentication method of the face voiceprint rechecking terminal as claimed in claim 5, wherein step S1 further comprises:
s100, collecting voice data of a current user and extracting voiceprint characteristics;
step S101, calculating the similarity between the voiceprint features extracted by the voiceprint feature extraction module and the voiceprint model of the registered user registered in advance at the terminal or the server, judging whether the voiceprint similarity is greater than a preset threshold, if so, entering step S2, otherwise, prompting the voiceprint verification failure.
8. The identity authentication method of the face voiceprint rechecking terminal as claimed in claim 7, wherein step S3 further comprises:
step S300, acquiring a face image of a current user, determining a face area according to the face image, and calibrating a face key point in the determined face area;
step S301, calculating, from the key-point features, the similarity between the face features of the current user and the pre-stored face model of the registered user, and judging whether the face similarity is greater than a preset threshold; if so, passing face authentication and entering step S4; otherwise, prompting that face recognition verification has failed.
9. The identity authentication method of the face voiceprint rechecking terminal as claimed in claim 8, wherein before step S1 the method further comprises the following steps:
step S01, collecting face image samples of the registered users, and establishing face models of the registered users according to the face image samples;
step S02, collecting voice data of the registered user, extracting voiceprint characteristics of the voice data, and establishing a voiceprint model of the registered user;
step S03, establishing a face database, and storing in the face database, for each registered user, a correspondence between the unique identification code and the corresponding face features;
step S04, establishing a voiceprint database, and storing in the voiceprint database, for each registered user, a correspondence between the unique identification code and the corresponding voiceprint features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010431690.1A CN111611568A (en) | 2020-05-20 | 2020-05-20 | Face voiceprint rechecking terminal and identity authentication method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111611568A true CN111611568A (en) | 2020-09-01 |
Family
ID=72200815
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103973441A (en) * | 2013-01-29 | 2014-08-06 | 腾讯科技(深圳)有限公司 | User authentication method and device on basis of audios and videos |
CN110300086A (en) * | 2018-03-22 | 2019-10-01 | 北京语智科技有限公司 | Personal identification method, device, system and equipment |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112036897A (en) * | 2020-09-17 | 2020-12-04 | 中国银行股份有限公司 | ATM operation method and device |
CN112069484A (en) * | 2020-11-10 | 2020-12-11 | 中国科学院自动化研究所 | Multi-mode interactive information acquisition method and system |
CN112491844A (en) * | 2020-11-18 | 2021-03-12 | 西北大学 | Voiceprint and face recognition verification system and method based on trusted execution environment |
CN112733591A (en) * | 2020-11-19 | 2021-04-30 | 阿坝师范学院 | Face recognition system for checking in of examination room |
CN112669511A (en) * | 2020-12-18 | 2021-04-16 | 中用科技有限公司 | User registration and authentication method, system and equipment based on face voiceprint |
CN113343211A (en) * | 2021-06-24 | 2021-09-03 | 工银科技有限公司 | Data processing method, processing system, electronic device and storage medium |
CN113343211B (en) * | 2021-06-24 | 2023-04-07 | 工银科技有限公司 | Data processing method, processing system, electronic device and storage medium |
CN113886792A (en) * | 2021-12-06 | 2022-01-04 | 北京惠朗时代科技有限公司 | Application method and system of print control instrument combining voiceprint recognition and face recognition |
CN117011917A (en) * | 2023-07-28 | 2023-11-07 | 达州领投信息技术有限公司 | Safety verification method based on face and voice recognition |
CN117408973A (en) * | 2023-10-26 | 2024-01-16 | 国网四川省电力公司绵阳供电公司 | Method, terminal and electronic equipment for checking state of pressing plate of relay protection device of transformer substation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||