WO2016184325A1 - Identity verification method, terminal, and server - Google Patents

Identity verification method, terminal, and server

Info

Publication number
WO2016184325A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
living body
user identity
motion
Prior art date
Application number
PCT/CN2016/081489
Other languages
English (en)
French (fr)
Inventor
黄飞跃
李季檩
谭国富
江晓力
吴丹
陈骏武
谢建国
郭玮
刘奕慧
谢建东
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2016184325A1
Priority to US15/632,143 (US10432624B2)
Priority to US16/542,213 (US10992666B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861: Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0823: Network architectures or network communication protocols for network security for authentication of entities using certificates
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/083: Network architectures or network communication protocols for network security for authentication of entities using passwords
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/06: Authentication
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/179: Human faces, e.g. facial parts, sketches or expressions; metadata assisted face recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2463/00: Additional details relating to network architectures or network communication protocols for network security covered by H04L63/00
    • H04L2463/082: Additional details relating to network architectures or network communication protocols for network security covered by H04L63/00, applying multi-factor authentication

Definitions

  • the present invention relates to the field of security technologies, and in particular, to an identity verification method, a terminal, and a server.
  • simple password verification methods can only be applied to stand-alone applications such as access control or computer local passwords.
  • Account and password authentication methods are usually applied to applications that need to log in to a remote server, such as logging in to a social networking site or logging in to a mail server.
  • the disadvantage is that anyone who possesses the account and password can pass authentication, so security is low.
  • the mobile phone verification code verification method is a weak authentication method, and usually works in combination with other authentication methods, or is applied separately in an application scenario with low security requirements.
  • the face recognition verification method is easily defeated by spoofing the camera with a face photo, so its security is not high. Therefore, current authentication methods are insufficiently secure and need improvement.
  • an identity verification method, terminal, and server are provided.
  • An identity verification method includes:
  • the action guide information selected from the preset action guide information base is displayed and/or played in audio form, and the corresponding action image is collected;
  • matching detection is performed on the collected action image and the action guide information to obtain a living body detection result indicating whether a living body exists; and
  • the identity verification result is determined according to the living body detection result.
  • a terminal comprising a memory and a processor, wherein the memory stores instructions that, when executed by the processor, cause the processor to perform the following steps:
  • the action guide information selected from the preset action guide information base is displayed and/or played in audio form, and the corresponding action image is collected;
  • matching detection is performed on the collected action image and the action guide information to obtain a living body detection result indicating whether a living body exists; and
  • the identity verification result is determined according to the living body detection result.
  • a server includes a memory and a processor, the memory storing instructions that, when executed by the processor, cause the processor to perform the following steps:
  • the authentication result is fed back to the terminal.
  • FIG. 1 is a schematic structural diagram of an identity verification system in an embodiment
  • FIG. 2 is a schematic structural diagram of a terminal in an embodiment
  • FIG. 3 is a schematic structural diagram of a server in an embodiment
  • FIG. 4 is a schematic flow chart of an identity verification method in an embodiment
  • FIG. 5 is a flow chart showing the steps of performing matching detection on the collected motion image and the motion guidance information in an embodiment to obtain a living body detection result indicating whether a living body exists;
  • FIG. 6 is a schematic flowchart of a step of collecting user identity information in an embodiment, and performing verification according to the collected user identity information to obtain a verification result of the user identity information;
  • FIG. 7 is an application environment diagram of an identity verification system in an embodiment
  • FIG. 8 is a schematic flow chart of an identity verification method in another embodiment
  • FIG. 9 is a structural block diagram of an identity verification apparatus in an embodiment
  • FIG. 10 is a structural block diagram of an identity verification apparatus in another embodiment
  • FIG. 11 is a structural block diagram of the user identity information verification module in FIG. 10 in an embodiment
  • FIG. 12 is a structural block diagram of the face image processing module of FIG. 11 in an embodiment
  • FIG. 13 is a structural block diagram of an identity verification apparatus in still another embodiment
  • FIG. 14 is a structural block diagram of an identity verification apparatus in an embodiment.
  • an identity verification system 100 includes a terminal 110 and a server 120.
  • the terminal 110 can be a desktop computer, a public inquiry machine, or a mobile terminal such as a mobile phone, a tablet computer, or a personal digital assistant.
  • Server 120 can be one physical server or multiple physical servers.
  • the structure of the terminal 110 in FIG. 1 is as shown in FIG. 2, and includes a processor, an internal memory, a non-volatile storage medium, a network interface, a display screen, a camera, and an input device connected through a system bus.
  • the non-volatile storage medium of the terminal 110 stores an identity verification device for implementing an identity verification method.
  • the processor of the terminal 110 is configured to provide computing and control functions configured to perform an authentication method.
  • the display screen of the terminal 110 may be a liquid crystal display or an electronic ink display screen.
  • the input device of the terminal 110 may be a touch layer covering the display screen, a button, trackball or touchpad provided on the outer casing of the terminal 110, or an external keyboard, touchpad or mouse.
  • the composition of the server 120 of FIG. 1 is as shown in FIG. 3, including a processor connected via a system bus, an internal memory, a non-volatile storage medium, and a network interface.
  • the non-volatile storage medium of the server 120 stores an operating system and an identity verification device, and the identity verification device is used to implement an identity verification method.
  • the processor of the server 120 is configured to provide computing and control functions configured to perform an authentication method.
  • an identity verification method is provided, which may be applied to the terminal 110 in FIG. 1 and FIG. 2 described above, or applied cooperatively to the terminal 110 and the server 120 in the identity verification system of FIG. 1.
  • the method specifically includes the following steps:
  • Step 402 Display the action guide information selected from the preset action guide information base and/or play it in audio form, and collect the corresponding action image.
  • the preset motion guiding information library includes various motion guiding information, and the action guiding information is used to guide the user to perform corresponding actions.
  • the motion guiding information is “blinking”, which means that the user is guided to do the blinking action.
  • Similar motion guidance information may also be "open mouth”, “turning head” or “extending four fingers” or the like to guide the user to make such actions as opening a mouth, turning a head or extending four fingers.
  • the action guide information may be randomly selected from the preset action guide information base, or selected according to a secret selection order, which may be updated periodically.
  • the terminal 110 can receive the action guidance information that the server selects from the preset action guide information base and sends to the terminal.
  • the action guidance information includes an action indication sequence formed by a plurality of action indication units.
  • an action indicating unit is the minimum action guiding unit; one action indicating unit represents one action. For example, "blink", "open mouth" and "turn head" are each an action indicating unit, and multiple action indicating units arranged in order form an action indication sequence. For example, if a piece of action guidance information is the action indication sequence "blink, open mouth, turn head", the user needs to perform the actions of blinking, opening the mouth and turning the head in that order.
  • the action guiding information may be a sequence of characters, each word in the sequence of characters is an action indicating unit, and the mouth shape of each word is read by the user as an action.
  • the motion guiding information includes an action indicating sequence formed by a plurality of motion indicating units, so that the method of cracking the identity verification by means of random testing can be avoided as much as possible, and the living body detection result can be more accurate.
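As a minimal sketch of how such a multi-unit sequence might be selected, the following Python snippet draws a random action indication sequence from a preset pool. The pool contents and sequence length are illustrative assumptions, not details taken from the patent:

```python
import random

# Hypothetical pool of action-indication units; the actual contents of the
# preset action guide information base are not specified here.
ACTION_UNITS = ["blink", "open_mouth", "turn_head", "extend_four_fingers"]

def select_action_sequence(length=3, rng=random):
    """Randomly pick a sequence of distinct action-indication units.

    A multi-unit sequence is far harder to pass by replaying a single
    canned action than a single-unit prompt is.
    """
    return rng.sample(ACTION_UNITS, k=length)
```

A real system would also refresh or shuffle the selection order periodically, as described above.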
  • the motion guidance information can be displayed in a visual form, such as in the form of text, a schematic diagram, and the like.
  • the action guidance information is played in the form of audio. Specifically, audio data for each character or word may be recorded in advance. When playing, the corresponding audio data may be looked up character by character and played, or the action guidance information may first be segmented into words and then converted into the corresponding audio data word by word and played.
  • the terminal can play the action guidance information in audio form while displaying it.
  • an action image corresponding to the motion guidance information is acquired.
  • the action image refers to an image that should contain an action made by the user based on the action guidance information.
  • An image may be acquired at a preset time interval during the display and/or play period corresponding to an action indicating unit, and the acquired image (or the part of it) that differs most from the other images acquired during that period may be used as the captured action image.
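The frame-selection heuristic above (keep the image that differs most from the others collected in the same window) can be sketched as follows; frames are represented as flat lists of pixel values purely for illustration:

```python
def most_distinct_frame(frames):
    """Return the index of the frame whose total absolute difference
    from all other frames is largest.

    `frames` is a list of equal-length sequences of pixel values; a
    real system would operate on decoded camera frames instead.
    """
    def total_diff(i):
        return sum(
            sum(abs(a - b) for a, b in zip(frames[i], other))
            for j, other in enumerate(frames) if j != i
        )
    return max(range(len(frames)), key=total_diff)
```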
  • Step 404 Perform matching detection on the collected motion image and the motion guidance information to obtain a living body detection result indicating whether or not a living body exists.
  • Performing matching detection on two objects means detecting whether, or to what degree, the two objects match. If the acquired action image is detected to match the action guidance information, a living body is present, and a living body detection result indicating the presence of a living body is obtained. If the acquired action image does not match the action guidance information, no living body is present, and a living body detection result indicating the absence of a living body is obtained.
  • the result of the living body test can be taken from two preset values, such as 1 for the presence of a living body and 0 for the absence of a living body.
  • the result of the living body detection can also be represented by a matching value indicating the matching degree between the collected motion image and the motion guiding information. If the matching value exceeds the matching value threshold, the living body exists; if the matching value does not exceed the matching value threshold, it indicates that the living body does not exist. Living body.
  • the matching value may be represented by the similarity of the motion image and the preset motion image corresponding to the motion guidance information, or by a value obtained by performing a positive correlation operation on the similarity.
  • the matching value may also be represented by the Euclidean distance of the motion feature extracted from the motion image and the preset motion feature corresponding to the motion guidance information, or by a value obtained by performing a positive correlation operation on the Euclidean distance.
  • a positive correlation operation refers to feeding a value into a function whose output is positively correlated with its input, and taking the function's output as the result.
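As a minimal sketch of the matching-value scheme above, the following maps a raw similarity through a positively correlated function (a logistic squash, chosen arbitrarily for illustration; the patent does not fix a particular function) and compares the result against a matching-value threshold:

```python
import math

def to_matching_value(similarity):
    """Map a raw similarity to a matching value via a positively
    correlated function (logistic squash; the choice is illustrative)."""
    return 1.0 / (1.0 + math.exp(-similarity))

def liveness_result(similarity, threshold=0.6):
    """A living body is deemed present iff the matching value exceeds
    the matching-value threshold (threshold value is an assumption)."""
    return to_matching_value(similarity) > threshold
```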
  • step 404 includes: extracting an action feature from the acquired motion image, performing matching detection on the extracted motion feature and the preset motion feature corresponding to the motion guidance information, and obtaining a living body detection result indicating whether a living body exists.
  • the similarity between the extracted action feature and the corresponding preset action feature may be calculated. If the similarity is greater than the similarity threshold, it is determined that a living body exists, and a living body detection result indicating the presence of a living body is obtained; if the similarity is less than or equal to the similarity threshold, it is determined that no living body exists, and a living body detection result indicating the absence of a living body is obtained.
  • the motion features extracted here may be geometric features, such as Euclidean distances, or algebraic features, such as feature matrices.
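A geometric-feature comparison of the kind described can be sketched as below; the distance threshold is an assumed illustrative value, not one specified by the patent:

```python
import math

def euclidean_distance(feat_a, feat_b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))

def features_match(extracted, preset, max_distance=0.5):
    """Extracted and preset features are considered to match when
    their Euclidean distance stays below the threshold."""
    return euclidean_distance(extracted, preset) < max_distance
```

Equivalently, the distance can first be passed through a positive correlation operation to obtain a matching value, as described earlier.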
  • step 404 includes: transmitting the collected motion image to the server, causing the server to perform matching detection on the motion image and the motion guidance information, and obtaining a living body detection result indicating whether a living body exists.
  • the captured motion image can be encrypted and sent to the server.
  • Step 406 Determine an identity verification result according to the living body detection result.
  • the identity verification result may also be obtained according to the combination of the living body detection result and other verification methods.
  • the above-mentioned identity verification method guides the user to complete the corresponding action in a visual and/or acoustic manner by displaying and/or playing the action guide information selected in the preset action guide information library, so as to collect the corresponding action image. Then, by performing matching detection on the acquired motion image and the motion guidance information, a living body detection result indicating whether or not a living body exists is obtained, thereby obtaining an identity verification result based on the living body detection result. In this way, through the living body detection to verify whether the current operation is a real user, the situation of machine brute force cracking can be avoided, and the final identity verification result is more accurate and the security is improved.
  • the identity verification method further includes: collecting user identity information, and performing verification according to the collected user identity information, to obtain a user identity information verification result.
  • step 406 includes: determining an identity verification result according to the living body detection result and the user identity information verification result.
  • user identity information refers to information used to prove the identity of the user, and includes at least one of a user account and password, user document information, and user biometric information.
  • the user biometric information includes face feature information, fingerprint feature information, iris feature information, and palm geometry.
  • the document information includes the ID number, name, date of birth, issuing authority and expiration date.
  • the documents may be ID card, driver's license, social security card and passport.
  • the user identity information may be obtained by acquiring user identity information entered by the user, for example obtaining the character string entered in the user account input box as the user account, the string entered in the user password input box as the user password, and the document information entered in the document information input box.
  • the user identity information may also be obtained by calling a camera, a sensor, or the like to obtain user identity information, such as obtaining a certificate image or a face image by scanning a camera, and obtaining fingerprint feature information, iris feature information, and the like by scanning the sensor.
  • the user identity information itself can be verified.
  • For document information, it can be judged whether the document number conforms to a preset format and whether the current time falls within the document's validity period.
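One concrete way to implement such format and validity checks is sketched below. The 18-character checksum (per GB 11643 for mainland-China resident ID numbers) is given only as an example of a "preset format"; the patent does not mandate any particular document type:

```python
import re
from datetime import date

# GB 11643 checksum weights and check characters for 18-digit IDs.
_WEIGHTS = [7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
_CHECK_CHARS = "10X98765432"

def id_number_valid(number):
    """Format plus checksum check for an 18-character resident ID number."""
    if not re.fullmatch(r"\d{17}[\dX]", number):
        return False
    total = sum(int(c) * w for c, w in zip(number[:17], _WEIGHTS))
    return _CHECK_CHARS[total % 11] == number[17]

def within_validity(expiry, today=None):
    """The document passes only while the current date is within
    its validity period."""
    return (today or date.today()) <= expiry
```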
  • the collected user identity information may be matched against pre-stored user identity information to obtain the identity information verification result.
  • the pre-stored password corresponding to the user account may be obtained, and it is determined whether the collected user password and the pre-stored password are consistent, thereby obtaining the identity information verification result.
  • the identity information verification result is used to indicate whether the verification according to the collected user identity information is passed.
  • the user identity information is collected, and the user identity information is verified according to the collected user identity information, and the user identity information verification result is obtained.
  • the terminal 110 collects the user identity information and sends it to the server, so that the server performs verification according to the collected user identity information and obtains the user identity information verification result.
  • the collected user identity information can be encrypted and sent to the server.
  • the step of determining the identity verification result according to the living body detection result and the user identity information verification result includes: when the living body detection result indicates that the living body exists, and the user identity information verification result is the verification pass, determining the identity verification result as the verification by.
  • the step of determining the identity verification result according to the living body detection result and the user identity information verification result includes: when the living body detection result indicates that no living body exists, even if the user identity information verification result is a pass, determining the identity verification result to be a verification failure.
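The case analysis above reduces to a simple conjunction, sketched here for clarity:

```python
def identity_verification_result(liveness_ok, identity_ok):
    """Overall verification passes only when a living body is detected
    AND the user identity information check passes; every other
    combination yields a verification failure."""
    return liveness_ok and identity_ok
```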
  • the step of determining the identity verification result according to the living body detection result and the user identity information verification result includes: receiving the identity verification result fed back by the server after the server determines it according to the living body detection result and the user identity information verification result.
  • Step 404 and the step of collecting user identity information, performing verification according to it, and obtaining the user identity information verification result are asynchronous, and no sequential execution order of the two steps is imposed. In this embodiment, asynchronous processing ensures the efficiency of the identity verification process.
  • the motion guiding information is mouth type guiding information
  • the motion image includes a mouth type image
  • step 404 includes the following steps:
  • Step 502 extracting a mouth feature from the mouth image.
  • the action guidance information is information for guiding the user to speak, and may be referred to as mouth type guidance information.
  • when the action image is acquired, the position of the lips can be detected directly, so as to obtain an action image mainly containing the user's mouth shape.
  • the motion image is a face image and the face image includes a mouth image. The position of the human mouth relative to the face is fixed, so that the mouth image in the face image can be directly positioned after the face image is determined.
  • the mouth shape can also be called a lip shape.
  • the mouth shape of the person can be represented by the inner lip line and the outer lip line of the lips, and a feature capable of reflecting the change of the inner lip line and/or the outer lip line can be used as the mouth type feature.
  • Taking the inner lip line as an example: when the mouth is closed, the inner lip line is a straight line; when the mouth is fully open, the inner lip line is approximately circular.
  • the area of the area surrounded by the inner lip line can be used as the mouth type feature, and the distance between the left and right borders of the mouth inner lip line and the distance between the upper and lower boundaries can be used as the mouth type feature.
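The width/height mouth-shape features described above can be sketched from inner-lip landmark points as follows; representing the inner lip line as a list of (x, y) points is an assumption made for illustration:

```python
def mouth_features(inner_lip_points):
    """Compute simple mouth-shape features from inner-lip landmarks
    given as (x, y) tuples: the distance between the left and right
    boundaries and the distance between the upper and lower boundaries."""
    xs = [p[0] for p in inner_lip_points]
    ys = [p[1] for p in inner_lip_points]
    width = max(xs) - min(xs)   # left-right boundary distance
    height = max(ys) - min(ys)  # upper-lower boundary distance
    return width, height

def mouth_open_ratio(inner_lip_points):
    """Openness ratio: height relative to width. Near 0 for a closed
    mouth (inner lip line is a straight line), larger as it opens."""
    width, height = mouth_features(inner_lip_points)
    return height / width if width else 0.0
```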
  • Step 504 Perform matching detection on the extracted mouth type feature and the preset mouth type feature corresponding to the motion guiding information, and obtain a living body detection result indicating whether a living body exists.
  • the content expressed by the action guidance information may be read aloud in advance at a standard speech rate, the mouth images of the changing mouth shape during reading collected, and the mouth-shape features extracted and stored as the preset mouth-shape features in correspondence with the action guidance information.
  • the extracted mouth type features are compared with the preset mouth type features for matching detection.
  • the similarity between the extracted mouth-shape feature and the preset mouth-shape feature may be calculated. If the similarity is greater than the similarity threshold, a living body detection result indicating the presence of a living body is obtained; if the similarity is not greater than the similarity threshold, a living body detection result indicating the absence of a living body is obtained.
  • the motion image can also include a complete face image, which can be applied in the subsequent identity verification process, and the resource reuse rate is improved.
  • the number of motion images is a preset number greater than 1.
  • the identity verification method further includes: performing face recognition on the face image included in each action image, and directly obtaining an identity verification result indicating verification failure when the recognition results are inconsistent.
  • the preset number can be set as needed, for example, it can take 3, 4 or 5, etc.
  • By performing face recognition on the face image included in each action image, if the user is switched during the living body detection process, the recognition results will be inconsistent, and an identity verification result indicating verification failure is given directly. This matters because living body detection takes a certain period of time; to ensure security, the same user must be operating throughout the living body detection process.
  • face recognition may also be performed between the face image included in each action image and the face image included in the user identity information; when the recognition results are inconsistent, an identity verification result indicating verification failure is obtained directly.
  • step 402 includes: displaying action guide information selected from the preset action guide information library, and displaying the read progress information according to the speech rate corresponding to the action guide information.
  • Speech rate refers to the speed of speaking.
  • the content expressed by the action guidance information may be displayed word by word according to the speech rate, or all of the action guidance information may be displayed at once together with a speech-rate progress bar, so that the progress bar starts from the first character of the action guidance information and advances at the corresponding speech rate.
  • step 402 includes: playing the action guide information in audio form according to the speech rate corresponding to the action guide information selected from the preset action guide information base.
  • the action guidance information is directly played at the standard speech rate, and the user is guided to follow, so that the user controls the mouth shape change according to the speech rate, and the terminal 110 collects the corresponding action image.
  • in this way, the accuracy of living body detection can be improved, and living body detection failures caused by an abnormal user speech rate can be avoided.
  • In one embodiment, the step of collecting user identity information, performing verification according to it, and obtaining the user identity information verification result includes: collecting multiple kinds of user identity information, detecting the user identifier corresponding to each kind, detecting whether the user identifiers corresponding to the various kinds of user identity information are consistent, and obtaining the user identity information verification result accordingly.
  • the user identifier refers to a character or a character string that can uniquely identify the user.
  • each kind of user identity information is detected separately to obtain the corresponding user identifier, and it is then judged whether all detected user identifiers are consistent. If they are consistent, an identity information verification result indicating that verification passes is obtained; if they are inconsistent, an identity information verification result indicating that verification fails is obtained.
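The consistency check across detected user identifiers can be sketched as:

```python
def identity_info_consistent(user_identifiers):
    """Verification passes only if every collected kind of identity
    information resolves to the same user identifier, e.g. identifiers
    recovered from a document scan, a face match, and an account lookup
    (hypothetical sources) must all coincide."""
    return len(set(user_identifiers)) == 1
```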
  • the detected identity information verification result is more reliable, which makes the final authentication result more reliable.
  • the user identification here may be an identity card number, a driver's license number, a social security card code, or a passport number.
  • the user identity information is collected, and the user identity information is verified according to the collected user identity information, and the step of obtaining the user identity information verification result includes:
  • Step 602 Acquire a document image and perform character recognition on the document image to obtain a user identifier that matches the image of the certificate.
  • the terminal 110 runs a client, and the client may be an original application client or a light application client.
  • the light application is an application that can be used without downloading.
• the light applications currently in use are written in HTML5 (Hypertext Markup Language, fifth edition).
  • the terminal 110 sends the collected ID image to the server, and the server performs character recognition on the ID image to obtain a user identifier that matches the ID image.
  • the terminal 110 invokes the camera through the client running on the terminal 110 to scan the document in the form of photographing or video recording to obtain a document image.
• the terminal 110 can provide an interactive interface through the client to guide the user to scan the document according to the prompts; specifically, the front of the document may be scanned first, and then the reverse side.
• original photos of the front and back of the document, as well as front and back document images cropped according to the shape of the document, can be obtained during the scanning process.
• one original photo and one cropped document image of each side may be used; of course, the number of images can be customized as needed.
• the terminal 110 can also discriminate the shape and color distribution of the document image, to determine whether the document is forged or whether the document image itself has been forged.
• the server can use OCR to perform character recognition on the document image, recognize the text information therein, compare it with the document information stored on the external certificate server, find the matching document information, and obtain the corresponding user identifier.
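The lookup against the certificate server can be illustrated with plain dictionaries standing in for the OCR output and the stored records. The record layout, field names, and sample values here are invented for the sketch; a real deployment would query the external certificate server:

```python
def find_matching_user_id(recognized, records):
    """Return the user identifier of the stored document record whose
    fields all agree with the OCR-recognized text, or None otherwise."""
    for record in records:
        if all(record.get(field) == value for field, value in recognized.items()):
            return record["user_id"]
    return None

records = [  # stand-in for document info held by the external certificate server
    {"user_id": "U001", "name": "Zhang San", "id_number": "330102199001011234"},
    {"user_id": "U002", "name": "Li Si", "id_number": "110101198501017890"},
]
recognized = {"name": "Zhang San", "id_number": "330102199001011234"}
print(find_matching_user_id(recognized, records))  # U001
```

Requiring every recognized field to agree mirrors the "find the matching document information" step: a partial match is treated as no match.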
  • the document server here may be an identity card server of a citizenship management institution, a driver's license information server of a vehicle management institution, a social security card information server of a social security institution, or a passport information server of a passport issuing institution.
• the server can also compare the recognized text information with the text information input by the user to determine whether they match; if they do not match, an identity verification result indicating that the identity verification fails is given directly, thereby preventing a user from stealing another person's certificate to perform the operation. If the text cannot be recognized, or verification fails for other reasons, a corresponding error message is given.
  • the entered user identification can also be obtained directly.
  • the user ID entered here refers to the user ID entered by the user.
  • Step 604 Acquire a face image, and calculate a similarity between the collected face image and the face image corresponding to the user identifier in the comparison face database.
• the document avatar in the document image is intercepted; the face image is collected; and the collected face image and the intercepted document avatar are each compared with the face image corresponding to the user identifier in the comparison face database, and the similarity is calculated.
  • the similarity here indicates the degree of similarity between the corresponding face images.
• the collected face image can be compared with the document avatar locally, or the collected face image and the document avatar can be sent to an external document server for comparison, and the similarity is calculated.
• the collected face image can also be sent directly to the external document server for comparison, and the similarity is calculated.
  • Step 606 Determine a user identity information verification result according to the similarity.
• if the similarity exceeds the similarity threshold, an identity information verification result indicating that the verification passes is obtained; if the similarity does not exceed the similarity threshold, an identity information verification result indicating that the verification fails is obtained. If there are multiple similarities, the identity information verification result indicating that the verification passes may be obtained when every similarity is higher than its corresponding similarity threshold; when any similarity does not exceed its corresponding similarity threshold, the identity information verification result may be determined as verification failed.
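The multi-similarity rule in step 606 amounts to requiring every similarity to clear its own threshold. A minimal sketch, with illustrative threshold values (the actual thresholds are not specified by the text):

```python
def identity_verified(similarities, thresholds):
    """Verification passes only when each similarity exceeds its
    corresponding similarity threshold."""
    return all(s > t for s, t in zip(similarities, thresholds))

# e.g. face-vs-document-avatar and face-vs-database similarities
print(identity_verified([0.91, 0.87], [0.80, 0.80]))  # True
print(identity_verified([0.91, 0.72], [0.80, 0.80]))  # False
```

A single sub-threshold similarity is enough to fail the verification, matching the "any similarity does not exceed its threshold" branch above.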
  • the integrated identification image and the collected face image are used to comprehensively verify the user identity information, so that the identity information verification result is more accurate, and the identity verification result is more accurate.
• before step 402, the method further includes: detecting a financial service operation instruction, and after the financial service operation instruction is detected, obtaining the action guidance information selected from the preset action guidance information base. After step 406, the method further includes: performing the financial service operation corresponding to the financial service operation instruction when the verification result is that the verification passes.
• the financial services here include applying for loans, online credit card applications, and investment and financial management services.
  • the above-mentioned identity verification method is used to ensure transaction security in the financial service, so that the handling of the financial service is more secure and reliable.
• the server 120 includes a lip language living body detection server 121, a first facial feature extraction server 122, a second facial feature extraction server 123, and a face verification server 124.
• the lip language detection server 121 is connected to the terminal 110; the first facial feature extraction server 122 is connected to the terminal 110, the second facial feature extraction server 123, and the face verification server 124; and the second facial feature extraction server 123, the face verification server 124, and the external certificate server 130 are connected.
  • An authentication method includes the following steps 1) to 5):
• Step 1) lip-reading living body detection: through the terminal 110, it is determined whether the user is a living body, thereby verifying whether it is the user himself or herself performing the operation rather than a video or photo replay.
  • step 1) further includes steps A) to B):
  • Step A) Face Detection: detects the presence of a face from various scenes and determines the position of the face.
• the main purpose of face detection is to find the face region in the captured image and divide the image into a face region and a non-face region, thus preparing for subsequent applications.
• Step B) Living body detection: a number of short sentences that can be used to determine the user's mouth shape are preselected, for example a set of 100 short Chinese sentences.
• the lip language living body detection server 121 analyzes the mouth-shape features of these short sentences and stores them in the lip language detection server 121.
  • the terminal 110 randomly selects and displays a short sentence that the user needs to read on the user live detection page, and prompts the user to read it.
• on the basis of the detected facial features, the terminal 110 collects the changes in the user's mouth shape while reading the short sentence, and compares them with the mouth-shape changes stored in the lip language detection server 121 to determine whether they are consistent, thereby determining whether the user is reading the given sentence, and in turn whether the user is operating in real time.
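The comparison of collected mouth-shape changes with the stored reference can be sketched by representing each frame as a single mouth-openness value and bounding the average deviation from the stored sequence. The per-frame representation and the tolerance value are assumptions for illustration; the patent does not fix a particular metric:

```python
def mouth_sequence_matches(observed, reference, tolerance=0.15):
    """Compare per-frame mouth-openness values captured while the user
    reads the sentence against the sequence stored for that sentence."""
    if len(observed) != len(reference):
        return False
    mean_error = sum(abs(o - r) for o, r in zip(observed, reference)) / len(reference)
    return mean_error <= tolerance

reference = [0.1, 0.6, 0.3, 0.8, 0.2]    # stored mouth-shape trajectory
observed  = [0.12, 0.55, 0.35, 0.75, 0.2]  # trajectory captured from the user
print(mouth_sequence_matches(observed, reference))  # True
```

A user silently mouthing a different sentence, or a replayed video, would produce a trajectory whose mean error exceeds the tolerance and therefore fail the check.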
• Step 2) On the basis of step 1), the user's self-photographed or video face information is collected through the mobile device, and the front and back of the user's identity document are scanned.
• Step 3) Using the collected user face information, the scanned ID card photo information, and the photo information of the user's ID card stored at the authority, face feature extraction is performed using the facial feature positioning method; a machine learning algorithm is then used to calculate the similarity among the three pieces of feature information.
  • step 3) further includes steps a) to c):
• Step a) Facial feature positioning: this is the prerequisite for extracting the main facial feature information; its main purpose is to locate the target facial organ points within the detected face region, such as the face contour, eyebrows, eyes, nose, and lip contour, together with their positions.
• Step b) Face representation: on the basis of the facial feature positioning, the detected face (including faces on file) is represented in a preselected manner. Common representation methods include geometric features (such as the Euclidean distance), algebraic features (feature matrices), and the like.
  • Step c) Face Identification: The face to be recognized is compared with the known face in the database to obtain the correlation between the faces.
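Step c) can be sketched with a Euclidean distance over fixed-length feature vectors. The 3-element vectors and the acceptance radius of 0.6 are illustrative values chosen for the sketch, not parameters from the method:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify_face(probe, gallery, max_distance=0.6):
    """Compare the probe feature vector with each known face in the
    database and return the closest identity within the acceptance radius."""
    best_id = min(gallery, key=lambda face_id: euclidean(probe, gallery[face_id]))
    return best_id if euclidean(probe, gallery[best_id]) <= max_distance else None

gallery = {"U001": [0.1, 0.2, 0.3], "U002": [0.9, 0.8, 0.7]}
print(identify_face([0.12, 0.18, 0.31], gallery))  # U001
```

Returning `None` when even the closest face is outside the radius keeps "no match" distinct from a weak match, which matters for the pass/fail decision downstream.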
• Step 4) Character recognition is performed on the scanned user ID card, and the similarity with the user ID card text information held by the third-party authority is then calculated.
• Step 5) The results of steps 3) and 4) above are combined to determine whether the current user and the user information stored at the authority correspond to the same person.
  • an authentication method is applied to the server 120 of FIG. 1 and FIG. 2 described above.
• the difference from the identity verification method in the above embodiments is that, in this embodiment, the steps involving data input and output, such as collecting action images and user identity information and displaying and/or playing the action guidance information, are performed on the terminal 110, while the other steps requiring a large amount of computation are performed on the server 120. This can significantly reduce the computing pressure on the terminal 110 and improve the efficiency of identity verification.
  • the method includes:
  • Step 802 Select action guide information from the preset action guide information base and send it to the terminal 110, so that the terminal 110 displays the action guide information and/or plays it in audio form, and collects the corresponding action image.
  • the terminal 110 displays the action guide information selected from the preset action guide information base, and displays the read progress information according to the speech rate corresponding to the action guide information; and/or The terminal 110 plays the motion guidance information in audio form according to the speech rate corresponding to the motion guidance information selected from the preset motion guidance information base.
  • Step 804 Receive an action image sent by the terminal, perform matching detection on the motion image and the motion guidance information, and obtain a living body detection result indicating whether a living body exists.
  • the motion guiding information is mouth-type guiding information;
  • the motion image includes a mouth-shaped image;
• the matching detection of the motion image and the motion guidance information to obtain a living body detection result indicating whether a living body exists includes: extracting a mouth-shape feature from the mouth image, performing matching detection on the extracted mouth-shape feature and the preset mouth-shape feature corresponding to the motion guidance information, and obtaining a living body detection result indicating whether a living body exists.
  • the number of motion images is a preset number greater than 1.
• the identity verification method further includes: collecting the face image included in each motion image and performing face recognition, and directly obtaining an identity verification result indicating that the verification fails when the recognition results are inconsistent.
  • Step 806 After determining the identity verification result according to the living body detection result, the identity verification result is fed back to the terminal.
  • the identity verification method further includes: receiving user identity information collected and sent by the terminal, and performing verification according to the collected user identity information, to obtain a verification result of the user identity information.
• Step 806 includes: determining the identity verification result according to the living body detection result and the user identity information verification result, and then feeding back the identity verification result to the terminal.
• receiving the user identity information collected and sent by the terminal, performing verification according to the collected user identity information, and obtaining the user identity information verification result includes: receiving multiple pieces of user identity information sent by the terminal, and detecting the user identifier corresponding to each piece of user identity information; and detecting whether the user identifiers corresponding to the pieces of user identity information are consistent, to obtain the user identity information verification result.
• receiving the user identity information collected and sent by the terminal, performing verification according to the collected user identity information, and obtaining the user identity information verification result includes: receiving the document image collected and sent by the terminal, and performing character recognition on the document image to obtain a user identifier that matches the document image; receiving the face image collected and sent by the terminal, and calculating the similarity between the collected face image and the face image corresponding to the user identifier in the comparison face database; and determining the user identity information verification result according to the similarity.
• receiving the face image collected and sent by the terminal, and calculating the similarity between the collected face image and the face image corresponding to the user identifier in the comparison face database includes: intercepting the document avatar in the document image; receiving the face image collected and sent by the terminal; and comparing the collected face image and the intercepted document avatar with the face image corresponding to the user identifier in the comparison face database, and calculating the similarity.
• determining the identity verification result according to the living body detection result and the user identity information verification result includes: when the living body detection result indicates that a living body exists and the user identity information verification result is that the verification passes, determining that the identity verification result is that the verification passes.
• before step 802, the method further includes: detecting a financial service operation instruction, and after the financial service operation instruction is detected, selecting the action guidance information from the preset action guidance information base and transmitting it to the terminal; and performing the financial service operation corresponding to the financial service operation instruction when the verification result is that the verification passes.
  • the above-mentioned identity verification method guides the user to complete the corresponding action in a visual and/or acoustic manner by displaying and/or playing the action guide information selected in the preset action guide information library, so as to collect the corresponding action image. Then, by performing matching detection on the acquired motion image and the motion guidance information, a living body detection result indicating whether or not a living body exists is obtained, thereby obtaining an identity verification result based on the living body detection result. In this way, through the living body detection to verify whether the current operation is a real user, the situation of machine brute force cracking can be avoided, and the final identity verification result is more accurate and the security is improved.
  • an identity verification system 100 includes a terminal 110 and a server 120.
  • the terminal 110 is configured to receive the action guiding information that is selected and sent by the server 120 from the preset action guiding information base; and is further configured to display the action guiding information and/or play in an audio form, and collect a corresponding action image; The acquired motion image is transmitted to the server 120.
• the preset motion guidance information base includes various pieces of motion guidance information, and the action guidance information is used to guide the user to perform corresponding actions. The action guidance information may be randomly selected from the preset action guidance information base, or may be selected in a secret selection order that is updated periodically.
  • the action guidance information includes an action indication sequence formed by a plurality of action indication units.
  • the action indicating unit refers to a minimum motion guiding unit, and one motion indicating unit represents one motion, and the plurality of motion indicating units are sequentially arranged to form an action indicating sequence.
  • the motion guiding information includes an action indicating sequence formed by a plurality of motion indicating units, so that the method of cracking the identity verification by means of random testing can be avoided as much as possible, and the living body detection result can be more accurate.
  • the motion guidance information can be displayed in a visual form, such as in the form of text, a schematic diagram, and the like.
• the motion guidance information is played in audio form. Specifically, audio data for individual characters or words may be recorded in advance; the corresponding audio data may then be looked up character by character for the motion guidance information and played, or the motion guidance information may first undergo word segmentation and then be converted into corresponding audio data word by word and played.
• the motion guidance information may also be played in audio form while it is being displayed.
  • an action image corresponding to the motion guidance information is acquired.
  • the action image refers to an image that should contain an action made by the user based on the action guidance information.
• an image may be acquired at preset time intervals during the display and/or play time period corresponding to an action indicating unit, and the acquired image (or a part of it) that differs most from the other images acquired during that time period may be used as the captured motion image.
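The frame-selection rule above can be sketched over flattened pixel arrays: sample several frames during the unit's display window, then keep the frame with the largest total difference to all others. Pure-Python and illustrative only; a real implementation would work on camera frames:

```python
def select_action_frame(frames):
    """From the images sampled during one action-indication window, return
    the frame with the largest summed absolute difference to all others."""
    def diff(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return max(frames, key=lambda f: sum(diff(f, g) for g in frames))

frames = [  # three tiny stand-in frames; the last one captures the action
    [10, 10, 10, 10],
    [11, 10, 10, 10],
    [90, 80, 85, 88],
]
print(select_action_frame(frames))  # [90, 80, 85, 88]
```

The frame that differs most from its neighbors is the one most likely to contain the guided action rather than the resting face.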
  • the server 120 is configured to perform matching detection on the motion image and the motion guidance information, and obtain a living body detection result indicating whether or not a living body exists.
• matching detection on two objects means detecting whether the two objects match, or detecting the degree to which they match. If the acquired motion image is detected to match the motion guidance information, this indicates that a living body exists, and a living body detection result indicating that a living body exists is obtained; if the acquired motion image is detected not to match the motion guidance information, this indicates that no living body exists, and a living body detection result indicating that no living body exists is obtained.
  • the result of the living body test can be taken from two preset values, such as 1 for the presence of a living body and 0 for the absence of a living body.
• the living body detection result can also be represented by a matching value indicating the degree of matching between the collected motion image and the motion guidance information. If the matching value exceeds the matching value threshold, a living body exists; if the matching value does not exceed the matching value threshold, no living body exists.
  • the matching value may be represented by the similarity of the motion image and the preset motion image corresponding to the motion guidance information, or by a value obtained by performing a positive correlation operation on the similarity.
  • the matching value may also be represented by the Euclidean distance of the motion feature extracted from the motion image and the preset motion feature corresponding to the motion guidance information, or by a value obtained by performing a positive correlation operation on the Euclidean distance.
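One way to realize the matching-value decision over the Euclidean-distance representation is to map the distance into a bounded score. The mapping `1 / (1 + d)` and the threshold of 0.5 are assumed examples; the text only requires that the value track the matching degree:

```python
def match_value_from_distance(distance):
    """Map the Euclidean distance between the extracted and preset action
    features to a match value in (0, 1]; smaller distances score higher."""
    return 1.0 / (1.0 + distance)

def living_body_detected(match_value, threshold=0.5):
    """A living body is deemed present when the match value exceeds the
    matching value threshold (0.5 is an illustrative choice)."""
    return match_value > threshold

print(living_body_detected(match_value_from_distance(0.2)))  # True
print(living_body_detected(match_value_from_distance(3.0)))  # False
```

Bounding the score in (0, 1] makes a single fixed threshold meaningful regardless of the raw feature scale.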
  • the server 120 is further configured to extract an action feature from the acquired action image, perform matching detection on the extracted action feature and the preset action feature corresponding to the action guide information, and obtain a living body detection result indicating whether a living body exists.
  • the server 120 is further configured to: after determining the identity verification result according to the biometric detection result, return the identity verification result to the terminal 110; the terminal 110 is further configured to receive the identity verification result.
  • the server 120 is configured to determine that the identity verification result is the verification pass when the living body detection result indicates that the living body exists. In one embodiment, the server 120 is configured to determine that the authentication result is that the verification fails if the living body detection result indicates that there is no living body. In one embodiment, the server 120 is further configured to obtain an authentication result based on the combination of the biometric detection result and other verification methods.
  • the above-mentioned identity verification system 100 guides the user to complete the corresponding action in a visual and/or acoustic manner by displaying and/or playing the action guide information selected in the preset action guide information library, so as to collect the corresponding action image. Then, by performing matching detection on the acquired motion image and the motion guidance information, a living body detection result indicating whether or not a living body exists is obtained, thereby obtaining an identity verification result based on the living body detection result. In this way, through the living body detection to verify whether the current operation is a real user, the situation of machine brute force cracking can be avoided, and the final identity verification result is more accurate and the security is improved.
• the terminal 110 is further configured to collect user identity information and send it to the server 120.
  • the server 120 is further configured to perform verification according to the collected user identity information, and obtain the user identity information verification result.
• the server 120 is further configured to determine the identity verification result according to the living body detection result and the user identity information verification result, and then return the identity verification result to the terminal 110.
• the user identity information refers to information for proving the identity of the user, and includes at least one of a user account and user password, user document information, and user biometric information.
  • the user biometric information includes face feature information, fingerprint feature information, iris feature information, and palm geometry.
  • the document information includes the ID number, name, date of birth, issuing authority and expiration date.
  • the documents may be ID card, driver's license, social security card and passport.
• the server 120 is specifically configured to obtain the user identity information input by the user, for example, obtaining the character string entered in the user account input box as the user account, obtaining the character string entered in the user password input box as the user password, and obtaining the document information entered in the document information input box.
• the user identity information may also be obtained by calling a camera, a sensor, or the like; for example, a document image or a face image is obtained by scanning with the camera, and fingerprint feature information, iris feature information, and the like are obtained by scanning with the sensor.
  • the server 120 is specifically configured to perform verification according to the collected user identity information, and specifically, the collected user identity information itself may be verified. For example, for the certificate information, it may be determined whether the certificate number conforms to a preset format, and whether the current time is within an effective period.
  • the server 120 is specifically configured to perform verification according to the collected user identity information, and specifically, the collected user identity information and the pre-stored user identity information are matched and detected, thereby obtaining the identity information verification result. For example, for the user account and the user password, the pre-stored password corresponding to the user account may be obtained, and it is determined whether the collected user password and the pre-stored password are consistent, thereby obtaining the identity information verification result.
  • the identity information verification result is used to indicate whether the verification according to the collected user identity information is passed.
  • the server 120 is configured to integrate the living body detection result and the user identity information verification result to obtain a final identity verification result.
  • the server 120 is configured to determine that the identity verification result is the verification pass when the living body detection result indicates that the living body exists and the user identity information verification result is the verification pass.
• the server 120 is configured to determine that the identity verification result is that the verification fails when the living body detection result indicates that a living body exists but the user identity information verification result is that the verification fails.
  • the server 120 is configured to determine that the identity verification result is that the verification fails if the living body detection result indicates that the living body does not exist and the user identity information verification result is the verification pass.
• the motion image includes a mouth image; the server 120 is further configured to extract a mouth-shape feature from the mouth image, perform matching detection on the extracted mouth-shape feature and the preset mouth-shape feature corresponding to the motion guidance information, and obtain a living body detection result indicating whether a living body exists.
  • the action guidance information is information for guiding the user to speak, and may be referred to as mouth type guidance information.
• when the motion image is acquired, the position of the lips can be detected directly to obtain a motion image that mainly includes the user's mouth shape.
  • the motion image is a face image and the face image includes a mouth image. The position of the human mouth relative to the face is fixed, so that the mouth image in the face image can be directly positioned after the face image is determined.
  • the mouth shape can also be called a lip shape.
  • the mouth shape of the person can be represented by the inner lip line and the outer lip line of the lips, and a feature capable of reflecting the change of the inner lip line and/or the outer lip line can be used as the mouth type feature.
• taking the inner lip line as an example: when the mouth is closed, the inner lip line is approximately a straight line, and when the mouth is fully open, the inner lip line is approximately circular.
• the area of the region enclosed by the inner lip line can be used as a mouth-shape feature, and the distance between the left and right boundaries of the inner lip line and the distance between its upper and lower boundaries can also be used as mouth-shape features.
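Those two kinds of mouth-shape features can be computed from the inner-lip contour points; a sketch using the shoelace formula for the enclosed area. The point format (ordered (x, y) pairs) and pixel units are assumptions for the illustration:

```python
def polygon_area(points):
    """Shoelace formula: area enclosed by the inner lip line, given its
    contour points in order around the contour."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def mouth_features(inner_lip):
    """Enclosed area plus boundary distances, as described in the text."""
    xs = [x for x, _ in inner_lip]
    ys = [y for _, y in inner_lip]
    return {
        "area": polygon_area(inner_lip),  # area enclosed by the inner lip line
        "width": max(xs) - min(xs),       # left/right boundary distance
        "height": max(ys) - min(ys),      # upper/lower boundary distance
    }

print(mouth_features([(0, 0), (4, 0), (4, 2), (0, 2)]))
# {'area': 8.0, 'width': 4, 'height': 2}
```

A closed mouth yields an area near zero and a small height; a fully open mouth yields a large area, so these features track the inner-lip-line changes the text describes.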
• the server 120 can be configured to read the content expressed by the action guidance information in advance at the standard speech rate, collect mouth images of the mouth-shape changes during the reading process, extract the mouth-shape features as the preset mouth-shape features, and store them in correspondence with the action guidance information.
• the server 120 may be specifically configured to calculate the similarity between the extracted mouth-shape feature and the preset mouth-shape feature. If the similarity is greater than the similarity threshold, a living body detection result indicating that a living body exists is obtained; if the similarity is not greater than the similarity threshold, a living body detection result indicating that no living body exists is obtained.
  • the motion image can also include a complete face image, which can be applied in the subsequent identity verification process, and the resource reuse rate is improved.
• the terminal 110 is further configured to display the action guidance information selected from the preset action guidance information base and, at the same time, to display reading progress information according to the speech rate corresponding to the action guidance information.
  • Speech rate refers to the speed of speaking.
• the content expressed by the motion guidance information may be displayed word by word according to the speech rate, or all of the motion guidance information may be displayed directly together with a speech-rate progress bar, so that the progress bar advances from the first word of the action guidance information at the corresponding speech rate.
  • the terminal 110 is further configured to play the action guiding information in an audio form according to the speech rate corresponding to the action guiding information selected from the preset action guiding information library.
  • the action guidance information is directly played at the standard speech rate, and the user is guided to follow, so that the user controls the mouth shape change according to the speech rate, and the terminal 110 is configured to collect the corresponding action image.
  • the accuracy of the living body detection can be improved, and the living body detection failure can be avoided due to the abnormal speech rate of the user.
  • the number of motion images is a preset number greater than 1.
• the terminal 110 is further configured to collect the face image included in each motion image and perform face recognition, and to directly obtain an identity verification result indicating that the verification fails when the recognition results are inconsistent.
  • the preset number can be set as needed, for example, it can take 3, 4 or 5, etc.
• by performing face recognition on the face image included in each motion image, if the user is switched during the living body detection process, the recognition results will be inconsistent, and an identity verification result indicating that the verification fails is given directly. This takes into account that living body detection requires a certain period of time; to ensure security, it is necessary to ensure that the same user is operating throughout the living body detection process.
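The same-user check across the collected motion images can be sketched by comparing each frame's face feature vector against the first frame's. The feature format and the 0.6 distance threshold are assumptions for the sketch:

```python
import math

def same_user_throughout(frame_features, max_distance=0.6):
    """Return True only if the face in every collected motion image stays
    close, in feature space, to the face in the first image."""
    reference = frame_features[0]
    for features in frame_features[1:]:
        distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(reference, features)))
        if distance > max_distance:
            return False  # the user changed during living body detection
    return True

frames = [[0.1, 0.2], [0.12, 0.19], [0.11, 0.22]]
print(same_user_throughout(frames))                  # True
print(same_user_throughout(frames + [[0.9, 0.9]]))   # False
```

Anchoring every comparison to the first frame means a substitution at any point in the detection window is caught, not only a change between adjacent frames.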
  • the collected user identity information includes a face image; the server 120 is further configured to perform face recognition on the face image included in each motion image together with the face image included in the user identity information, and to directly obtain an identity verification result indicating that verification failed when the recognition results are inconsistent.
  • the terminal 110 is further configured to collect multiple user identity information and send the information to the server 120.
  • the server 120 is further configured to detect a user identifier corresponding to each user identity information, and is further configured to detect whether the user identifiers corresponding to the user identity information are consistent, to obtain a user identity information verification result.
  • the user identifier is a character or character string that uniquely identifies a user. Each piece of user identity information is detected separately to obtain a corresponding user identifier, and the detected identifiers are then checked for consistency: if they are all consistent, an identity information verification result of "passed" is given; if they are inconsistent, a result of "failed" is given. An identity information verification result obtained this way is more reliable, which makes the final authentication result more reliable.
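The consistency check just described can be sketched as follows (a minimal illustration; the function name and the representation of identifiers as plain strings are assumptions, not from the patent):

```python
def verify_identity_consistency(user_ids):
    """Pass only when an identifier was detected from every piece of
    identity information and all detected identifiers are identical."""
    if not user_ids or any(not uid for uid in user_ids):
        return False  # some information yielded no identifier
    return all(uid == user_ids[0] for uid in user_ids)
```

A mismatch between, say, the identifier detected from an ID-card image and the one detected from a passport would yield a failed identity information verification result.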
  • the terminal 110 is further configured to collect a document image and a face image and send it to the server 120.
  • the server 120 is further configured to perform character recognition on the ID image to obtain a user identifier matching the ID image; to calculate the similarity between the collected face image and the face image corresponding to that user identifier in the comparison face database; and to determine the user identity information verification result from the similarity.
  • the terminal 110 runs a client, which may be a native application client or a light-application client.
  • a light application is an application that can be used without being downloaded.
  • light applications currently in common use are built with HTML5 (Hypertext Markup Language, fifth edition).
  • the terminal 110 is configured to send the collected ID image to the server 120, and the server 120 is configured to perform character recognition on the ID image to obtain a user identifier that matches the ID image.
  • the terminal 110 is configured to invoke a camera through a client running on the terminal 110 to scan the document in the form of photographing or video recording to obtain a document image.
  • the terminal 110 can be configured to provide an interactive interface through the client to guide the user to scan the document as prompted. Specifically, the front of the document may be scanned first, followed by the back.
  • during scanning, original photos of the front and back of the document can be provided, together with front and back document images cropped to the shape of the document.
  • one original photo and one cropped image may be provided for each side, and the number of shots can be customized as needed.
  • the terminal 110 can also be used to examine the shape and color distribution of the document image, to determine whether the document is forged or whether the document image itself is forged.
  • the server 120 can be used to perform character recognition on the ID image by OCR, identify the text information in it, compare it with the document information stored on the external document server, find the matching document information, and obtain the corresponding user identifier.
  • the document server here may be an identity card server of a citizenship management institution, a driver's license information server of a vehicle management institution, a social security card information server of a social security institution, or a passport information server of a passport issuing institution.
  • the server 120 can also compare the recognized text information with the text information input by the user to determine whether they match; if they are inconsistent, an identity verification result indicating failure is given directly, preventing a user from operating with another person's stolen ID. If the text cannot be recognized, the reason can be reported with a corresponding error message.
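The fail-fast comparison between OCR output and user input might look like this (field names and the dictionary representation are illustrative assumptions):

```python
def ocr_matches_input(ocr_fields, input_fields):
    """Compare OCR-recognized document text with the text the user typed;
    any conflicting field fails verification directly, blocking the use
    of another person's document."""
    for field, typed in input_fields.items():
        recognized = ocr_fields.get(field)
        if recognized is not None and recognized != typed:
            return False  # mismatch -> identity verification fails
    return True
```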
  • the document portrait is cropped from the ID image; a face image is collected; and the collected face image and the cropped document portrait are each compared with the face image corresponding to the user identifier in the comparison face database to calculate similarities.
  • the similarity here indicates the degree of similarity between the corresponding face images.
  • if there is no comparison face database, the collected face image can be compared with the document portrait, and the collected face image and the document portrait can be sent to an external document server for comparison to calculate the similarity.
  • if there is no document portrait and no comparison face database, the collected face image can be sent directly to the external document server for comparison to calculate the similarity.
  • if the similarity exceeds the similarity threshold, an identity information verification result indicating that verification passed is obtained; if it does not, a result indicating that verification failed is obtained. If there are multiple similarities, verification passes only when each similarity is higher than its corresponding threshold; if any similarity does not exceed its corresponding threshold, the identity information verification result is determined as failed.
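The multi-similarity decision rule above reduces to a simple all-pass check (a sketch; the threshold values used in practice are not fixed by the patent):

```python
def identity_info_verified(similarities, thresholds):
    """Verification passes only when every calculated similarity is higher
    than its corresponding similarity threshold; a single similarity is
    just the one-element case."""
    return all(s > t for s, t in zip(similarities, thresholds))
```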
  • by combining the ID image and the collected face image to verify the user identity information comprehensively, the identity information verification result, and hence the identity verification result, becomes more accurate.
  • the server 120 is further configured to crop the document portrait from the ID image, compare the collected face image and the document portrait each against the face image corresponding to the user identifier in the comparison face database, and calculate the similarities.
  • the server 120 is further configured to: when the living body detection result indicates that the living body exists, and the user identity information verification result is the verification pass, determine that the identity verification result is the verification pass.
  • the terminal 110 is further configured to detect a financial service operation instruction and, after detecting it, obtain the action guidance information selected from the preset action guidance information library; it is further configured to perform the financial service operation corresponding to the instruction when the identity verification result fed back by the server 120 is that verification passed.
  • the financial business here includes applying for loan business, online credit card processing, and investment business.
  • the above-mentioned identity verification method is used to ensure transaction security in the financial service, so that the handling of the financial service is more secure and reliable.
  • the above-mentioned identity verification system 100 guides the user to complete the corresponding action in a visual and/or acoustic manner by displaying and/or playing the action guide information selected in the preset action guide information library, so as to collect the corresponding action image. Then, by performing matching detection on the acquired motion image and the motion guidance information, a living body detection result indicating whether or not a living body exists is obtained, thereby obtaining an identity verification result based on the living body detection result. In this way, through the living body detection to verify whether the current operation is a real user, the situation of machine brute force cracking can be avoided, and the final identity verification result is more accurate and the security is improved.
  • an identity verification apparatus 900 including an action image acquisition module 901, a live detection module 902, and an identity verification result determination module 903.
  • the motion image acquisition module 901 is configured to display motion guide information selected from the preset motion guidance information base and/or play in audio form, and collect corresponding motion images.
  • the living body detection module 902 is configured to perform matching detection on the acquired motion image and the motion guidance information to obtain a living body detection result indicating whether or not a living body exists.
  • the authentication result determining module 903 is configured to determine an identity verification result according to the living body detection result.
  • the identity verification apparatus 900 further includes a user identity information verification module 904, configured to collect user identity information, and perform verification according to the collected user identity information, to obtain a user identity information verification result;
  • the verification result determining module 903 is further configured to determine the identity verification result according to the living body detection result and the user identity information verification result.
  • the motion guide information is mouth-type guidance information;
  • the motion image includes a mouth-shape image;
  • the living body detection module 902 is further configured to extract mouth-shape features from the mouth-shape image, and to perform matching detection between the extracted mouth-shape features and the preset mouth-shape features corresponding to the motion guidance information to obtain a living body detection result indicating whether a living body exists.
  • the motion guidance information is mouth-shape guidance information; the motion image collection module 901 is further configured to display the motion guidance information selected from the preset motion guidance information library while displaying reading progress information according to the speech rate corresponding to the guidance information.
  • the action guiding information is mouth type guiding information; the action image collecting module 901 is further configured to play the action guiding information in an audio form according to the speaking speed corresponding to the action guiding information selected from the preset action guiding information library. .
  • the number of motion images is a preset number greater than one; the user identity information verification module 904 is further configured to extract the face image included in each motion image and perform face recognition, and to directly obtain an identity verification result indicating that verification failed when the recognition results are inconsistent.
  • the user identity information verification module 904 is further configured to collect multiple user identity information, respectively detect a user identifier corresponding to each user identity information, and detect whether the user identifiers corresponding to the user identity information are consistent. Obtain the user identity information verification result.
  • the user identity information verification module 904 includes a certificate image processing module 904a, a face image processing module 904b, and a verification execution module 904c.
  • the ID image processing module 904a is configured to collect the ID image and perform character recognition on the ID image to obtain a user identifier that matches the ID image.
  • the face image processing module 904b is configured to collect a face image, and calculate a similarity between the collected face image and the face image corresponding to the user identifier in the comparison face database.
  • the verification execution module 904c is configured to determine a user identity information verification result according to the similarity.
  • the face image processing module 904b includes an intercept module 904b1, a face image acquisition module 904b2, and a comparison module 904b3.
  • the intercepting module 904b1 is configured to crop the document portrait from the ID image.
  • the face image acquisition module 904b2 is configured to collect a face image.
  • the comparison module 904b3 is configured to compare the collected face image and the corresponding document portrait each with the face image corresponding to the user identifier in the comparison face database, and to calculate the similarity.
  • the identity verification result determining module 903 is further configured to: when the living body detection result indicates that the living body exists, and the user identity information verification result is the verification pass, determine that the identity verification result is the verification pass.
  • the identity verification apparatus 900 further includes a receiving module 905, configured to receive action guiding information that is selected and sent by the server from the preset action guiding information base.
  • the living body detection module 902 is further configured to send the collected motion images to the server, causing the server to perform matching detection between the motion images and the motion guidance information to obtain a living body detection result indicating whether a living body exists.
  • the user identity information verification module 904 is further configured to collect user identity information and send the message to the server, so that the server performs verification according to the collected user identity information, and obtains the user identity information verification result.
  • the authentication result determining module 903 is further configured to receive an identity verification result fed back by the server after determining the identity verification result according to the living body detection result and the user identity information verification result.
  • the identity verification apparatus 900 further includes a financial service processing module 906, configured to detect a financial service operation instruction and, after detecting it, obtain the action guidance information selected from the preset action guidance information library; it is further configured to perform the financial service operation corresponding to the instruction when the identity verification result is that verification passed.
  • the above-mentioned identity verification device 900 guides the user to complete the corresponding action in a visual and/or acoustic manner by displaying and/or playing the action guide information selected in the preset action guidance information library, so as to collect the corresponding action image. Then, by performing matching detection on the acquired motion image and the motion guidance information, a living body detection result indicating whether or not a living body exists is obtained, thereby obtaining an identity verification result based on the living body detection result. In this way, through the living body detection to verify whether the current operation is a real user, the situation of machine brute force cracking can be avoided, and the final identity verification result is more accurate and the security is improved.
  • the storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).


Abstract

An identity verification method, comprising: displaying action guidance information selected from a preset action guidance information library and/or playing it in audio form, and collecting corresponding motion images; performing matching detection between the collected motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists; and determining an identity verification result according to the living body detection result.

Description

Identity verification method, terminal and server
This application claims priority to Chinese patent application No. 201510264333.X, filed with the Chinese Patent Office on May 21, 2015 and entitled "Identity verification method, apparatus and system", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of security technology, and in particular to an identity verification method, a terminal and a server.
Background Art
With the continuous development of computer technology, users can complete various operations by themselves with the assistance of a computer; with network technology they can also handle business remotely, for example applying for a loan, taking a remote examination, or performing remote control. Handling business usually requires identity verification. Commonly used identity verification methods include password verification, account-and-password verification, mobile-phone verification codes, and face recognition.
However, pure password verification is usually applicable only in stand-alone scenarios, such as access control or a local computer password. Account-and-password verification is usually applied where a remote server must be logged in to, such as social networking sites or mail servers; its drawback is that anyone who possesses the account and password can pass verification, so security is low. A mobile-phone verification code is a relatively weak means of identity verification, usually effective only in combination with other methods, or used alone in scenarios with low security requirements. Face recognition verification is easily defeated by deceiving the camera with a photograph of a face, so its security is also not high. Current identity verification methods therefore provide low security and urgently need improvement.
Summary of the Invention
According to various embodiments disclosed in this application, an identity verification method, a terminal and a server are provided.
An identity verification method includes:
displaying action guidance information selected from a preset action guidance information library and/or playing it in audio form, and collecting corresponding motion images;
performing matching detection between the collected motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists; and
determining an identity verification result according to the living body detection result.
A terminal includes a memory and a processor, the memory storing instructions which, when executed by the processor, cause the processor to perform the following steps:
displaying action guidance information selected from a preset action guidance information library and/or playing it in audio form, and collecting corresponding motion images;
performing matching detection between the collected motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists; and
determining an identity verification result according to the living body detection result.
A server includes a memory and a processor, the memory storing instructions which, when executed by the processor, cause the processor to perform the following steps:
selecting action guidance information from a preset action guidance information library;
sending the action guidance information to a terminal;
receiving motion images fed back by the terminal, the motion images being collected by the terminal while the action guidance information is displayed and/or played in audio form;
performing matching detection between the motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists;
determining an identity verification result according to the living body detection result; and
feeding back the identity verification result to the terminal.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects and advantages of the invention will become apparent from the description, the drawings and the claims.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the composition of an identity verification system in one embodiment;
FIG. 2 is a schematic structural diagram of a terminal in one embodiment;
FIG. 3 is a schematic structural diagram of a server in one embodiment;
FIG. 4 is a schematic flowchart of an identity verification method in one embodiment;
FIG. 5 is a schematic flowchart of the step of performing matching detection between collected motion images and action guidance information to obtain a living body detection result indicating whether a living body exists, in one embodiment;
FIG. 6 is a schematic flowchart of the step of collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result, in one embodiment;
FIG. 7 is a diagram of the application environment of an identity verification system in one embodiment;
FIG. 8 is a schematic flowchart of an identity verification method in another embodiment;
FIG. 9 is a structural block diagram of an identity verification apparatus in one embodiment;
FIG. 10 is a structural block diagram of an identity verification apparatus in another embodiment;
FIG. 11 is a structural block diagram of the user identity information verification module of FIG. 10 in one embodiment;
FIG. 12 is a structural block diagram of the face image processing module of FIG. 11 in one embodiment;
FIG. 13 is a structural block diagram of an identity verification apparatus in yet another embodiment;
FIG. 14 is a structural block diagram of an identity verification apparatus in one embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
As shown in FIG. 1, in one embodiment an identity verification system 100 is provided, including a terminal 110 and a server 120. The terminal 110 may be a desktop computer or a public kiosk, or a mobile terminal such as a mobile phone, tablet computer or personal digital assistant. The server 120 may be one physical server or multiple physical servers.
As shown in FIG. 2, in one embodiment the terminal 110 of FIG. 1 has the composition shown in FIG. 2, including a processor, an internal memory, a non-volatile storage medium, a network interface, a display screen, a camera and an input device connected through a system bus. The non-volatile storage medium of the terminal 110 stores an identity verification apparatus for implementing an identity verification method. The processor of the terminal 110 provides computing and control capabilities and is configured to perform an identity verification method. The display screen of the terminal 110 may be a liquid crystal display or an electronic-ink display; the input device may be a touch layer covering the display screen, a key, trackball or touchpad provided on the housing of the terminal 110, or an external keyboard, touchpad or mouse.
As shown in FIG. 3, in one embodiment the server 120 of FIG. 1 has the composition shown in FIG. 3, including a processor, an internal memory, a non-volatile storage medium and a network interface connected through a system bus. The non-volatile storage medium of the server 120 stores an operating system and an identity verification apparatus for implementing an identity verification method. The processor of the server 120 provides computing and control capabilities and is configured to perform an identity verification method.
As shown in FIG. 4, in one embodiment an identity verification method is provided. The method may be applied to the terminal 110 of FIGS. 1 and 2 above, or to the terminal 110 and the server 120 of the identity verification system of FIG. 1. The method specifically includes the following steps:
Step 402: display action guidance information selected from a preset action guidance information library and/or play it in audio form, and collect corresponding motion images.
The preset action guidance information library contains various pieces of action guidance information, whose role is to guide the user to make the corresponding action. For example, guidance information of "blink" guides the user to blink; similar guidance information such as "open your mouth", "turn your head" or "extend four fingers" guides the user to perform those actions. Guidance information may be selected randomly from the library, or selected in a confidential order that is updated periodically. In one embodiment, the terminal 110 may receive action guidance information selected from the preset library and sent by the server.
In one embodiment, the action guidance information includes an action instruction sequence composed of multiple action instruction units. An action instruction unit is the smallest unit of guidance and represents one action; for example "blink", "open your mouth" and "turn your head" are each one action instruction unit, and multiple units arranged in order form an action instruction sequence. For example, if a piece of guidance information is the sequence "blink, open your mouth, turn your head", the user needs to blink, open the mouth and turn the head in that order. The guidance information may also be a text sequence in which each character is one action instruction unit, and the mouth shape with which the user reads each character is one action. In this embodiment, because the guidance information includes a sequence of multiple action instruction units, cracking the identity verification by random trial is largely avoided and the living body detection result is more accurate.
The action guidance information may be presented visually, for example as text or a schematic diagram. To play it in audio form, audio data of characters or words may be recorded in advance; when playing, the corresponding audio data may be looked up and played character by character, or the guidance information may first be segmented into words and then converted word by word into audio data and played. The guidance information may of course be played in audio form while it is displayed.
While the action guidance information is displayed or played, motion images corresponding to it are collected. A motion image is an image that should contain the action made by the user according to the guidance information. Within the display and/or playback period corresponding to one action instruction unit, images may be captured at preset intervals, and the captured image that differs most from the other images captured in that period, or a part of that image, is taken as the collected motion image. In one embodiment, motion detection may also be used: when motion is detected in the camera's field of view, an image is captured immediately or after a short preset delay, and the captured image or a part of it is taken as the motion image.
Step 404: perform matching detection between the collected motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists.
Matching detection between two objects aims to detect whether, and to what degree, they match. If the collected motion images are detected to match the guidance information, a living body exists and a detection result indicating its existence is obtained; if they do not match, no living body exists and a result indicating its absence is obtained.
The living body detection result may take one of two preset values, for example 1 for a living body and 0 for none. It may also be expressed as a match value representing the degree of match between the collected motion images and the guidance information: if the match value exceeds a match-value threshold, a living body exists; otherwise it does not. The match value here may be the similarity between the motion image and the preset motion image corresponding to the guidance information, or a value obtained by applying a positive-correlation operation to that similarity. It may also be expressed by the Euclidean distance between the motion features extracted from the motion image and the preset motion features corresponding to the guidance information, or a value obtained by applying a positive-correlation operation to that distance. A positive-correlation operation means inputting the variable into a function whose output is positively correlated with it and outputting the function result.
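A match value derived from the Euclidean distance between features, followed by a threshold comparison, can be sketched as follows (the exact transform is left open in the text; mapping the distance into (0, 1] so that a closer match gives a larger value is one common convention assumed here, as are the function names and the threshold):

```python
import math

def match_value(extracted, preset):
    """Match value between motion features extracted from the motion image
    and the preset features for the guidance information, based on their
    Euclidean distance."""
    d = math.dist(extracted, preset)  # Euclidean distance between feature vectors
    return 1.0 / (1.0 + d)            # closer match -> larger value in (0, 1]

def living_body_detected(extracted, preset, threshold=0.5):
    """Liveness holds when the match value exceeds the match-value threshold."""
    return match_value(extracted, preset) > threshold
```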
In one embodiment, step 404 includes: extracting motion features from the collected motion images, and performing matching detection between the extracted motion features and the preset motion features corresponding to the action guidance information to obtain a living body detection result indicating whether a living body exists. Specifically, the similarity between the extracted motion features and the corresponding preset motion features may be calculated; if the similarity is greater than a similarity threshold, a living body is judged to exist and a result indicating its existence is obtained; if the similarity is less than or equal to the threshold, no living body is judged to exist and a result indicating its absence is obtained. The extracted motion features may be geometric features, such as a Euclidean distance, or algebraic features, such as a feature matrix.
In one embodiment, step 404 includes: sending the collected motion images to the server, and having the server perform matching detection between the motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists. The collected motion images may be encrypted before being sent to the server.
Step 406: determine an identity verification result according to the living body detection result.
Specifically, in one embodiment, when the living body detection result indicates that a living body exists, the identity verification result is determined as passed. In one embodiment, when the living body detection result indicates that no living body exists, the identity verification result is determined as failed. In one embodiment, the identity verification result may also be obtained by combining the living body detection result with other verification methods.
In the above identity verification method, by displaying and/or playing the action guidance information selected from the preset action guidance information library, the user is guided visually and/or audibly to complete the corresponding action so that corresponding motion images can be collected. A living body detection result indicating whether a living body exists is then obtained by matching the collected motion images against the guidance information, and the identity verification result is obtained from it. Verifying through living body detection that the current operator is a real user avoids brute-force cracking by machines, makes the final identity verification result more accurate, and improves security.
In one embodiment, the identity verification method further includes: collecting user identity information, and performing verification according to the collected user identity information to obtain a user identity information verification result. Step 406 then includes: determining the identity verification result according to the living body detection result and the user identity information verification result.
Specifically, user identity information is information used to prove the user's identity, including at least one of a user account and password, user document information, and user biometric information. User biometric information includes facial features, fingerprint features, iris features, palm geometry and the like. Document information includes a document number, name, date of birth, issuing authority, validity period and so on; the document may be an identity card, a driver's license, a social security card, a passport, etc.
Collecting user identity information may mean obtaining information entered by the user, for example taking the string entered in the account input box as the user account, the string entered in the password input box as the user password, or the document information entered in the document information input box. It may also mean obtaining the information by invoking a camera, a sensor and so on, for example scanning with a camera to obtain a document image or a face image, or scanning with a sensor to obtain fingerprint or iris features.
Verification according to the collected user identity information may examine the information itself: for document information, for example, whether the document number conforms to a preset format and whether the current time is within the validity period can be checked.
Verification may also match the collected user identity information against pre-stored user identity information to obtain the identity information verification result. For a user account and password, for example, the pre-stored password corresponding to that account may be obtained and checked for consistency with the collected password. The identity information verification result indicates whether the verification performed according to the collected user identity information passed.
In one embodiment, collecting and verifying user identity information includes: the terminal 110 collects the user identity information and sends it to the server, which performs verification according to it and obtains the user identity information verification result. The collected user identity information may be encrypted before being sent to the server.
The final identity verification result is obtained by combining the living body detection result and the user identity information verification result. In one embodiment, when the living body detection result indicates that a living body exists and the user identity information verification result is passed, the identity verification result is determined as passed. In one embodiment, when the living body detection result indicates that no living body exists, even if the user identity information verification result is passed, the identity verification result is determined as failed.
In one embodiment, determining the identity verification result includes: receiving the identity verification result fed back by the server after the server determines it according to the living body detection result and the user identity information verification result.
In one embodiment, step 404 and the step of collecting user identity information and verifying it are performed asynchronously, without a fixed order of execution. In this embodiment, asynchronous processing ensures the efficiency of the identity verification process.
As shown in FIG. 5, in one embodiment the action guidance information is mouth-shape guidance information, the motion images include mouth-shape images, and step 404 includes the following steps:
Step 502: extract mouth-shape features from the mouth-shape image.
In this embodiment the guidance information guides the user to speak, and may be called mouth-shape guidance information. When collecting motion images the lip position can be detected directly, yielding motion images consisting mainly of the user's mouth shape. In one embodiment the motion image is a face image that includes a mouth-shape image: the position of a person's mouth relative to the face is fixed, so the mouth-shape image can be located directly once the face image is determined.
A mouth shape may also be called a lip shape. A person's mouth shape can be represented by the inner and outer lip lines, and features reflecting changes in the inner and/or outer lip line can serve as mouth-shape features. Taking the inner lip line as an example, when the mouth is tightly closed the inner lip line is a straight line, and when the mouth is fully open it is a roughly circular shape. The area enclosed by the inner lip line may therefore be used as a mouth-shape feature, as may the distances between its left and right boundaries and between its upper and lower boundaries.
Step 504: perform matching detection between the extracted mouth-shape features and the preset mouth-shape features corresponding to the action guidance information, to obtain a living body detection result indicating whether a living body exists.
Specifically, a person may be asked in advance to read the content of the guidance information at a standard speech rate, mouth-shape images of the changing mouth shape may be collected during the reading, and the extracted mouth-shape features may be stored as preset features corresponding to that guidance information. In step 504 the extracted features are compared against the preset features for matching detection. Specifically, the similarity between them may be calculated: if it is greater than the similarity threshold, a result indicating that a living body exists is obtained; otherwise a result indicating that no living body exists is obtained.
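The inner-lip-line features of step 502 and the threshold comparison of step 504 might be sketched as follows (the bounding-box feature and the ratio-based similarity are illustrative assumptions; a real system would obtain lip contours from a facial-landmark detector):

```python
def mouth_features(inner_lip_points):
    """Width and height of the inner-lip contour's bounding box -- one of
    the mouth-shape feature choices mentioned above (distance between the
    left/right and upper/lower boundaries of the inner lip line)."""
    xs = [x for x, _ in inner_lip_points]
    ys = [y for _, y in inner_lip_points]
    return (max(xs) - min(xs), max(ys) - min(ys))

def mouth_match(extracted, preset, threshold=0.8):
    """Matching detection between extracted per-frame mouth features and
    the preset features recorded at standard speech rate."""
    scores = []
    for (w1, h1), (w2, h2) in zip(extracted, preset):
        w = min(w1, w2) / max(w1, w2) if max(w1, w2) else 1.0
        h = min(h1, h2) / max(h1, h2) if max(h1, h2) else 1.0
        scores.append((w + h) / 2)  # per-frame agreement in [0, 1]
    similarity = sum(scores) / len(scores)
    return similarity > threshold  # True -> a living body is judged to exist
```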
In this embodiment, living body detection is performed by guiding the user to change mouth shape and collecting mouth-shape images; this is cheap to implement and highly accurate. Moreover, the motion images may also include complete face images that can be used in the subsequent identity verification process, improving resource reuse.
In one embodiment the number of motion images is a preset number greater than 1, and the identity verification method further includes: extracting the face image included in each motion image and performing face recognition, and directly obtaining an identity verification result indicating that verification failed when the recognition results are inconsistent. The preset number can be set as needed, for example 3, 4 or 5. In this embodiment, face recognition is performed on the face image included in each motion image; if the user changes during living body detection, the recognition results become inconsistent and a failed identity verification result is given directly. Since living body detection takes a certain period of time, ensuring that the same user operates throughout the detection process is necessary for security.
In one embodiment, if the collected user identity information includes a face image, face recognition may also be performed on the face image included in each motion image together with the face image included in the user identity information, and a failed identity verification result is obtained directly when the recognition results are inconsistent.
In one embodiment, step 402 includes: displaying the action guidance information selected from the preset library while displaying reading progress information at the speech rate corresponding to the guidance information. Speech rate is the speed of speaking. Specifically, the content of the guidance information may be displayed word by word at that rate, or all of it may be displayed at once with a speech-rate progress bar that advances from its first word at the corresponding rate.
In one embodiment, step 402 includes: playing the action guidance information in audio form at the speech rate corresponding to the guidance information selected from the preset library. In this embodiment the guidance information is played directly at a standard speech rate to guide the user to read along, so that the user controls the mouth-shape changes according to that rate while the terminal 110 collects the corresponding motion images.
In this embodiment, guiding the user to complete the required mouth-shape changes at a standard speech rate improves the accuracy of living body detection and avoids detection failures caused by an abnormal user speech rate.
In one embodiment, the step of collecting user identity information and performing verification according to it includes: collecting multiple kinds of user identity information and detecting the user identifier corresponding to each kind; and checking whether the identifiers corresponding to the various kinds of user identity information are consistent, to obtain the user identity information verification result.
In this embodiment, a user identifier is a character or character string that uniquely identifies a user. Each kind of user identity information is detected separately to obtain a corresponding identifier, and the detected identifiers are checked for consistency: if they are all consistent, a passed identity information verification result is given; if they are inconsistent, a failed result is given. A verification result obtained this way is more reliable, making the final identity verification result more reliable. The user identifier here may be an identity card number, a driver's license number, a social security card number, a passport number, and so on.
As shown in FIG. 6, in one embodiment the step of collecting user identity information and performing verification according to it to obtain a user identity information verification result includes:
Step 602: collect a document image and perform character recognition on it to obtain a user identifier matching the document image.
Specifically, a client runs on the terminal 110; it may be a native application client or a light-application client. A light application is an application that can be used without being downloaded; light applications currently in common use are built with HTML5 (Hypertext Markup Language, fifth edition). The terminal 110 sends the collected document image to the server, and the server performs character recognition on it to obtain a matching user identifier.
The terminal 110 invokes the camera through the client running on it and scans the document, by photographing or video recording, to obtain the document image. The terminal 110 may provide an interactive interface through the client to guide the user to scan the document as prompted; specifically, the front of the document may be scanned first and then the back. During scanning, original photos of the front and back may be provided together with front and back document images cropped to the document's shape. One original photo and one cropped image may be provided for each side, and the number can be customized as needed. The terminal 110 may also examine the shape and color distribution of the document image to judge whether the document, or the document image, is forged.
The server may use OCR to recognize the text in the document image, compare it with the document information stored on an external document server, find the matching document information, and obtain the corresponding user identifier. The document server here may be the identity-card server of a citizenship administration, the driver's-license information server of a vehicle administration, the social-security-card information server of a social security agency, the passport information server of a passport-issuing agency, and so on.
The server may also compare the recognized text with the text entered by the user to judge whether they match; if they are inconsistent, an identity verification result indicating failure is given directly, which prevents a user from operating with another person's document. If the text cannot be recognized, the reason can be given with a corresponding error prompt.
In one embodiment, an entered user identifier may also be obtained directly; this is the user identifier input by the user.
Step 604: collect a face image and calculate the similarity between the collected face image and the face image corresponding to the user identifier in the comparison face database.
Specifically, if there is a document image and a comparison face database exists, the document portrait is cropped from the document image; a face image is collected; and the collected face image and the document portrait are each compared with the face image corresponding to the user identifier in the comparison face database to calculate similarities. Similarity here expresses the degree of resemblance between the corresponding face images.
If there is no document image, or no portrait could be cropped from it, only the collected face image may be compared with the face image in the comparison face database to calculate the similarity; no comparison against an external document server is then needed.
If there is no comparison face database, the collected face image may be compared with the document portrait, and both may be sent to an external document server for comparison to calculate the similarity.
If there is no document image or portrait and no comparison face database, the collected face image may be sent directly to the external document server for comparison to calculate the similarity.
Step 606: determine the user identity information verification result according to the similarity.
Specifically, if the similarity is higher than the similarity threshold, an identity information verification result indicating that verification passed is obtained; otherwise a result indicating failure is obtained. If there are multiple similarities, verification passes only when each similarity is higher than its corresponding threshold; if any similarity does not exceed its threshold, the result is determined as failed.
In this embodiment, the document image and the collected face image are combined to verify the user identity information comprehensively, making the identity information verification result, and hence the identity verification result, more accurate.
In one embodiment, before step 402 the method further includes: detecting a financial service operation instruction and, after detecting it, obtaining the action guidance information selected from the preset action guidance information library; after step 406 the method further includes: performing the financial service operation corresponding to the instruction when the identity verification result is passed. Financial services here include loan applications, online credit card processing, investment services and the like. In this embodiment, the above identity verification method guarantees transaction security in financial services, making their handling safer and more reliable.
As shown in FIG. 7, in one specific embodiment the server 120 includes a lip-language living body detection server 121, a first face feature extraction server 122, a second face feature extraction server 123 and a face verification server 124. The lip-language living body detection server 121 is connected to the terminal 110; the first face feature extraction server 122 is connected to the terminal 110, the second face feature extraction server 123 and the face verification server 124; and the second face feature extraction server 123 is connected to the face verification server 124 and an external document server 130. An identity verification method specifically includes the following steps 1) to 5):
Step 1), lip-language living body detection: determine through the terminal 110 whether the user is a living body, thereby verifying that it is the user in person performing the video or self-photographing operation.
Step 1) in turn includes steps A) and B):
Step A), face detection: detect the presence of a face in various scenes and determine its position. The main purpose of face detection is to find the face region in the captured image and divide the image into face and non-face regions, preparing for subsequent applications.
Step B), liveness judgment (active detection): pre-select a number of short sentences usable for judging the user's mouth shape, for example 100 Chinese sentences of up to 10 characters. The lip-language living body detection server 121 analyzes the mouth-shape features of these phrases and stores them. On the living body detection page, the terminal 110 randomly selects and displays a sentence for the user to read and prompts the user to read it. On the basis of facial-feature localization, the terminal 110 records the changes in mouth shape as the user reads the sentence and compares them with the stored mouth-shape features for that sentence to judge whether they are consistent, thereby judging whether the user is reading the given sentence, and hence whether the user is operating in real time.
Step 2), on the basis of step 1), collect the user's self-photographed or video face information through the mobile device and scan the front and back of the user's identity document.
Step 3), extract face features from the collected face information, the scanned identity-document photo, and the user's identity-document photo stored at an authoritative institution, using facial-feature localization; then use a machine learning algorithm to calculate the similarity of the three sets of feature information.
Step 3) in turn includes steps a) to c):
Step a), facial-feature localization: the prerequisite for extracting the main facial-feature information, whose main purpose is to locate the target facial organ points within the detected face region: the face contour and the contours and positions of the eyebrows, eyes, nose and lips.
Step b), face representation: on the basis of facial-feature localization, represent the detected faces (including stored faces) in a pre-selected way. Common representations include geometric features (such as Euclidean distances) and algebraic features (such as feature matrices).
Step c), face identification: compare the face to be identified with the known faces in the database to obtain the correlation between faces.
Step 4), perform character recognition on the user identity-document text collected by scanning, then calculate its similarity with the user identity-document text held by a third-party authoritative institution.
Step 5), combine the results of steps 3) and 4) to judge whether the current user and the user information stored at the authoritative institution correspond to the same person.
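The final combination of steps 1), 3) and 4) can be illustrated as follows (the function name and threshold values are hypothetical; the patent does not fix them):

```python
def authentication_result(liveness_ok, face_similarity, text_similarity,
                          face_threshold=0.8, text_threshold=0.9):
    """Combine lip-language liveness (step 1), face-feature similarity
    (step 3) and document-text similarity (step 4): the current user is
    judged to match the record only when all three checks succeed."""
    return (liveness_ok
            and face_similarity > face_threshold
            and text_similarity > text_threshold)
```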
As shown in FIG. 8, in one embodiment an identity verification method is applied to the server 120 of FIGS. 1 and 2 above. It differs from the identity verification method of the preceding embodiments in that the data input/output steps, such as collecting motion images and user identity information and displaying and/or playing the action guidance information, are performed on the terminal 110, while the other computation-intensive steps are completed on the server 120. This significantly reduces the computational load on the terminal 110 and improves the efficiency of identity verification. The method includes:
Step 802: select action guidance information from a preset action guidance information library and send it to the terminal 110, so that the terminal 110 displays it and/or plays it in audio form and collects corresponding motion images.
In one embodiment, the terminal 110 displays the action guidance information selected from the preset library while displaying reading progress information at the speech rate corresponding to the guidance information; and/or the terminal 110 plays the guidance information in audio form at that speech rate.
Step 804: receive the motion images sent by the terminal, and perform matching detection between the motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists.
In one embodiment the action guidance information is mouth-shape guidance information and the motion images include mouth-shape images; the matching detection includes: extracting mouth-shape features from the mouth-shape image, and performing matching detection between the extracted features and the preset mouth-shape features corresponding to the guidance information to obtain the living body detection result.
In one embodiment the number of motion images is a preset number greater than 1; the method further includes: extracting the face image included in each motion image and performing face recognition, and directly obtaining a failed identity verification result when the recognition results are inconsistent.
Step 806: determine the identity verification result according to the living body detection result and feed it back to the terminal.
In one embodiment the method further includes: receiving user identity information collected and sent by the terminal, and performing verification according to it to obtain a user identity information verification result. Step 806 then includes: determining the identity verification result according to the living body detection result and the user identity information verification result and feeding it back to the terminal.
In one embodiment, receiving and verifying the user identity information includes: receiving multiple kinds of user identity information collected and sent by the terminal, detecting the user identifier corresponding to each kind, and checking whether the identifiers are consistent to obtain the user identity information verification result.
In one embodiment, receiving and verifying the user identity information includes: receiving a document image collected and sent by the terminal and performing character recognition on it to obtain a matching user identifier; receiving a face image collected and sent by the terminal and calculating the similarity between it and the face image corresponding to the user identifier in the comparison face database; and determining the user identity information verification result according to the similarity.
In one embodiment, calculating the similarity specifically includes: cropping the document portrait from the document image; receiving the face image collected and sent by the terminal; and comparing the collected face image and the document portrait each with the face image corresponding to the user identifier in the comparison face database to calculate the similarities.
In one embodiment, determining the identity verification result includes: when the living body detection result indicates that a living body exists and the user identity information verification result is passed, determining the identity verification result as passed.
In one embodiment, before step 802 the method further includes: detecting a financial service operation instruction and, after detecting it, selecting action guidance information from the preset library and sending it to the terminal; after step 806 the method further includes: performing the corresponding financial service operation when the identity verification result is passed.
In the above identity verification method, by displaying and/or playing the action guidance information selected from the preset library, the user is guided visually and/or audibly to complete the corresponding action so that corresponding motion images can be collected. A living body detection result indicating whether a living body exists is then obtained by matching the collected motion images against the guidance information, and the identity verification result is obtained from it. Verifying through living body detection that the current operator is a real user avoids brute-force cracking by machines, makes the final identity verification result more accurate, and improves security.
As shown in FIG. 1, an identity verification system 100 includes a terminal 110 and a server 120.
The terminal 110 is configured to receive action guidance information selected from a preset action guidance information library and sent by the server 120; to display the guidance information and/or play it in audio form and collect corresponding motion images; and to send the collected motion images to the server 120.
The preset action guidance information library contains various pieces of action guidance information, whose role is to guide the user to make the corresponding action. Guidance information may be selected randomly from the library, or selected in a confidential order that is updated periodically.
In one embodiment, the action guidance information includes an action instruction sequence composed of multiple action instruction units. An action instruction unit is the smallest unit of guidance and represents one action; multiple units arranged in order form an action instruction sequence. In this embodiment, because the guidance information includes such a sequence, cracking the identity verification by random trial is largely avoided and the living body detection result is more accurate.
The action guidance information may be presented visually, for example as text or a schematic diagram. To play it in audio form, audio data of characters or words may be recorded in advance; when playing, the corresponding audio data may be looked up and played character by character, or the guidance information may first be segmented into words and then converted word by word into audio data and played. The guidance information may of course be played in audio form while it is displayed.
While the guidance information is displayed or played, motion images corresponding to it are collected. A motion image is an image that should contain the action made by the user according to the guidance information. Within the display and/or playback period of one action instruction unit, images may be captured at preset intervals, and the captured image that differs most from the others in that period, or a part of it, is taken as the collected motion image. In one embodiment, motion detection may also be used: when motion is detected in the camera's field of view, an image is captured immediately or after a short preset delay, and the captured image or a part of it is taken as the motion image.
The server 120 is configured to perform matching detection between the motion images and the action guidance information to obtain a living body detection result indicating whether a living body exists.
Matching detection between two objects aims to detect whether, and to what degree, they match. If the collected motion images match the guidance information, a living body exists and a result indicating its existence is obtained; otherwise no living body exists and a result indicating its absence is obtained.
The living body detection result may take one of two preset values, for example 1 for a living body and 0 for none. It may also be expressed as a match value representing the degree of match between the collected motion images and the guidance information: if the match value exceeds a match-value threshold, a living body exists; otherwise it does not. The match value may be the similarity between the motion image and the preset motion image corresponding to the guidance information, or a value obtained by applying a positive-correlation operation to that similarity; it may also be expressed by the Euclidean distance between the motion features extracted from the motion image and the preset motion features corresponding to the guidance information, or a value obtained by applying a positive-correlation operation to that distance.
In one embodiment, the server 120 is further configured to extract motion features from the collected motion images and perform matching detection between the extracted features and the preset motion features corresponding to the guidance information to obtain the living body detection result.
The server 120 is further configured to determine the identity verification result according to the living body detection result and feed it back to the terminal 110; the terminal 110 is further configured to receive the identity verification result.
Specifically, in one embodiment the server 120 is configured to determine the identity verification result as passed when the living body detection result indicates that a living body exists. In one embodiment the server 120 is configured to determine the result as failed when the living body detection result indicates that no living body exists. In one embodiment the server 120 may also obtain the identity verification result by combining the living body detection result with other verification methods.
In the above identity verification system 100, by displaying and/or playing the action guidance information selected from the preset library, the user is guided visually and/or audibly to complete the corresponding action so that corresponding motion images can be collected. A living body detection result indicating whether a living body exists is then obtained by matching the collected motion images against the guidance information, and the identity verification result is obtained from it. Verifying through living body detection that the current operator is a real user avoids brute-force cracking by machines, makes the final identity verification result more accurate, and improves security.
在一个实施例中,终端110还用于采集用户身份信息并发送给服务器 120。服务器120还用于根据采集的用户身份信息进行验证,获得用户身份信息验证结果;还用于根据活体检测结果和用户身份信息验证结果确定身份验证结果后向终端110反馈身份验证结果。
用户身份信息是指用于证明用户身份的信息,包括用户账号及用户密码、用户证件信息和用户生物特征信息中的至少一种。用户生物特征信息包括人脸特征信息、指纹特征信息、虹膜特征信息以及手掌几何形状等。证件信息包括证件编号、姓名、出生日期、签发机关及有效期限等,证件具体可以是身份证、驾驶证、社会保障卡以及护照等。
服务器120具体可用于获取用户输入的用户身份信息,比如获取在用户账号输入框中录入的字符串作为用户账号,获取在用户密码输入框中录入的字符串作为用户密码,还比如获取在证件信息输入框中输入的证件信息。采集用户身份信息也可以是通过调用摄像头、传感器等来获取用户身份信息,比如通过摄像头扫描获得证件图像或者人脸图像,通过传感器扫描获得指纹特征信息、虹膜特征信息等。
服务器120具体可用于根据采集的用户身份信息进行验证,具体可对采集的用户身份信息本身进行验证,比如对于证件信息,可判断证件编号是否符合预设格式,当前时间是否在有效期限内。
服务器120具体可用于根据采集的用户身份信息进行验证,具体可以将采集的用户身份信息与预存的用户身份信息进行匹配检测,从而获得身份信息验证结果。比如对于用户账号和用户密码,可以获取该用户账号所对应的预存的密码,判断采集到的用户密码和预存的密码是否一致,从而获得身份信息验证结果。其中身份信息验证结果用于表示根据采集的用户身份信息所进行的验证是否通过。
服务器120用于综合活体检测结果和用户身份信息验证结果,得出最终的身份验证结果。在一个实施例中,服务器120用于当活体检测结果表示存在活体,且用户身份信息验证结果为验证通过时,则确定身份验证结果为验证通过。在一个实施例中,服务器120用于当活体检测结果表示不存在活体, 且用户身份信息验证结果为验证通过时,则确定身份验证结果为验证未通过。在一个实施例中,服务器120用于当活体检测结果表示不存在活体,且用户身份信息验证结果为验证通过时,则确定身份验证结果为验证未通过。
In one embodiment, the motion image includes a mouth-shape image; the server 120 is further configured to extract mouth-shape features from the mouth-shape image, perform matching detection between the extracted mouth-shape features and preset mouth-shape features corresponding to the motion guidance information, and obtain a liveness detection result indicating whether a living body is present.
In this embodiment, the motion guidance information is information that guides the user to speak, and may be called mouth-shape guidance information. When the motion image is captured, the lip position can be detected directly so as to obtain a motion image consisting mainly of the user's mouth shape. In one embodiment, the motion image is a face image, and the face image includes the mouth-shape image. Since the position of a person's mouth relative to the face is fixed, the mouth-shape image can be located directly within the face image once the face image is determined.
A mouth shape may also be called a lip shape. A person's mouth shape can be represented by the inner and outer lip contours, so features that reflect changes in the inner and/or outer lip contour can serve as mouth-shape features. Taking the inner lip contour as an example: when the mouth is closed, the inner lip contour is a straight line; when the mouth is fully open, the inner lip contour is roughly circular. Accordingly, the area enclosed by the inner lip contour can be used as a mouth-shape feature, as can the distance between the left and right boundaries and the distance between the upper and lower boundaries of the inner lip contour.
The server 120 may be configured to have a person read, in advance, the content expressed by the motion guidance information at a standard speech rate, capture mouth-shape images of the mouth-shape changes during reading, extract mouth-shape features as the preset mouth-shape features, and store them in correspondence with that motion guidance information. The server 120 may specifically be configured to compute the similarity between the extracted mouth-shape features and the preset mouth-shape features: if the similarity exceeds a similarity threshold, a liveness detection result indicating the presence of a living body is obtained; if the similarity does not exceed the threshold, a liveness detection result indicating the absence of a living body is obtained.
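To illustrate, the inner-lip features and the threshold comparison might look like the sketch below. The elliptical area approximation and the distance-based sequence similarity are assumptions made for illustration; the patent does not fix a specific formula.

```python
import math

def mouth_shape_features(width, height):
    # Features of one frame: inner-lip left-right distance, top-bottom
    # distance, and an elliptical approximation of the enclosed area.
    return [width, height, math.pi * (width / 2.0) * (height / 2.0)]

def sequence_similarity(captured, preset):
    # Similarity of two equal-length feature sequences: inverse of the
    # mean per-frame Euclidean distance, in (0, 1].
    dists = [math.dist(a, b) for a, b in zip(captured, preset)]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def mouth_liveness(captured, preset, threshold=0.5):
    # Living body present if the captured mouth-shape sequence is
    # similar enough to the preset sequence for the guidance text.
    return sequence_similarity(captured, preset) > threshold
```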
In this embodiment, guiding the user to change mouth shapes and capturing mouth-shape images for liveness detection is inexpensive to implement and highly accurate. Moreover, the motion image may also include a complete face image, which can be used in the subsequent identity verification process, improving resource reuse.
In one embodiment, the terminal 110 is further configured to display the motion guidance information selected from the preset motion guidance information library while displaying reading progress information at the speech rate corresponding to the motion guidance information. The speech rate is the speed of speaking. Specifically, the content expressed by the motion guidance information may be displayed character by character at that speech rate, or all of the motion guidance information may be displayed directly along with a speech-rate progress bar that advances from the first character of the motion guidance information at the corresponding rate.
In one embodiment, the terminal 110 is further configured to play the motion guidance information in audio form at the speech rate corresponding to the motion guidance information selected from the preset motion guidance information library. In this embodiment, the motion guidance information is played directly at the standard speech rate to guide the user to read along, so that the user controls the mouth-shape changes at that rate while the terminal 110 captures the corresponding motion images.
In this embodiment, guiding the user to perform the mouth-shape changes required by the motion guidance information at a standard speech rate improves the accuracy of liveness detection and prevents liveness detection from failing because of an abnormal user speech rate.
In one embodiment, the number of motion images is a preset number greater than 1; the terminal 110 is further configured to capture the face image included in each motion image and perform face recognition, and to directly obtain an identity verification result indicating failure when the recognition results are inconsistent. The preset number may be set as needed, for example 3, 4, or 5. In this embodiment, face recognition is performed on the face image included in each motion image; if the user changes during liveness detection, the recognition results become inconsistent, and an identity verification result indicating failure is given directly. Since liveness detection takes some time, ensuring that the same user operates throughout the liveness detection process is necessary to guarantee security.
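The per-image consistency check can be sketched as follows (names are illustrative; `face_ids` stands for whatever identifier the face recognizer returns for the face image in each captured motion image):

```python
def same_user_throughout(face_ids):
    # True only if face recognition returned the same identifier for
    # the face image in every captured motion image.
    return len(set(face_ids)) == 1

def verification_with_consistency(face_ids, liveness_ok):
    # A user change during liveness detection fails verification
    # outright, regardless of the liveness detection result.
    if not same_user_throughout(face_ids):
        return "fail"
    return "pass" if liveness_ok else "fail"
```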
In one embodiment, the collected user identity information includes a face image; the server 120 is further configured to perform face recognition on the face image included in each motion image as well as the face image included in the user identity information, and to directly obtain an identity verification result indicating failure when the recognition results are inconsistent.
In one embodiment, the terminal 110 is further configured to collect multiple kinds of user identity information and send them to the server 120. The server 120 is further configured to separately detect the user identifier corresponding to each kind of user identity information, and to check whether the user identifiers corresponding to the various kinds of user identity information are consistent, thereby obtaining the user identity information verification result.
In this embodiment, a user identifier is a character or character string that uniquely identifies a user. Each kind of user identity information is checked separately to obtain the corresponding user identifier, and these detected user identifiers are then compared: if they are all consistent, an identity information verification result indicating a pass is given; if they are inconsistent, an identity information verification result indicating a failure is given. The identity information verification result detected this way is more reliable, which in turn makes the final identity verification result more reliable.
In one embodiment, the terminal 110 is further configured to capture a certificate image and a face image and send them to the server 120. The server 120 is further configured to perform text recognition on the certificate image to obtain a user identifier matching the certificate image, compute the similarity between the captured face image and the face image corresponding to that user identifier in a comparison face library, and determine the user identity information verification result according to the similarity.
Specifically, a client runs on the terminal 110; the client may be a native application client or a light-app client. A light app is an application that can be used without being downloaded; commonly used light apps are currently written in HTML5 (HyperText Markup Language, fifth revision). The terminal 110 is configured to send the captured certificate image to the server 120, and the server 120 is configured to perform text recognition on the certificate image to obtain the user identifier matching the certificate image.
The terminal 110 is configured to invoke the camera through the client running on the terminal 110 and scan the certificate, by photographing or video recording, to obtain the certificate image. The terminal 110 may be configured to provide an interactive interface through the client to guide the user to scan the certificate as prompted, specifically scanning the front of the certificate first and then the back. During scanning, raw photos of the front and back of the certificate as well as front and back certificate images cropped to the certificate's shape may be provided; one of each may be produced, or the number may be customized as needed. The terminal 110 may also be configured to examine the shape and color distribution of the certificate image to determine whether the certificate is forged or whether the certificate image is forged.
The server 120 may be configured to perform text recognition on the certificate image by means of OCR, recognize the text information in it, compare it with certificate information stored on an external certificate server to find the matching certificate information, and obtain the corresponding user identifier. The certificate server here may be an identity card server of a citizen identity administration authority, a driver's license information server of a vehicle administration authority, a social security card information server of a social security authority, a passport information server of a passport issuing authority, and so on.
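The certificate-to-identifier lookup can be sketched as below. The `recognize_text` stub stands in for a real OCR engine, and the in-memory dictionary stands in for the external certificate server; all names and the sample record are hypothetical.

```python
def recognize_text(certificate_image):
    # Stand-in for an OCR engine: here the "image" is already a dict
    # of recognized fields, purely for illustration.
    return {"number": certificate_image["number"],
            "name": certificate_image["name"]}

# Stand-in for the external certificate server:
# certificate number -> (registered name, user identifier)
CERTIFICATE_RECORDS = {
    "110101199001011234": ("Zhang San", "user-42"),
}

def user_id_from_certificate(certificate_image):
    # OCR the certificate, look up the matching record, and return the
    # corresponding user identifier (or None when nothing matches).
    fields = recognize_text(certificate_image)
    record = CERTIFICATE_RECORDS.get(fields["number"])
    if record is None or record[0] != fields["name"]:
        return None
    return record[1]
```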
The server 120 may also be configured to compare the recognized text information with the text information entered by the user to determine whether they match; if they are inconsistent, an identity verification result indicating failure is given directly, which prevents a user from operating with someone else's stolen certificate. If recognition fails, the reason for the failure can be given along with a corresponding error prompt.
Specifically, if a certificate image exists and a comparison face library also exists, the certificate photo is cropped from the certificate image, a face image is captured, and the captured face image and the corresponding certificate photo are each compared with the face image corresponding to the user identifier in the comparison face library to compute similarities. The similarity here indicates how similar the corresponding face images are.
If there is no certificate image, or no certificate photo could be cropped from the certificate image, only the captured face image may be compared with the face image in the comparison face library to compute the similarity. In this case no further comparison and checking against the external certificate server is needed.
If there is no comparison face library, the captured face image may be compared with the certificate photo, and at the same time the captured face image and the certificate photo may be sent to the external certificate server for comparison, computing the similarity.
If there is no certificate image, or no certificate photo could be cropped from the certificate image, and there is no comparison face library, the captured face image may be sent directly to the external certificate server for comparison to compute the similarity.
If the similarity exceeds the similarity threshold, an identity information verification result indicating a pass is obtained; if the similarity does not exceed the similarity threshold, an identity information verification result indicating a failure is obtained. When there are multiple similarities, a passing identity information verification result may be obtained only when every similarity exceeds its corresponding similarity threshold; if any similarity fails to exceed its corresponding threshold, the identity information verification result is judged a failure.
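The multi-similarity rule is an all-pass conjunction, sketched below (representing the similarities and their thresholds as parallel lists is an assumption for illustration):

```python
def identity_info_result(similarities, thresholds):
    # The identity information verification passes only when every
    # similarity exceeds its corresponding threshold.
    all_pass = all(s > t for s, t in zip(similarities, thresholds))
    return "pass" if all_pass else "fail"
```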
In this embodiment, the certificate image and the captured face image are combined to verify the user identity information comprehensively, making the identity information verification result more accurate and, in turn, the identity verification result more accurate.
In one embodiment, the server 120 is further configured to crop the certificate photo from the certificate image, and to compare the captured face image and the corresponding certificate photo each with the face image corresponding to the user identifier in the comparison face library to compute similarities.
In one embodiment, the server 120 is further configured to determine that the identity verification result is a pass when the liveness detection result indicates the presence of a living body and the user identity information verification result is a pass.
In one embodiment, the terminal 110 is further configured to detect a financial service operation instruction and, after the financial service operation instruction is detected, obtain motion guidance information selected from the preset motion guidance information library; it is further configured to execute the financial service operation corresponding to the financial service operation instruction when the identity verification result returned by the server 120 is a pass. Financial services here include loan application services, online credit card services, investment services, and the like. In this embodiment, the identity verification method above secures transactions in financial services, making the handling of financial services safer and more reliable.
In the identity verification system 100 described above, motion guidance information selected from a preset motion guidance information library is displayed and/or played, guiding the user visually and/or audibly to perform the corresponding motion so that the corresponding motion image can be captured. Matching detection between the captured motion image and the motion guidance information then yields a liveness detection result indicating whether a living body is present, from which the identity verification result is obtained. Verifying through liveness detection that the current operation is performed by a real user prevents machine brute-force attacks, makes the final identity verification result more accurate, and improves security.
As shown in FIG. 9, in one embodiment, an identity verification apparatus 900 is provided, including a motion image capture module 901, a liveness detection module 902, and an identity verification result determination module 903.
The motion image capture module 901 is configured to display and/or play in audio form motion guidance information selected from a preset motion guidance information library, and to capture the corresponding motion image.
The liveness detection module 902 is configured to perform matching detection between the captured motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present.
The identity verification result determination module 903 is configured to determine the identity verification result according to the liveness detection result.
As shown in FIG. 10, in one embodiment, the identity verification apparatus 900 further includes a user identity information verification module 904 configured to collect user identity information, perform verification according to the collected user identity information, and obtain a user identity information verification result; the identity verification result determination module 903 is further configured to determine the identity verification result according to the liveness detection result and the user identity information verification result.
In one embodiment, the motion guidance information is mouth-shape guidance information and the motion image includes a mouth-shape image; the liveness detection module 902 is further configured to extract mouth-shape features from the mouth-shape image, and to perform matching detection between the extracted mouth-shape features and preset mouth-shape features corresponding to the motion guidance information, obtaining a liveness detection result indicating whether a living body is present.
In one embodiment, the motion guidance information is mouth-shape guidance information; the motion image capture module 901 is further configured to display the motion guidance information selected from the preset motion guidance information library while displaying reading progress information at the speech rate corresponding to the motion guidance information.
In one embodiment, the motion guidance information is mouth-shape guidance information; the motion image capture module 901 is further configured to play the motion guidance information in audio form at the speech rate corresponding to the motion guidance information selected from the preset motion guidance information library.
In one embodiment, the number of motion images is a preset number greater than 1; the user identity information verification module 904 is further configured to capture the face image included in each motion image and perform face recognition, directly obtaining an identity verification result indicating failure when the recognition results are inconsistent.
In one embodiment, the user identity information verification module 904 is further configured to collect multiple kinds of user identity information, separately detect the user identifier corresponding to each kind of user identity information, and check whether the user identifiers corresponding to the various kinds of user identity information are consistent, obtaining the user identity information verification result.
As shown in FIG. 11, in one embodiment, the user identity information verification module 904 includes a certificate image processing module 904a, a face image processing module 904b, and a verification execution module 904c.
The certificate image processing module 904a is configured to capture a certificate image and perform text recognition on the certificate image to obtain a user identifier matching the certificate image.
The face image processing module 904b is configured to capture a face image and compute the similarity between the captured face image and the face image corresponding to the user identifier in the comparison face library.
The verification execution module 904c is configured to determine the user identity information verification result according to the similarity.
As shown in FIG. 12, in one embodiment, the face image processing module 904b includes a cropping module 904b1, a face image capture module 904b2, and a comparison module 904b3.
The cropping module 904b1 is configured to crop the certificate photo from the certificate image.
The face image capture module 904b2 is configured to capture a face image.
The comparison module 904b3 is configured to compare the captured face image and the corresponding certificate photo each with the face image corresponding to the user identifier in the comparison face library to compute similarities.
In one embodiment, the identity verification result determination module 903 is further configured to determine that the identity verification result is a pass when the liveness detection result indicates the presence of a living body and the user identity information verification result is a pass.
As shown in FIG. 13, in one embodiment, the identity verification apparatus 900 further includes a receiving module 905 configured to receive motion guidance information selected from the preset motion guidance information library and sent by a server.
The liveness detection module 902 is further configured to send the captured motion image to the server, so that the server performs matching detection between the motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present.
The user identity information verification module 904 is further configured to collect user identity information and send it to the server, so that the server performs verification according to the collected user identity information to obtain a user identity information verification result.
The identity verification result determination module 903 is further configured to receive the identity verification result returned by the server after the server determines the identity verification result according to the liveness detection result and the user identity information verification result.
As shown in FIG. 14, in one embodiment, the identity verification apparatus 900 further includes a financial service processing module 906 configured to detect a financial service operation instruction and, after the financial service operation instruction is detected, obtain motion guidance information selected from the preset motion guidance information library; it is further configured to execute the financial service operation corresponding to the financial service operation instruction when the identity verification result is a pass.
In the identity verification apparatus 900 described above, motion guidance information selected from a preset motion guidance information library is displayed and/or played, guiding the user visually and/or audibly to perform the corresponding motion so that the corresponding motion image can be captured. Matching detection between the captured motion image and the motion guidance information then yields a liveness detection result indicating whether a living body is present, from which the identity verification result is obtained. Verifying through liveness detection that the current operation is performed by a real user prevents machine brute-force attacks, makes the final identity verification result more accurate, and improves security.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM), or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; nevertheless, any combination of these technical features that involves no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (30)

  1. An identity verification method, comprising:
    displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library, and capturing a corresponding motion image;
    performing matching detection between the captured motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present; and
    determining an identity verification result according to the liveness detection result.
  2. The method according to claim 1, wherein the motion guidance information is mouth-shape guidance information; the motion image comprises a mouth-shape image; and the performing matching detection between the captured motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present comprises:
    extracting mouth-shape features from the mouth-shape image; and
    performing matching detection between the extracted mouth-shape features and preset mouth-shape features corresponding to the motion guidance information to obtain a liveness detection result indicating whether a living body is present.
  3. The method according to claim 1, wherein the motion guidance information is mouth-shape guidance information; and the displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library and capturing a corresponding motion image comprises:
    displaying the motion guidance information selected from the preset motion guidance information library while displaying reading progress information at a speech rate corresponding to the motion guidance information; and/or,
    playing the motion guidance information in audio form at the speech rate corresponding to the motion guidance information selected from the preset motion guidance information library.
  4. The method according to claim 2, wherein the number of motion images is a preset number greater than 1; and the method further comprises:
    capturing a face image comprised in each motion image and performing face recognition, and directly obtaining an identity verification result indicating failure when the recognition results are inconsistent.
  5. The method according to claim 1, further comprising:
    collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result;
    wherein the determining an identity verification result according to the liveness detection result comprises:
    determining the identity verification result according to the liveness detection result and the user identity information verification result.
  6. The method according to claim 5, wherein the collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result comprises:
    collecting multiple kinds of user identity information and separately detecting a user identifier corresponding to each kind of user identity information; and
    checking whether the user identifiers corresponding to the various kinds of user identity information are consistent, to obtain the user identity information verification result.
  7. The method according to claim 5, wherein the collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result comprises:
    capturing a certificate image and performing text recognition on the certificate image to obtain a user identifier matching the certificate image;
    capturing a face image and computing a similarity between the captured face image and a face image corresponding to the user identifier in a comparison face library; and
    determining the user identity information verification result according to the similarity.
  8. The method according to claim 7, wherein the capturing a face image and computing a similarity between the captured face image and a face image corresponding to the user identifier in a comparison face library comprises:
    cropping a certificate photo from the certificate image;
    capturing a face image; and
    comparing the captured face image and the corresponding certificate photo each with the face image corresponding to the user identifier in the comparison face library, to compute similarities.
  9. The method according to claim 5, wherein the determining the identity verification result according to the liveness detection result and the user identity information verification result comprises:
    determining that the identity verification result is a pass when the liveness detection result indicates the presence of a living body and the user identity information verification result is a pass.
  10. The method according to claim 5, wherein before the displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library and capturing a corresponding motion image, the method further comprises:
    receiving motion guidance information selected from the preset motion guidance information library and sent by a server;
    the performing matching detection between the captured motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present comprises:
    sending the captured motion image to the server, so that the server performs matching detection between the motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present;
    the collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result comprises:
    collecting user identity information and sending it to the server, so that the server performs verification according to the collected user identity information to obtain a user identity information verification result; and
    the determining the identity verification result according to the liveness detection result and the user identity information verification result comprises:
    receiving the identity verification result returned by the server after the server determines the identity verification result according to the liveness detection result and the user identity information verification result.
  11. The method according to claim 1, wherein before the displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library and capturing a corresponding motion image, the method further comprises:
    detecting a financial service operation instruction and, after the financial service operation instruction is detected, obtaining motion guidance information selected from the preset motion guidance information library;
    and the method further comprises:
    executing, when the identity verification result is a pass, the financial service operation corresponding to the financial service operation instruction.
  12. A terminal, comprising a memory and a processor, the memory storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform the following steps:
    displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library, and capturing a corresponding motion image;
    performing matching detection between the captured motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present; and
    determining an identity verification result according to the liveness detection result.
  13. The terminal according to claim 12, wherein the motion guidance information is mouth-shape guidance information; the motion image comprises a mouth-shape image; and the performing matching detection between the captured motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present comprises:
    extracting mouth-shape features from the mouth-shape image; and
    performing matching detection between the extracted mouth-shape features and preset mouth-shape features corresponding to the motion guidance information to obtain a liveness detection result indicating whether a living body is present.
  14. The terminal according to claim 12, wherein the motion guidance information is mouth-shape guidance information; and the displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library and capturing a corresponding motion image comprises:
    displaying the motion guidance information selected from the preset motion guidance information library while displaying reading progress information at a speech rate corresponding to the motion guidance information; and/or,
    playing the motion guidance information in audio form at the speech rate corresponding to the motion guidance information selected from the preset motion guidance information library.
  15. The terminal according to claim 13, wherein the number of motion images is a preset number greater than 1; and the instructions, when executed by the processor, further cause the processor to perform the following step:
    capturing a face image comprised in each motion image and performing face recognition, and directly obtaining an identity verification result indicating failure when the recognition results are inconsistent.
  16. The terminal according to claim 12, wherein the instructions, when executed by the processor, further cause the processor to perform the following steps:
    collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result;
    wherein the determining an identity verification result according to the liveness detection result comprises:
    determining the identity verification result according to the liveness detection result and the user identity information verification result.
  17. The terminal according to claim 16, wherein the collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result comprises:
    collecting multiple kinds of user identity information and separately detecting a user identifier corresponding to each kind of user identity information; and
    checking whether the user identifiers corresponding to the various kinds of user identity information are consistent, to obtain the user identity information verification result.
  18. The terminal according to claim 16, wherein the collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result comprises:
    capturing a certificate image and performing text recognition on the certificate image to obtain a user identifier matching the certificate image;
    capturing a face image and computing a similarity between the captured face image and a face image corresponding to the user identifier in a comparison face library; and
    determining the user identity information verification result according to the similarity.
  19. The terminal according to claim 18, wherein the capturing a face image and computing a similarity between the captured face image and a face image corresponding to the user identifier in a comparison face library comprises:
    cropping a certificate photo from the certificate image;
    capturing a face image; and
    comparing the captured face image and the corresponding certificate photo each with the face image corresponding to the user identifier in the comparison face library, to compute similarities.
  20. The terminal according to claim 16, wherein the determining the identity verification result according to the liveness detection result and the user identity information verification result comprises:
    determining that the identity verification result is a pass when the liveness detection result indicates the presence of a living body and the user identity information verification result is a pass.
  21. The terminal according to claim 16, wherein the instructions, when executed by the processor, further cause the processor to perform, before the step of displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library and capturing a corresponding motion image, the following step:
    receiving motion guidance information selected from the preset motion guidance information library and sent by a server;
    wherein the performing matching detection between the captured motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present comprises:
    sending the captured motion image to the server, so that the server performs matching detection between the motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present;
    the collecting user identity information and performing verification according to the collected user identity information to obtain a user identity information verification result comprises:
    collecting user identity information and sending it to the server, so that the server performs verification according to the collected user identity information to obtain a user identity information verification result; and
    the determining the identity verification result according to the liveness detection result and the user identity information verification result comprises:
    receiving the identity verification result returned by the server after the server determines the identity verification result according to the liveness detection result and the user identity information verification result.
  22. The terminal according to claim 12, wherein the instructions, when executed by the processor, further cause the processor to perform, before the step of displaying and/or playing in audio form motion guidance information selected from a preset motion guidance information library and capturing a corresponding motion image, the following step:
    detecting a financial service operation instruction and, after the financial service operation instruction is detected, obtaining motion guidance information selected from the preset motion guidance information library;
    and the instructions, when executed by the processor, may further cause the processor to perform the following step:
    executing, when the identity verification result is a pass, the financial service operation corresponding to the financial service operation instruction.
  23. A server, comprising a memory and a processor, the memory storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform the following steps:
    selecting motion guidance information from a preset motion guidance information library;
    sending the motion guidance information to a terminal;
    receiving a motion image returned by the terminal;
    performing matching detection between the motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present;
    determining an identity verification result according to the liveness detection result; and
    returning the identity verification result to the terminal.
  24. The server according to claim 23, wherein the motion guidance information is mouth-shape guidance information; the motion image comprises a mouth-shape image; and the performing matching detection between the motion image and the motion guidance information to obtain a liveness detection result indicating whether a living body is present comprises:
    extracting mouth-shape features from the mouth-shape image; and
    performing matching detection between the extracted mouth-shape features and preset mouth-shape features corresponding to the motion guidance information to obtain a liveness detection result indicating whether a living body is present.
  25. The server according to claim 24, wherein the number of motion images is a preset number greater than 1; and the instructions, when executed by the processor, further cause the processor to perform the following step:
    collecting a face image comprised in each motion image and performing face recognition, and directly obtaining an identity verification result indicating failure when the recognition results are inconsistent.
  26. The server according to claim 23, wherein the instructions, when executed by the processor, further cause the processor to perform the following steps:
    receiving user identity information collected and sent by the terminal, and performing verification according to the user identity information to obtain a user identity information verification result;
    wherein the determining an identity verification result according to the liveness detection result comprises:
    determining the identity verification result according to the liveness detection result and the user identity information verification result.
  27. The server according to claim 26, wherein the receiving user identity information collected and sent by the terminal and performing verification according to the user identity information to obtain a user identity information verification result comprises:
    receiving multiple kinds of user identity information collected and sent by the terminal, and separately detecting a user identifier corresponding to each kind of user identity information; and
    checking whether the user identifiers corresponding to the various kinds of user identity information are consistent, to obtain the user identity information verification result.
  28. The server according to claim 26, wherein the instructions, when executed by the processor, further cause the processor to perform the following steps:
    receiving a certificate image and a face image collected and sent by the terminal;
    performing text recognition on the certificate image to obtain a user identifier matching the certificate image;
    computing a similarity between the face image and a face image corresponding to the user identifier in a comparison face library; and
    determining the user identity information verification result according to the similarity.
  29. The server according to claim 28, wherein the computing a similarity between the face image and a face image corresponding to the user identifier in a comparison face library comprises:
    cropping a certificate photo from the certificate image; and
    comparing the face image and the corresponding certificate photo each with the face image corresponding to the user identifier in the comparison face library, to compute similarities.
  30. The server according to claim 26, wherein the determining the identity verification result according to the liveness detection result and the user identity information verification result comprises:
    determining that the identity verification result is a pass when the liveness detection result indicates the presence of a living body and the user identity information verification result is a pass.
PCT/CN2016/081489 2015-05-21 2016-05-10 Identity verification method, terminal, and server WO2016184325A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/632,143 US10432624B2 (en) 2015-05-21 2017-06-23 Identity verification method, terminal, and server
US16/542,213 US10992666B2 (en) 2015-05-21 2019-08-15 Identity verification method, terminal, and server

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510264333.XA CN106302330B (zh) 2015-05-21 2015-05-21 身份验证方法、装置和***
CN201510264333.X 2015-05-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/632,143 Continuation US10432624B2 (en) 2015-05-21 2017-06-23 Identity verification method, terminal, and server

Publications (1)

Publication Number Publication Date
WO2016184325A1 true WO2016184325A1 (zh) 2016-11-24

Family

ID=57319473

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/081489 WO2016184325A1 (zh) 2015-05-21 2016-05-10 身份验证方法、终端和服务器

Country Status (3)

Country Link
US (2) US10432624B2 (zh)
CN (1) CN106302330B (zh)
WO (1) WO2016184325A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108306886A (zh) * 2018-02-01 2018-07-20 深圳市腾讯计算机***有限公司 一种身份验证方法、装置及存储介质
CN111756705A (zh) * 2020-06-05 2020-10-09 腾讯科技(深圳)有限公司 活体检测算法的攻击测试方法、装置、设备及存储介质

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11256792B2 (en) * 2014-08-28 2022-02-22 Facetec, Inc. Method and apparatus for creation and use of digital identification
CN106874876A (zh) * 2017-02-20 2017-06-20 深圳市科漫达智能管理科技有限公司 一种人脸活体检测方法及装置
CN106851403B (zh) * 2017-02-27 2023-11-28 首影科技(深圳)有限公司 防止盗录播放画面的显示装置及内容安全播放方法
CN107066983B (zh) * 2017-04-20 2022-08-09 腾讯科技(上海)有限公司 一种身份验证方法及装置
CN106998332B (zh) * 2017-05-08 2020-06-30 深圳市牛鼎丰科技有限公司 安全登录方法、装置、存储介质和计算机设备
US10606993B2 (en) * 2017-08-09 2020-03-31 Jumio Corporation Authentication using facial image comparison
RU2676884C1 (ru) * 2017-10-20 2019-01-11 Андрей Владимирович Дзыгарь Система и способ контроля доступа на территорию
CN108875508B (zh) * 2017-11-23 2021-06-29 北京旷视科技有限公司 活体检测算法更新方法、装置、客户端、服务器及***
CN108564688A (zh) * 2018-03-21 2018-09-21 阿里巴巴集团控股有限公司 身份验证的方法及装置和电子设备
CN108460266A (zh) * 2018-03-22 2018-08-28 百度在线网络技术(北京)有限公司 用于认证身份的方法和装置
CN108494778A (zh) * 2018-03-27 2018-09-04 百度在线网络技术(北京)有限公司 身份认证方法和装置
CN108416595A (zh) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 信息处理方法和装置
CN110555330A (zh) * 2018-05-30 2019-12-10 百度在线网络技术(北京)有限公司 图像面签方法、装置、计算机设备及存储介质
CN109067767B (zh) * 2018-08-31 2021-02-19 上海艾融软件股份有限公司 一种人脸识别认证方法及***
CN109255618A (zh) * 2018-09-02 2019-01-22 珠海横琴现联盛科技发展有限公司 针对动态视频的人脸识别支付信息防伪方法
CN109325332A (zh) * 2018-09-17 2019-02-12 北京旷视科技有限公司 人证核验方法、服务器、后台及***
CN109492551B (zh) * 2018-10-25 2023-03-24 腾讯科技(深圳)有限公司 活体检测方法、装置及应用活体检测方法的相关***
CN111209768A (zh) * 2018-11-06 2020-05-29 深圳市商汤科技有限公司 身份验证***及方法、电子设备和存储介质
CN109376725B (zh) * 2018-12-21 2022-09-23 北京无线电计量测试研究所 一种基于虹膜识别的身份核查方法和装置
CN111353144A (zh) * 2018-12-24 2020-06-30 航天信息股份有限公司 一种身份认证的方法和装置
CN109934191A (zh) * 2019-03-20 2019-06-25 北京字节跳动网络技术有限公司 信息处理方法和装置
CN109905401A (zh) * 2019-03-22 2019-06-18 深圳市元征科技股份有限公司 实名认证方法及终端、服务器
CN110163094A (zh) * 2019-04-15 2019-08-23 深圳壹账通智能科技有限公司 基于手势动作的活体检测方法、装置、设备及存储介质
CN110223710A (zh) * 2019-04-18 2019-09-10 深圳壹账通智能科技有限公司 多重联合认证方法、装置、计算机装置及存储介质
CN112507889A (zh) * 2019-04-29 2021-03-16 众安信息技术服务有限公司 一种校验证件与持证人的方法及***
CN110222486A (zh) * 2019-05-18 2019-09-10 王�锋 用户身份验证方法、装置、设备及计算机可读存储介质
CN110167029B (zh) * 2019-06-28 2022-12-23 深圳开立生物医疗科技股份有限公司 超声设备控制方法、移动终端及控制***
CN111126158A (zh) * 2019-11-27 2020-05-08 中铁程科技有限责任公司 基于人脸识别的自动检票方法、装置及***
CN113254893B (zh) * 2020-02-13 2023-09-19 百度在线网络技术(北京)有限公司 一种身份校验方法、装置、电子设备及存储介质
CN111353434A (zh) * 2020-02-28 2020-06-30 北京市商汤科技开发有限公司 信息识别方法及装置、***、电子设备和存储介质
CN111461948A (zh) * 2020-04-17 2020-07-28 南京慧智灵杰信息技术有限公司 一种通过物联网技术的社区矫正生物采集识别认证***
CN111613333A (zh) * 2020-05-29 2020-09-01 惠州Tcl移动通信有限公司 自助健康检测方法、装置、存储介质及移动终端
CN111950382A (zh) * 2020-07-21 2020-11-17 马赫 一种基于vr眼镜的虹膜识别方法
CN112395580A (zh) * 2020-11-19 2021-02-23 联通智网科技有限公司 一种认证方法、装置、***、存储介质和计算机设备
CN112702738B (zh) * 2020-12-21 2021-09-28 深圳日谦科技有限公司 一种基于多组无线终端的可身份识别录播***
CN112861104A (zh) * 2021-03-24 2021-05-28 重庆度小满优扬科技有限公司 身份验证方法及相关装置
CN113821781A (zh) * 2021-11-17 2021-12-21 支付宝(杭州)信息技术有限公司 基于图灵测试的活体检测的方法和装置
CN114978546A (zh) * 2022-05-24 2022-08-30 中国银行股份有限公司 用户身份验证方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110154444A1 (en) * 2009-12-17 2011-06-23 Verizon Patent And Licensing Inc. Method and apparatus for providing user authentication based on user actions
CN103870725A (zh) * 2012-12-13 2014-06-18 华为技术有限公司 一种验证码的生成验证方法和装置
CN104298909A (zh) * 2013-07-19 2015-01-21 富泰华工业(深圳)有限公司 电子装置、身份验证***及方法
CN104504321A (zh) * 2015-01-05 2015-04-08 湖北微模式科技发展有限公司 一种基于摄像头实现远程用户身份验证的方法与***

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8745541B2 (en) * 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
DE102007011831A1 (de) * 2007-03-12 2008-09-18 Voice.Trust Ag Digitales Verfahren und Anordnung zur Authentifizierung einer Person
US10169646B2 (en) * 2007-12-31 2019-01-01 Applied Recognition Inc. Face authentication to mitigate spoofing
US8638939B1 (en) * 2009-08-20 2014-01-28 Apple Inc. User authentication on an electronic device
US20130080898A1 (en) * 2011-09-26 2013-03-28 Tal Lavian Systems and methods for electronic communications
US20170322687A1 (en) * 2011-09-26 2017-11-09 Tal Lavian Systems and methods for electronic communications
EP2610724B1 (en) * 2011-12-27 2022-01-05 Tata Consultancy Services Limited A system and method for online user assistance
US9075975B2 (en) * 2012-02-21 2015-07-07 Andrew Bud Online pseudonym verification and identity validation
KR101971697B1 (ko) * 2012-02-24 2019-04-23 삼성전자주식회사 사용자 디바이스에서 복합 생체인식 정보를 이용한 사용자 인증 방법 및 장치
US9082011B2 (en) * 2012-03-28 2015-07-14 Texas State University—San Marcos Person identification using ocular biometrics with liveness detection
CN103384234B (zh) * 2012-05-04 2016-09-28 深圳市腾讯计算机***有限公司 人脸身份认证方法和***
US9035955B2 (en) * 2012-05-16 2015-05-19 Microsoft Technology Licensing, Llc Synchronizing virtual actor's performances to a speaker's voice
CN102801528A (zh) * 2012-08-17 2012-11-28 珠海市载舟软件技术有限公司 基于智能移动通讯设备的身份验证***及其方法
WO2015112108A1 (en) * 2012-11-28 2015-07-30 Visa International Service Association Multi disparate gesture actions and transactions apparatuses, methods and systems
EP2962175B1 (en) * 2013-03-01 2019-05-01 Tobii AB Delay warp gaze interaction
US8943558B2 (en) * 2013-03-08 2015-01-27 Next Level Security Systems, Inc. System and method for monitoring a threat
US20140310764A1 (en) * 2013-04-12 2014-10-16 Verizon Patent And Licensing Inc. Method and apparatus for providing user authentication and identification based on gestures
US9129478B2 (en) * 2013-05-20 2015-09-08 Microsoft Corporation Attributing user action based on biometric identity
CN103716309B (zh) * 2013-12-17 2017-09-29 华为技术有限公司 一种安全认证方法及终端
US9607137B2 (en) * 2013-12-17 2017-03-28 Lenovo (Singapore) Pte. Ltd. Verbal command processing based on speaker recognition
CN103634120A (zh) * 2013-12-18 2014-03-12 上海市数字证书认证中心有限公司 基于人脸识别的实名认证方法及***
US10129251B1 (en) * 2014-02-11 2018-11-13 Morphotrust Usa, Llc System and method for verifying liveliness
KR20150115365A (ko) * 2014-04-04 2015-10-14 삼성전자주식회사 전자장치에서 사용자 입력에 대응한 사용자 인터페이스 제공 방법 및 장치
US9916010B2 (en) * 2014-05-16 2018-03-13 Visa International Service Association Gesture recognition cloud command platform, system, method, and apparatus
US10095850B2 (en) * 2014-05-19 2018-10-09 Kadenze, Inc. User identity authentication techniques for on-line content or access
CN104217212A (zh) * 2014-08-12 2014-12-17 优化科技(苏州)有限公司 真人身份验证方法
US10614204B2 (en) * 2014-08-28 2020-04-07 Facetec, Inc. Facial recognition authentication system including path parameters
CN111898108B (zh) * 2014-09-03 2024-06-04 创新先进技术有限公司 身份认证方法、装置、终端及服务器
US9904775B2 (en) * 2014-10-31 2018-02-27 The Toronto-Dominion Bank Systems and methods for authenticating user identity based on user-defined image data
TWI669103B (zh) * 2014-11-14 2019-08-21 日商新力股份有限公司 資訊處理裝置、資訊處理方法及程式
KR101714349B1 (ko) * 2014-12-29 2017-03-09 주식회사 슈프리마 생체 인증 장치와 그 생체 영상 출력제어 방법
US9928603B2 (en) * 2014-12-31 2018-03-27 Morphotrust Usa, Llc Detecting facial liveliness
KR20170011617A (ko) * 2015-07-23 2017-02-02 엘지전자 주식회사 이동 단말기 및 그것의 제어방법
US9911290B1 (en) * 2015-07-25 2018-03-06 Gary M. Zalewski Wireless coded communication (WCC) devices for tracking retail interactions with goods and association to user accounts
US20170068952A1 (en) * 2015-09-03 2017-03-09 Bank Of America Corporation System for electronic collection and display of account token usage and association
US10102358B2 (en) * 2015-12-29 2018-10-16 Sensory, Incorporated Face-controlled liveness verification
US20170269797A1 (en) * 2016-03-18 2017-09-21 Tal Lavian Systens and Methods For Electronic Communication
JP6208837B1 (ja) * 2016-10-12 2017-10-04 株式会社エイチアイ ユーザインタフェースを制御する方法、プログラム及び装置
US10438584B2 (en) * 2017-04-07 2019-10-08 Google Llc Multi-user virtual assistant for verbal device control
JP6988205B2 (ja) * 2017-07-04 2022-01-05 富士フイルムビジネスイノベーション株式会社 情報処理装置及びプログラム


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108306886A (zh) * 2018-02-01 2018-07-20 深圳市腾讯计算机***有限公司 一种身份验证方法、装置及存储介质
CN108306886B (zh) * 2018-02-01 2021-02-02 深圳市腾讯计算机***有限公司 一种身份验证方法、装置及存储介质
CN111756705A (zh) * 2020-06-05 2020-10-09 腾讯科技(深圳)有限公司 活体检测算法的攻击测试方法、装置、设备及存储介质
CN111756705B (zh) * 2020-06-05 2021-09-14 腾讯科技(深圳)有限公司 活体检测算法的攻击测试方法、装置、设备及存储介质

Also Published As

Publication number Publication date
US10432624B2 (en) 2019-10-01
US10992666B2 (en) 2021-04-27
CN106302330A (zh) 2017-01-04
US20190372972A1 (en) 2019-12-05
CN106302330B (zh) 2021-01-05
US20170295177A1 (en) 2017-10-12

Similar Documents

Publication Publication Date Title
WO2016184325A1 (zh) 身份验证方法、终端和服务器
US10643164B2 (en) Touchless mobile applications and context-sensitive workflows
WO2020024398A1 (zh) 生物特征辅助支付方法、装置、计算机设备及存储介质
US10796136B2 (en) Secondary source authentication of facial biometric
CN105681316B (zh) 身份验证方法和装置
US10509895B2 (en) Biometric authentication
EP3174262B1 (en) Voiceprint login method and apparatus based on artificial intelligence
US11188628B2 (en) Biometric challenge-response authentication
CN106850648B (zh) 身份验证方法、客户端和服务平台
US20180048641A1 (en) Identity authentication method and apparatus
TW201907330A (zh) 身份認證的方法、裝置、設備及資料處理方法
WO2020019591A1 (zh) 用于生成信息的方法和装置
CN113656761B (zh) 基于生物识别技术的业务处理方法、装置和计算机设备
CN110555330A (zh) 图像面签方法、装置、计算机设备及存储介质
US20230012235A1 (en) Using an enrolled biometric dataset to detect adversarial examples in biometrics-based authentication system
WO2024060951A9 (zh) 一种业务服务方法及装置
WO2020007191A1 (zh) 活体识别检测方法、装置、介质及电子设备
US10719596B1 (en) System, method, and computer-accessible medium for authentication via handwriting style
CN116612538A (zh) 电子合同内容的在线确认方法
CN103700151A (zh) 一种晨跑签到方法
Tschinkel et al. Keylogger keystroke biometric system
US11521428B1 (en) Methods and systems for signature verification
US20240086508A1 (en) System and method for facilitating multi-factor face authentication of user
KR102523598B1 (ko) 출입자 신원 무인 인증시스템
WO2023120221A1 (ja) 認証装置、認証システム、認証方法及び記録媒体

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16795812

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11/04/2018)

122 Ep: pct application non-entry in european phase

Ref document number: 16795812

Country of ref document: EP

Kind code of ref document: A1