CN115840931A - Identity verification method based on face sliding screen interaction and related product - Google Patents


Info

Publication number
CN115840931A
CN115840931A (application CN202211502276.0A)
Authority
CN
China
Prior art keywords
verification
user
face
video
identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211502276.0A
Other languages
Chinese (zh)
Inventor
林志伟
张鹏
李少华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Merchants Union Consumer Finance Co Ltd
Original Assignee
Merchants Union Consumer Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Merchants Union Consumer Finance Co Ltd filed Critical Merchants Union Consumer Finance Co Ltd
Priority to CN202211502276.0A priority Critical patent/CN115840931A/en
Publication of CN115840931A publication Critical patent/CN115840931A/en
Pending legal-status Critical Current

Classifications

  • Collating Specific Patterns (AREA)

Abstract

An embodiment of the present application provides an identity verification method based on face sliding-screen interaction, and a related product. The method is applied to a user terminal and may include the following steps: acquiring a first verification video in response to a verification instruction from a user; generating at least one piece of first image information from the first verification video, and determining from that image information whether the user matches preset identity information; if so, generating and displaying first reminder information; acquiring a second verification video within a first preset time; and determining that the user passes identity verification if, in the second verification video, the first facial key point of the user moves in the verification order and the face pose signal-to-noise ratio of the user is greater than a preset value. By having the user perform a random face-based verification action, the method can confirm whether the person being verified is a living body, which improves the reliability of identity verification and protects the user's information security.

Description

Identity verification method based on face sliding screen interaction and related product
Technical Field
The application relates to the field of internet, in particular to an identity verification method based on face sliding screen interaction and a related product.
Background
Face living body (liveness) recognition is a risk-control means for verifying user identity and has many application scenarios in the financial industry; for example, banks apply face recognition and living body detection in numerous business scenarios. Current mainstream living body detection technologies include action-based face living body detection, light-based living body detection, and silent living body detection. In existing interaction-based action detection, the user must complete specific actions several times, so the user waits a long time on the verification interface, the experience is poor, and attacks such as replayed video are difficult to resist. Silent face living body detection performs detection and recognition from a single captured image and is easily affected by various external factors, so its face recognition accuracy is low. Mainstream detection methods based on auxiliary equipment detect a live face by adding equipment such as an infrared camera or a depth camera; although this improves detection precision, it greatly increases detection cost and limits deployment scenarios and adoption.
Therefore, how to apply deep learning techniques to improve the accuracy and speed of face living body detection without adding auxiliary equipment is a problem that those skilled in the art urgently need to solve.
Disclosure of Invention
An embodiment of the present application provides an identity verification method based on face sliding-screen interaction that randomly generates a verification code and a verification order, and determines whether the user passes identity verification by analyzing whether the user inputs the verification code in that order; the accuracy and speed of face living body detection can thereby be improved without adding auxiliary equipment.
In a first aspect, an embodiment of the present application provides an identity verification method based on face sliding-screen interaction, applied to a user terminal; the method may include the following steps:
responding to a verification instruction of a user, and acquiring a first verification video;
generating at least one piece of first image information according to the first verification video, and judging whether the user is matched with preset identity information or not according to the at least one piece of first image information;
if the determination result is yes, generating and displaying first reminder information, where the first reminder information may include numeric keypad image information, a random verification code, and a verification order, and the random verification code may include at least two digits;
acquiring a second verification video within a first preset time;
and if, in the second verification video, the first facial key point of the user moves in the verification order and the face pose signal-to-noise ratio of the user is greater than a preset value, determining that the user passes identity verification.
Therefore, the method of this embodiment needs no other auxiliary tools or equipment: it determines whether the user performs identity verification according to the random verification code and verification order, and, combined with the user's face pose signal-to-noise ratio, determines whether the person being verified is a living body (i.e., living body detection). On this basis the method improves the accuracy of identity verification without affecting the user's experience, and with living body detection introduced it can further protect the user's property and information security.
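The first-aspect steps can be sketched as a minimal control flow. This is an illustration only: `capture_video`, `matches_identity`, `make_challenge`, `moved_in_order`, and `pose_snr` are hypothetical callbacks standing in for the components described elsewhere in this document, not names from the patent:

```python
def verify_user(capture_video, matches_identity, make_challenge,
                moved_in_order, pose_snr, snr_threshold):
    # Step 1: acquire the first verification video in response to the user's instruction.
    first_video = capture_video()
    # Step 2: match extracted first-image information against preset identity info.
    if not matches_identity(first_video):
        return "third reminder: identity not matched"
    # Step 3: first reminder with a random verification code and verification order.
    code, order = make_challenge()
    # Step 4: acquire the second verification video within the first preset time.
    second_video = capture_video()
    # Step 5: check key-point movement in order, then the face pose SNR.
    if not moved_in_order(second_video, code, order):
        return "second reminder: please move in the verification order"
    if pose_snr(second_video, code) > snr_threshold:
        return "verified"
    return "not verified"
```

The ordering matters: the SNR (liveness) check only runs once the movement check passes, matching the branch structure described in the possible implementations below.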
In a possible implementation manner, if in the second verification video, the first facial key point of the user moves according to the verification sequence, and the human face pose signal-to-noise ratio of the user is greater than a preset value, it is determined that the user passes the identity verification, which may include the following steps:
judging whether the first facial key points of the user move according to the verification sequence or not according to the second verification video;
if yes, calculating the face pose signal-to-noise ratio of the user according to the second verification video;
if not, second reminding information is generated and displayed, and the second reminding information can be used for prompting the user to move according to the verification sequence.
Therefore, the method of the embodiment of the application can adopt different measures or reminders according to the identity authentication condition of the user (whether the user moves according to the authentication sequence), and is beneficial to improving the use experience of the user in identity authentication.
In another possible implementation manner, if in the second verification video, the first facial key points of the user move according to the verification sequence, and the signal-to-noise ratio of the face pose of the user is greater than a preset value, it is determined that the user passes the identity verification, which may include the following steps:
if the first face key point of the user moves according to the verification sequence in the second verification video, calculating the face posture ratio corresponding to each digit in the random verification code;
calculating signal power and noise power according to the face attitude ratio corresponding to each digit;
generating a human face posture signal-to-noise ratio according to the signal power and the noise power;
and if the signal-to-noise ratio of the face posture is greater than a preset value, determining that the user passes the identity verification.
Therefore, the method of this embodiment calculates the face pose ratio corresponding to each digit of the random verification code, derives the signal power and noise power from those ratios, and from them the face pose signal-to-noise ratio, in order to determine whether the user passes identity verification (that is, whether the person verifying is in a live state: the real person performing the verification, rather than a photo or video of the user). Computing the face pose signal-to-noise ratio from several reference values (or reference factors) helps ensure the accuracy of the verification (living body detection) decision, and thereby the user's information and property security.
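The patent does not give the exact power formulas, so the sketch below is one plausible reading, stated as an assumption: the pose ratios expected for a live user looking at each digit are treated as the signal, deviations of the observed ratios from them as the noise, and the SNR is expressed in decibels. The function name and the dB convention are not from the source:

```python
import math

def face_pose_snr(observed_ratios, expected_ratios):
    """Hypothetical face pose SNR over the digits of the verification code.

    observed_ratios: pose ratio measured while the key point stays on each digit.
    expected_ratios: pose ratio expected for a live user looking at that digit.
    """
    n = len(observed_ratios)
    signal_power = sum(e * e for e in expected_ratios) / n
    noise_power = sum((o - e) ** 2
                      for o, e in zip(observed_ratios, expected_ratios)) / n
    if noise_power == 0:
        return math.inf  # a track that matches the expectation perfectly
    return 10 * math.log10(signal_power / noise_power)
```

Under this reading, a replayed video or static photo either fails the movement check outright or produces pose ratios far from the expected ones, driving the SNR below the preset value.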
In another possible implementation manner, if the first facial key point of the user moves in the second verification video according to the verification order, calculating the face pose ratio corresponding to each number in the random verification code may include the following steps:
for each single digit in the random verification code,
when the first facial key point stays on the numeric key corresponding to that digit, calculating, from the image information of the second verification video, a first horizontal distance and a first vertical distance between the second facial key point and the first facial key point, and a second horizontal distance and a second vertical distance between the third facial key point and the first facial key point;
and calculating the face pose ratio corresponding to the single number according to the first horizontal distance, the first vertical distance, the second horizontal distance and the second vertical distance.
Therefore, the method of the embodiment of the application can calculate the face pose ratio corresponding to each number in the random verification code, and is beneficial to performing living body detection on the user performing identity verification according to the face pose ratio.
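The distance computation above can be sketched minimally. The anatomical choices are assumptions: the first key point is taken as the nose tip and the second and third as left/right reference points (e.g., outer eye corners), and since the text does not give the ratio formula, the pose ratio here is simply the pair of horizontal and vertical distance ratios:

```python
def face_pose_ratio(kp1, kp2, kp3):
    """kp1: first facial key point (e.g., nose tip); kp2, kp3: second and third
    facial key points. All points are (x, y) pixel coordinates."""
    first_h, first_v = abs(kp2[0] - kp1[0]), abs(kp2[1] - kp1[1])
    second_h, second_v = abs(kp3[0] - kp1[0]), abs(kp3[1] - kp1[1])
    # Ratio of the two sides' distances: a frontal face gives roughly 1,
    # while turning toward a key skews the ratio in a characteristic way.
    horizontal = first_h / second_h if second_h else float("inf")
    vertical = first_v / second_v if second_v else float("inf")
    return horizontal, vertical
```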
In another possible implementation, before acquiring the first verification video in response to the verification instruction of the user, the method may further include the following steps:
receiving a login instruction of a user;
and acquiring preset identity information of the user according to the login instruction, wherein the preset identity information can comprise preset image information.
Therefore, the method of this embodiment acquires the user's preset identity information after the user inputs a login instruction, which prepares for the subsequent identity verification. Acquiring the information only when needed (after login) helps reduce the computation and memory burden on the user terminal, completing the preparation for identity verification while keeping the terminal running smoothly.
In another possible embodiment, after generating at least one first image information according to the first verification video and determining whether the user matches the preset identity information according to the at least one first image information, the method may further include the following steps:
if not, generating and displaying third reminding information which can be used for indicating that the user does not pass the identity authentication.
Therefore, the method in the embodiment of the application can send the reminding information after the user fails to pass the authentication, is helpful for guiding or reminding the user to perform other operations (such as quitting the application program and/or performing the authentication again), and improves the use experience of the user.
In another possible implementation manner, if in the second verification video, the first facial key point of the user moves according to the verification sequence, and the signal-to-noise ratio of the face pose of the user is greater than the preset value, after it is determined that the user passes the authentication, the method may further include the following steps:
a jump is made to the target page, which is associated with the validation instruction.
Therefore, after the user passes the identity authentication, the method can directly jump to the target page without the need of the user to perform more operations such as function selection and the like, and is beneficial to improving the use experience of the user.
In a second aspect, an embodiment of the present application provides a user terminal, where the user terminal may include the following: the system comprises an interaction module, a photographing module, a calculation module and a multimedia module;
the interaction module can be used for receiving a verification instruction of a user;
the camera module can be used for responding to a verification instruction of a user and acquiring a first verification video;
the computing module can be used for generating at least one piece of first image information according to the first verification video and judging whether the user is matched with the preset identity information or not according to the at least one piece of first image information;
the computing module may be further configured to generate first prompting information if it is determined that the user matches the preset identity information, where the first prompting information may include numeric keypad image information, a random verification code, and a verification sequence, and the random verification code may include at least two digits;
the multimedia module can be used for displaying the first reminding information;
the camera module can be further used for acquiring a second verification video within first preset time;
the calculation module can be further used for determining that the user passes the identity verification if the first facial key points of the user move according to the verification sequence and the human face posture signal-to-noise ratio of the user is greater than a preset value in the second verification video.
In a third aspect, an embodiment of the present application provides a user terminal, where the user terminal may include: a processor, a memory, and a bus;
a processor and a memory are connected by a bus, wherein the memory is adapted to store a set of program codes and the processor is adapted to call the program codes stored in the memory to perform the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, including:
the computer readable storage medium has stored therein instructions which, when run on a computer, implement the method according to the first aspect.
By implementing the method of this embodiment, a living-body-detection step can be added for users requiring identity verification without adding any new auxiliary device or equipment, which further strengthens the rigor of identity verification and thus the safety of the user's identity information and property. Living body detection proceeds as follows: the user interacts with the numeric keypad on the screen of the user terminal according to the random verification code and verification order; the interaction process is simple, and a highly accurate living-body decision can be guaranteed without affecting the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of an identity authentication method based on human face sliding screen interaction according to an embodiment of the present application;
fig. 2 is a schematic view of a scene showing first reminding information according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a scene labeled with facial key points according to an embodiment of the present disclosure;
fig. 4 is a schematic composition diagram of a user terminal according to an embodiment of the present application;
fig. 5 is a schematic composition diagram of another user terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
In order to better understand the technical solution of the embodiment of the present application, the following describes in detail an identity authentication method based on human face sliding screen interaction provided in the embodiment of the present application with reference to the steps in fig. 1.
Please refer to fig. 1, which is a flowchart illustrating an identity authentication method based on human face sliding interaction according to an embodiment of the present disclosure. It can be understood that the following method is executed mainly by the user terminal; as shown in fig. 1, the method may include the steps of:
s101, responding to a verification instruction of a user, and acquiring a first verification video.
In a possible implementation manner, before the first verification video is acquired in response to the verification instruction of the user, the method may further include the following steps:
receiving a login instruction of a user;
and acquiring preset identity information of the user according to the login instruction.
It should be noted that the preset identity information may include preset image information.
Specifically, the user's login instruction (i.e., the way the user logs in to the target software) may be account and password, fingerprint recognition, voiceprint recognition, or the like. Further, the user's preset identity information may also include other information that can indicate the user's identity, such as the user's identification-photo information, image information of the user's face from different angles or orientations, the user's voiceprint information, or the user's age and sex. The preset image information may be image information of the user's face from different angles or orientations and may subsequently be used for living body detection of the user. Note that "living body detection" in the embodiments of the present application means determining whether the user performing the verification is a living body, i.e., whether a photo or video is being used to perform the verification.
More specifically, in the above "authentication instruction in response to a user", the authentication instruction may be an authentication request triggered by the user selecting a different function (or function module, function button, etc.) to use in the target software. For example, if the user selects the "view account balance" function in the target software (or clicks the "view account balance" button), the authentication request may be triggered, and the act of the user selecting the "view account balance" function may be regarded as the authentication instruction.
The method of the embodiment of the application is described and designed in the target software of the user terminal, and the target software can also be called target application software, which is application software that is selected by a user on the user terminal, installed in the user terminal and can be run on the user terminal.
More generally, a user terminal may also be called a terminal device, and it may be fixed or mobile. Its specific form may be a mobile phone, a tablet computer (Pad), a computer with wireless transceiving capability, a wearable terminal device, or the like. The operating system of a PC-side terminal device, such as a kiosk, may include but is not limited to Linux, Unix, the Windows series (e.g., Windows XP, Windows 7), and Mac OS X (the operating system of Apple computers). The operating system of a mobile terminal device, such as a smartphone, may include but is not limited to Android, iOS (the operating system of Apple phones), and Windows systems.
Therefore, the method of this embodiment can flexibly acquire the preset identity information associated with the user (or the account) without increasing the computation or memory burden of the user terminal, which helps improve the user experience.
S102, generating at least one piece of first image information according to the first verification video, and judging whether the user is matched with preset identity information or not according to the at least one piece of first image information.
Specifically, the method of this embodiment may decompose the first verification video into multiple pieces of image information at a preset time interval (or preset number of frames), and then screen out, as the first image information, at least one image in which the user's face is frontal (i.e., facing the lens of the user terminal) and the sharpness is greater than a preset value. Illustratively, user terminal 1 acquires a segment of first verification video of user 1 through its lens, then captures one frame from the video every 0.1 seconds to obtain 20 images; after screening by face orientation and sharpness, 5 images remain, and those 5 screened images are the at least one piece of first image information.
More specifically, the length of the first verification video, the preset time interval (or the preset number of frames) for decomposing the first verification video, and the preset value related to the definition are set by the technician according to the actual situation, and are not limited herein.
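The decomposition and screening described above can be sketched as follows; `is_frontal` and `sharpness` are assumed stand-ins for whatever face-orientation and sharpness estimators an implementation would actually use:

```python
def sample_frames(frames, fps, interval_s):
    """Decompose a video (given as a frame list) at a preset time interval."""
    step = max(1, round(fps * interval_s))
    return frames[::step]

def screen_first_images(frames, is_frontal, sharpness, min_sharpness):
    """Keep frames where the face is frontal and sharpness exceeds the preset value."""
    return [f for f in frames if is_frontal(f) and sharpness(f) > min_sharpness]
```

In practice the sharpness estimator might be something like the variance of a Laplacian-filtered frame, but the choice is left to the implementer, as the text itself only requires "sharpness greater than a preset value".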
Regarding determining from the at least one piece of first image information whether the user matches the preset identity information, the process may be as follows: the user terminal analyzes the similarity between each piece of first image information and the preset image information, and when at least a preset proportion of the first images has similarity greater than a preset value, it determines that the user performing the verification (i.e., the user appearing in the first image information) matches the preset identity information. As for the similarity analysis itself, the similarity may be obtained by overlap comparison between the first image information and the preset image information, or by comparing the relative positional relationships among the user's facial key points in the first image information with those in the preset image information. The specific similarity calculation method and the corresponding preset values are set by the technician and are not limited here.
For example, suppose the rule is that the user matches the preset identity information if at least 80% of the first images have similarity greater than 76% with the preset image information. If there are 20 first images and only 15 of them (75%) reach a similarity of 80% or more, the user performing the verification (i.e., the user appearing in those 20 images) is determined not to match the preset identity information; if 18 of the 20 (90%) reach a similarity of 77% or more, the user is determined to match.
Therefore, the method of this embodiment can determine whether the user performing the verification (i.e., the user appearing in the first image information) matches the preset identity information by comparing the similarity between multiple pieces of first image information and the preset image information, which secures the accuracy of identity verification quantitatively; the several different similarity calculation methods provided also help secure that accuracy technically.
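Under the example thresholds discussed above (at least 80% of images must exceed 76% similarity), the matching decision reduces to a small function; the default threshold values here are just the illustrative ones from the text, not fixed by the method:

```python
def matches_identity(similarities, sim_threshold=0.76, required_fraction=0.80):
    """similarities: one similarity score in [0, 1] per piece of first image
    information, measured against the preset image information."""
    if not similarities:
        return False  # no usable frontal, sharp frames: cannot match
    passing = sum(1 for s in similarities if s > sim_threshold)
    return passing / len(similarities) >= required_fraction
```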
And S103, if the judgment result is yes, generating and displaying first reminding information.
It should be noted that the first reminding message may include the numeric keypad image information, the random verification code and the verification sequence. The random authentication code may include at least two digits.
For example, please refer to fig. 2, a schematic view of a scene showing the first reminder information according to an embodiment of the present application. As shown in fig. 2, before generating the first reminder information, the user terminal (or the target software) performs a matching determination (i.e., determines whether the user performing the verification matches the preset identity information); the determination process is shown as flow 210 in fig. 2. In flow 210, after user terminal 20 collects the first verification video of user 21, it extracts at least one piece of first image information from the video, performs similarity analysis against the preset image information (for the specific process, see the example under step S102, not repeated here), and outputs a result. When the result is "user 21 performing identity verification matches the preset identity information", the user terminal may generate the first reminder information and display it on the screen of user terminal 20. The first reminder information may consist of the numeric keypad image information 220, the random verification code 230, and the verification order 240 shown in fig. 2, and the random verification code 230 and verification order 240 may be combined into a natural-language prompt as shown there. Note that, as shown in fig. 2, the display layer of the numeric keypad image information 220 is overlaid on the display layer of the user's own image.
Further, besides the presentation format shown in fig. 2, the random verification code 230 and verification order 240 may also be presented as "random verification code: 3657; verification order: from left to right", or as "please move your nose tip in the order 3 → 6 → 5 → 7". The specific presentation format of the random verification code and verification order is set by the technician according to the actual situation and is not limited here.
Therefore, the method of the embodiment of the application can present the first reminding information to the user through a humanized and clear display mode or format, is beneficial to the user to more accurately understand the authentication requirement of the target software, and is beneficial to improving the efficiency of the user in authentication, thereby improving the use experience of the user.
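Generating such a challenge can be sketched as below. The digit count, the set of orders, and the prompt wording are illustrative assumptions; the only constraint taken from the text is that the random verification code contains at least two digits:

```python
import random

def make_challenge(n_digits=4):
    """Generate a random verification code plus a random verification order."""
    if n_digits < 2:
        raise ValueError("the random verification code needs at least two digits")
    code = "".join(random.choice("0123456789") for _ in range(n_digits))
    order = random.choice(("from left to right", "from right to left"))
    prompt = f"please move your nose tip over {code} in order ({order})"
    return code, order, prompt
```

Because both the code and the order are drawn fresh per verification, a pre-recorded video of the user cannot anticipate the required movement, which is the anti-replay property the method relies on.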
And S104, acquiring a second verification video within the first preset time.
Illustratively, after displaying the first reminding information, the user terminal starts recording/acquiring the second verification video, and stops recording/acquiring it once the recording time exceeds the first preset time. For example, if the first preset time is 6 seconds, a timer starts counting when the first reminding information is displayed (at the same time, the user terminal also starts recording/acquiring the second verification video), and recording stops after 6 seconds. More specifically, the first preset time may be set by a technician according to the average time needed to complete the verification; for example, if completing one verification in the test phase takes 5 seconds on average, the first preset time may be set to 6 seconds (slightly more than the average time helps ensure that the user can complete the verification within the first preset time).
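The timed-capture logic described above can be sketched as follows. This is a minimal illustration only, assuming frames arrive from an iterable source; the names `record_second_video` and `frame_source` are hypothetical and not part of this publication.

```python
import time

def record_second_video(frame_source, first_preset_time=6.0, clock=time.monotonic):
    """Collect frames from frame_source until first_preset_time seconds elapse.

    frame_source: iterable yielding video frames (e.g., from a camera driver).
    clock: injectable time source; injecting it makes the cutoff testable.
    """
    deadline = clock() + first_preset_time
    frames = []
    for frame in frame_source:
        if clock() >= deadline:   # stop once the first preset time is exceeded
            break
        frames.append(frame)
    return frames
```

Passing `clock` as a parameter (rather than calling `time.monotonic` directly) lets the cutoff be exercised deterministically in tests, while defaulting to real wall-clock behavior in production.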
Therefore, the method of the embodiment of the present application can set the first preset time reasonably, which helps reduce the computation burden and the memory burden of the user terminal.
S105, if the first facial key points of the user move according to the verification sequence in the second verification video and the human face posture signal-to-noise ratio of the user is larger than a preset value, determining that the user passes identity verification.
In a possible implementation manner, if in the second verification video, the first facial key point of the user moves according to the verification sequence, and the human face pose signal-to-noise ratio of the user is greater than a preset value, it is determined that the user passes the identity verification, which may include the following steps:
judging whether the first facial key points of the user move according to the verification sequence or not according to the second verification video;
if the judgment result is yes, calculating the face pose signal-to-noise ratio of the user according to the second verification video;
if not, generating and displaying second reminding information.
It should be noted that the second reminding information may be used to prompt the user to move according to the verification sequence (or to prompt the user to move the nose tip according to the verification sequence). For example, if it is determined that the first facial key point of the user did not move according to the verification sequence, second reminding information similar to "it is detected that you did not move your nose tip according to the verification sequence; please verify again" may be generated. Further, if the user terminal determines a preset number of times that the first facial key point of the user did not move according to the verification sequence, it may determine that the user does not pass (or fails) the identity verification; accordingly, the remaining verification opportunities of the user may be displayed in the second reminding information. For example, if the user has 3 opportunities to perform the identity verification (i.e., 3 opportunities to verify according to the first reminding information), and the first facial key point of the user did not move according to the verification sequence during the first verification, second reminding information similar to "it is detected that you did not move your nose tip according to the verification sequence; you still have 2 remaining opportunities to verify again" may be generated.
Therefore, when the user does not move according to the verification sequence, the method provided by the embodiment of the application can prompt the user to perform the next operation through the detailed reminding information, and can also inform the user of the remaining verification opportunities, so that the user can know the verification requirements more easily, and the use experience of the user is improved.
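The retry bookkeeping described above can be sketched as a small decision function. This is purely illustrative; the 3-attempt limit and message wording follow the example in this paragraph, and the function name is hypothetical.

```python
def check_attempt(moved_in_order, attempts_left):
    """Evaluate one verification attempt.

    Returns (result, message, attempts_left), where result is True (passed),
    False (no attempts remaining), or None (prompt the user to try again).
    """
    if moved_in_order:
        return True, "", attempts_left
    attempts_left -= 1
    if attempts_left <= 0:
        return False, "Verification failed: no attempts remaining.", 0
    msg = ("Detected that you did not move your nose tip in the verification order; "
           f"you still have {attempts_left} remaining opportunities to verify again.")
    return None, msg, attempts_left
```

The three-way result mirrors the flow in the text: a correct movement passes immediately, an incorrect one consumes an attempt and re-prompts, and exhausting the preset number of attempts fails the identity verification.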
In another possible implementation manner, if in the second verification video, the first facial key points of the user move according to the verification sequence, and the signal-to-noise ratio of the face pose of the user is greater than a preset value, it is determined that the user passes the identity verification, which may include the following steps:
if the first face key point of the user moves according to the verification sequence in the second verification video, calculating the face posture ratio corresponding to each digit in the random verification code;
calculating signal power and noise power according to the face attitude ratio corresponding to each digit;
generating a human face posture signal-to-noise ratio according to the signal power and the noise power;
and if the signal-to-noise ratio of the face posture is greater than a preset value, determining that the user passes the identity verification.
In another possible implementation manner, if the first facial key point of the user moves in the second verification video according to the verification order, calculating the face pose ratio corresponding to each number in the random verification code may include the following steps:
for a single number in the random verification code,
calculating a first horizontal distance and a first vertical distance between the second face key point and the first face key point and a second horizontal distance and a second vertical distance between the third face key point and the first face key point according to image information of the second verification video when the first face key point stays in the numeric keyboard corresponding to the single number;
and calculating the face pose ratio corresponding to the single number according to the first horizontal distance, the first vertical distance, the second horizontal distance and the second vertical distance.
For example, please refer to fig. 3, fig. 3 is a schematic view of a scene labeled with facial key points according to an embodiment of the present disclosure. As shown in FIG. 3, the contour of the face may be labeled or decomposed from keypoint 1 to keypoint 17, the eyebrow may be labeled or decomposed from keypoint 18 to keypoint 27, the nose may be labeled or decomposed from keypoint 28 to keypoint 36, the eye may be labeled or decomposed from keypoint 37 to keypoint 48, and the lip may be labeled or decomposed from keypoint 49 to keypoint 68. The key point 34 of the nose region is the first face key point in this embodiment, the key point 3 of the contour is the second face key point in this embodiment, and the key point 15 of the contour is the third face key point in this embodiment.
Specifically, in the method of the embodiment of the present application, image information corresponding to the different numbers in the random verification code may be acquired from the second verification video, and the facial key points (key point 1 to key point 68) may then be marked according to the image information, or the coordinate information of the facial key points (key point 1 to key point 68) may be obtained. For example, to calculate the face pose ratio corresponding to a single number i (note: i may be 1 to 9), the key point 3, the key point 34 and the key point 15 in the image information corresponding to the single number i may be denoted A, B and C respectively, and the coordinate information A(Xa, Ya), B(Xb, Yb) and C(Xc, Yc) may be further obtained. The horizontal distances of B from A and C are then H1 = Xb - Xa and H2 = Xc - Xb, and the vertical distances of B from A and C are V1 = Yb - Ya and V2 = Yc - Yb, so the face pose ratio of the user's face on the number i is Si = (H1/H2, V1/V2). The face pose ratio Si corresponding to the single number i is calculated multiple times, and the average is taken to obtain the face pose ratio corresponding to each number:
$\bar{S}_i = \frac{1}{m}\sum_{k=1}^{m} S_i^{(k)}$ (where m is the number of times Si is calculated)
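Under the coordinate definitions above, the per-digit face pose ratio can be computed as in the minimal sketch below, assuming the keypoint coordinates have already been extracted (A, B and C correspond to contour key point 3, nose key point 34 and contour key point 15); the function names are illustrative only.

```python
def face_pose_ratio(a, b, c):
    """Compute Si = (H1/H2, V1/V2) for one digit.

    a, b, c: (x, y) coordinates of key point 3, key point 34 (nose area)
    and key point 15 in a frame where the nose tip rests on the digit.
    """
    (xa, ya), (xb, yb), (xc, yc) = a, b, c
    h1, h2 = xb - xa, xc - xb          # horizontal distances: B from A, C from B
    v1, v2 = yb - ya, yc - yb          # vertical distances: B from A, C from B
    return (h1 / h2, v1 / v2)

def mean_pose_ratio(samples):
    """Average Si over repeated measurements for the same digit."""
    hs = [s[0] for s in samples]
    vs = [s[1] for s in samples]
    return (sum(hs) / len(hs), sum(vs) / len(vs))
```

For a frontal, centered pose the two ratios are near symmetric; a head turn skews H1/H2, and a tilt skews V1/V2, which is what makes the ratio usable as a pose signal.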
Then, the signal power corresponding to the random verification code containing n numbers generated by the user terminal (or the target software) is further calculated as:
$P_{signal} = \frac{1}{n}\sum_{i=1}^{n} \bar{S}_i^{\,2}$
the noise power generated is:
$P_{noise} = \frac{1}{n}\sum_{i=1}^{n} \left(S_i - \bar{S}_i\right)^2$
The face pose signal-to-noise ratio is then:
$\mathrm{Snr} = 10\log_{10}\frac{P_{signal}}{P_{noise}}$
The calculated Snr value is compared with the preset value. If the Snr value is greater than the preset value, the face movement that completed the sliding verification code matches a real user (a living body) to a high degree, and it is determined that the user passes the identity verification rather than that a photograph was used for the identity verification; otherwise, the verification fails.
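The end-to-end signal-to-noise computation can be sketched as below. Because the formulas in this publication are rendered as images, the sketch uses a conventional definition (mean of squared per-digit averages as signal power, mean per-digit sample variance as noise power, SNR in decibels) applied to a scalar pose-ratio component; the exact formulas in the original may differ, and the function names are hypothetical.

```python
import math

def face_pose_snr(samples_per_digit):
    """samples_per_digit: one list of scalar pose-ratio samples per digit
    of the verification code. Returns the face pose SNR in dB."""
    means = [sum(s) / len(s) for s in samples_per_digit]
    variances = [sum((x - m) ** 2 for x in s) / len(s)
                 for s, m in zip(samples_per_digit, means)]
    p_signal = sum(m ** 2 for m in means) / len(means)   # power of averaged ratios
    p_noise = sum(variances) / len(variances)            # average sample variance
    return 10.0 * math.log10(p_signal / p_noise)

def passes_liveness(samples_per_digit, preset_value):
    """Liveness decision: SNR above the preset threshold passes."""
    return face_pose_snr(samples_per_digit) > preset_value
```

Intuitively, a live face produces stable pose ratios on each digit (low noise power, high SNR), while a replayed photo or jittery spoof yields inconsistent ratios and a lower SNR.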
Therefore, by successively calculating the face pose ratio corresponding to each single number in the random verification code, the signal power of the generated random verification code and the noise power, the face pose signal-to-noise ratio is finally obtained, which helps ensure the accuracy of the living-body detection of the user.
In another possible implementation manner, if in the second verification video, the first facial key point of the user moves according to the verification order, and the human face pose signal-to-noise ratio of the user is greater than a preset value, after it is determined that the user passes the identity verification, the method may further include the following steps:
a jump is made to the target page, which is associated with the validation instruction.
Illustratively, if the user logs in to the target software, the user terminal (or the target software) displays reminding information of a verification request; when the user agrees to the verification request, the action of agreeing may be regarded as the verification instruction, and after the identity verification the user jumps to the home page of the target software, the home page being the target page. If the user selects the function of checking the account balance in the target software (or clicks the button for checking the account balance), the verification request may also be triggered, and after the user completes the identity verification, the target software may jump to the account balance page, which is then the target page.
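The jump behavior above amounts to a mapping from the triggering verification instruction to its target page, sketched minimally below; the instruction names and page identifiers are hypothetical, not taken from this publication.

```python
# Hypothetical mapping: each verification instruction records the page to open
TARGET_PAGES = {
    "login": "home_page",
    "check_balance": "account_balance_page",
}

def jump_after_verification(instruction, verified):
    """Return the page to display after an identity verification attempt."""
    if not verified:
        return "verification_page"   # stay on verification until it passes
    return TARGET_PAGES.get(instruction, "home_page")
```

Keeping the instruction-to-page association in one table means new verification-protected functions only need a new table entry, not new control flow.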
Therefore, the method provided by the embodiment of the application can jump to the target page corresponding to the verification instruction after the user completes the identity verification, is beneficial to reducing the complexity of the user in operating the target software, and is beneficial to improving the use experience of the user.
In a possible implementation manner, after generating at least one first image information according to the first verification video and determining whether the user matches the preset identity information according to the at least one first image information, the method may further include the following steps:
if not, generating and displaying third reminding information.
It should be noted that the third reminding message may be used to indicate that the user fails to authenticate.
For example, after determining according to the at least one piece of first image information that the user does not match the preset identity information, the user terminal may generate third reminding information similar to "it is currently detected that you are not the customer corresponding to this account; please confirm the account information again or perform the identity verification again."
In summary, according to the method of the embodiment of the present application, the user can be authenticated without adding any auxiliary device or apparatus, and living-body detection can be performed on the user at the same time (i.e., determining whether the user performing the identity verification is a living body, or whether someone is performing the identity verification using static/dynamic image information of the user), which helps protect the identity information security and property information security of the user, and also helps complete more verification items without affecting the user's use of the user terminal (or the target software).
The following describes an apparatus according to an embodiment of the present application with reference to the drawings.
Referring to fig. 4, a schematic composition diagram of a user terminal provided in an embodiment of the present application is shown, where the user terminal may include: an interaction module 410, a camera module 420, a calculation module 430 and a multimedia module 440;
an interaction module 410, which may be configured to receive a verification instruction of a user;
the camera module 420 may be configured to obtain a first verification video in response to a verification instruction of a user;
the calculating module 430 may be configured to generate at least one piece of first image information according to the first verification video, and determine whether the user matches the preset identity information according to the at least one piece of first image information;
the calculating module 430 may be further configured to generate first prompting information if it is determined that the user matches the preset identity information, where the first prompting information may include digital keypad image information, a random verification code, and a verification sequence, and the random verification code may include at least two digits;
the multimedia module 440 may be configured to display the first reminder information;
the camera module 420 may be further configured to acquire a second verification video within a first preset time;
the calculating module 430 may be further configured to determine that the user passes the identity verification if the first facial key points of the user move according to the verification sequence and the signal-to-noise ratio of the face pose of the user is greater than a preset value in the second verification video.
In one possible implementation, the user terminal may further include: a judging module 450;
the determining module 450 may be configured to determine whether the first facial key point of the user moves according to the verification order according to the second verification video;
the calculating module 430 may be further configured to calculate a human face pose signal-to-noise ratio of the user according to the second verification video when the first facial key point of the user moves according to the verification sequence;
the calculating module 430 may be further configured to generate second reminding information when the first facial key point of the user does not move according to the verification order, where the second reminding information may be used to prompt the user to move according to the verification order;
the multimedia module 440 may be further configured to display the second reminding information.
In another possible implementation, the user terminal may further include:
the calculating module 430 may be further configured to calculate a face pose ratio corresponding to each number in the random verification code if the first facial key point of the user moves according to the verification sequence in the second verification video;
the calculating module 430 may be further configured to calculate signal power and noise power according to the face pose ratio corresponding to each number;
the calculating module 430 may be further configured to generate a human face pose signal-to-noise ratio according to the signal power and the noise power;
the calculating module 430 may be further configured to determine that the user passes the identity verification if the signal-to-noise ratio of the face pose is greater than a preset value.
In another possible implementation, the user terminal may further include:
for a single number in the random verification code,
the calculating module 430 may be further configured to calculate, according to image information of the second verification video when the first face key point stays in the numeric keypad corresponding to the single number, a first horizontal distance and a first vertical distance between the second face key point and the first face key point, and a second horizontal distance and a second vertical distance between the third face key point and the first face key point;
the calculating module 430 may be further configured to calculate a face pose ratio corresponding to a single number according to the first horizontal distance, the first vertical distance, the second horizontal distance, and the second vertical distance.
In another possible implementation, the user terminal may further include: a communication module 460;
the interaction module 410 may be further configured to receive a login instruction of a user;
the communication module 460 may further be configured to obtain preset identity information of the user according to the login instruction, where the preset identity information may include preset image information.
In another possible implementation, the user terminal may further include:
the calculating module 430 may be further configured to generate third prompting information if the user does not match the preset identity information, where the third prompting information may be used to indicate that the user does not pass the identity authentication;
the multimedia module 440 may be further configured to display a third reminding message.
In another possible implementation, the user terminal may further include: a control module 470;
the control module 470 may be configured to control the target software to jump to a target page, which is related to the verification instruction, after determining that the user passes the authentication.
Please refer to fig. 5, which is a schematic diagram of another user terminal according to an embodiment of the present application, where the user terminal includes:
a processor 510, a memory 520, and an I/O interface 530. The processor 510, the memory 520 and the I/O interface 530 are communicatively coupled; the memory 520 is configured to store instructions, and the processor 510 is configured to execute the instructions stored in the memory 520 to perform the corresponding method steps of fig. 1 described above.
The processor 510 is configured to execute the instructions stored in the memory 520 to control the I/O interface 530 to receive and transmit signals to perform the steps of the above-described method. The memory 520 may be integrated in the processor 510, or may be provided separately from the processor 510.
The memory 520 further includes a storage system 521, a cache 522 and a RAM 523. The cache 522 is a first-level memory located between the RAM 523 and the CPU; it is composed of static memory chips (SRAM) and has a relatively small capacity but a speed much higher than that of main memory, close to the speed of the CPU. The RAM 523 is internal memory that exchanges data directly with the CPU; it can be read and written at any time (except during refresh), is fast, and is generally used as a temporary data storage medium for the operating system or other running programs. The three together implement the functions of the memory 520.
As an implementation manner, the function of the I/O interface 530 may be realized by a transceiving circuit or a transceiving dedicated chip. Processor 510 may be considered to be implemented by a dedicated processing chip, processing circuit, processor, or a general purpose chip.
As another implementation manner, the apparatus provided in the embodiment of the present application may be implemented using a general-purpose computer. That is, program code that implements the functions of the processor 510 and the I/O interface 530 is stored in the memory 520, and a general-purpose processor implements the functions of the processor 510 and the I/O interface 530 by executing the code in the memory 520.
For the concepts, explanations, details and other steps related to the technical solutions provided in the embodiments of the present application related to the apparatus, reference is made to the foregoing methods or descriptions related to the method steps executed by the apparatus in other embodiments, which are not described herein again.
As another implementation of the present embodiment, a computer-readable storage medium is provided, on which instructions are stored, which when executed perform the method in the above-described method embodiment.
As another implementation of the present embodiment, a computer program product is provided that contains instructions that, when executed, perform the method in the above-described method embodiments.
Those skilled in the art will appreciate that only one memory and processor are shown in fig. 5 for ease of illustration. In an actual terminal or server, there may be multiple processors and memories. The memory may also be referred to as a storage medium or a storage device, and the like, which is not limited in this application.
It should be understood that, in the embodiment of the present application, the processor may be a Central Processing Unit (CPU), and the processor may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
It will also be appreciated that the memory referred to in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (memory module) is integrated in the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The bus may include a power bus, a control bus, a status signal bus, and the like, in addition to the data bus; however, for clarity of illustration, the various buses are all labeled as the bus in the figures.
It should also be understood that reference herein to first, second, third, fourth, and various numerical designations is made only for ease of description and should not be used to limit the scope of the present application.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here again.
In the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), among others.
Embodiments of the present application further provide a computer storage medium, where the computer storage medium stores a computer program, and the computer program is executed by a processor to implement part or all of the steps of any one of the methods for identity verification based on human face sliding screen interaction as described in the above method embodiments.
Embodiments of the present application further provide a computer program product, which includes a non-transitory computer readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform part or all of the steps of any one of the methods for identity verification based on human face sliding screen interaction as described in the above method embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An identity verification method based on face sliding screen interaction is characterized by being applied to a user terminal, and comprises the following steps:
responding to a verification instruction of a user, and acquiring a first verification video;
generating at least one piece of first image information according to the first verification video, and judging whether the user is matched with preset identity information or not according to the at least one piece of first image information;
if the judgment result is yes, generating and displaying first reminding information, wherein the first reminding information comprises numeric keypad image information, a random verification code and a verification sequence, and the random verification code comprises at least two numbers;
acquiring a second verification video within a first preset time;
and if the first facial key point of the user moves according to the verification sequence in the second verification video and the human face posture signal-to-noise ratio of the user is greater than a preset value, determining that the user passes identity verification.
2. The method according to claim 1, wherein if the first facial key point of the user moves according to the verification sequence in the second verification video and the signal-to-noise ratio of the face pose of the user is greater than a preset value, determining that the user passes the identity verification, comprising the steps of:
judging whether the first facial key point of the user moves according to the verification sequence or not according to the second verification video;
if the judgment result is yes, calculating the face pose signal-to-noise ratio of the user according to the second verification video;
and if not, generating and displaying second reminding information, wherein the second reminding information is used for prompting the user to move according to the verification sequence.
3. The method according to claim 2, wherein the determining that the user passes identity verification if the first facial key point of the user moves according to the verification sequence in the second verification video and the face pose signal-to-noise ratio of the user is greater than a preset value comprises the following steps:
if the first facial key point of the user moves according to the verification sequence in the second verification video, calculating a face pose ratio corresponding to each digit in the random verification code;
calculating a signal power and a noise power according to the face pose ratio corresponding to each digit;
generating the face pose signal-to-noise ratio according to the signal power and the noise power;
and if the face pose signal-to-noise ratio is greater than the preset value, determining that the user passes identity verification.
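The claims do not define the signal power and noise power precisely. One common convention, shown below as a hedged sketch, takes the mean squared per-digit pose ratio as the signal power and the variance about the mean as the noise power, yielding an SNR in decibels; a consistent, deliberate head motion then produces a high SNR, while erratic or replayed motion produces a low one:

```python
import math

def face_pose_snr(ratios):
    """Return an SNR in dB from the per-digit face pose ratios.
    signal power = mean squared ratio; noise power = variance about the
    mean (both definitions are assumptions, not from the claims)."""
    n = len(ratios)
    mean = sum(ratios) / n
    signal_power = sum(r * r for r in ratios) / n
    noise_power = sum((r - mean) ** 2 for r in ratios) / n
    if noise_power == 0:  # perfectly consistent poses across digits
        return math.inf
    return 10 * math.log10(signal_power / noise_power)
```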
4. The method according to claim 3, wherein the calculating a face pose ratio corresponding to each digit in the random verification code if the first facial key point of the user moves according to the verification sequence in the second verification video comprises the following steps:
for a single digit in the random verification code,
calculating, according to the image information in the second verification video captured while the first facial key point stays on the numeric keypad cell corresponding to the single digit, a first horizontal distance and a first vertical distance between a second facial key point and the first facial key point, and a second horizontal distance and a second vertical distance between a third facial key point and the first facial key point;
and calculating the face pose ratio corresponding to the single digit according to the first horizontal distance, the first vertical distance, the second horizontal distance and the second vertical distance.
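The claim fixes neither which facial landmarks serve as the second and third key points nor the exact ratio formula. The sketch below assumes, purely for illustration, the nose tip as the first key point and the two eye corners as the second and third, and folds the four distances into a single horizontal-to-vertical ratio that changes as the head turns toward each digit:

```python
def face_pose_ratio(p1, p2, p3):
    """p1 is the first facial key point (e.g. nose tip, assumed);
    p2 and p3 the second and third key points (e.g. eye corners).
    Compute the two horizontal and two vertical distances of claim 4
    and fold them into one pose ratio (formula is an assumption)."""
    d1h, d1v = abs(p2[0] - p1[0]), abs(p2[1] - p1[1])
    d2h, d2v = abs(p3[0] - p1[0]), abs(p3[1] - p1[1])
    # Summed horizontal over summed vertical offsets: a simple
    # yaw/pitch proxy derived from the four claimed distances.
    return (d1h + d2h) / (d1v + d2v)
```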
5. The method according to claim 1 or 4, wherein before the obtaining a first verification video in response to the verification instruction of the user, the method further comprises the following steps:
receiving a login instruction of the user;
and acquiring preset identity information of the user according to the login instruction, wherein the preset identity information comprises preset image information.
6. The method according to claim 5, wherein after the generating at least one piece of first image information according to the first verification video and judging, according to the at least one piece of first image information, whether the user matches the preset identity information, the method further comprises the following step:
if the judgment result is no, generating and displaying third reminding information, wherein the third reminding information is used for indicating that the user has failed identity verification.
7. The method according to claim 6, wherein after the determining that the user passes identity verification if the first facial key point of the user moves according to the verification sequence in the second verification video and the face pose signal-to-noise ratio of the user is greater than a preset value, the method further comprises the following step:
and jumping to a target page, wherein the target page is related to the verification instruction.
8. A user terminal, characterized in that the user terminal comprises an interaction module, a camera module, a calculation module and a multimedia module;
the interaction module is used for receiving a verification instruction of a user;
the camera module is used for responding to a verification instruction of the user and acquiring a first verification video;
the calculation module is used for generating at least one piece of first image information according to the first verification video and judging, according to the at least one piece of first image information, whether the user matches preset identity information;
the calculation module is further used for generating first reminding information if the user is judged to match the preset identity information, wherein the first reminding information comprises numeric keypad image information, a random verification code and a verification sequence, and the random verification code comprises at least two digits;
the multimedia module is used for displaying the first reminding information;
the camera module is further used for acquiring a second verification video within a first preset time;
the calculation module is further used for determining that the user passes identity verification if the first facial key point of the user moves according to the verification sequence in the second verification video and the face pose signal-to-noise ratio of the user is greater than a preset value.
9. A user terminal, characterized in that the user terminal comprises:
a processor, a memory and a bus, the processor and the memory being connected by the bus, wherein the memory is configured to store a set of program code, and the processor is configured to call the program code stored in the memory to perform the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that:
the computer-readable storage medium stores instructions which, when run on a computer, implement the method according to any one of claims 1-7.
CN202211502276.0A 2022-11-28 2022-11-28 Identity verification method based on face sliding screen interaction and related product Pending CN115840931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211502276.0A CN115840931A (en) 2022-11-28 2022-11-28 Identity verification method based on face sliding screen interaction and related product


Publications (1)

Publication Number Publication Date
CN115840931A true CN115840931A (en) 2023-03-24

Family

ID=85576106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211502276.0A Pending CN115840931A (en) 2022-11-28 2022-11-28 Identity verification method based on face sliding screen interaction and related product

Country Status (1)

Country Link
CN (1) CN115840931A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789272A (en) * 2023-12-26 2024-03-29 中邮消费金融有限公司 Identity verification method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
US10789343B2 (en) Identity authentication method and apparatus
US10992666B2 (en) Identity verification method, terminal, and server
US10430679B2 (en) Methods and systems for detecting head motion during an authentication transaction
CN110247898B (en) Identity verification method, identity verification device, identity verification medium and electronic equipment
US11552944B2 (en) Server, method for controlling server, and terminal device
CN110348193A (en) Verification method, device, equipment and storage medium
JP2010049357A (en) Authentication device, authentication system, and authentication method
CN113780212A (en) User identity verification method, device, equipment and storage medium
EP3622435B1 (en) Method and apparatus for security verification based on biometric feature
CN109034029A (en) Detect face identification method, readable storage medium storing program for executing and the electronic equipment of living body
JP7428242B2 (en) Authentication device, authentication system, authentication method and authentication program
JP2023549934A (en) Method and apparatus for user recognition
CN112989299A (en) Interactive identity recognition method, system, device and medium
CN115840931A (en) Identity verification method based on face sliding screen interaction and related product
CN113032047A (en) Face recognition system application method, electronic device and storage medium
CN108985035B (en) Control method and device for user operation authority, storage medium and electronic equipment
CN111274602A (en) Image characteristic information replacement method, device, equipment and medium
CN113705428A (en) Living body detection method and apparatus, electronic device, and computer-readable storage medium
CN111989693A (en) Biometric identification method and device
CN110348194A (en) Method of password authentication, device, equipment and storage medium
Cîrlugea et al. Facial Recognition Software for Android Operating Systems
KR101432484B1 (en) User outhenticaion system, apparatus and method for user outhenticaion in the system
JP7248348B2 (en) Face authentication device, face authentication method, and program
US20230237139A1 (en) Device and method for authenticating a user of a first electronic device connected to a second electronic device
CN116881887A (en) Application program login method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 518000 18th floor, building A4, Kexing Science Park, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Zhaolian Consumer Finance Co.,Ltd.

Address before: 518000 18th floor, building A4, Kexing Science Park, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: MERCHANTS UNION CONSUMER FINANCE Co.,Ltd.

Country or region before: China