CN113869906A - Face payment method and device and storage medium - Google Patents

Face payment method and device and storage medium

Info

Publication number
CN113869906A
CN113869906A
Authority
CN
China
Prior art keywords
face
payment
living body
information
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111152752.6A
Other languages
Chinese (zh)
Inventor
闫研
时占
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111152752.6A priority Critical patent/CN113869906A/en
Publication of CN113869906A publication Critical patent/CN113869906A/en
Pending legal-status Critical Current

Classifications

    • G06Q 20/40145: Payment architectures, schemes or protocols; authorisation; transaction verification; biometric identity checks
    • G06F 18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods


Abstract

The present disclosure provides a face payment method and apparatus, and a storage medium, wherein the method includes: performing living body detection on the collected first face to determine a living body detection result; in a case where the living body detection result indicates that the first face belongs to a living body, performing face recognition on the first face to determine a face recognition result; and performing face payment based on the face recognition result. The method and apparatus can screen out faces that do not belong to living bodies during face payment, reducing the economic loss caused by payments made with non-living faces.

Description

Face payment method and device and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a method and an apparatus for face payment, and a storage medium.
Background
At present, with the continuous development of face recognition technology, the accuracy of face recognition is higher and higher, and face recognition is also applied to many scenes, such as face-brushing payment, face recognition entrance guard, intelligent identity authentication, commercial member recognition, people counting, real estate visiting record and the like.
Disclosure of Invention
The disclosure provides a face payment method and device and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a face payment method, the method including: performing living body detection on the collected first face to determine a living body detection result; under the condition that the living body detection result indicates that the first face belongs to a living body, carrying out face recognition on the first face to determine a face recognition result; and carrying out face payment based on the face recognition result.
In some optional embodiments, the performing a living body detection on the acquired first face and determining a living body detection result includes: acquiring at least one set of video data comprising the first face; determining at least one frame of face image in the at least one group of video data; performing living body detection on each frame of face image, and determining a single-frame detection result corresponding to each frame of face image; determining the in-vivo detection result based on at least one of the single-frame detection results.
In some alternative embodiments, each set of video data corresponds to a specified action; the performing living body detection on the collected first face and determining a living body detection result further includes: inputting each group of video data into a corresponding action detection neural network to obtain an action detection result output by each action detection neural network; in a case where each action detection result indicates that the corresponding specified action is completed, executing the step of determining at least one frame of face image in the at least one group of video data; and determining that the living body detection result is that the first face does not belong to a living body when any action detection result indicates that the corresponding specified action is not completed.
In some optional embodiments, the determining at least one frame of a face image in the at least one set of video data comprises at least one of: in each group of video data, determining a frame of face image with the highest image quality in each specified time period to obtain at least one frame of face image; and/or determining a frame of face image with the highest image quality in each group of video data to obtain the at least one frame of face image.
In some optional embodiments, the performing living body detection on each frame of facial image, and determining a single-frame detection result corresponding to each frame of facial image, includes: and inputting each frame of face image into a living body detection neural network, and determining the single-frame detection result output by the living body detection neural network.
In some optional embodiments, the single frame detection result comprises a probability value that the first face does not belong to a living body; the determining the in-vivo detection result based on at least one of the single-frame detection results includes: determining the ratio of the number of the face images with the probability value smaller than or equal to a preset threshold value to the total number of the at least one frame of face images; in response to determining that the ratio is greater than or equal to a preset ratio, determining that the in-vivo detection result is that the first face belongs to a living body.
In some optional embodiments, the performing face recognition on the first face and determining a face recognition result includes: comparing the face information of the first face with associated face information under the condition of executing online payment to determine the face recognition result, wherein the associated face information is the face information associated to a first payment account when face payment is set; or, in case of performing an offline payment, at least one of the following: uploading the face information of the first face to a cloud server; receiving the face recognition result returned after the cloud server compares the face information of the first face with at least one face information included in a face database; uploading the face information of the first face and the information of the third payment account to a cloud server; and receiving the face recognition result returned after the face information of the first face is compared with second target face information included in a face database by the cloud server, wherein the second target face information is the face information associated with a second payment account in the face database.
In some optional embodiments, the method further comprises: when face payment is set, performing living body detection on the collected second face; and under the condition that the second face belongs to the living body, determining the face information of the second face as the associated face information.
In some optional embodiments, the method further comprises at least one of: when the face recognition result indicates that first target face information matched with the face information of the first face exists in the face database, receiving information of a second payment account which is sent by the cloud server and is associated with the first target face information; and receiving information of a third payment password which is sent by the cloud server and is associated with the second target face information when the face recognition result indicates that the face information of the first face is matched with the second target face information.
In some optional embodiments, the making of the face payment based on the face recognition result includes at least one of: when the face recognition result indicates that the face information of the first face is matched with the associated face information, performing online payment under the first payment account based on a pre-stored first payment password corresponding to the first payment account; outputting the second payment account number to enable the offline payment equipment to carry out offline payment based on the second payment account number and the acquired second payment password; and outputting the third payment password so that the offline payment equipment carries out offline payment based on the acquired third payment account and the third payment password.
According to a second aspect of the embodiments of the present disclosure, there is provided a face payment apparatus, including: the living body detection module is used for carrying out living body detection on the collected first face and determining a living body detection result; the face recognition result determining module is used for carrying out face recognition on the first face under the condition that the living body detection result indicates that the first face belongs to a living body, and determining a face recognition result; and the face payment module is used for carrying out face payment based on the face recognition result.
In some optional embodiments, the liveness detection module further comprises: the acquisition sub-module is used for acquiring at least one group of video data comprising the first face; a first determining sub-module, configured to determine at least one frame of face image in the at least one group of video data; the single-frame detection submodule is used for carrying out living body detection on each frame of face image and determining a single-frame detection result corresponding to each frame of face image; a second determining sub-module for determining the in-vivo detection result based on at least one of the single-frame detection results.
In some alternative embodiments, each set of video data corresponds to a specified action; the in-vivo detection module further includes: the third determining submodule is used for inputting each group of video data into a corresponding action detection neural network to obtain an action detection result output by each action detection neural network; the control submodule is used for controlling the first determining submodule to determine at least one frame of face image in the at least one group of video data under the condition that each action detection result indicates that the corresponding specified action is finished; and the fourth determining submodule is used for determining that the living body detection result is that the first face does not belong to a living body under the condition that any action detection result indicates that the corresponding specified action is not finished.
In some optional embodiments, the first determination submodule is further configured to: in each group of video data, determining a frame of face image with the highest image quality in each specified time period to obtain at least one frame of face image; and/or determining a frame of face image with the highest image quality in each group of video data to obtain the at least one frame of face image.
In some optional embodiments, the single frame detection sub-module is further configured to: and inputting each frame of face image into a living body detection neural network, and determining the single-frame detection result output by the living body detection neural network.
In some optional embodiments, the single frame detection result comprises a probability value that the first face does not belong to a living body; the second determination submodule is further configured to: determining the ratio of the number of the face images with the probability value smaller than or equal to a preset threshold value to the total number of the at least one frame of face images; in response to determining that the ratio is greater than or equal to a preset ratio, determining that the in-vivo detection result is that the first face belongs to a living body.
In some optional embodiments, the face recognition result determining module comprises: a fifth determining sub-module, configured to compare, in a case where online payment is performed, face information of the first face with associated face information, and determine the face recognition result, where the associated face information is face information associated with the first payment account when face payment is set; or, in case of performing an offline payment, at least one of the following: the first uploading sub-module is used for uploading the face information of the first face to a cloud server; the first receiving submodule is used for receiving the face recognition result returned after the cloud server compares the face information of the first face with at least one piece of face information included in a face database; the second uploading sub-module is used for uploading the face information of the first face and the information of the third payment account to a cloud server; and the second receiving submodule is used for receiving the face recognition result returned after the face information of the first face is compared with second target face information included in a face database by the cloud server, wherein the second target face information is face information associated with a second payment account in the face database.
In some optional embodiments, the living body detection module is further configured to perform living body detection on the second acquired face when face payment is set; the device further comprises: and the association module is used for determining the face information of the second face as the associated face information under the condition that the second face belongs to the living body.
In some optional embodiments, the apparatus further comprises at least one of: the first receiving module is used for receiving information of a second payment account which is sent by the cloud server and is associated with the first target face information when the face recognition result indicates that the first target face information matched with the face information of the first face exists in the face database; and the second receiving module is used for receiving information of a third payment password which is sent by the cloud server and is associated with the second target face information when the face recognition result indicates that the face information of the first face is matched with the second target face information.
In some optional embodiments, the face payment module is further configured to perform at least one of: when the face recognition result indicates that the face information of the first face is matched with the associated face information, performing online payment under the first payment account based on a pre-stored first payment password corresponding to the first payment account; outputting the second payment account number to enable the offline payment equipment to carry out offline payment based on the second payment account number and the acquired second payment password; and outputting the third payment password so that the offline payment equipment carries out offline payment based on the acquired third payment account and the third payment password.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing any one of the face payment methods described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a face payment apparatus, including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to invoke executable instructions stored in the memory to implement any of the above-described face payment methods.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects:
In the embodiments of the present disclosure, living body detection may first be performed on the collected first face, and in a case where the living body detection result indicates that the first face belongs to a living body, face payment may be performed based on the face information obtained by face recognition. Faces that do not belong to living bodies can thus be screened out during face payment, reducing the economic loss caused by payments made with non-living faces.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a face payment method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flow diagram illustrating another face payment method according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flow diagram illustrating another face payment method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a block diagram of a face payment device according to an exemplary embodiment of the present disclosure;
fig. 5 is a schematic structural diagram illustrating a face payment device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if," as used herein, may be interpreted as "when," "upon," or "in response to determining," depending on the context.
At present, in a face payment scenario, if a face that does not belong to a living body is used for payment, for example, a paper photo, a photo reproduced on a screen, a video produced by synthesis technology, or a 3D face model built from a real face, great economic loss may be caused to the user, and such loss is not easy to recover. To solve this technical problem, the present disclosure provides the following face payment scheme.
For example, as shown in fig. 1, fig. 1 illustrates a face payment method according to an exemplary embodiment, which may be used in a terminal, which may be a mobile phone, a notebook computer, a desktop computer, a smart band, and the like, and includes the following steps 101 to 103:
in step 101, a living body detection is performed on the acquired first face, and a living body detection result is determined.
In this disclosure, a living body refers to an object having life, such as an animal, a plant, a human body and a tissue thereof having vital signs, and in the embodiment of the present disclosure, a living body specifically refers to a human body having vital signs.
In step 102, in a case where the living body detection result indicates that the first face belongs to a living body, face recognition is performed on the first face, and a face recognition result is determined.
In the embodiment of the present disclosure, a face image including a first face may be input to a feature detection neural network, so as to obtain face information of the first face output by the feature detection neural network. The face information of the first face includes, but is not limited to, feature point information of facial features. Further, the face recognition result may be determined based on matching the face information of the first face with the associated face information or face information included in the face database. The associated face information is face information associated to the first payment account when face payment is set.
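The matching described above can be sketched as follows. This is a minimal illustration, assuming the face information is a numeric feature vector and that matching uses cosine similarity against a fixed threshold; the disclosure does not specify the comparison metric, and the function names, the vector representation, and the 0.9 threshold are all hypothetical.

```python
import math

# Illustrative matching step (step 102): compare the feature vector extracted
# from the first face with the associated face information stored when face
# payment was set. Cosine similarity and the 0.9 threshold are assumptions;
# the disclosure only states that the two sets of face information are matched.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recognize(face_features, associated_features, threshold=0.9):
    """Return True when the first face matches the associated face information."""
    return cosine_similarity(face_features, associated_features) >= threshold
```

In an offline flow the same comparison would instead run on the cloud server against the face database, but the per-pair decision is the same shape.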
In step 103, face payment is performed based on the face recognition result.
In the embodiment of the present disclosure, the payment information may be determined based on the face recognition result, so that face payment is performed based on the payment information. The payment information includes, but is not limited to, a payment account number and/or a payment password.
In this embodiment, faces that do not belong to living bodies can be screened out in a face payment scenario, reducing the economic loss caused by payments made with non-living faces.
In some optional embodiments, the in-vivo detection result may be determined in a silent in-vivo detection manner without user cooperation. For example, as shown in fig. 2, step 101 may include the following steps 201 to 204:
in step 201, at least one set of video data comprising the first face is acquired.
In the embodiment of the present disclosure, the number of video data may be one or more groups.
In one possible implementation, no user cooperation is needed when video data is collected: the terminal can directly invoke an image acquisition device on the terminal, for example a camera, to collect one or more groups of video data including the user's face. The duration of each set of video data may be a specified duration, for example 5 seconds or 10 seconds, which is not limited by this disclosure.
In step 202, at least one frame of face image is determined in the at least one set of video data.
In the embodiment of the disclosure, after the image capturing device captures the at least one set of video data, the video data may be passed to a Software Development Kit (SDK) on the terminal, and the SDK determines at least one frame of face image in the at least one group of video data.
In a possible implementation manner, the terminal SDK may determine, from each set of video data, one frame of face image with the highest image quality in each specified time period, to obtain the at least one frame of face image. Alternatively, the duration of the specified time period may be less than the duration of the set of video data, which is not limited by this disclosure.
For example, the terminal may acquire 25 images per second, the duration of a certain set of video data is 5 seconds, that is, the set of video data includes 125 images, the duration of the specified time period is 1 second, and the terminal SDK may select one image with the highest image quality in every 25 images.
In another possible implementation manner, the terminal SDK may directly select one frame of image with the highest image quality from each group of video data, so as to obtain at least one frame of face image.
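Both selection strategies in step 202 reduce to picking the maximum-quality frame over some window (a fixed time period, or the whole group). A minimal sketch, assuming each frame already carries a scalar quality score; the scoring method itself (sharpness, exposure, face pose) is not specified by the disclosure, and `select_best_frames` is a hypothetical helper name:

```python
# Sketch of per-window best-frame selection (step 202). Frames are modeled as
# (quality_score, frame_data) tuples in capture order; quality scoring itself
# is outside this sketch. With the example from the text (25 fps, a 5-second
# group, 1-second windows), 125 frames yield 5 selected face images.

def select_best_frames(frames, fps, window_seconds=1):
    """Pick the highest-quality frame from each window of fps * window_seconds frames."""
    window = fps * window_seconds
    best = []
    for start in range(0, len(frames), window):
        chunk = frames[start:start + window]
        best.append(max(chunk, key=lambda f: f[0]))  # highest quality in window
    return best
```

Selecting one frame for the entire group is the special case where the window spans the whole video.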
In step 203, performing living body detection on each frame of face image, and determining a single-frame detection result corresponding to each frame of face image.
In the embodiment of the disclosure, the terminal SDK may input each frame of face image into a pre-trained in-vivo detection neural network, and determine a single-frame detection result corresponding to each frame of face image output by the in-vivo detection neural network.
The living body detection neural network may be obtained by a deep learning method: an initial neural network is trained on sample data so that it learns, across multiple dimensions, the features and differences between living and non-living bodies that are observable to the human eye, including but not limited to screen reflections and the edge characteristics of photo paper, yielding the trained living body detection neural network. The initial neural network may be, for example, GoogLeNet, VGG (Visual Geometry Group network), a residual neural network, or the like.
In the actual application stage, each frame of face image can be input into the living body detection neural network to obtain the single-frame detection result output by the network. In this way, attacks using printed pictures, video replays, mask models, and the like can be effectively prevented.
In step 204, the living body detection result is determined based on at least one of the single-frame detection results.
In an embodiment of the present disclosure, the single-frame detection result includes a probability value that the first face does not belong to a living body.
Further, the ratio of the number of face images, among the at least one frame of face image, whose probability value is less than or equal to a preset threshold to the total number of the at least one frame of face image may be determined. If the ratio is greater than or equal to a preset ratio, it may be determined that the living body detection result is that the first face belongs to a living body. Otherwise, if the ratio is less than the preset ratio, it is determined that the living body detection result is that the first face does not belong to a living body.
In the embodiment of the present disclosure, the larger the probability value is, the higher the possibility that the first face does not belong to a living body; the smaller the probability value, the greater the likelihood that the first face belongs to a living body. The preset ratio may fall within the range (0, 100%], which is not limited by the present disclosure.
If there is only one frame of face image, the single-frame detection result corresponding to that frame may be used directly as the required living body detection result. For example, if the preset threshold is 0.8 and the single-frame detection result is 0.99, the living body detection result may be regarded as indicating that the first face does not belong to a living body; if the single-frame detection result is 0.7, the living body detection result may be regarded as indicating that the first face belongs to a living body.
If there are multiple frames of face images, assuming the preset ratio is 60% and the total number of face images is 5: if the probability values indicated by the single-frame detection results of 3 or more frames are all less than or equal to the preset threshold, the living body detection result may be regarded as indicating that the first face belongs to a living body. Conversely, if the probability values indicated by the single-frame detection results of 3 or more frames are greater than the preset threshold, the living body detection result may be regarded as indicating that the first face does not belong to a living body.
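The decision rule worked through in the examples above can be sketched as follows, assuming the per-frame non-living probabilities are already available from the living body detection neural network. The threshold 0.8 and ratio 60% follow the examples in the text; `aggregate_liveness` is a hypothetical name:

```python
# Sketch of the multi-frame aggregation in step 204: a frame "votes live" when
# its non-living probability is at or below the preset threshold, and the face
# is accepted as living when the share of live votes reaches the preset ratio.
# The single-frame case falls out naturally as a list of length one.

def aggregate_liveness(non_live_probs, threshold=0.8, ratio=0.6):
    """Return True when the first face is judged to belong to a living body."""
    live_votes = sum(1 for p in non_live_probs if p <= threshold)
    return live_votes / len(non_live_probs) >= ratio
```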
In this embodiment, the living body detection can be completed automatically by the terminal without user cooperation during the process of determining the living body detection result, and is simple and convenient to implement and highly usable.
In some optional embodiments, the user may also perform a specified action in cooperation, and determine the living body detection result in an interactive living body detection manner. For example, as shown in fig. 3, step 101 may include the following steps 301 to 306:
in step 301, at least one set of video data comprising the first face is acquired.
In the embodiments of the present disclosure, user cooperation is required when video data is collected. The terminal may output prompt information that prompts the user to perform a specified action, including but not limited to nodding, blinking, shaking the head, opening the mouth, and the like. Each time the application outputs one piece of prompt information, the terminal acquires one corresponding group of video data, finally obtaining one or more groups of video data.
In step 302, each set of video data is input into a corresponding action detection neural network, so as to obtain an action detection result output by each action detection neural network.
In the embodiment of the present disclosure, the action detection neural network corresponding to each designated action may be trained in advance, and each group of video data is input to the corresponding action detection neural network through the terminal SDK, so as to obtain an action detection result output by each action detection neural network.
For example, the terminal outputs prompt information 1 through the application to prompt the user to perform the nodding action, and calls the image acquisition device to collect a group of video data corresponding to the nodding action. The terminal SDK acquires this group of video data and inputs it into the pre-trained nodding action detection neural network to obtain the action detection result output by that network. The terminal likewise outputs prompt information 2 through the application to prompt the user to perform the blinking action, and calls the image acquisition device to collect a group of video data corresponding to the blinking action. The terminal SDK acquires this group of video data and inputs it into the pre-trained blink action detection neural network to obtain the action detection result output by that network. And so on, until the action detection result output by each action detection neural network is obtained.
In the embodiment of the present disclosure, the action detection result includes the degree to which the user completes the specified action. If the completion degree exceeds the threshold corresponding to the specified action, the terminal SDK determines that the action detection result is that the specified action is completed; otherwise, it determines that the specified action is not completed.
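A minimal sketch of this check follows; the action names and per-action thresholds are hypothetical placeholders, since the disclosure leaves the threshold values open:

```python
# Hypothetical per-action completion thresholds (not fixed by the disclosure).
ACTION_THRESHOLDS = {"nod": 0.9, "blink": 0.8, "shake_head": 0.9, "open_mouth": 0.85}

def action_completed(action, completion_degree):
    """Return True when the completion degree exceeds the action's threshold."""
    return completion_degree > ACTION_THRESHOLDS[action]
```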
Step 303 is performed when each action detection result indicates that the corresponding specified action is completed; otherwise, step 306 is performed when any action detection result indicates that the corresponding specified action is not completed.
In step 303, at least one frame of face image is determined in the at least one set of video data.
The implementation of step 303 is similar to that of step 202, and is not described herein again.
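Selecting one best frame per group of video data can be sketched as follows, where `quality` stands in for an assumed image-quality scoring function (for example, a sharpness metric); the disclosure only requires choosing the frame with the highest image quality:

```python
def select_best_frames(video_groups, quality):
    """Pick the highest-quality frame from each group of video data."""
    return [max(group, key=quality) for group in video_groups]
```

Usage with toy stand-in frames (numbers scored by an identity function): `select_best_frames([[1, 5, 3], [2, 9]], quality=lambda f: f)` selects one frame per group.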
In step 304, a living body detection is performed on each frame of face image, and a single frame detection result corresponding to each frame of face image is determined.
In the embodiment of the present disclosure, the manner of determining the single frame detection result is similar to that of step 203, and is not described herein again.
In step 305, the living body detection result is determined based on at least one of the single-frame detection results.
In the embodiment of the present disclosure, the manner of determining the living body detection result is similar to that of step 204, and is not described herein again.
In step 306, it is determined that the living body detection result is that the first face belongs to a non-living body.
In the embodiment of the present disclosure, if it is determined that any one of the specified actions is not completed, it may be directly determined that the living body detection result is that the first face belongs to a non-living body.
The above scheme is further exemplified as follows.
Assume that 4 specified actions are provided. The terminal outputs the corresponding prompt information through the application and schedules the image acquisition device, thereby obtaining video data corresponding to each of the 4 specified actions, that is, 4 groups of video data. The terminal SDK inputs each group of video data into the corresponding action detection neural network to obtain the action detection result output by each network. When all 4 action detection results indicate that the corresponding specified action is completed, the terminal SDK determines the frame of face image with the highest image quality in each group of video data.
The 4 frames of face images are input into a pre-trained living body detection neural network to obtain the single-frame detection results output by the network. Assuming that the probability values that the first face does not belong to a living body are 0.7, 0.6, 0.5, and 0.85 respectively, the preset threshold is 0.8, and the preset ratio is 70%, the terminal SDK determines that the number of face images whose probability value is smaller than the preset threshold is 3, and that the ratio to the total number is 75%, which exceeds the preset ratio. The terminal SDK therefore determines that the living body detection result is that the first face belongs to a living body.
In the case where any one of the 4 action detection results indicates that the corresponding specified action has not been completed, the terminal SDK directly determines that the living body detection result is that the first face does not belong to a living body.
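Steps 301 to 306 can be combined into one sketch. `action_results` holds one boolean per prompted action and `frame_probs` the per-frame non-liveness probabilities; the parameter values mirror the worked example above and are illustrative only:

```python
def interactive_liveness(action_results, frame_probs, threshold=0.8, min_ratio=0.7):
    """Interactive liveness decision (steps 301-306).

    Any unfinished specified action immediately yields a non-living
    result (step 306); otherwise the decision falls back to the ratio
    of frames whose non-liveness probability is below the threshold.
    """
    if not all(action_results):
        return False
    passed = sum(1 for p in frame_probs if p < threshold)
    return passed / len(frame_probs) > min_ratio
```

With the worked example (all 4 actions completed, probabilities 0.7, 0.6, 0.5, 0.85), 3 of 4 frames pass, 75% exceeds 70%, and the first face is judged live.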
In the embodiment, the living body detection can be performed in an interactive mode under the condition that the specified action is determined to be completed, so that the accuracy of the living body detection is further improved, and the usability is high.
In some optional embodiments, when face payment is performed based on the recognized face information, two cases can be distinguished: online payment and offline payment.
The first case: online payment.
Step 102 may include the following step 102-1:
in step 102-1, in the case of performing online payment, the face information of the first face is compared with the associated face information to determine the face recognition result.
In the embodiment of the present disclosure, the associated face information is face information associated with a first payment account when face payment is set. The first payment account may be an online payment account set in a different application.
In one possible implementation, a 1:1 comparison is performed between the face information of the first face and the associated face information to determine whether they match, thereby obtaining the face recognition result.
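As one possible sketch of such a 1:1 comparison, face information could be represented as feature vectors compared by cosine similarity; both this representation and the preset value of 0.85 are assumptions for illustration, since the disclosure only requires the matching degree to reach a preset value:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def one_to_one_match(first_face_feat, associated_feat, preset_value=0.85):
    """1:1 comparison: match when the matching degree reaches the preset value."""
    return cosine_similarity(first_face_feat, associated_feat) >= preset_value
```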
Accordingly, step 103 may include the following step 103-1:
in step 103-1, when the face recognition result indicates that the face information of the first face matches the associated face information, performing online payment based on a pre-stored first payment password corresponding to the first payment account under the first payment account.
In the embodiment of the present disclosure, the matching of the first face information and the associated face information means that the face information of the first face is completely matched with the associated face information, or the matching degree exceeds a preset value.
Further, under the first payment account, on-line payment is carried out based on a pre-stored first payment password corresponding to the first payment account.
In the above embodiment, online payment can be completed by replacing the first payment password with the face information of a first face that belongs to a living body, reducing the economic loss caused by payment with a face that does not belong to a living body.
The second case: offline payment.
In a first scheme, in an offline payment scenario, the payment account is replaced by the face information of the first face.
Step 102 may include the following steps 102-2 through 102-3:
in step 102-2, the face information of the first face is uploaded to a cloud server in the case of performing offline payment.
In step 102-3, the face recognition result returned after the cloud server compares the face information of the first face with at least one piece of face information included in a face database is received.
In this embodiment of the disclosure, the terminal may upload the face information of the first face to the cloud server, and the cloud server performs a 1:N comparison between the received face information of the first face and at least one piece of face information included in the pre-stored face database to determine the face recognition result. Further, the cloud server may feed the face recognition result back to the terminal.
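A server-side 1:N comparison could be sketched as below; the feature-vector representation, similarity metric, and threshold are again illustrative assumptions rather than details fixed by the disclosure:

```python
import math

def _similarity(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def one_to_n_lookup(first_face_feat, face_database, min_similarity=0.85):
    """1:N comparison against the face database.

    Returns the identifier of the best match above the threshold, or
    None when no stored face information matches the first face.
    """
    best_id, best_sim = None, min_similarity
    for face_id, stored_feat in face_database.items():
        sim = _similarity(first_face_feat, stored_feat)
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    return best_id
```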
Accordingly, the above method may further include the following step 104:
in step 104, when the face recognition result indicates that first target face information matching with the face information of the first face exists in the face database, receiving information of a second payment account associated with the first target face information, which is sent by the cloud server.
In the embodiment of the disclosure, the cloud server may send the pre-stored information of the second payment account associated with the first target face information to the terminal when it determines that the face information of the first face matches the first target face information in the face database.
Accordingly, step 103 may include the following step 103-3:
in step 103-3, the second payment account is output, so that the offline payment device performs offline payment based on the second payment account and the acquired second payment password.
In the embodiment of the disclosure, the terminal may output the second payment account, so that the face information of the first face takes the place of direct entry of the second payment account. The offline payment device then completes the offline payment based on the second payment account output by the terminal and the second payment password input by the user on the offline payment device.

In a second scheme, in an offline payment scenario, the payment password is replaced by the face information of the first face.
Step 102 may include the following steps 102-4 through 102-5:
in step 102-4, in the case of performing offline payment, the face information of the first face and the information of the third payment account are uploaded to a cloud server.
In the embodiment of the disclosure, under the condition that the face information of the first face replaces the payment password, the face information of the first face and the information of the third payment account can be uploaded to the cloud server.
In step 102-5, the face recognition result returned after the cloud server compares the face information of the first face with the second target face information included in the face database is received.
In this embodiment of the present disclosure, the second target face information is the face information associated with the third payment account in the face database. The cloud server may determine, through the information of the third payment account, the second target face information associated with that account in the face database. Further, a 1:1 comparison is performed between the face information of the first face and the second target face information to determine the face recognition result, which is fed back to the terminal.
Correspondingly, the method further comprises the following step 105:
in step 105, when the face recognition result indicates that the face information of the first face matches the second target face information, receiving information of a third payment password associated with the second target face information, which is sent by the cloud server.
Further, the above step 103 may include the following step 103-4:
in step 103-4, the third payment password is output, so that the offline payment device performs offline payment based on the acquired third payment account and the third payment password.
In the embodiment of the present disclosure, the terminal may output the third payment password, so that the face information of the first face takes the place of direct entry of the third payment password. The offline payment device acquires the third payment account input by the user and the third payment password output by the terminal, thereby completing the offline payment.
The cloud server provides the third payment password to the terminal, the user provides the third payment account, such as a bank card number, on the offline payment device, and the offline payment device completes the offline payment based on the third payment account and the third payment password.
In this embodiment, face information belonging to a living body can replace the payment account or the payment password to complete the offline payment, reducing the economic loss caused by payment with a face that does not belong to a living body.
In a third scheme, in an offline payment scenario, the face information of the first face replaces both the payment account and the payment password.
In the embodiment of the present disclosure, the offline payment may also be completed by replacing both the payment account and the payment password with face information belonging to a living body. In practice, however, replacing both at the same time carries a higher risk: a single misjudgment can easily cause economic loss to the user. Therefore, in an offline payment scenario, face information belonging to a living body generally replaces only the payment account or only the payment password to complete the offline payment.

In some optional embodiments, the present disclosure may further perform living body detection on the second face acquired when face payment is set. The living body detection method is similar to the above-mentioned living body detection for the acquired first face and is not described here again.
When the second face belongs to a living body, the face information of the second face is determined as the associated face information.
In the above embodiment, when the face payment is set, the associated face information associated with the first payment account may be determined, so that economic loss caused by an illegal user registering a payment account using a face belonging to a non-living body is reduced.
Corresponding to the foregoing method embodiments, the present disclosure also provides embodiments of an apparatus.
As shown in fig. 4, fig. 4 is a block diagram of a face payment apparatus according to an exemplary embodiment of the present disclosure, the apparatus including: the living body detection module 401 is configured to perform living body detection on the acquired first face and determine a living body detection result;
a face recognition module 402, configured to perform face recognition on the first face to determine a face recognition result when the living body detection result indicates that the first face belongs to a living body;
a face payment module 403, configured to perform face payment based on the face recognition result.
In some optional embodiments, the liveness detection module further comprises: the acquisition sub-module is used for acquiring at least one group of video data comprising the first face; a first determining sub-module, configured to determine at least one frame of face image in the at least one group of video data; the single-frame detection submodule is used for carrying out living body detection on each frame of face image and determining a single-frame detection result corresponding to each frame of face image; a second determining sub-module for determining the in-vivo detection result based on at least one of the single-frame detection results.
In some alternative embodiments, each set of video data corresponds to a specified action; the in-vivo detection module further includes: the third determining submodule is used for inputting each group of video data into a corresponding action detection neural network to obtain an action detection result output by each action detection neural network; the control submodule is used for controlling the first determining submodule to determine at least one frame of face image in the at least one group of video data under the condition that each action detection result indicates that the corresponding specified action is finished; and the fourth determining submodule is used for determining that the living body detection result is that the first face does not belong to a living body under the condition that any action detection result indicates that the corresponding specified action is not finished.
In some optional embodiments, the first determination submodule is further configured to: in each group of video data, determining a frame of face image with the highest image quality in each specified time period to obtain at least one frame of face image; and/or determining a frame of face image with the highest image quality in each group of video data to obtain the at least one frame of face image.
In some optional embodiments, the single frame detection sub-module is further configured to: and inputting each frame of face image into a living body detection neural network, and determining the single-frame detection result output by the living body detection neural network.
In some optional embodiments, the single frame detection result comprises a probability value that the first face does not belong to a living body; the second determination submodule is further configured to: determining the ratio of the number of the face images with the probability value smaller than or equal to a preset threshold value to the total number of the at least one frame of face images; in response to determining that the ratio is greater than or equal to a preset ratio, determining that the in-vivo detection result is that the first face belongs to a living body.
In some optional embodiments, the face recognition result determining module comprises: a fifth determining sub-module, configured to compare, in a case where online payment is performed, face information of the first face with associated face information, and determine the face recognition result, where the associated face information is face information associated with the first payment account when face payment is set; or, in case of performing an offline payment, at least one of the following: the first uploading sub-module is used for uploading the face information of the first face to a cloud server; the first receiving submodule is used for receiving the face recognition result returned after the cloud server compares the face information of the first face with at least one piece of face information included in a face database; the second uploading sub-module is used for uploading the face information of the first face and the information of the third payment account to a cloud server; and the second receiving submodule is used for receiving the face recognition result returned after the face information of the first face is compared with second target face information included in a face database by the cloud server, wherein the second target face information is face information associated with a second payment account in the face database.
In some optional embodiments, the living body detection module is further configured to perform living body detection on the second acquired face when face payment is set; the device further comprises: and the association module is used for determining the face information of the second face as the associated face information under the condition that the second face belongs to the living body.
In some optional embodiments, the apparatus further comprises at least one of: the first receiving module is used for receiving information of a second payment account which is sent by the cloud server and is associated with the first target face information when the face recognition result indicates that the first target face information matched with the face information of the first face exists in the face database; and the second receiving module is used for receiving information of a third payment password which is sent by the cloud server and is associated with the second target face information when the face recognition result indicates that the face information of the first face is matched with the second target face information.
In some optional embodiments, the face payment module is further configured to perform at least one of: when the face recognition result indicates that the face information of the first face is matched with the associated face information, performing online payment under the first payment account based on a pre-stored first payment password corresponding to the first payment account; outputting the second payment account number to enable the offline payment equipment to carry out offline payment based on the second payment account number and the acquired second payment password; and outputting the third payment password so that the offline payment equipment carries out offline payment based on the acquired third payment account and the third payment password.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the disclosure also provides a computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is used for executing any one of the above-mentioned face payment methods.
In some optional embodiments, the disclosed embodiments provide a computer program product comprising computer readable code which, when run on a device, a processor in the device executes instructions for implementing a face payment method as provided in any one of the above embodiments.
In some optional embodiments, the present disclosure further provides another computer program product for storing computer readable instructions, which when executed, cause a computer to execute the face payment method provided in any one of the above embodiments.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment the computer program product is embodied as a computer storage medium, in another alternative embodiment the computer program product is embodied as a software product, such as an SDK or the like.
The embodiment of the present disclosure further provides a face payment device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke executable instructions stored in the memory to implement any of the above-described face payment methods.
Fig. 5 is a schematic diagram of a hardware structure of a face payment apparatus according to an embodiment of the present disclosure. The face payment device 510 includes a processor 511 and may further include an input device 512, an output device 513, and a memory 514. The input device 512, the output device 513, the memory 514 and the processor 511 are connected to each other via a bus.
The memory includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), which is used for storing instructions and data.
The input means are for inputting data and/or signals and the output means are for outputting data and/or signals. The output means and the input means may be separate devices or may be an integral device.
The processor may include one or more processors, for example, one or more Central Processing Units (CPUs), and in the case of one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The memory is used to store program codes and data of the network device.
The processor is used for calling the program codes and data in the memory and executing the steps in the method embodiment. Specifically, reference may be made to the description of the method embodiment, which is not repeated herein.
It will be appreciated that figure 5 shows only a simplified design of a face payment device. In practical applications, the face payment device may further include other necessary components, including but not limited to any number of input/output devices, processors, controllers, memories, etc., and all face payment devices that can implement the embodiments of the present disclosure are within the scope of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (13)

1. A face payment method, comprising:
performing living body detection on the collected first face to determine a living body detection result;
under the condition that the living body detection result indicates that the first face belongs to a living body, carrying out face recognition on the first face to determine a face recognition result;
and carrying out face payment based on the face recognition result.
2. The method according to claim 1, wherein the performing the in-vivo detection on the acquired first face and determining the in-vivo detection result comprises:
acquiring at least one set of video data comprising the first face;
determining at least one frame of face image in the at least one group of video data;
performing living body detection on each frame of face image, and determining a single-frame detection result corresponding to each frame of face image;
determining the in-vivo detection result based on at least one of the single-frame detection results.
3. The method of claim 2, wherein each set of video data corresponds to a specified action;
wherein the performing the living body detection on the acquired first face and determining the living body detection result further comprises:
inputting each group of video data into a corresponding action detection neural network to obtain an action detection result output by each action detection neural network;
under the condition that each action detection result indicates that the corresponding specified action is completed, executing the step of determining at least one frame of face image in the at least one group of video data;
and determining that the living body detection result is that the first face does not belong to a living body when any action detection result indicates that the corresponding specified action is not completed.
4. The method according to claim 2 or 3, wherein the determining at least one frame of face image in the at least one set of video data comprises at least one of:
in each group of video data, determining a frame of face image with the highest image quality in each specified time period to obtain the at least one frame of face image; and/or,
and determining a frame of face image with the highest image quality in each group of video data to obtain at least one frame of face image.
5. The method according to claim 2 or 3, wherein the performing the living body detection on each frame of face image and determining the single-frame detection result corresponding to each frame of face image comprises:
and inputting each frame of face image into a living body detection neural network, and determining the single-frame detection result output by the living body detection neural network.
6. The method according to any one of claims 2 to 5, wherein the single frame detection result includes a probability value that the first face does not belong to a living body;
the determining the in-vivo detection result based on at least one of the single-frame detection results includes:
determining the ratio of the number of the face images with the probability value smaller than or equal to a preset threshold value to the total number of the at least one frame of face images;
in response to determining that the ratio is greater than or equal to a preset ratio, determining that the in-vivo detection result is that the first face belongs to a living body.
7. The method according to any one of claims 1 to 6, wherein the performing face recognition on the first face to determine a face recognition result comprises:
comparing the face information of the first face with associated face information under the condition of executing online payment to determine the face recognition result, wherein the associated face information is the face information associated to a first payment account when face payment is set;
or, in case of performing an offline payment, at least one of the following:
uploading the face information of the first face to a cloud server; receiving the face recognition result returned after the cloud server compares the face information of the first face with at least one face information included in a face database;
uploading the face information of the first face and the information of the third payment account to a cloud server; and receiving the face recognition result returned after the face information of the first face is compared with second target face information included in a face database by the cloud server, wherein the second target face information is the face information associated with a second payment account in the face database.
8. The method of claim 7, further comprising:
when face payment is set, performing living body detection on the collected second face;
and under the condition that the second face belongs to the living body, determining the face information of the second face as the associated face information.
9. The method of claim 7, further comprising at least one of:
when the face recognition result indicates that first target face information matching the face information of the first face exists in the face database, receiving information of a second payment account which is sent by the cloud server and is associated with the first target face information;
and when the face recognition result indicates that the face information of the first face matches the second target face information, receiving a third payment password which is sent by the cloud server and is associated with the second target face information.
10. The method of claim 9, wherein the face payment based on the face recognition result comprises at least one of:
when the face recognition result indicates that the face information of the first face matches the associated face information, performing online payment under the first payment account based on a pre-stored first payment password corresponding to the first payment account;
outputting the second payment account so that an offline payment device performs offline payment based on the second payment account and an acquired second payment password;
and outputting the third payment password so that the offline payment device performs offline payment based on an acquired third payment account and the third payment password.
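As a non-authoritative sketch, the online branch of claim 7 and the two offline branches of claims 9 and 10 can be wired together as below. `StubCloud`, the account identifiers, the passwords, and the string-equality face comparison are all invented for the example and are not APIs or values from the application.

```python
class StubCloud:
    """Toy stand-in for the cloud server of the offline branches (assumption)."""

    def __init__(self, face_db):
        # face_db maps payment account -> (enrolled face info, payment password).
        self.face_db = face_db

    def lookup_account(self, face_info):
        # Offline branch A: search the whole face database for first target
        # face information matching the uploaded face.
        for account, (face, _password) in self.face_db.items():
            if face == face_info:
                return account
        return None

    def verify_and_get_password(self, face_info, account):
        # Offline branch B: compare only against the second target face
        # information bound to the uploaded third payment account.
        face, password = self.face_db.get(account, (None, None))
        return password if face == face_info else None


def face_pay(face_info, mode, ctx):
    """Dispatch over the claimed branches; all ctx keys are assumptions."""
    if mode == "online":
        # Online branch: local comparison with the associated face
        # information of the first payment account.
        if face_info == ctx["associated_face"]:
            return ("online", ctx["first_account"], ctx["first_password"])
        return ("rejected", None, None)
    if ctx.get("third_account") is None:
        # Offline branch A: the cloud returns the second payment account;
        # the offline device then acquires the second payment password.
        return ("offline", ctx["cloud"].lookup_account(face_info), None)
    # Offline branch B: the cloud returns the third payment password
    # for the uploaded third payment account.
    password = ctx["cloud"].verify_and_get_password(face_info, ctx["third_account"])
    return ("offline", ctx["third_account"], password)


cloud = StubCloud({"acct-2": ("face-A", "pw-2"), "acct-3": ("face-B", "pw-3")})
ctx = {"associated_face": "face-A", "first_account": "acct-1",
       "first_password": "pw-1", "cloud": cloud}
print(face_pay("face-A", "online", ctx))   # ('online', 'acct-1', 'pw-1')
print(face_pay("face-A", "offline", ctx))  # ('offline', 'acct-2', None)
print(face_pay("face-B", "offline", {**ctx, "third_account": "acct-3"}))
```

The split mirrors the claims: only branch B sends an account to the server, so only branch B can receive a password back.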
11. A face payment device, comprising:
a living body detection module configured to perform living body detection on a collected first face and determine a living body detection result;
a face recognition result determining module configured to perform face recognition on the first face and determine a face recognition result in the case that the living body detection result indicates that the first face belongs to a living body;
and a face payment module configured to perform face payment based on the face recognition result.
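A minimal sketch of this three-module structure, assuming simple callables stand in for the living body detection, recognition, and payment modules; the class and all injected functions are illustrative, not from the application.

```python
class FacePaymentDevice:
    """Illustrative composition of the three claimed modules (assumption)."""

    def __init__(self, liveness_module, recognition_module, payment_module):
        self.liveness_module = liveness_module
        self.recognition_module = recognition_module
        self.payment_module = payment_module

    def handle(self, first_face):
        # Living body detection module: check the collected first face.
        if not self.liveness_module(first_face):
            return "rejected: not a living body"
        # Face recognition result determining module: runs only when the
        # living body detection result indicates a living body.
        recognition_result = self.recognition_module(first_face)
        # Face payment module: pay based on the recognition result.
        return self.payment_module(recognition_result)


device = FacePaymentDevice(
    liveness_module=lambda face: face.get("live", False),
    recognition_module=lambda face: face["id"],
    payment_module=lambda result: f"paid as {result}",
)
print(device.handle({"id": "user-1", "live": True}))   # paid as user-1
print(device.handle({"id": "user-2", "live": False}))  # rejected: not a living body
```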
12. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the face payment method of any one of claims 1 to 10.
13. A face payment device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke executable instructions stored in the memory to implement the face payment method of any one of claims 1 to 10.
CN202111152752.6A 2021-09-29 2021-09-29 Face payment method and device and storage medium Pending CN113869906A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152752.6A CN113869906A (en) 2021-09-29 2021-09-29 Face payment method and device and storage medium

Publications (1)

Publication Number Publication Date
CN113869906A 2021-12-31

Family

ID=79000502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152752.6A Pending CN113869906A (en) 2021-09-29 2021-09-29 Face payment method and device and storage medium

Country Status (1)

Country Link
CN (1) CN113869906A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229325A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Method for detecting human face and system, electronic equipment, program and medium
CN108764887A (en) * 2018-07-10 2018-11-06 中化能源科技有限公司 Multi-functional payment machine for gas station
CN109615387A (en) * 2018-11-29 2019-04-12 中国科学院重庆绿色智能技术研究院 A kind of consumption and payment system and method based on recognition of face
CN110287671A (en) * 2019-06-27 2019-09-27 深圳市商汤科技有限公司 Verification method and device, electronic equipment and storage medium
CN110443619A (en) * 2019-08-05 2019-11-12 中国工商银行股份有限公司 Method of payment and device by bus based on iris recognition and face recognition
CN111242090A (en) * 2020-01-22 2020-06-05 腾讯科技(深圳)有限公司 Human face recognition method, device, equipment and medium based on artificial intelligence
CN111275448A (en) * 2020-02-22 2020-06-12 腾讯科技(深圳)有限公司 Face data processing method and device and computer equipment
CN111368601A (en) * 2018-12-26 2020-07-03 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and computer-readable storage medium
CN111767760A (en) * 2019-04-01 2020-10-13 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN111915305A (en) * 2019-05-10 2020-11-10 腾讯科技(深圳)有限公司 Payment method, device, equipment and storage medium
CN112183167A (en) * 2019-07-04 2021-01-05 钉钉控股(开曼)有限公司 Attendance checking method, authentication method, living body detection method, device and equipment

Similar Documents

Publication Publication Date Title
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
US10319130B2 (en) Anonymization of facial images
CN110728255B (en) Image processing method, image processing device, electronic equipment and storage medium
US10275672B2 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
US9852327B2 (en) Head-pose invariant recognition of facial attributes
CN108470169A (en) Face identification system and method
CN108875452A (en) Face identification method, device, system and computer-readable medium
CN108920640B (en) Context obtaining method and device based on voice interaction
US20140328547A1 (en) Anonymization of facial expressions
WO2016084071A1 (en) Systems and methods for recognition of faces e.g. from mobile-device-generated images of faces
CN111310705A (en) Image recognition method and device, computer equipment and storage medium
CN109358922A (en) A kind of personalized menu methods of exhibiting, device, intelligent terminal and storage medium
CN106471440A (en) Eye tracking based on efficient forest sensing
CN111914811B (en) Image data processing method, image data processing device, computer equipment and storage medium
CN108229375B (en) Method and device for detecting face image
US11295117B2 (en) Facial modelling and matching systems and methods
CN111095268A (en) User identity identification method and device and electronic equipment
CN208351494U (en) Face identification system
CN111242714A (en) Product recommendation method and device
CN113516167A (en) Biological feature recognition method and device
CN109377577A (en) A kind of Work attendance method based on recognition of face, system and storage device
KR102581415B1 (en) UBT system using face contour recognition AI to prevent the cheating behaviour and method thereof
CN108334821B (en) Image processing method and electronic equipment
CN110675312A (en) Image data processing method, image data processing device, computer equipment and storage medium
CN113869906A (en) Face payment method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination