CN110991433B - Face recognition method, device, equipment and storage medium - Google Patents


Info

Publication number
CN110991433B
Authority
CN
China
Prior art keywords
user
face
feature data
face brushing
neural network
Prior art date
Legal status
Active
Application number
CN202010142025.0A
Other languages
Chinese (zh)
Other versions
CN110991433A (en)
Inventor
翁祖建
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010142025.0A
Publication of CN110991433A
Application granted
Publication of CN110991433B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present specification relate to a face recognition method, apparatus, device, and storage medium. One of the methods comprises: acquiring the facial features of a first user; retrieving, in a first face brushing database, a second user corresponding to the facial features; acquiring the feature data of the second user, the feature data of the face brushing terminal, and the associated feature data between the second user and the face brushing terminal; and identifying, according to the feature data, the consistency between the second user and the first user through a pre-constructed recognition model.

Description

Face recognition method, device, equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to the field of computers, and in particular, to a face recognition method, a face recognition apparatus, a face recognition device, and a computer-readable storage medium.
Background
With the rapid development of the internet, more and more services need to verify a user's identity, both to provide personalized services and to ensure the security of the user's service information.
At present, to balance the convenience and security of identity verification, the modes of identifying a user's identity keep evolving and new modes keep emerging: identifying the user by account number and password, by fingerprint, by iris, or by face. After the user's identity is verified in one of these ways, corresponding services, such as payment services, are provided.
To ensure the accuracy of face recognition, a scheme capable of accurately recognizing a face is needed; the embodiments of the present specification provide such a scheme.
Disclosure of Invention
The embodiment of the specification provides a new technical scheme of face recognition.
According to a first aspect of the present description, there is provided an embodiment of a face recognition method, including:
acquiring the face features of a first user;
according to the face features, a second user corresponding to the face features is searched in a first face brushing database;
acquiring feature data of a face brushing terminal for acquiring the face features, feature data of the second user and associated feature data between the second user and the face brushing terminal;
and identifying the consistency of the second user and the first user through an identification model which is constructed in advance based on a graph neural network and a linear neural network according to the feature data of the face brushing terminal for collecting the face features, the feature data of the second user and the associated feature data between the second user and the face brushing terminal.
Optionally, before retrieving, according to the facial features, a second user corresponding to the facial features in the first face brushing database, the method further includes:
determining that no second user corresponding to the facial features is found in a second face brushing database.
Optionally, retrieving, according to the facial features, a second user corresponding to the facial features in a first face brushing database, including:
retrieving a user set corresponding to the face features according to the face features;
reordering the users in the user set according to the characteristic data of the face brushing terminal;
a second user is determined within the reordered set of users.
Optionally, the graph neural network is constructed by feature data of the sample user, feature data of the sample face brushing terminal and associated feature data between the sample user and the sample face brushing terminal.
Optionally, identifying, according to feature data of a face brushing terminal for collecting the facial features, feature data of the second user, and associated feature data between the second user and the face brushing terminal, consistency between the second user and the first user through an identification model constructed in advance based on a graph neural network and a linear neural network, includes:
determining a deep semi-structured graph feature through a graph neural network contained in the recognition model according to the feature data of the face brushing terminal, the feature data of the second user and the associated feature data between the face brushing terminal and the second user;
determining a structural feature through a linear neural network contained in the recognition model according to the feature data of the face brushing terminal, the feature data of the second user and the associated feature data between the face brushing terminal and the second user;
determining a probability value corresponding to the second user according to the depth semi-structured graph feature and the structured feature;
and identifying the consistency of the second user and the first user according to the probability value corresponding to the second user.
Optionally, identifying consistency between the second user and the first user according to the probability value corresponding to the second user includes:
when the probability value corresponding to the second user exceeds a preset first threshold value, identifying that the second user is consistent with the first user;
when the probability value corresponding to the second user does not exceed a preset first threshold value and exceeds a preset second threshold value, identifying that a first difference exists between the second user and the first user;
and when the probability value corresponding to the second user does not exceed a preset second threshold value, identifying that a second difference exists between the second user and the first user.
Optionally, the method further comprises:
when the second user is identified to be consistent with the first user, processing corresponding service for the first user;
when the second user is identified to have a first difference with the first user, prompting the user to input a verification number with a first designated digit, and processing corresponding services according to the verification number input by the user;
and when a second difference exists between the second user and the first user, prompting the user to input a verification number of a second designated number, and processing corresponding services according to the verification number input by the user.
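The three-tier decision described in the optional embodiments above can be sketched as follows. This is a minimal illustration: the threshold values, the function name, and the returned action labels are hypothetical, not taken from the specification.

```python
def decide_action(probability: float,
                  first_threshold: float = 0.9,
                  second_threshold: float = 0.5) -> str:
    """Map the probability value for the retrieved second user onto the
    three outcomes described above. Threshold values are illustrative."""
    if probability > first_threshold:
        # Second user identified as consistent with the first user:
        # process the corresponding service directly.
        return "process_service"
    if probability > second_threshold:
        # First difference: prompt for a verification number with a
        # first (shorter) designated number of digits.
        return "verify_first_digits"
    # Second difference: prompt for a verification number with a
    # second (longer) designated number of digits.
    return "verify_second_digits"
```

In this sketch, a lower probability demands a stronger secondary verification before the service is processed.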
According to a second aspect of the present specification, there is also provided a face recognition apparatus comprising:
the first acquisition module is used for acquiring the face characteristics of a first user;
the first retrieval module is used for retrieving a second user corresponding to the face features in a first face brushing database according to the face features;
the second acquisition module is used for acquiring feature data of a face brushing terminal for acquiring the human face features, the feature data of the second user and associated feature data between the second user and the face brushing terminal;
and the recognition module is used for recognizing the consistency of the second user and the first user through a recognition model which is constructed in advance based on a graph neural network and a linear neural network according to the feature data of the face brushing terminal for collecting the face features, the feature data of the second user and the associated feature data between the second user and the face brushing terminal.
According to a third aspect of the present specification, there is also provided an embodiment of a face recognition device, which includes the face recognition apparatus according to the second aspect of the present specification, or which includes:
a memory for storing executable commands;
a processor for executing the face recognition method according to the first aspect of the present specification under the control of the executable command.
According to a fourth aspect of the present description, there is also provided an embodiment of a computer-readable storage medium, which stores executable instructions that, when executed by a processor, perform the face recognition method according to the first aspect of the present description.
In one embodiment, since the recognition model incorporates the supplement capability and reasoning capability of the graph neural network together with the memory capability of the linear neural network, whether the retrieved second user is the first user can be accurately recognized.
Other features of the present description and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description, serve to explain the principles of the specification.
FIG. 1a is a schematic diagram of a scene that may be used to implement a face recognition method of an embodiment;
FIG. 1b is a block diagram of a hardware configuration of a face recognition device that can be used to implement the face recognition method of one embodiment;
FIG. 2 is a flow chart of a face recognition method according to a first embodiment;
FIG. 3 is a recognition model according to one embodiment;
FIG. 4 is a functional block diagram of a face recognition apparatus according to one embodiment;
FIG. 5 is a functional block diagram of a face recognition device according to one embodiment.
Detailed Description
Various exemplary embodiments of the present specification will now be described in detail with reference to the accompanying drawings.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Referring to fig. 1a, a mobile phone 1900 displays page A to a first user to collect the first user's facial features, and sends those features to a face recognition device 1000. The face recognition device 1000 sends a retrieval request containing the facial features to a first face database 2000; the first face database 2000 locally retrieves a second user corresponding to the facial features and returns the retrieved second user to the face recognition device 1000. The face recognition device 1000 then sends a feature data acquisition request to a data center console 2100, which locally acquires the feature data of the face brushing terminal that collected the facial features, the feature data of the second user, and the associated feature data between the second user and the face brushing terminal, and returns this feature data to the face recognition device 1000. According to this feature data, the face recognition device 1000 recognizes the consistency between the second user and the first user through a recognition model constructed in advance based on a graph neural network and a linear neural network, and sends the final recognition result to the mobile phone 1900, which displays page B to the first user. In this way, whether the retrieved second user is the first user can be accurately recognized.
Fig. 1b is a block diagram of a hardware configuration of a face recognition device to which a face recognition method according to an embodiment of the present specification can be applied.
The face recognition device 1000 may be a virtual machine or a physical machine. The face recognition apparatus 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and the like. The processor 1100 may be a central processing unit CPU, a microprocessor MCU, or the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, a USB interface, a headphone interface, and the like. Communication device 1400 is capable of wired or wireless communication, for example. The display device 1500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 1600 may include, for example, a touch screen, a keyboard, and the like. A user can input/output voice information through the speaker 1700 and the microphone 1800.
As applied to this embodiment, the memory 1200 is used to store computer program instructions for controlling the processor 1100 to operate so as to perform a face recognition method according to any embodiment of the present specification. The skilled person can design the instructions according to the disclosed solution. How the instructions control the operation of the processor 1100 is well known in the art and will not be described in detail here.
Although a plurality of components are shown for the face recognition device 1000 in fig. 1b, the present embodiment may relate to only some of them; for example, the face recognition device 1000 may relate only to the memory 1200 and the processor 1100.
< method examples >
The present embodiment provides a face recognition method, as shown in fig. 2, the method includes the following steps:
s201: the face features of the first user are obtained.
In practical applications, in order to ensure convenience and security of user identity identification, a face is generally used to identify a user identity, and after the user identity is identified, a corresponding service, such as a payment service, is provided for the user.
Further, in the process of identifying the user identity using the face, the identity of the user needs to be identified according to the face features, and therefore, in the embodiment of the present specification, the face features of the user need to be acquired.
It should be noted that, since the embodiments of the present specification involve both the user whose facial features are collected and the user retrieved from a database, the user collected by the face brushing terminal is defined as the first user in order to distinguish the two; that is, acquiring the facial features of the user specifically means acquiring the facial features of the first user.
S202: and searching a second user corresponding to the face features in a first face brushing database according to the face features.
Further, since the identity of the first user needs to be looked up in a database, in the embodiment of the present specification, after the facial features of the first user are obtained, the second user corresponding to the facial features is retrieved in a database according to those features.
It should be noted that all users who open the face brushing service are recorded in the database, that is, as long as the users open the face brushing service, the facial features of the users are stored in the database in advance.
In addition, in the embodiment of the present specification, a user corresponding to the facial feature retrieved from the database is defined as a second user, so as to distinguish the second user from the first user.
In practical applications, to improve recognition efficiency and the user's face brushing experience, the embodiment of the present specification may store separately, in another database, the facial features of users who have already used the face brushing service; that is, this database stores only the facial features of users who are not using the face brushing service for the first time. Subsequently, when a user uses the face brushing service, the second user corresponding to the facial features is first retrieved by matching against this database of returning users; only when no corresponding second user is found there is the retrieval performed in the database that stores all users who have activated the face brushing service.
To distinguish the two databases, the embodiment of the present specification defines the database storing the facial features of all users who have activated the face brushing service as the first face brushing database, and the database storing the facial features of returning users as the second face brushing database.
To sum up, in the embodiment of the present specification, after the facial features of the first user are obtained, the second user corresponding to the facial features is first retrieved in the second face brushing database according to those features; when no corresponding second user is found in the second face brushing database, the second user is retrieved in the first face brushing database.
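The two-stage lookup described above can be sketched as follows. The dictionary-based databases and the function name are illustrative assumptions, not the actual retrieval interface.

```python
def retrieve_second_user(face_feature, second_db, first_db):
    """Cascaded retrieval: try the second face brushing database
    (returning users) first; fall back to the first face brushing
    database (all activated users) on a miss. Returns the matched
    user and whether the match came from the second database."""
    user = second_db.get(face_feature)
    if user is not None:
        # A hit here indicates the second user is the first user himself.
        return user, True
    return first_db.get(face_feature), False

# Illustrative data: feature "f1" belongs to a returning user,
# feature "f2" appears only in the full first database.
second_db = {"f1": "user_42"}
first_db = {"f1": "user_42", "f2": "user_7"}
```

For example, `retrieve_second_user("f2", second_db, first_db)` misses the second database and falls back to the first.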
It should be noted that, when the second user corresponding to the facial features is retrieved from the second face brushing database, it can be concluded that the second user currently found is the first user himself.
In addition, in the embodiment of the present specification, retrieving the second user corresponding to the facial features in the first face brushing database specifically comprises: retrieving a user set corresponding to the facial features according to the facial features, reordering the users in the user set according to the feature data of the face brushing terminal, and determining the second user within the reordered user set.
It should be noted that the users in the retrieved user set are usually ranked by facial-feature similarity alone, so a second user who is in fact consistent with the first user may not be ranked near the front. To further improve the accuracy of face recognition, the embodiment of the present specification may therefore reorder the users in the user set in combination with the feature data of the face brushing terminal, where the face brushing terminal is the terminal that collected the facial features of the first user.
In practical applications, after the users are reordered in combination with the feature data of the face brushing terminal, the users ranked near the front are more consistent with the first user in various aspects; therefore, in the embodiment of the present specification, the user ranked first in the reordered user set can be determined as the second user.
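The reordering step described above can be sketched as follows. The combination weights, the affinity signal, and all scores are illustrative assumptions; the specification does not state how the terminal feature data is combined with facial similarity.

```python
def rerank(candidates, terminal_id, affinity, w_face=0.7, w_term=0.3):
    """Reorder a retrieved candidate set by combining facial similarity
    with a user-terminal affinity signal (e.g. how often the user has
    brushed face at this terminal). Weights and scores are illustrative."""
    def combined(c):
        return (w_face * c["similarity"]
                + w_term * affinity.get((c["user_id"], terminal_id), 0.0))
    return sorted(candidates, key=combined, reverse=True)

# Candidate A is slightly more similar, but B has history at terminal T1,
# so B is promoted to the first position after reordering.
candidates = [
    {"user_id": "A", "similarity": 0.92},
    {"user_id": "B", "similarity": 0.90},
]
affinity = {("B", "T1"): 1.0}
second_user = rerank(candidates, "T1", affinity)[0]["user_id"]
```

The first entry of the reordered set is then taken as the second user, as described above.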
S203: and acquiring feature data of a face brushing terminal for collecting the face features, the feature data of the second user and associated feature data between the second user and the face brushing terminal.
In practical applications, after the second user corresponding to the facial features is retrieved from the first face brushing database, the retrieved second user may still not be the first user himself, owing, for example, to a retrieval error or a facial feature error. Therefore, to improve the accuracy of face recognition, the embodiment of the present specification verifies after retrieval whether the retrieved second user is the first user himself, that is, it identifies the consistency between the retrieved second user and the first user.
Further, the embodiment of the present specification identifies the consistency of the retrieved second user with the first user based on a recognition model, that is, it verifies whether the retrieved second user is the first user himself.
It should be noted that a graph neural network naturally uses the topology between nodes in the graph as its network structure: when expressing each node, it aggregates the features of the node's adjacent nodes and edges. For example, the expression of user node $u$ at layer $k$ aggregates the expressions of its neighboring nodes $N(u)$ at layer $k-1$:

$$h_u^{(k)} = \sigma\!\left(W_k \sum_{v \in N(u)} \frac{h_v^{(k-1)}}{|N(u)|} + B_k\, h_u^{(k-1)}\right)$$

where $h_u^{(k)}$ denotes the expression of node $u$ at the $k$-th layer, $\sigma$ is a non-linear function, $W_k$ is the weight at the $k$-th layer when neighboring nodes are aggregated, $N(u)$ is the set of neighboring nodes of node $u$, $|N(u)|$ is the number of neighbors of node $u$, $h_v^{(k-1)}$ is the expression of node $v$ at layer $k-1$, $B_k$ is the bias at the $k$-th layer when neighboring nodes are aggregated, and $h_u^{(k-1)}$ is the expression of node $u$ at layer $k-1$. The parameters $W$ and $B$ are learned end to end, and each node in the graph finally obtains an expression. That is to say, the graph neural network can delineate the relationships between different nodes; therefore, in the embodiment of the present specification, the neighboring nodes in the graph neural network can be used to supplement the information of the node to be expressed (the deep mining supplement capability) and to make inferences from it (the reasoning capability), achieving an obvious gain for missing important information and for behavior prediction.
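The neighbor-aggregation step described above can be sketched numerically as follows. The toy graph, the embedding size, and the choice of tanh as the non-linear function are illustrative assumptions.

```python
import numpy as np

def aggregate_layer(h_prev, neighbors, W_k, B_k):
    """One aggregation step: for each node u,
    h_u(k) = sigma(W_k * mean of neighbor embeddings at layer k-1
                   + B_k * h_u(k-1)),
    with sigma taken as tanh for illustration."""
    h_next = {}
    for u, nbrs in neighbors.items():
        neigh_mean = np.mean([h_prev[v] for v in nbrs], axis=0)  # sum / |N(u)|
        h_next[u] = np.tanh(W_k @ neigh_mean + B_k @ h_prev[u])
    return h_next

# Toy graph: user node "B" is linked to terminal nodes "E" and "F",
# echoing the face brushing example in the text.
h0 = {"B": np.array([1.0, 0.0]),
      "E": np.array([0.0, 1.0]),
      "F": np.array([1.0, 1.0])}
nbrs = {"B": ["E", "F"], "E": ["B"], "F": ["B"]}
h1 = aggregate_layer(h0, nbrs, np.eye(2), 0.5 * np.eye(2))
```

Stacking several such layers lets a user node absorb information from the terminals it has used, which is the supplement capability discussed above.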
In addition, the deep mining supplement capability and reasoning capability of the graph neural network can improve the experience of offline face brushing users. For example, suppose users A, B, C and D perform face brushing operations at face brushing terminal E, and users B and C also do so at face brushing terminal F. The mining capability of the graph neural network allows the neighbor nodes E and F of user B to supplement certain missing important features of B, making the model more robust to missing features; and the reasoning capability of the graph neural network can use the association relations among the users to infer that user A may perform transactions with face brushing terminal F in the future.
Based on this superior performance of the graph neural network, the recognition model in the embodiment of the present specification is constructed based on a graph neural network and a linear neural network and is a binary classification model. As shown in fig. 3, the lowest layer is the input layer, above it are the graph neural network and the linear neural network, and the top layer is a cross-entropy classification layer. The graph neural network comprises a graph feature network structure and a hidden layer, and is constructed from the feature data of sample users, the feature data of sample face brushing terminals, and the associated feature data between the sample users and the sample face brushing terminals.
In addition, to ensure the consistency rate of a user's repeat face brushing experience, the linear neural network part retains historical face brushing records, direct association information between the user and the face brushing terminal, and real-time data, where the real-time data include biological comparison separation features and real-time live reporting relations.
It should be further noted that single-layer neural network learning is performed on the graph neural network and linear neural network layer, and gradient learning is performed on the two-class cross-entropy classification layer, with the training objective

$$L = -\sum_{(u,a) \in V} \Big[\, y_{ua} \log \sigma\!\left(z_u^{\top} W z_a\right) + \left(1 - y_{ua}\right) \log\!\left(1 - \sigma\!\left(z_u^{\top} W z_a\right)\right) \Big]$$

where $(u, a)$ is a pair of nodes during training, $V$ is the node set, $y_{ua}$ is the sample label, $\sigma$ is a non-linear function, $z_u^{\top}$ is the transpose of the representation of node $u$, and $W$ is a weight variable for training; the training platform is based on the artificial intelligence ALPS framework.
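The pairwise two-class cross-entropy objective described above can be sketched as follows. The use of a sigmoid as the non-linear function, the bilinear score form, and the toy embeddings are assumptions for illustration.

```python
import numpy as np

def pair_loss(z, pairs, labels, W):
    """Two-class cross-entropy over node pairs: the score of a pair
    (u, a) is sigmoid(z_u^T W z_a), trained against the sample label."""
    total = 0.0
    for (u, a), y in zip(pairs, labels):
        p = 1.0 / (1.0 + np.exp(-(z[u] @ W @ z[a])))
        total -= y * np.log(p) + (1 - y) * np.log(1 - p)
    return total / len(pairs)

# Orthogonal toy embeddings give a score of 0, i.e. probability 0.5,
# so the loss for a single positive pair is log(2).
z = {"u": np.array([1.0, 0.0]), "a": np.array([0.0, 1.0])}
loss = pair_loss(z, [("u", "a")], [1.0], np.eye(2))
```

Gradient learning would then adjust the node representations and the weight variable to drive the loss down.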
When the recognition model is actually applied, many face brushing transactions cannot obtain the point location information (i.e., node information) of the face brushing terminal currently used by the user, for reasons such as node cold start, so online prediction cannot be performed for lack of node information.
Further, since the graph neural network is constructed from the feature data of sample users, the feature data of sample face brushing terminals, and the associated feature data between them, the graph neural network covers each user and its feature data, each face brushing terminal and its feature data, and the associated feature data between users and face brushing terminals. Therefore, in the embodiment of the present specification, after the second user corresponding to the facial features is retrieved from the first face brushing database, the feature data of the face brushing terminal that collected the facial features, the feature data of the second user, and the associated feature data between the second user and the face brushing terminal are acquired.
It should be noted here that the feature data of a user may include: the user portrait, domain behavior, and historical face brushing and biometric comparison features. The feature data of a face brushing terminal may include: the position of the face brushing terminal and the portrait of the face brushing terminal. The associated feature data between a user and a face brushing terminal may include: the position relation, code scanning relation, face brushing relation, transaction relation, and real-time live reporting relation between the user and the face brushing terminal.
S204: and identifying the consistency of the second user and the first user through an identification model which is constructed in advance based on a graph neural network and a linear neural network according to the feature data of the face brushing terminal for collecting the face features, the feature data of the second user and the associated feature data between the second user and the face brushing terminal.
Further, in an embodiment of the present specification, after the feature data of the face-brushing terminal that collected the facial features, the feature data of the second user, and the associated feature data between the second user and the face-brushing terminal are acquired, the consistency between the second user and the first user is identified from these data through a recognition model constructed in advance based on a graph neural network and a linear neural network.
Further, an embodiment of the present specification provides a method for identifying the consistency between the second user and the first user through a recognition model constructed in advance based on a graph neural network and a linear neural network, according to the feature data of the face-brushing terminal that collected the facial features, the feature data of the second user, and the associated feature data between the second user and the face-brushing terminal, which includes:
determining a deep semi-structured graph feature through the graph neural network contained in the recognition model, according to the feature data of the face-brushing terminal, the feature data of the second user, and the associated feature data between the face-brushing terminal and the second user; determining a structural feature through the linear neural network contained in the recognition model, according to the same feature data; determining a probability value corresponding to the second user according to the deep semi-structured graph feature and the structural feature; and identifying the consistency between the second user and the first user according to the probability value corresponding to the second user.
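The patent does not disclose the concrete architecture of either branch. As a rough illustration only, the two-branch scoring described above can be sketched in the style of a wide-and-deep model, where a small nonlinear branch stands in for the graph neural network and a plain linear branch stands in for the linear neural network; all weights, dimensions, and function names are assumptions:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def deep_branch(x, w1, w2):
    # Stand-in for the graph neural network branch: one ReLU hidden
    # layer producing a scalar "deep semi-structured graph feature".
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return sum(wi * hi for wi, hi in zip(w2, h))

def wide_branch(x, w):
    # Stand-in for the linear neural network branch: a plain linear
    # combination producing the "structural feature".
    return sum(wi * xi for wi, xi in zip(w, x))

def consistency_score(x, w1, w2, w_lin, bias=0.0):
    # Combine both branches into a single probability-like score, as the
    # specification combines the two features into one probability value.
    return sigmoid(deep_branch(x, w1, w2) + wide_branch(x, w_lin) + bias)
```

Here `x` would be the concatenated terminal, user, and association feature vector; in the patent's actual model the deep branch would operate on the user–terminal graph rather than on a flat vector.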
It should be noted that identifying the consistency between the second user and the first user according to the probability value corresponding to the second user may specifically proceed as follows:
when the probability value corresponding to the second user exceeds a preset first threshold value, identifying that the second user is consistent with the first user;
when the probability value corresponding to the second user does not exceed a preset first threshold value and exceeds a preset second threshold value, identifying that a first difference exists between the second user and the first user;
and when the probability value corresponding to the second user does not exceed a preset second threshold value, identifying that a second difference exists between the second user and the first user.
It should be noted that the first threshold and the second threshold may be set according to the actual situation, provided that the first threshold is larger than the second threshold. The first difference indicates that the second user may well be the first user but cannot be confirmed as such, and the second difference indicates that the second user is almost certainly not the first user.
For example, assuming that the probability value corresponding to the second user exceeds 2 (i.e., the preset first threshold), it can be determined with high confidence that the second user is the first user;
assuming that the probability value corresponding to the second user exceeds 1 (i.e., the preset second threshold) but does not exceed 2 (i.e., the preset first threshold), it cannot be confirmed whether the second user is the first user;
assuming that the probability value corresponding to the second user does not exceed 1 (i.e., the preset second threshold), it can be determined with high confidence that the second user is not the first user. Note that with example thresholds of 2 and 1, the "probability value" here is an unnormalized score output by the model rather than a probability in [0, 1].
It should be noted that dividing the recognition result into three parts is only one implementation exemplarily provided by the embodiment of the present specification. The recognition result may also be divided into two parts: when the probability value corresponding to the second user exceeds a preset first threshold, the second user is recognized as consistent with the first user; when it does not exceed the preset first threshold, the second user is recognized as different from the first user. How to divide the probability value can be determined according to the actual situation, which is not described here further.
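The three-way threshold rule above can be sketched as follows, using the example thresholds of 2 (first threshold) and 1 (second threshold); the returned labels are illustrative names, not terms from the specification:

```python
def classify_consistency(score: float, t1: float = 2.0, t2: float = 1.0) -> str:
    # t1 and t2 mirror the example thresholds above; t1 must exceed t2.
    if score > t1:
        return "consistent"         # the second user is the first user
    if score > t2:
        return "first_difference"   # may be the first user, unconfirmed
    return "second_difference"      # almost certainly not the first user
```

The two-part variant described above corresponds to keeping only the first comparison.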
By the above method, because the recognition model used in the embodiment of the present specification combines the completion and reasoning capability of the graph neural network with the memorization capability of the linear neural network, whether the retrieved second user is the first user can be accurately recognized.
In addition, the relationship information between users and face-brushing terminals is depicted through the graph structure of the graph neural network, and deep-level relationship mining is carried out on that structure to infer users' offline first face-brushing transactions and cross-store face-brushing transactions. This markedly improves recall on offline first face-brushing transactions and cross-store face-brushing transactions while the overall recall (i.e., the memorization capability) is maintained. Moreover, features are screened automatically during network training, which greatly shortens the feature-extraction process and reduces the cost of manual participation.
It should be noted that, in order to show more directly that applying the embodiment of the present specification markedly improves recall on offline first face-brushing transactions and cross-store face-brushing transactions over the conventional rule scheme and tree-model scheme while the overall recall (i.e., the memorization capability) is maintained, the experimental data shown in Table 1 are given here; these data are not a specific limitation on the embodiment of the present specification.
                     Overall recall rate    Extreme-case recall rate
Rule scheme          86.05%                 10.88%
Tree model           90.82%                 47.45%
Recognition model    92.09%                 58.22%
Rate of increase     1.27%                  10.77%
TABLE 1
In the actual face-brushing service, in order to further improve the security of user service information and the convenience of the service, after the consistency between the second user and the first user is identified, further verification can be required in a targeted manner according to the identified consistency. In the embodiment of the present specification: when the second user is identified as consistent with the first user, the corresponding service is processed for the first user; when a first difference is identified between the second user and the first user, the user is prompted to input a verification number of a first designated number of digits, and the corresponding service is processed according to the verification number input by the user; and when a second difference is identified between the second user and the first user, the user is prompted to input a verification number of a second designated number of digits, and the corresponding service is processed according to the verification number input by the user.
For example, if the second user is identified as consistent with the first user, the corresponding service is processed for the first user; if a first difference is identified between the second user and the first user, the user is prompted to input the last four digits of the mobile phone number for verification, and the corresponding service is processed according to those four digits; and if a second difference is identified between the second user and the first user, the user is prompted to input the full 11-digit mobile phone number for verification, and the corresponding service is processed according to the 11-digit mobile phone number input by the user.
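The tiered verification policy above can be sketched as a simple mapping from the recognition result to the required follow-up check; the result labels and return values are hypothetical names chosen for illustration, not the patent's actual prompts:

```python
def required_verification(consistency: str) -> str:
    # Maps the recognition result to the follow-up check described above.
    if consistency == "consistent":
        return "none"                    # process the service directly
    if consistency == "first_difference":
        return "last_4_digits_of_phone"  # short check number
    return "full_11_digit_phone"         # full phone-number check
```

The idea is that stronger uncertainty about identity triggers a longer verification input.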
< apparatus embodiment >
Fig. 4 shows a face recognition apparatus 40 provided by the present embodiment, where the apparatus 40 includes:
a first obtaining module 401, configured to obtain a facial feature of a first user;
a first retrieving module 402, configured to retrieve, according to the facial features, a second user corresponding to the facial features in a first face-brushing database;
a second obtaining module 403, configured to obtain feature data of a face brushing terminal for collecting the facial features, the feature data of the second user, and associated feature data between the second user and the face brushing terminal;
the identification module 404 is configured to identify, according to feature data of a face brushing terminal for collecting the facial features, feature data of the second user, and associated feature data between the second user and the face brushing terminal, consistency between the second user and the first user through an identification model constructed in advance based on a graph neural network and a linear neural network.
In one embodiment, the apparatus 40 further comprises:
a second retrieving module 405, configured to, before the first retrieving module 402 retrieves the second user corresponding to the facial features in the first face-brushing database according to the facial features, determine that no second user corresponding to the facial features is found in a second face-brushing database.
In an embodiment, the first retrieving module 402 is specifically configured to retrieve, according to the facial features, a user set corresponding to the facial features; reorder the users in the user set according to the feature data of the face brushing terminal; and determine the second user within the reordered user set.
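One plausible reading of this retrieve-then-reorder step is a re-ranking of the face-similarity candidates by how strongly each candidate is associated with the current terminal. The use of per-terminal visit counts as the terminal-side signal below is an assumption for illustration; the patent does not specify the reordering criterion:

```python
def rerank_candidates(candidates, terminal_id, visit_counts):
    # candidates: list of (user_id, face_similarity) from the first-pass
    # retrieval; visit_counts: hypothetical map (user_id, terminal_id) ->
    # number of past transactions at that terminal.
    def key(item):
        user_id, sim = item
        return (visit_counts.get((user_id, terminal_id), 0), sim)
    # Sort by terminal affinity first, then by face similarity.
    return sorted(candidates, key=key, reverse=True)

def pick_second_user(candidates, terminal_id, visit_counts):
    ranked = rerank_candidates(candidates, terminal_id, visit_counts)
    return ranked[0][0] if ranked else None
```

This captures the intuition that, among near-identical faces, the user who habitually transacts at this terminal is the more likely match.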
In one embodiment, the graph neural network is constructed from feature data of the sample user, feature data of the sample face brushing terminal, and associated feature data between the sample user and the sample face brushing terminal.
In an embodiment, the recognition module 404 is specifically configured to determine, according to the feature data of the face brushing terminal, the feature data of the second user, and the associated feature data between the face brushing terminal and the second user, a deep semi-structured graph feature through a graph neural network included in the recognition model; determining a structural feature through a linear neural network contained in the recognition model according to the feature data of the face brushing terminal, the feature data of the second user and the associated feature data between the face brushing terminal and the second user; determining a probability value corresponding to the second user according to the depth semi-structured graph feature and the structured feature; and identifying the consistency of the second user and the first user according to the probability value corresponding to the second user.
In an embodiment, the identifying module 404 is specifically configured to identify that the second user is consistent with the first user when the probability value corresponding to the second user exceeds a preset first threshold; when the probability value corresponding to the second user does not exceed a preset first threshold value and exceeds a preset second threshold value, identifying that a first difference exists between the second user and the first user; and when the probability value corresponding to the second user does not exceed a preset second threshold value, identifying that a second difference exists between the second user and the first user.
The apparatus 40 further comprises:
a verification module 406, configured to process a corresponding service for the first user when the identification module 404 identifies that the second user is consistent with the first user; when the identification module 404 identifies that the second user and the first user have a first difference, prompting the user to input a verification number with a first designated digit, and processing a corresponding service according to the verification number input by the user; when the identification module 404 identifies that the second user has a second difference with the first user, the user is prompted to input a verification number of a second designated number, and a corresponding service is processed according to the verification number input by the user.
< device embodiment >
In this embodiment, there is also provided a face recognition device 50 as shown in fig. 5, where the face recognition device 50 includes the face recognition apparatus 40 described in the apparatus embodiment of this specification; alternatively, the face recognition device 50 includes:
a memory for storing executable commands.
A processor for executing the method described in any of the method embodiments of the present specification under the control of executable commands stored in the memory.
The execution subject of the method embodiment performed at the face recognition device is a server.
In one embodiment, any of the modules in the above apparatus embodiments may be implemented by a processor.
< computer-readable storage Medium embodiment >
The present embodiments provide a computer-readable storage medium having stored therein an executable command that, when executed by a processor, performs a method described in any of the method embodiments of the present specification.
One or more embodiments of the present description may be a system, method, and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the specification.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations for embodiments of the present description may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute computer-readable program instructions to implement various aspects of the present description by utilizing state information of the computer-readable program instructions to personalize the electronic circuit.
Aspects of the present description are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the description. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present description. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
The foregoing description of the embodiments of the present specification has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the application is defined by the appended claims.

Claims (9)

1. A face recognition method, comprising:
acquiring the face features of a first user;
retrieving, according to the face features, a second user corresponding to the face features in a first face brushing database;
acquiring feature data of a face brushing terminal for acquiring the face features, feature data of the second user and associated feature data between the second user and the face brushing terminal;
according to the feature data of the face brushing terminal for collecting the face features, the feature data of the second user and the associated feature data between the second user and the face brushing terminal, identifying the consistency of the second user and the first user through an identification model constructed on the basis of a graph neural network and a linear neural network in advance;
the method for recognizing the consistency of the second user and the first user through a recognition model constructed in advance based on a graph neural network and a linear neural network according to the feature data of the face brushing terminal for collecting the face features, the feature data of the second user and the associated feature data between the second user and the face brushing terminal comprises the following steps:
determining a deep semi-structured graph feature through a graph neural network contained in the recognition model according to the feature data of the face brushing terminal, the feature data of the second user and the associated feature data between the face brushing terminal and the second user;
determining a structural feature through a linear neural network contained in the recognition model according to the feature data of the face brushing terminal, the feature data of the second user and the associated feature data between the face brushing terminal and the second user;
determining a probability value corresponding to the second user according to the depth semi-structured graph feature and the structured feature;
and identifying the consistency of the second user and the first user according to the probability value corresponding to the second user.
2. The method of claim 1, prior to retrieving, based on the facial features, a second user corresponding to the facial features within a first brushing database, the method further comprising:
determining that a second user corresponding to the face features is not found in a second face brushing database.
3. The method of claim 1, wherein retrieving, from the facial features, a second user corresponding to the facial features in a first brushing database comprises:
retrieving a user set corresponding to the face features according to the face features;
reordering the users in the user set according to the characteristic data of the face brushing terminal;
a second user is determined within the reordered set of users.
4. The method of claim 1, the graph neural network constructed from feature data of sample users, feature data of sample face brushing terminals, and associated feature data between sample users and sample face brushing terminals.
5. The method of claim 1, identifying the second user's correspondence with the first user according to the probability value corresponding to the second user, comprising:
when the probability value corresponding to the second user exceeds a preset first threshold value, identifying that the second user is consistent with the first user;
when the probability value corresponding to the second user does not exceed a preset first threshold value and exceeds a preset second threshold value, identifying that a first difference exists between the second user and the first user;
and when the probability value corresponding to the second user does not exceed a preset second threshold value, identifying that a second difference exists between the second user and the first user.
6. The method of claim 5, further comprising:
when the second user is identified to be consistent with the first user, processing corresponding service for the first user;
when the second user is identified to have a first difference with the first user, prompting the user to input a verification number with a first designated digit, and processing corresponding services according to the verification number input by the user;
and when a second difference exists between the second user and the first user, prompting the user to input a verification number of a second designated number, and processing corresponding services according to the verification number input by the user.
7. A face recognition apparatus comprising:
the first acquisition module is used for acquiring the face characteristics of a first user;
the first retrieval module is used for retrieving a second user corresponding to the face features in a first face brushing database according to the face features;
the second acquisition module is used for acquiring feature data of a face brushing terminal for acquiring the human face features, the feature data of the second user and associated feature data between the second user and the face brushing terminal;
the recognition module is used for recognizing the consistency of the second user and the first user through a recognition model which is constructed in advance based on a graph neural network and a linear neural network according to the feature data of the face brushing terminal for collecting the face features, the feature data of the second user and the associated feature data between the second user and the face brushing terminal;
the identification module is specifically configured to determine a deep semi-structured graph feature through a graph neural network included in the identification model according to the feature data of the face brushing terminal, the feature data of the second user, and the associated feature data between the face brushing terminal and the second user;
and is specifically configured to determine a structural feature through a linear neural network contained in the recognition model according to the feature data of the face brushing terminal, the feature data of the second user and the associated feature data between the face brushing terminal and the second user;
and is specifically configured to determine a probability value corresponding to the second user according to the deep semi-structured graph feature and the structural feature; and,
is specifically configured to identify the consistency between the second user and the first user according to the probability value corresponding to the second user.
8. A face recognition device comprising the face recognition apparatus of claim 7, or the device comprising:
a memory for storing executable commands;
a processor for executing the face recognition method according to any one of claims 1-6 under the control of the executable command.
9. A computer-readable storage medium storing executable instructions that, when executed by a processor, perform the face recognition method of any one of claims 1-6.
CN202010142025.0A 2020-03-04 2020-03-04 Face recognition method, device, equipment and storage medium Active CN110991433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010142025.0A CN110991433B (en) 2020-03-04 2020-03-04 Face recognition method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110991433A CN110991433A (en) 2020-04-10
CN110991433B true CN110991433B (en) 2020-06-23

Family

ID=70081529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010142025.0A Active CN110991433B (en) 2020-03-04 2020-03-04 Face recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110991433B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382410B (en) * 2020-03-23 2022-04-29 支付宝(杭州)信息技术有限公司 Face brushing verification method and system
CN112069877B (en) * 2020-07-21 2022-05-03 北京大学 Face information identification method based on edge information and attention mechanism
CN113011339A (en) * 2021-03-19 2021-06-22 支付宝(杭州)信息技术有限公司 User identity verification method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654049A (en) * 2015-12-29 2016-06-08 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN106096535A (en) * 2016-06-07 2016-11-09 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of face verification method based on bilinearity associating CNN
CN109300267A (en) * 2018-10-31 2019-02-01 杭州有赞科技有限公司 The cash method and system of member system based on recognition of face
CN109766840A (en) * 2019-01-10 2019-05-17 腾讯科技(深圳)有限公司 Facial expression recognizing method, device, terminal and storage medium
CN110688941A (en) * 2019-09-25 2020-01-14 支付宝(杭州)信息技术有限公司 Face image recognition method and device


Also Published As

Publication number Publication date
CN110991433A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN112632385B (en) Course recommendation method, course recommendation device, computer equipment and medium
CN110991433B (en) Face recognition method, device, equipment and storage medium
TWI710964B (en) Method, apparatus and electronic device for image clustering and storage medium thereof
CN112307472B (en) Abnormal user identification method and device based on intelligent decision and computer equipment
CN111461637A (en) Resume screening method and device, computer equipment and storage medium
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN109471978B (en) Electronic resource recommendation method and device
CN111400504A (en) Method and device for identifying enterprise key people
CN112418059B (en) Emotion recognition method and device, computer equipment and storage medium
CN107679457A (en) User identity method of calibration and device
CN112214775A (en) Injection type attack method and device for graph data, medium and electronic equipment
CN112995414B (en) Behavior quality inspection method, device, equipment and storage medium based on voice call
CN113190702B (en) Method and device for generating information
CN111898675A (en) Credit risk control model generation method and device, scorecard generation method, machine readable medium and equipment
CN112632248A (en) Question answering method, device, computer equipment and storage medium
CN110288468B (en) Data feature mining method and device, electronic equipment and storage medium
CN114943937A (en) Pedestrian re-identification method and device, storage medium and electronic equipment
CN112597292B (en) Question reply recommendation method, device, computer equipment and storage medium
CN114898266A (en) Training method, image processing method, device, electronic device and storage medium
CN110162769B (en) Text theme output method and device, storage medium and electronic device
CN114494809A (en) Feature extraction model optimization method and device and electronic equipment
CN113947140A (en) Training method of face feature extraction model and face feature extraction method
CN113886821A (en) Malicious process identification method and device based on twin network, electronic equipment and storage medium
CN116205726B (en) Loan risk prediction method and device, electronic equipment and storage medium
Pai et al. Designing a secure audio/text based captcha using neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant