WO2020034733A1 - Identity authentication method and device, electronic device, and storage medium - Google Patents
Identity authentication method and device, electronic device, and storage medium
- Publication number
- WO2020034733A1 (application PCT/CN2019/090034)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- face
- processed
- document
- detection result
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/535—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/33—User authentication using certificates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/45—Structures or tools for the administration of authentication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/413—Classification of content, e.g. text, photographs or tables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/169—Holistic features and representations, i.e. based on the facial image taken as a whole
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present disclosure relates to computer vision technology, and in particular, to an identity authentication method and device, an electronic device, and a storage medium.
- in a commonly used approach, an image acquisition device captures a photo of the user holding an ID card, the photo is uploaded to a server, and the photo is reviewed manually in the server background. Manually verifying the collected photos requires substantial human resources: the cost is high, the efficiency is low, and because manual processing is error-prone, the accuracy is low and cannot meet business needs.
- the embodiments of the present disclosure provide a technical solution for identity authentication.
- an identity authentication method, comprising: performing face detection on an image to be processed through a first neural network to obtain a face detection result; performing document detection on the image to be processed through a second neural network to obtain a document detection result; determining whether the image to be processed is a valid identity authentication image according to the face detection result and the document detection result; and, in response to determining that the image to be processed is a valid identity authentication image, performing identity authentication according to the face detection result and the document detection result to obtain an identity authentication result for the image to be processed.
- the valid identity authentication image includes a hand-held ID image.
- face detection is performed on an image to be processed by a first machine learning method to obtain a face detection result
- document detection is performed on the to-be-processed image by a second machine learning method to obtain a document detection result.
- the hand-held document image is a hand-held ID-card image.
- the face detection result includes at least one of the following: the number of human faces included in the to-be-processed image and position information of the human face in the to-be-processed image.
- the face detection result may include the number of faces in the image and the position information of each face in the image.
- the position information of the face in the image may include the position information of the face frame.
- the detection result of the document includes at least one of the following: the number of documents included in the image to be processed and the position information of the document in the image to be processed.
- the document detection result further includes document face information.
- the document face information includes: the number and / or position information of the faces included in the document.
- the document detection result includes at least one of the following: the number of documents contained in the image, location information of each document, and detection information of a face included in each document.
- the face information of the document is not part of the document detection result, but is obtained based on the face detection result and the document detection result.
- determining the document face information based on the face detection result and the document detection result includes: determining the number and/or position information of the faces included in the document according to the position information, in the image to be processed, of the faces included in the face detection result and the position information, in the image to be processed, of the document included in the document detection result.
- the position information of the document in the image may include the position information of the document frame.
- the position information of the human face in the to-be-processed image includes: vertex coordinates of a first detection frame of the human face in the to-be-processed image.
- the position information of the human face in the to-be-processed image includes: the coordinates of the center of the first detection frame of the human face in the to-be-processed image, and the length and width of the first detection frame.
- the position information of the document in the image to be processed includes: a vertex coordinate of a second detection frame of the document in the image to be processed.
- the position information of the credential in the image to be processed includes the coordinates of the center of the second detection frame of the credential in the image to be processed, and the length and width of the second detection frame.
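The two frame parameterizations above (vertex coordinates versus center plus length and width) are interchangeable. A minimal Python sketch of the conversion (the (x1, y1, x2, y2) corner format is an illustrative assumption, not specified by the disclosure):

```python
def center_to_corners(cx, cy, w, h):
    """Convert a (center, length, width) frame to corner coordinates."""
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)

def corners_to_center(x1, y1, x2, y2):
    """Convert corner coordinates back to (center_x, center_y, w, h)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1)
```

The two functions are inverses, so either representation of the first or second detection frame carries the same information.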
- determining whether the image to be processed is a valid identity authentication image according to the face detection result and the document detection result includes: determining document face information based on the face detection result and the document detection result; and determining, based on the document face information, the face detection result, and the document detection result, whether the image to be processed is a valid identity authentication image.
- the face information of the credential includes at least one of the following: the number of faces included in the credential detected in the image to be processed, and position information of the face included in the credential.
- the determining face information of a document based on the face detection result and the document detection result includes:
- the determining whether the image to be processed is a valid identity authentication image based on the document face information, the face detection result, and the document detection result includes:
- the number of documents in the document detection result meets a first preset requirement;
- the number of faces in the face detection result meets a second preset requirement;
- the number of faces in the document included in the document face information meets a third preset requirement; when these conditions are met, it is determined that the image to be processed is a valid identity authentication image.
- determining whether an image is valid may include determining whether the image meets the following three judgment conditions: the number of documents included in the image meets a first preset requirement, the number of faces included in the image meets a second preset requirement, and the number of faces in the document included in the image meets a third preset requirement.
- the credential detection result may include face detection information in the credential contained in the image, such as the number and / or position information of the face.
- before determining whether the number of faces in the detected document meets a third preset requirement, the method further includes: determining the number of faces included in the document according to the position information, in the image to be processed, of the faces included in the face detection result and the position information, in the image to be processed, of the document included in the document detection result.
- the number of faces in the document may be determined based on the position information of each face in the image and the position information of the document in the image. For example, a face located in the area where the document is located is determined to be a face in the document.
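The membership rule in the example above can be sketched in Python (representing detection frames as (x1, y1, x2, y2) corner tuples and testing the face-frame center against the document frame are illustrative assumptions, not details fixed by the disclosure):

```python
def box_center(box):
    """Center (x, y) of an axis-aligned box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def face_in_document(face_box, doc_box):
    """A face counts as 'in the document' when the center of its
    detection frame falls inside the document's detection frame."""
    cx, cy = box_center(face_box)
    dx1, dy1, dx2, dy2 = doc_box
    return dx1 <= cx <= dx2 and dy1 <= cy <= dy2

def count_faces_in_document(face_boxes, doc_box):
    """Number of detected faces whose centers lie inside the document frame."""
    return sum(face_in_document(f, doc_box) for f in face_boxes)
```

For a document frame (100, 100, 300, 250), a face at (150, 140, 190, 180) is counted as inside, while one at (400, 80, 520, 220) is not.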
- the first preset requirement includes that the number of certificates included in the certificate detection result is one. In some embodiments, the second preset requirement includes that the number of faces included in the face detection result is greater than or equal to two. In some embodiments, the third preset requirement includes that the number of faces included in the detected document is one.
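Under the example preset requirements just listed (one document, at least two faces in total, exactly one face on the document), the validity decision reduces to three counts. A minimal sketch:

```python
def is_valid_auth_image(num_documents, num_faces, num_faces_in_document):
    """Valid hand-held ID image under the example preset requirements:
    exactly one document, at least two faces in total (the face on the
    ID plus the holder's face), and exactly one face inside the ID."""
    return (num_documents == 1
            and num_faces >= 2
            and num_faces_in_document == 1)
```

For example, an image containing one document, two faces, and one face inside the document is valid; an image with two documents, or with only the document face, is not.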
- performing identity authentication according to the face detection result and the document detection result includes: determining, based on the face detection result and the document detection result, the similarity between a first face included in the document and a second face in the image to be processed that is outside the document; and obtaining an identity check result based on the similarity between the first face and the second face.
- position information of a first face located in the document and position information of a second face outside the document may be determined based on the face detection result and the document detection result.
- an image of the first face may be obtained from the image to be processed based on the position information of the first face
- an image of the second face may be obtained from the image to be processed based on the position information of the second face
- determining the similarity between a first face included in the document and a second face outside the document in the image to be processed includes: obtaining an image of the first face and an image of the second face from the image to be processed based on the face detection result and the document detection result; performing feature extraction on the image of the first face to obtain a first feature, and on the image of the second face to obtain a second feature; and determining the similarity between the first face and the second face based on the first feature and the second feature.
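The disclosure does not fix a similarity metric for the two extracted features; cosine similarity is a common choice, sketched below with an assumed decision threshold (both the metric and the threshold value are illustrative assumptions):

```python
import math

def cosine_similarity(feat_a, feat_b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return dot / (norm_a * norm_b)

def identity_check(first_feature, second_feature, threshold=0.5):
    """Pass the identity check when the document face and the holder's
    face are similar enough (threshold is a hypothetical value)."""
    return cosine_similarity(first_feature, second_feature) > threshold
```

Identical feature vectors yield similarity 1.0; orthogonal ones yield 0.0 and fail the check.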
- the face located outside the document is determined to be the second face.
- if it is determined, based on the face detection result and the document detection result, that the number of faces outside the document is greater than or equal to 2 (that is, the face detection result includes more than 2 faces in total), a second face is selected from the at least two faces outside the document.
- before performing identity authentication based on the face detection result and the document detection result, the method further includes: if the number of faces included in the image to be processed is greater than 2, determining the largest face among the at least two faces in the image to be processed that are outside the document as the second face.
- the position information of the at least two faces outside the document is determined according to the position information, in the image to be processed, of the faces included in the face detection result and the position information, in the image to be processed, of the document included in the document detection result; based on that position information, for example the positions of the detection frames of the at least two faces, the largest of the at least two faces is determined.
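Selecting the largest face by detection-frame area can be sketched as follows (the corner-tuple box format is an illustrative assumption):

```python
def box_area(box):
    """Area of an axis-aligned detection frame (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def select_second_face(faces_outside_document):
    """When more than one face lies outside the document, pick the one
    with the largest detection frame as the holder's (second) face."""
    return max(faces_outside_document, key=box_area)
```

A 60x80 frame is selected over a 10x10 frame, since the holder's face is normally the most prominent one outside the document.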
- a face with the smallest depth among at least two faces outside the document in the image to be processed is determined as the second face.
- obtaining an identity check result according to the similarity between the first face and the second face further includes: in response to determining that the similarity between the first face and the second face is greater than a preset threshold, performing text recognition on the document to obtain text information of the document, where the text information includes at least one of a name and a document number; and authenticating the text information against a user information database to obtain the identity check result.
- in response to receiving an identity authentication request, an account login request, or a transaction request, an image to be processed is acquired.
- in response to receiving a registration request, an image to be processed is acquired.
- the method further includes: in response to determining that the identity check result is that the check passes, storing user information in a service database, where the user information includes any one or more of the following: the text information of the document, the image to be processed, the image of the second face, and feature information of the second face.
- the method further includes: in response to receiving the identity authentication request, obtaining an image including a face to be authenticated; querying whether the service database contains user information matching the image of the face to be authenticated; and determining the authentication result for the face to be authenticated according to the query result.
- the identity authentication request includes account information or credential information of the face to be authenticated.
- the performing identity authentication according to the face detection result and the credential detection result to obtain the identity authentication result of the image to be processed further includes: according to the face detection result and the credential The detection result is subjected to anti-counterfeit detection to obtain an anti-counterfeit detection result; based on the anti-counterfeit detection result and the identity check result, an identity authentication result of the image to be processed is determined.
- performing identity authentication according to the face detection result and the document detection result to obtain the identity authentication result of the image to be processed includes: performing anti-counterfeit detection according to the face detection result and the document detection result to obtain an anti-counterfeit detection result.
- performing anti-counterfeit detection according to the face detection result and the document detection result to obtain the anti-counterfeit detection result includes: obtaining a face region image and a document region image from the image to be processed based on the face detection result and the document detection result; performing forged-clue detection on the image to be processed, the face region image, and the document region image respectively; and obtaining the anti-counterfeit detection result of the image to be processed based on the result of the forged-clue detection.
- the proportion of the face region image occupied by the face it contains satisfies a fourth preset requirement.
- the proportion of the document region image occupied by the document it contains satisfies the fourth preset requirement.
- the fourth preset requirement includes that the ratio is greater than or equal to 1/4 and less than or equal to 9/10.
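The fourth preset requirement can be checked as a simple area-ratio test when cropping the region images (corner-tuple, axis-aligned frames are illustrative assumptions):

```python
def box_area(box):
    """Area of an axis-aligned frame (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def occupancy_ratio(object_box, region_box):
    """Fraction of the cropped region image occupied by the detected object."""
    return box_area(object_box) / box_area(region_box)

def meets_ratio_requirement(object_box, region_box, lo=1 / 4, hi=9 / 10):
    """Fourth preset requirement: the object fills between 1/4 and 9/10
    (inclusive) of its region image."""
    r = occupancy_ratio(object_box, region_box)
    return lo <= r <= hi
```

A 10x10 face in a 20x20 crop occupies exactly 1/4 and passes; the same face in a 40x40 crop occupies 1/16 and fails, as does a face filling 95% of its crop.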
- performing forged-clue detection on the image to be processed, the face region image, and the document region image respectively includes: performing feature extraction on the image to be processed, the face region image, and the document region image separately to obtain features of the image to be processed, features of the face region image, and features of the document region image; and detecting whether the features of the image to be processed, the features of the face region image, and the features of the document region image contain forged-clue information.
- the extracted features include any one or more of the following: a local binary pattern feature, a sparsely encoded histogram feature, a panorama feature, a face feature, and a face detail feature.
- the forged-clue information is observable to the human eye under visible-light conditions.
- the forged-clue information includes any one or more of the following: forged-clue information of the imaging medium, forged-clue information of the imaging device, and clue information of a physically present fake face.
- the forged-clue information of the imaging medium includes edge information, reflective information, and/or material information of the imaging medium; and/or the forged-clue information of the imaging device includes a screen edge, screen reflection, and/or screen moiré of a display device; and/or the clue information of a physically present fake face includes characteristics of a mask face, characteristics of a model face, and characteristics of a sculpture face.
- detecting whether the features of the image to be processed, the features of the face region image, and the features of the document region image contain forged-clue information includes: detecting the features of the image to be processed to determine whether they contain forged-clue information; detecting the features of the face region image to determine whether they contain forged-clue information; and detecting the features of the document region image to determine whether they contain forged-clue information.
- detecting whether the features of the image to be processed, the features of the face region image, and the features of the document region image contain forged-clue information includes: concatenating the features of the image to be processed, the features of the face region image, and the features of the document region image to obtain a concatenated feature; and determining whether the concatenated feature contains forged-clue information.
- performing forged-clue detection on the image to be processed, the face region image, and the document region image respectively includes: using a third neural network to perform forged-clue detection on the image to be processed, the face region image, and the document region image separately.
- obtaining the anti-counterfeit detection result of the image to be processed based on the result of the forged-clue detection includes: in response to the forged-clue detection result indicating that none of the image to be processed, the face region image, and the document region image contains a forged clue, determining that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection passes; and/or, in response to the forged-clue detection result indicating that any one or more of the image to be processed, the face region image, and the document region image contains a forged clue, determining that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection fails.
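The pass/fail aggregation described here is a conjunction over the three per-image detections: the image passes only when no forged clue is found anywhere. A minimal sketch:

```python
def anti_counterfeit_result(clue_in_whole_image, clue_in_face_region,
                            clue_in_document_region):
    """Pass anti-counterfeit detection only when no forged clue is found
    in the whole image, the face region image, or the document region
    image; any single clue fails the detection."""
    return not (clue_in_whole_image
                or clue_in_face_region
                or clue_in_document_region)
```

For example, a forged clue found only in the face region (such as screen moiré) is enough to fail the whole image.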
- an identity authentication device including: a first detection module configured to perform face detection on an image to be processed through a first neural network to obtain a face detection result; a second detection module configured to perform document detection on the image to be processed through a second neural network to obtain a document detection result; a first determination module configured to determine, according to the face detection result and the document detection result, whether the image to be processed is a valid identity authentication image; and an authentication module configured to, in response to determining that the image to be processed is a valid identity authentication image, perform identity authentication according to the face detection result and the document detection result to obtain an identity authentication result for the image to be processed.
- an electronic device including: a memory configured to store a computer program; and a processor configured to execute the computer program stored in the memory, where execution of the computer program implements the identity authentication method described in any one of the foregoing embodiments of the present disclosure.
- a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the identity authentication method according to any one of the foregoing embodiments of the present disclosure is implemented.
- a computer program including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing each operation of the identity authentication method according to any one of the foregoing embodiments of the present disclosure.
- the computer program product may be a computer storage medium.
- the computer program product may be a software product, such as a Software Development Kit (SDK), and many more.
- the embodiments of the present disclosure use neural networks, through a deep learning method, to identify whether an image to be processed is a valid identity authentication image, and can quickly screen out qualified images for user identity authentication, improving work efficiency; authenticating the user based on a valid identity authentication image requires no manual review, which saves cost, improves work efficiency and processing speed, avoids the errors possible in manual review, and improves the accuracy of the authentication result.
- FIG. 1A is a flowchart of an identity authentication method according to an embodiment of the present disclosure.
- FIG. 1B is another flowchart of an identity authentication method according to an embodiment of the present disclosure.
- FIG. 2 is another flowchart of an identity authentication method according to an embodiment of the present disclosure.
- FIG. 3A is a schematic diagram of an application scenario example according to an embodiment of the present disclosure.
- FIG. 3B is a schematic diagram of a photo of a user holding an ID card collected in an embodiment of the present disclosure.
- FIG. 4 is a flowchart of an identity authentication method according to an embodiment of the present disclosure.
- FIG. 5 is a schematic structural diagram of an identity authentication apparatus according to an embodiment of the present disclosure.
- FIG. 6 is another schematic structural diagram of an identity authentication apparatus according to an embodiment of the present disclosure.
- FIG. 7 is another flowchart of an identity authentication method according to an embodiment of the present disclosure.
- FIG. 8 is another flowchart of an identity authentication method according to an embodiment of the present disclosure.
- FIG. 9 is another flowchart of an identity authentication method according to an embodiment of the present disclosure.
- FIG. 10 is a schematic structural diagram of an identity authentication apparatus according to an embodiment of the present disclosure.
- FIG. 11 is another schematic structural diagram of an identity authentication apparatus according to an embodiment of the present disclosure.
- FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
- Embodiments of the present disclosure can be applied to electronic devices such as terminals, computer systems, and servers, which can operate with many other general-purpose or special-purpose computing system environments or configurations.
- Examples of well-known terminals, computing systems, environments, and/or configurations suitable for use with electronic devices include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and so on.
- An electronic device may be described in the general context of computer system executable instructions, such as program modules, executed by a computer system.
- program modules may include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types.
- the computer system / server can be implemented in a distributed cloud computing environment.
- tasks are performed by remote processing devices linked through a communication network.
- program modules may be located on a local or remote computing system storage medium including a storage device.
- An embodiment of the present disclosure provides an identity authentication method. As shown in FIG. 1A, the method includes:
- Face detection is performed on the image to be processed through the first neural network to obtain a face detection result; and the second neural network performs document detection on the to-be-processed image to obtain the document detection result.
- The image to be processed in the embodiments of the present disclosure may be an image acquired through a camera, or an image received from another device; the received image may itself be an acquired image, or may be obtained by processing one or more acquired images.
- The image to be processed may be a static image (that is, an image acquired separately) or an image in a video (that is, an image selected from an acquired video according to a preset standard, or selected randomly); either can be used for identity authentication in the embodiments of the present disclosure.
- The embodiments of the present disclosure place no restriction on attributes of the image such as its source, nature, or size.
- Face detection may be performed on the image to be processed using face detection algorithms based on image processing, for example rough segmentation based on histograms, singular-value-feature face detection algorithms, dyadic-wavelet-transform-based face detection algorithms, and so on.
- Document detection may use image-processing-based document detection algorithms, for example the edge detection method, the mathematical morphology method, texture-analysis-based positioning, the line detection and edge statistical method, genetic algorithms, the Hough transform and contour-line method, wavelet-transform-based methods, and so on.
- A face detection algorithm can be used to find the position of the face in the image to be processed, and a document detection algorithm can be used to find the position of the document; based on the relationship between the found document position and face position, it can be determined whether the image to be processed is a photo of a user holding an ID card.
- This can help staff quickly screen out qualified images and improve work efficiency.
- The two faces in the image to be processed can also be compared, helping the staff quickly judge whether the two faces in the photo belong to the same person.
- The response time is short and processing can be done in real time, which improves work efficiency and user experience; the recognition accuracy is higher than that of the human eye, avoiding staff errors.
- When face detection is performed on the image to be processed through the first neural network, the first neural network may be trained in advance using sample images, so that the trained first neural network can effectively detect faces in images.
- When document detection is performed through the second neural network, the second neural network may likewise be trained in advance using sample images, so that the trained second neural network can effectively detect documents in images.
- the above-mentioned face detection result may include, but is not limited to, at least one of the following: the number of faces included in the image to be processed and the position information of each face in the image to be processed.
- the document detection result may include, for example, but is not limited to, at least one of the following: the number of documents included in the image to be processed and the position information of each document in the image to be processed.
- The position information of a human face in the image to be processed may be expressed, for example, as the coordinates of the four vertices of the face detection frame (which may be referred to as the first detection frame) in the image to be processed. Based on these four vertex coordinates, the position of the face detection frame, and hence of the face, in the image to be processed can be determined.
- The position information of a face in the image to be processed can also be expressed as the coordinates of the center point of the face detection frame (that is, the first detection frame) in the image to be processed, together with the length and width of the face detection frame. Based on the center-point coordinates and the length and width, the position of the face detection frame, and hence of the face, in the image to be processed can be determined.
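The two box representations described above are interchangeable; a minimal sketch (the exact coordinate convention, top-left origin with corner and center tuples, is an assumption, not fixed by the disclosure):

```python
def corners_to_center(x1, y1, x2, y2):
    """Convert a detection frame given by its top-left and bottom-right
    vertices into (center_x, center_y, width, height)."""
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

def center_to_corners(cx, cy, w, h):
    """Inverse conversion: center point plus length and width back to the
    top-left and bottom-right vertices."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

Either representation determines the frame's position in the image, so a detector may emit whichever is more convenient.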
- the credentials in the embodiment of the present disclosure refer to items used to prove the identity of a user, such as an ID card, a passport, a student ID, an employee card, and the like.
- The position information of the document in the image to be processed can be expressed, for example, as the coordinates of the four vertices of the document's object detection frame (which may be referred to as the second detection frame) in the image to be processed. Based on these four vertex coordinates, the position of the object detection frame, and hence of the document, in the image to be processed can be determined.
- The position information of the document can also be expressed as the coordinates of the center point of the object detection frame (that is, the second detection frame) in the image to be processed, together with the length and width of the object detection frame. Based on the center-point coordinates and the length and width, the position of the object detection frame, and hence of the document, in the image to be processed can be determined.
- A valid identity authentication image refers to an image that satisfies a preset requirement, for example that the faces and the document included in the image to be processed meet preset requirements in terms of position and quantity.
- For example, when the required authentication image is a photo of a user holding an ID card, the valid identity authentication image should include one ID card, the ID card should include one face, and the image should include no less than one face outside the ID card.
- If the cumulative number of faces in the face detection result and the ID detection result is less than two, the number of ID cards is not unique, or the position verification of the faces and the ID card fails (the criterion being that the number of faces within the ID card area is exactly one and there is at least one face outside the ID card area), the image is not considered a valid identity authentication image (that is, it is not a valid handheld ID photo).
- If the image to be processed is a valid identity authentication image, operation 106 is performed. Otherwise, if the image to be processed is not a valid identity authentication image, the subsequent process is not performed, or a prompt message is output indicating that the image to be processed is invalid.
- the identity authentication may include an identity check to determine whether the user and the certificate are consistent, that is, determine whether the certificate is the user's own certificate.
- identity authentication may include anti-counterfeit detection to determine if there is a forgery.
- identity authentication may include anti-counterfeit detection and identity verification. The embodiment of the present disclosure does not limit the specific implementation of identity authentication.
- face detection is performed on the image to be processed through the first neural network
- document detection is performed on the image to be processed through the second neural network.
- identity authentication is performed according to a face detection result and a document detection result.
- The embodiments of the present disclosure use a neural network, through a deep learning method, to identify whether the image to be processed is a valid identity authentication image, and can quickly screen out qualified images for user identity authentication, improving work efficiency; the user is then authenticated based on the valid identity authentication image without manual review, which saves costs, improves work efficiency and processing speed, avoids possible errors in manual review, and improves the accuracy of the authentication result.
- the above-mentioned document detection result may include at least one of the following: the number of faces included in the document detected in the image to be processed, the position information of the face included in the document, and so on.
- The method may further include: determining the number of faces included in the document according to the position information of faces in the image to be processed included in the face detection result and the position information of the document in the image to be processed included in the document detection result.
- In operation 104, it may be determined whether the number of documents in the document detection result meets a first preset requirement, whether the number of faces in the face detection result meets a second preset requirement, and whether the number of faces in the detected document meets a third preset requirement; when all three requirements are met, it is determined that the image to be processed is a valid identity authentication image.
- For example, the requirements may be that the number of documents in the document detection result is 1 (the first preset requirement), the number of faces in the face detection result is greater than or equal to 2 (the second preset requirement), and the number of faces in the document is 1 (the third preset requirement).
- If the number of faces in the face detection result is greater than 2, the number of faces outside the document area may be greater than one; this may be because, in addition to the face of the authenticated user, the image also includes the faces of onlookers.
- If the number of faces in the face detection result is less than 2, the number of documents is not unique, or the position relationship between the faces and the document is incorrect (the criterion for a correct position relationship being that the number of faces within the document area is exactly one and there is at least one face outside the document area), the image to be processed is not considered a valid identity authentication image.
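The three preset requirements above reduce to a simple count check; a minimal sketch (the function name and signature are illustrative, not part of the disclosure):

```python
def is_valid_auth_image(num_documents, num_faces_total, num_faces_in_document):
    """Check the example requirements: exactly one document, at least two
    faces in the whole image, and exactly one face inside the document
    (so at least one face lies outside it)."""
    return (num_documents == 1
            and num_faces_total >= 2
            and num_faces_in_document == 1)
```

Only when all three conditions hold is the image treated as a valid identity authentication image and passed to the subsequent operations.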
- An image acquisition device collects a photo of a user holding an ID card, as shown in FIG. 3B.
- Performing identity authentication according to the face detection result and the document detection result may include: determining, based on the face detection result and the document detection result, the similarity between the face included in the document (referred to as the first face 31) and a face outside the document in the image to be processed (referred to as the second face 32); and obtaining an identity check result according to the similarity between the first face and the second face.
- For example, an image of the first face and an image of the second face may be obtained from the image to be processed based on the face detection result and the document detection result; feature extraction is then performed on the image of the first face to obtain a first feature, and on the image of the second face to obtain a second feature.
- The second face is the largest face outside the document in the image to be processed.
- feature extraction may be performed through a neural network; and based on the first feature and the second feature, a similarity between the first face and the second face is determined.
- For example, the similarity between the first feature and the second feature may be compared through a neural network; an identity check result is obtained according to whether the similarity between the first feature and the second feature is greater than a preset threshold.
- The preset threshold can be set according to actual needs, such as the rigor of user identity authentication required by the current business, the performance of the first and second neural networks, and the image acquisition environment, and can be adjusted as those needs change. For example, for financial services with high security requirements, the required performance of the first and second neural networks is high and the preset threshold can be set higher (for example, 98%), that is, the similarity between the first feature and the second feature must reach 98% or above for the image to be processed to pass identity authentication, ensuring the security of the financial service; for services with lower security requirements or poorer image acquisition environments, the preset threshold can be set lower (for example, 80%), that is, the image to be processed passes identity authentication when the similarity between the first feature and the second feature reaches 80% or above, balancing the security of the service against the feasibility of user identity authentication based on the image to be processed.
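A hedged sketch of the threshold comparison: cosine similarity is one common choice of similarity measure (the disclosure does not fix a particular one), and the 98% default mirrors the financial-service example above:

```python
import math

def cosine_similarity(f1, f2):
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

def identity_check(first_feature, second_feature, threshold=0.98):
    """Pass the identity check when the similarity between the document
    face feature and the outside face feature exceeds the preset
    threshold (98% here, as in the high-security example)."""
    return cosine_similarity(first_feature, second_feature) > threshold
```

Lowering `threshold` to 0.80 would correspond to the lower-security example in the text.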
- The neural network used for feature extraction of the face image in the document and the face image outside the document, and for comparing the similarity between the extracted first feature and second feature, may be trained in advance, so that the trained neural network can effectively extract features of the two face images and accurately compare their similarity, thereby determining whether the face in the document and the face outside the document belong to the same person.
- When the image to be processed includes exactly two faces, the face located outside the document is directly determined as the above-mentioned second face.
- When the number of faces included in the image to be processed is greater than 2, the image may include, in addition to the face of the authenticated user, the faces of onlookers. It can be assumed that the authenticated user is closest to the image acquisition device, so that face is the largest, while onlookers are farther from the device and their faces are relatively smaller than the face of the authenticated user.
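The largest-face heuristic above might be sketched as follows, assuming faces are given as axis-aligned boxes and a flag marks whether each lies inside the document (both assumptions for illustration):

```python
def box_area(box):
    """Area of an (x1, y1, x2, y2) box; degenerate boxes count as 0."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def largest_face_outside(faces, inside_document):
    """faces: list of (x1, y1, x2, y2) boxes; inside_document: parallel
    list of booleans. Returns the largest face box outside the document,
    on the assumption that the authenticated user stands closest to the
    camera, or None if every face lies inside the document."""
    outside = [f for f, inside in zip(faces, inside_document) if not inside]
    return max(outside, key=box_area) if outside else None
```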
- The embodiments of the present disclosure use a neural network to perform feature extraction and similarity comparison on the face image in the document and the largest face image outside the document, which can effectively identify whether the two belong to the same user, thereby quickly and accurately determining whether the two faces are the same person, with short response time and high accuracy; this can effectively improve work efficiency and user experience, and avoid visual recognition errors.
- An embodiment of the present disclosure provides an identity authentication method. As shown in FIG. 1B, the method includes:
- Face detection is performed on the image to be processed through the first neural network to obtain a face detection result, and document detection is performed on the image to be processed through the second neural network to obtain a document detection result.
- The face detection result includes at least one of the following: the number of faces included in the image to be processed, and position information of the faces in the image to be processed; and/or the document detection result includes at least one of the following: the number of documents included in the image to be processed, and position information of the document in the image to be processed.
- The document face information includes at least one of the following: the number of faces included in the document detected in the image to be processed, and position information of the faces included in the document.
- The number of faces included in the document is less than or equal to the number of faces included in the image to be processed, and the position information of the faces included in the document overlaps with the position information of the faces in the image to be processed; that is, the position information of the faces included in the document is a subset of the position information of the faces in the image to be processed.
- Operations 1041 and 1042 in this embodiment provide an implementation manner for implementing operation 104 in the method shown in FIG. 1A.
- In operation 1041, determining the document face information based on the face detection result and the document detection result includes: determining the position information and the number of the faces in the image to be processed, where the position information of the faces in the image to be processed includes the position information of the faces included in the document, and the number of faces in the image to be processed includes the number of faces in the document. For example, the number of faces in the image to be processed is 2, namely face 1 and face 2; the position information of face 1 in the image to be processed is wz1, and the position information of face 2 is wz2. If the position of the document in the image to be processed is wz3 and the range of wz3 includes wz2, it can be determined that the number of faces included in the document is 1 and the position information of the face included in the document is wz2.
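The wz1/wz2/wz3 example reduces to a box-containment test; a minimal sketch under the assumption that positions are expressed as axis-aligned boxes (x1, y1, x2, y2):

```python
def box_contains(outer, inner):
    """True when the box `inner` lies entirely within the box `outer`
    (both given as (x1, y1, x2, y2))."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def faces_in_document(face_boxes, document_box):
    """Return the position information of the faces whose boxes fall
    inside the document box, as in the wz2-within-wz3 example."""
    return [f for f in face_boxes if box_contains(document_box, f)]
```

With wz1 outside wz3 and wz2 inside wz3, this yields one document face at wz2, matching the example.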
- In operation 1042, determining whether the image to be processed is a valid identity authentication image based on the document face information, the face detection result, and the document detection result includes: in response to the number of documents in the document detection result meeting a first preset requirement, the number of faces in the face detection result meeting a second preset requirement, and the number of faces in the document included in the document face information meeting a third preset requirement, determining that the image to be processed is a valid identity authentication image.
- An embodiment of the present disclosure provides another identity authentication method. As shown in FIG. 2, the method includes:
- If the image to be processed is a valid identity authentication image, operation 206 is performed. Otherwise, if the image to be processed is not a valid identity authentication image, the subsequent process is not performed, or a prompt message is output indicating that the image to be processed is invalid.
- a neural network may be used to perform feature extraction and similarity comparison on the first face in the document and the second face outside the document to confirm the first face and Whether the second face outside the document is the face of the same user.
- 210: Use text recognition, such as an Optical Character Recognition (OCR) algorithm, to perform text recognition on the document to obtain the text information of the document.
- the text information may include, but is not limited to, any one or more of the following: name, ID number, address, validity period, etc.
- FIG. 3B an example of a valid identity authentication image in the embodiment of the present disclosure.
- the OCR algorithm is used to perform text recognition on the document 33.
- The text information 34 on the document can be quickly read, and a work order can be automatically filled in based on the text information, which can greatly improve the work efficiency of customer service staff and save labor costs.
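After OCR, the raw text must be split into the fields listed above (name, ID number, and so on). A hypothetical parsing sketch; the field label and the 18-character ID-number pattern follow the mainland ID-card layout and are illustrative assumptions, with `ocr_text` supplied by whatever OCR engine is used:

```python
import re

def parse_id_text(ocr_text):
    """Split raw OCR output from an ID card into named fields. Only the
    name and ID number are sketched here; address and validity period
    would be parsed analogously."""
    fields = {}
    name = re.search(r"Name[:：]\s*(\S+)", ocr_text)
    if name:
        fields["name"] = name.group(1)
    # 17 digits followed by a digit or X, the standard ID-number shape.
    id_no = re.search(r"\b(\d{17}[\dXx])\b", ocr_text)
    if id_no:
        fields["id_number"] = id_no.group(1)
    return fields
```

The extracted fields can then be written into the work order automatically, as described above.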
- The use of face recognition and document OCR recognition technology can effectively solve the problems of traditional industries that use handheld ID cards for identity verification, completing in real time the screening of handheld ID card photos, the comparison of the two faces in the photo, and the extraction of ID information.
- Optionally, the text information of the certificate may be checked against a user information database:
- The user information database may be, for example, a user information database provided by the Ministry of Public Security or another authoritative institution, in which user information is stored to ensure the authority of the user information source and the correctness of the user information.
- If the text information of the certificate is consistent with the user information stored in the user information database, the result of the identity verification is that the authentication is passed; otherwise, if the text information of the certificate is inconsistent with the stored user information, the result of the identity verification is that the authentication is not passed.
- Optionally, after obtaining the text information of the certificate, the method may further include storing user registration information, where the user information may include any one or more of the following: the text information of the certificate, the identity authentication image (that is, the image to be processed), the image of the second face, and feature information of the second face.
- After the user's registration information is successfully stored, the user has successfully registered with the corresponding service and can then use the service.
- the embodiments of the present disclosure can be applied to any service that requires real-name authentication, such as a transaction service, an application (Application, APP) service, an access control service, and the like.
- In the process of using the service, the user is authenticated based on the user information stored in the service database; after passing identity authentication, the user can continue to use the service.
- an anti-counterfeit detection of the image to be processed may also be performed based on a face detection result and a document detection result to obtain an anti-counterfeit detection result of the image to be processed.
- identity authentication includes anti-counterfeit detection and identity inspection.
- anti-counterfeiting detection may be performed first, and whether to perform identity verification is determined based on the results of the anti-counterfeiting detection. For example, in response to the anti-counterfeit detection result being that the anti-counterfeit detection is passed, an operation of performing an identity check according to a face detection result and a document detection result is performed. Otherwise, if the result of the anti-counterfeiting detection is that the anti-counterfeiting detection fails, the operation of performing an identity check according to the face detection result and the document detection result is not performed.
- the anti-counterfeit detection and identity check may be performed in parallel, and the identity authentication result of the image to be processed is determined based on the anti-counterfeit detection result of the image to be processed and the result of the identity check.
- If the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is passed and the result of the identity check is that the identity check is passed, it is determined that the image to be processed passes identity authentication. Otherwise, if the anti-counterfeit detection result is that the anti-counterfeit detection fails and/or the result of the identity check is that the identity check fails, it is determined that the image to be processed fails identity authentication.
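The sequential and parallel schemes above can be sketched as follows; the callables standing in for the two detections are assumptions for illustration:

```python
def authenticate_sequential(anti_counterfeit, identity_check):
    """Sequential scheme: run anti-counterfeit detection first and only
    perform the identity check if it passes. Both arguments are callables
    returning a boolean pass/fail result."""
    if not anti_counterfeit():
        return False  # identity check is skipped entirely
    return identity_check()

def authenticate_parallel(anti_counterfeit_result, identity_check_result):
    """Parallel scheme: both detections have already run; the image
    passes identity authentication only when both results pass."""
    return anti_counterfeit_result and identity_check_result
```

The sequential variant saves the cost of the identity check on forged images; the parallel variant reduces latency when the two detections can run concurrently.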
- Performing anti-counterfeit detection according to the face detection result and the document detection result to obtain the anti-counterfeit detection result includes: obtaining a face area image and a document area image from the image to be processed based on the face detection result and the document detection result; performing forged clue detection on the image to be processed, the face area image, and the document area image respectively; and obtaining the anti-counterfeit detection result of the image to be processed based on the result of the forged clue detection.
- the performing identity authentication according to the face detection result and the credential detection result to obtain the identity authentication result of the image to be processed further includes: according to the face detection result and the credential The detection result is subjected to anti-counterfeit detection to obtain an anti-counterfeit detection result; based on the anti-counterfeit detection result and the identity check result, an identity authentication result of the image to be processed is determined.
- performing identity authentication according to the face detection result and the document detection result to obtain the identity authentication result of the image to be processed includes: according to the face detection result and the document detection As a result, an anti-counterfeiting detection is performed, and an anti-counterfeiting detection result is obtained.
- Obtaining the anti-counterfeit detection result of the image to be processed based on the result of the forged clue detection includes: in response to the forged clue detection result indicating that none of the image to be processed, the face area image, and the document area image contains forged clues, determining that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is passed; and/or, in response to the forged clue detection result indicating that any one or more of the image to be processed, the face area image, and the document area image contains forged clues, determining that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection fails.
- Feature extraction may be performed on the image to be processed, the face area image, and the document area image to obtain the features of each; it is then detected whether the features of the image to be processed, the features of the face area image, and the features of the document area image contain forged clue information.
- If forged clue information is detected in any of them, the anti-counterfeit detection result of the image to be processed is determined to be that the anti-counterfeit detection fails; only if no forged clue information is detected in the features of the image to be processed, the features of the face area image, and the features of the document area image is the anti-counterfeit detection result determined to be that the anti-counterfeit detection is passed.
- Whether the features contain forged clue information can be detected as follows: the features of the image to be processed are detected to determine whether they contain forged clue information; the features of the face area image are detected to determine whether they contain forged clue information; and the features of the document area image are detected to determine whether they contain forged clue information.
- the foregoing operations of performing forged clue detection on the image to be processed, the face area image, and the document area image, respectively, may be performed through a third neural network.
- The image to be processed, the face area image, and the document area image are respectively input to the third neural network for processing, which outputs, for each of them, probability information on whether forged clue information is contained, or indication information indicating whether forged clue information is contained.
- a to-be-processed image, a face area image, and a document area image are simultaneously input to a third neural network.
- The third neural network includes a three-branch feature extraction network, which performs feature extraction on the three input images and connects the extracted features to obtain a connected feature; finally, based on the connected feature, probability information or indication information is output on whether at least one of the image to be processed, the face area image, and the document area image contains forged clue information.
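The three-branch structure might be sketched as follows, with `extract` and `classify` standing in for the trained branch sub-networks and the final classifier (assumptions for illustration; in the real network these are learned end to end):

```python
def three_branch_forgery_score(whole_image, face_region, document_region,
                               extract, classify):
    """Sketch of the three-branch structure: one feature extractor per
    input, the three feature lists concatenated ("connected"), then a
    classifier maps the connected feature to a forged-clue score."""
    connected = (extract(whole_image)
                 + extract(face_region)
                 + extract(document_region))
    return classify(connected)
```

In a deep-learning framework the same pattern would be three convolutional branches feeding a shared head; this list-based version only illustrates the data flow.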
- the third neural network is pre-trained based on a training image set including fake clue information.
- the third neural network may be a deep neural network.
- the deep neural network refers to a multilayer neural network, such as a multilayer convolutional neural network.
- By training the third neural network in advance, the forged clue information contained in the various extracted features described in the embodiments of the present disclosure can be learned by the third neural network; any image containing such forged clue information will then be detected when input to the third neural network and judged as a fake image that cannot pass the anti-counterfeit detection, while otherwise it is a real image that can pass the anti-counterfeit detection.
- the training image set may include multiple images that can be used as positive samples for training and multiple images that can be used as negative samples for training.
- A positive sample image is a real image that does not include forged clue information, which can include the whole image as well as the face area image and document area image extracted from it; a negative sample image is a falsified image that includes forged clue information.
- The face area image and the document area image can be obtained from the image to be processed such that the proportion of the face within the face area image satisfies a fourth preset requirement, and/or the proportion of the document within the document area image satisfies the fourth preset requirement.
- The fourth preset requirement may, for example, require that the proportion of the face within the face area image, and the proportion of the document within the document area image, be greater than or equal to 1/4 and less than or equal to 9/10.
- Optionally, the ratio can range from 1/2 to 3/4; setting the proportion of the face within the face area image, and the proportion of the document within the document area image, to the range 1/2 to 3/4 can improve the efficiency of anti-counterfeit detection while preserving the features of the face area image and the detection effect on the document area image.
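The fourth preset requirement is an area-ratio test; a sketch assuming boxes in (x1, y1, x2, y2) form, with the 1/4 and 9/10 bounds taken from the text:

```python
def face_proportion(face_box, region_box):
    """Area of the face box as a fraction of the cropped region's area;
    both boxes are (x1, y1, x2, y2)."""
    fw = face_box[2] - face_box[0]
    fh = face_box[3] - face_box[1]
    rw = region_box[2] - region_box[0]
    rh = region_box[3] - region_box[1]
    return (fw * fh) / (rw * rh)

def meets_fourth_requirement(face_box, region_box, lo=0.25, hi=0.9):
    """Check the fourth preset requirement: the face occupies between
    1/4 and 9/10 of the face area image (the optional 1/2 to 3/4 range
    would tighten lo and hi)."""
    return lo <= face_proportion(face_box, region_box) <= hi
```

The same check applies to the document within the document area image.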
- A training image set including forged clue information may be obtained by: acquiring multiple images that can be used as positive samples for training, and performing, on at least part of at least one of the acquired positive samples, image processing that simulates forged clue information, to generate at least one image that can be used as a negative sample for training.
- The method may further include: collecting an image sequence or video sequence including a face and a document through a visible light camera of a terminal, and selecting the image to be processed from the image sequence or video sequence based on a preset frame selection condition.
- The preset frame selection conditions may include, but are not limited to, any one or more of the following: whether the face and the document are located in the center of the image, whether the edges of the face are completely included in the image, whether the edges of the document are completely included in the image, the proportion of the face in the image, the proportion of the document in the image, the angle of the face (that is, whether the face is frontal), the sharpness of the image, the exposure of the image, and so on. According to these frame selection conditions, an image of higher overall quality can be selected for identity authentication, which can improve the accuracy of the identity authentication result.
- an image with higher comprehensive quality may be selected from the video sequence as the image to be processed based on the foregoing frame selection conditions, where the standard for an image with higher comprehensive quality may be, for example, an image that meets any one or more of the following indicators:
- the face and the document are located in the center of the image. The edges of the face and the document are completely included in the image.
- the proportion of the face in the image is about 1/2 to 3/4, and the proportion of the document in the image is about 1/2 to 3/4; the face is frontal; and the image has higher sharpness and higher exposure.
- the above selection can automatically evaluate indicators such as the orientation, sharpness, and light intensity of the face image through a configured algorithm, and, according to preset criteria, select from the entire video sequence the image or several images with the best indicators.
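A minimal sketch of such frame selection, under the assumption that each indicator has already been computed and normalized per frame (all names below are hypothetical, and a real system would derive the indicator values from the image itself):

```python
def frame_score(frame_info, weights=None):
    """frame_info: dict of quality indicators for one frame, e.g.
    {'centered': 1.0, 'edges_complete': 1.0, 'face_ratio': 0.6,
     'frontal': 0.9, 'sharpness': 0.7, 'exposure': 0.8}.
    Returns a weighted comprehensive quality score; by default every
    present indicator is weighted equally."""
    weights = weights or {k: 1.0 for k in frame_info}
    return sum(weights.get(k, 0.0) * v for k, v in frame_info.items())

def select_best_frame(frames):
    """Pick, from the whole sequence, the frame whose comprehensive
    quality score is highest -- the image to be processed."""
    return max(frames, key=frame_score)
```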
- a selected to-be-processed image that does not meet the preset criteria may also be pre-processed, and identity authentication is then performed on the pre-processed to-be-processed image.
- the above-mentioned preset standards may include, but are not limited to, any one or more of the following: a preset size, a normal (z-score) distribution standard, a preset image brightness, and so on. Accordingly, preprocessing an image to be processed that does not meet a preset standard may be: performing, on that image, any one or more of the following operations corresponding to the standard it fails to meet: size adjustment or cropping, normal normalization, brightness adjustment (such as dark-light improvement based on histogram equalization), and so on.
- normal normalization is a statistical data processing method: the pixel values in an image are processed so that they follow a standard normal distribution, which eliminates the adverse effect that an uneven pixel distribution has on the recognition of the image.
- the dark-light improvement preprocessing based on histogram equalization is mainly aimed at real handheld-document anti-counterfeit detection scenes, in which the face and the document may be under dark lighting conditions that easily reduce the accuracy of face anti-counterfeiting and document anti-counterfeiting. Dark-light improvement readjusts the brightness distribution of the image, so that an image originally captured in low light can meet the image quality requirements of identity authentication, thereby yielding a more accurate identity authentication result.
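The two preprocessing operations above can be sketched in a few lines; this is an illustrative implementation of textbook z-score normalization and histogram equalization, not the embodiment's exact procedure, and the function names are hypothetical:

```python
import statistics

def zscore_normalize(pixels):
    """'Normal normalization': map pixel values to zero mean and unit
    variance (falls back to sigma=1 for a constant image)."""
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels) or 1.0
    return [(p - mu) / sigma for p in pixels]

def equalize_histogram(pixels, levels=256):
    """Classic histogram equalization for dark-light improvement:
    remap intensities through the cumulative distribution so the
    brightness spreads over the full dynamic range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf = 0
    lut = [0] * levels
    for v in range(levels):
        cdf += hist[v]
        lut[v] = round((levels - 1) * cdf / n)
    return [lut[p] for p in pixels]
```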
- the identity authentication method may further include:
- a neural network may be used to perform feature extraction on the image of the face to be authenticated, and query whether there is user information in the service database that matches the feature information of the face to be authenticated.
- as to the query result: if there is user information in the service database that matches the feature information of the face to be authenticated, it is determined that the authentication result of the face to be authenticated is that authentication has passed; otherwise, if there is no user information in the service database that matches the feature information of the face to be authenticated, it is determined that the authentication result of the face to be authenticated is that authentication has not passed.
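A sketch of such a feature-matching query, assuming stored features and the probe feature are fixed-length vectors compared by cosine similarity against a threshold (the similarity measure, the threshold value, and all names are assumptions, not taken from the disclosure):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def query_user(service_db, probe_feature, threshold=0.8):
    """service_db: list of (user_info, feature) pairs. Returns the
    user_info whose stored feature best matches the probe, if that
    match clears the threshold; otherwise None (authentication fails
    when no matching user information exists)."""
    best = max(service_db,
               key=lambda rec: cosine_similarity(rec[1], probe_feature),
               default=None)
    if best and cosine_similarity(best[1], probe_feature) >= threshold:
        return best[0]
    return None
```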
- the user requesting the service can be authenticated based on the user's registration information, and the service can continue to be used only after the user passes authentication, thereby improving the security of the service.
- the method may further include: performing anti-counterfeit detection on the image including the face to be authenticated, to obtain an anti-counterfeit detection result of the image including the face to be authenticated.
- the authentication result of the face to be authenticated is determined according to whether the query finds user information in the service database matching the feature information of the face to be authenticated, and whether the image including the face to be authenticated passes the anti-counterfeit detection.
- if there is user information in the service database that matches the feature information of the face to be authenticated, and the image including the face to be authenticated passes the anti-counterfeit detection, it is determined that the authentication result of the face to be authenticated is that authentication has passed; otherwise, if there is no user information in the service database that matches the feature information of the face to be authenticated, and/or the image including the face to be authenticated does not pass the anti-counterfeit detection, it is determined that the authentication result of the face to be authenticated is that authentication has not passed.
- performing anti-counterfeit detection on an image including a face to be authenticated may include: obtaining a face region image and a document region image from the image including the face to be authenticated; performing forged clue detection on the image including the face to be authenticated, the face region image, and the document region image, respectively; and obtaining, based on the result of the forged clue detection, an anti-counterfeit detection result of the image including the face to be authenticated.
- when forged clue detection is performed on the image including the face to be authenticated, the face region image, and the document region image separately, a method similar to that used for anti-counterfeit detection of the image to be processed may be adopted.
- feature extraction is performed on the image including the face to be authenticated, the face region image, and the document region image, to obtain the features of the image including the face to be authenticated, the features of the face region image, and the features of the document region image; it is then detected whether the features of the image including the face to be authenticated, the features of the face region image, and the features of the document region image contain forged clue information.
- the features extracted from the image to be processed, or from an image including a face to be authenticated, a face region image, and a document region image, may include, but are not limited to, any of the following: local binary pattern (LBP) features, histogram of sparse code (HSC) features, panorama (LARGE) features, face (SMALL) features, and face detail map (TINY) features.
- the feature items included in the extracted features may be updated according to the possible forged clue information.
- the LBP feature can highlight edge information in the image; the HSC feature can more clearly reflect reflection and blur information in the image; the LARGE feature is a full-image feature, based on which the most obvious forgery clues can be extracted; the face map (SMALL) is a region cropped at several times the size of the face frame (for example, 1.5 times), containing the face and the part where the face meets the background, based on which reflections, screen moire patterns of remake devices, and fake edges of models or masks can be extracted; the face detail map (TINY) is a region cropped at the size of the face frame, containing the face, based on which forgery clues such as image edits (PS, i.e. edits made with image editing software such as Photoshop), remake-screen moire, and the texture of models or masks can be extracted.
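For concreteness, the per-pixel computation behind the LBP feature mentioned above can be sketched as follows (this is the standard 8-neighbour LBP, not necessarily the exact variant used in the embodiment; function and parameter names are illustrative):

```python
def lbp_code(patch):
    """patch: 3x3 list of grayscale values. Returns the 8-bit local
    binary pattern of the center pixel: each neighbour contributes a
    set bit when its value is >= the center, reading clockwise from
    the top-left corner. A histogram of these codes over the image
    forms the LBP feature, which emphasizes edge information."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= c:
            code |= 1 << bit
    return code
```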
- the above-mentioned fake clue information has human eye observability under visible light conditions, that is, the human eye can observe these fake clue information under visible light conditions.
- the fake clue information may include, but is not limited to, any one or more of the following: forged clue information of the imaging medium, forged clue information of the imaging carrier, and clue information of a forged face that actually exists.
- the forged clue information of the imaging medium is also called 2D (2D) type of forged clue information.
- the forged clue information of the imaging carrier can be called 2.5D-type forged clue information.
- the clue information of a forged face that actually exists can be called 3D-type forged clue information. The forged clue information to be detected may be updated correspondingly according to possible face forgery modes.
- the electronic device can "discover" the boundaries between various real and fake faces, and realize various types of anti-counterfeit detection using only universal hardware devices such as visible light cameras, so as to resist forged face attacks and improve security.
- the forged clue information of the imaging medium may include, but is not limited to, edge information, reflection information, and / or material information of the imaging medium.
- the forged clue information of the imaging carrier may include, but is not limited to, a screen edge, a screen reflection, and/or a screen moire of the display device.
- the clue information of a fake face that actually exists may include, but is not limited to, characteristics of a face with a mask, characteristics of a model face, and characteristics of a sculpture face.
- the fake clue information in the embodiments of the present disclosure can be observed by human eyes under visible light conditions.
- the fake clue information can be divided, by dimension, into 2D, 2.5D, and 3D fake faces.
- the 2D fake face refers to a face image printed from a paper material.
- the 2D forged clue information may include, for example, forged clues such as the paper material of the paper face, the reflection of the paper surface, and the edges of the paper.
- the 2.5D fake face refers to the face image carried by a carrier device such as a video remake device.
- the 2.5D forged clue information may include, for example, the screen moire, screen reflection, and screen edges of a carrier device such as a video remake device.
- 3D fake faces refer to real fake faces, such as masks, models, sculptures, 3D printing, etc.
- the 3D fake faces also have corresponding forged clue information, such as the seams of masks, the overly abstract appearance of models, or skin clues such as skin that is too smooth.
- the embodiments of the present disclosure can achieve effective anti-counterfeit detection under visible light conditions without relying on special multi-spectral equipment or other special hardware, reducing the hardware cost and making the method convenient to apply in various face detection scenes; it is especially suitable for general mobile applications.
- any of the identity authentication methods provided by the embodiments of the present disclosure may be executed by any appropriate electronic device having data processing capabilities.
- any of the identity authentication methods provided in the embodiments of the present disclosure may be executed by a processor.
- the processor executes any of the identity authentication methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory; details are not repeated below.
- a person of ordinary skill in the art may understand that all or part of the operations (steps) for implementing the foregoing method embodiments may be performed by a program instructing related hardware.
- the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the operations of the foregoing method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
- an embodiment of the present disclosure provides an identity authentication device.
- the apparatus may be used to implement the foregoing method embodiments of the present disclosure, but the embodiments of the present disclosure are not limited thereto.
- the device includes a first detection module 51, a second detection module 52, a first determining module 53, and an authentication module 54, where:
- the first detection module 51 is configured to perform face detection on an image to be processed through a first neural network to obtain a face detection result.
- the face detection result may include, but is not limited to, at least one of the following: the number of human faces included in the image to be processed and position information of the human face in the image to be processed.
- the position information of the human face in the image to be processed may be expressed as, for example: the vertex coordinates of the four vertices of the first detection frame of the face in the image to be processed; or the coordinates of the center point of the first detection frame of the face in the image to be processed, together with the length and width of the face detection frame.
- the second detection module 52 is configured to perform credential detection on an image to be processed through a second neural network to obtain a credential detection result.
- the document detection result may include, for example, but is not limited to, at least one of the following: the number of documents included in the image to be processed and the position information of the document in the image to be processed.
- the position information of the document in the image to be processed may be expressed as: vertex coordinates of the second detection frame of the document in the image to be processed; or, coordinates of the center of the second detection frame of the document in the image to be processed, The length and width of the second detection frame.
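The two equivalent detection-frame representations above (four vertex coordinates versus center point plus length and width) can be converted into each other. A minimal sketch with hypothetical names, assuming axis-aligned frames:

```python
def center_to_vertices(cx, cy, w, h):
    """Convert (center x, center y, width, height) to the four vertex
    coordinates of an axis-aligned detection frame, clockwise from
    the top-left."""
    x0, y0 = cx - w / 2, cy - h / 2
    x1, y1 = cx + w / 2, cy + h / 2
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]

def vertices_to_center(vertices):
    """Inverse conversion: four vertices -> (cx, cy, w, h)."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) + w / 2, min(ys) + h / 2, w, h)
```

Either form suffices to locate the face or document in the image to be processed.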
- the first determining module 53 is configured to determine whether the image to be processed is a valid identity authentication image, for example, a handheld certificate image, according to a face detection result and a document detection result.
- the authentication module 54 is configured to, in response to determining that the image to be processed is a valid identity authentication image, perform identity authentication according to a face detection result and a document detection result, to obtain an identity authentication result of the image to be processed.
- face detection is performed on the image to be processed through the first neural network
- document detection is performed on the image to be processed through the second neural network
- whether the image to be processed is a valid identity authentication image is determined according to the obtained face detection result and document detection result.
- identity authentication is performed according to a face detection result and a document detection result.
- the embodiment of the present disclosure uses a neural network to identify, through deep learning, whether an image to be processed is a valid identity authentication image, and can quickly screen out qualified images for user identity authentication, improving work efficiency; authenticating the user based on a valid identity authentication image requires no manual review, which saves costs, improves work efficiency and processing speed, avoids the errors possible in manual review, and improves the accuracy of the authentication result.
- the first determining module includes:
- a credential determining unit configured to determine credential face information based on the face detection result and the credential detection result
- the identity authentication determining unit is configured to determine whether the image to be processed is a valid identity authentication image based on the credential face information, the face detection result, and the credential detection result.
- the face information of the credential includes at least one of the following: the number of faces included in the credential detected in the image to be processed, and position information of the face included in the credential.
- the credential determination unit is configured to: determine the number and/or position information of the faces included in the document according to the position information, in the image to be processed, of the face included in the face detection result, and the position information, in the image to be processed, of the credential included in the credential detection result.
- the above document detection result may further include at least one of the following: the number of faces included in the document detected in the image to be processed, the position information of the face included in the document, and so on.
- the first determining module may be further configured to determine the number of faces included in the document according to the number of faces in the face detection result, the position information of the faces in the image to be processed included in the face detection result, and the position information of the document in the image to be processed included in the document detection result.
- the first determining module is configured to determine that the image to be processed is a valid identity authentication image in response to: the number of documents in the document detection result meeting the first preset requirement, the number of faces in the face detection result meeting the second preset requirement, and the number of faces included in the document in the document face information meeting the third preset requirement.
- the number of documents in the above document detection result meets the first preset requirement, the number of faces in the face detection result meets the second preset requirement, and the number of faces in the document meets the third preset requirement, for example:
- the number of documents in the document detection result is 1, the number of faces in the face detection result is greater than or equal to 2, and the number of faces in the detected document is 1.
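Those three example requirements can be sketched as one validity check. The containment test below (a face counts as "in the document" when its frame lies inside the document frame) is an assumption for illustration, as are all names; boxes are (x0, y0, x1, y1):

```python
def box_contains(outer, inner):
    """True if the inner box lies entirely within the outer box."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def is_valid_auth_image(document_boxes, face_boxes):
    """Example requirements from the text: exactly one document,
    at least two faces, and exactly one face inside the document
    (the credential photo)."""
    if len(document_boxes) != 1 or len(face_boxes) < 2:
        return False
    doc = document_boxes[0]
    faces_in_doc = sum(box_contains(doc, f) for f in face_boxes)
    return faces_in_doc == 1
```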
- the authentication module is configured to determine the similarity between the first face included in the document and the second face outside the document in the image to be processed based on the face detection result and the document detection result; According to the similarity between the first face and the second face, the result of the identity check is obtained.
- the embodiment of the present disclosure provides another identity authentication device.
- the authentication module 54 includes a first obtaining unit 541 configured to obtain an image of the first face and an image of the second face from the image to be processed based on the face detection result and the document detection result;
- a feature extraction unit 543 configured to perform feature extraction on the image of the first face to obtain a first feature, and perform feature extraction on the image of the second face to obtain a second feature;
- a first determining unit 544 configured to determine a similarity between the first face and a second face based on the first feature and a second feature;
- an authentication unit 545 configured to obtain the identity verification result according to the similarity between the first face and the second face.
- the apparatus in each of the foregoing embodiments may further include: a second determining module configured to, in a case where the number of faces included in the image to be processed is greater than 2, determine the largest face among the at least two faces included in the image to be processed as the second face, according to the position information of the faces in the image to be processed included in the face detection result and the position information of the document in the image to be processed included in the document detection result.
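A sketch of that selection rule: among the faces whose frames fall outside the document frame, pick the one with the largest area as the second (live) face. The containment simplification and all names are illustrative assumptions; boxes are (x0, y0, x1, y1):

```python
def box_area(box):
    """Area of an axis-aligned box, clamped at zero for degenerate boxes."""
    x0, y0, x1, y1 = box
    return max(0, x1 - x0) * max(0, y1 - y0)

def pick_second_face(face_boxes, document_box):
    """Return the largest face outside the document frame, or None if
    every detected face lies inside the document."""
    outside = [f for f in face_boxes
               if not (document_box[0] <= f[0] and document_box[1] <= f[1]
                       and f[2] <= document_box[2] and f[3] <= document_box[3])]
    return max(outside, key=box_area, default=None)
```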
- the authentication module may further include a text recognition unit 547 configured to, in response to determining that the similarity between the first face and the second face is greater than a preset threshold, perform text recognition on the document to obtain text information of the document, the text information including at least one of a name and a document number.
- the authentication unit 545 is further configured to authenticate the text information based on the user information database and obtain the result of the identity check.
- the authentication module may further include: a storage processing unit 546 configured to, in response to determining that the identity authentication result is that identity authentication has passed, store user information in a service database. The user information may include, for example, but is not limited to, any one or more of the following: the text information, the image to be processed, the image of the second face, the feature information of the second face, and so on.
- the authentication module further includes a query unit 542.
- the first obtaining unit 541 is further configured to obtain an image including a face to be authenticated in response to receiving the identity authentication request.
- the query unit 542 is configured to query whether there is user information in the service database that matches the image of the face to be authenticated.
- the first determining unit 544 is further configured to determine an authentication result of a face to be authenticated according to a result of the query.
- the authentication module 54 is further configured to perform anti-counterfeit detection according to the face detection result and the document detection result to obtain the anti-counterfeit detection result; based on the anti-counterfeit The detection result and the identity verification result determine the identity authentication result of the image to be processed.
- the apparatus may further include an anti-counterfeit detection module 55. The anti-counterfeit detection module 55 includes: a second obtaining unit 551 configured to obtain a face region image and a document region image from the image to be processed based on the face detection result and the document detection result; a forged clue detection unit 552 configured to perform forged clue detection on the image to be processed, the face region image, and the document region image, respectively; and a second determining unit 553 configured to obtain the anti-counterfeit detection result of the image to be processed based on the result of the forged clue detection.
- the proportion of the face included in the face region image within the face region image meets the fourth preset requirement; and/or, the proportion of the document included in the document region image within the document region image meets the fourth preset requirement.
- the fourth preset requirement may be, for example, that the ratio is greater than or equal to 1/4 and less than or equal to 9/10.
- the second determining unit is configured to: in response to the result of the forged clue detection indicating that none of the image to be processed, the face region image, and the document region image contains forged clue information, determine that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection has passed; and/or, in response to the result of the forged clue detection indicating that any one or more of the image to be processed, the face region image, and the document region image contains forged clue information, determine that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection has failed.
- the forged clue detection unit is configured to perform feature extraction on the image to be processed, the face region image, and the document region image, respectively, to obtain the features of the image to be processed, the features of the face region image, and the features of the document region image; Detect whether the features of the image to be processed, the features of the face area, and the features of the document area contain forged clue information.
- the extracted features may include, but are not limited to, any one or more of the following: local binary pattern features, sparsely encoded histogram features, panorama features, face features, face detail features, and so on.
- the fake clue information has human eye observability under visible light conditions.
- the forged clue information may include, but is not limited to, any one or more of the following: forged clue information of the imaging medium, forged clue information of the imaging carrier, clue information of a forged face that actually exists, and so on.
- the forged clue information of the imaging medium may include, but is not limited to: edge information, reflective information, and/or material information of the imaging medium; and/or, the forged clue information of the imaging carrier may include, but is not limited to: screen edges, screen reflections, and/or screen moire of the display device; and/or, the clue information of a forged face that actually exists may include, but is not limited to: characteristics of a masked face, characteristics of a model face, and characteristics of a sculpture face.
- the forged clue detection unit being configured to detect whether the features of the image to be processed, the features of the face region image, and the features of the document region image contain forged clue information includes: the forged clue detection unit is configured to detect the features of the image to be processed to determine whether they contain forged clue information; detect the features of the face region image to determine whether they contain forged clue information; and detect the features of the document region image to determine whether they contain forged clue information.
- alternatively, the forged clue detection unit may be configured to connect the features of the image to be processed, the features of the face region image, and the features of the document region image to obtain a connected feature, and determine whether the connected feature contains forged clue information.
- the forged clue detection unit is configured to perform forged clue detection on the image to be processed, the face region image, and the document region image, respectively, for example through a third neural network.
- an electronic device provided by an embodiment of the present disclosure includes: a memory configured to store a computer program; and a processor configured to execute the computer program stored in the memory, and when the computer program is executed, any one of the foregoing implementations of the present disclosure is implemented Example authentication method.
- FIG. 7 is a flowchart of an identity authentication method according to an embodiment of the present disclosure. As shown in Figure 7, the method includes:
- operation 1060 may include: performing feature extraction on the image to be processed, the face region image, and the document region image, to obtain the features of the image to be processed, the features of the face region image, and the features of the document region image, respectively; and detecting whether the extracted features of the image to be processed, the features of the face region image, and the features of the document region image contain forged clue information.
- the extracted features may include, but are not limited to, any of the following: LBP features, HSC histogram features, LARGE features, SMALL features, and TINY face features.
- when the result of the forged clue detection indicates that none of the image to be processed, the face region image, and the document region image contains forged clue information, it is determined that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection has passed.
- when the result of the forged clue detection indicates that any one or more of the image to be processed, the face region image, and the document region image contains forged clue information, it is determined that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection has failed (which can be regarded as a failure of identity authentication).
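The pass/fail rule above reduces to a simple aggregation over the three per-region detection results; a sketch with hypothetical names:

```python
def anti_counterfeit_result(clue_detections):
    """clue_detections: dict mapping region name ('image', 'face',
    'document') to True if forged clue information was found there.
    The image passes only when every region is clean; otherwise the
    offending regions are reported alongside the failure."""
    bad = sorted(r for r, has_clue in clue_detections.items() if has_clue)
    return ("pass", []) if not bad else ("fail", bad)
```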
- Identity authentication in some embodiments may include anti-counterfeit detection and / or identity verification.
- the anti-counterfeit detection (see the method shown in FIG. 7) is used to determine whether the image to be processed is counterfeit.
- for example, an image synthesized by image processing technology is a counterfeit image, and therefore cannot pass the anti-counterfeit detection; an image actually obtained by photographing the user holding the document, rather than a composite image, can pass the anti-counterfeit detection.
- the identity check (see the methods shown in Figures 1A, 1B, and 2) is used to determine whether the face in the image to be processed (which may be regarded as face 1) and the face in the document in the image to be processed (which may be regarded as face 2) are consistent.
- identity authentication includes anti-counterfeit detection and identity verification
- successful identity authentication includes passing anti-counterfeit detection and identity verification.
- the anti-counterfeit detection and the identity verification may be performed in no particular order: the anti-counterfeit detection may be performed first and then the identity verification, or the identity verification may be performed before the anti-counterfeit detection.
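Since the two sub-checks are independent and both must pass, the overall flow can be sketched as below; the callable-based interface and the `counterfeit_first` flag are assumptions for illustration:

```python
def authenticate(image, anti_counterfeit_check, identity_check,
                 counterfeit_first=True):
    """Run the two sub-checks in either order; identity authentication
    succeeds only when both pass. Short-circuit evaluation lets a
    failed first check skip the second. The checks are caller-supplied
    callables taking the image and returning a bool."""
    checks = [anti_counterfeit_check, identity_check]
    if not counterfeit_first:
        checks.reverse()
    return all(check(image) for check in checks)
```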
- the inventors have discovered that when face authentication and document anti-counterfeit detection technologies are currently used for identity authentication and identification, the face and the document are usually divided into two images for independent anti-counterfeit detection.
- this detection method has the following disadvantages: it cannot guarantee that the document and the user are in the same space-time dimension; independent real face photos and real credential information are relatively easy to obtain, and the credibility of the source of the photos cannot be guaranteed; and it is therefore quite possible for a real face to be paired with forged credentials, or a forged face with real credentials.
- an identity verification image including a face and a document is obtained, and a face region image and a document region image are obtained from an image to be processed; an image to be processed, a face region image, and a document region image Perform forged clue detection; determine the anti-forgery detection result of the image to be processed according to the result of the forged clue detection.
- the embodiment of the present disclosure proposes a new anti-counterfeit detection scheme in which a human face and a document appear in the same image, and the anti-counterfeit detection of the face and of the document is performed simultaneously, authenticating the authenticity of both at once. This ensures that a real person is holding a real document, prevents various forgery situations such as a real face holding a fake document or a fake face holding a real document, and improves the reliability of identity authentication.
- the method may further include: performing face detection and document detection on the image to be processed to obtain the face detection result and the document detection result; and determining whether the image to be processed is valid based on the face detection result and the document detection result.
- performing forged clue detection on the image to be processed, the face region image, and the document region image may include: in response to determining that the image to be processed is valid, performing forged clue detection on the image to be processed, the face region image, and the document region image.
- the above-mentioned face detection result may include, but is not limited to, at least one of the following: the number of faces included in the image to be processed and the position information of each face in the image to be processed.
- the document detection result may include, for example, but is not limited to, at least one of the following: the number of documents included in the image to be processed and the position information of each document in the image to be processed.
- the position information of the human face in the image to be processed may be expressed as, for example, the coordinates of the four vertices of the face detection frame (which may be referred to as the first detection frame) in the image to be processed. Based on the coordinates of the four vertices of the face detection frame in the image to be processed, the position of the face detection frame, and thus of the face, in the image to be processed can be determined.
- the position information of the face in the image to be processed may also be expressed as the coordinates of the center point of the face detection frame (that is, the first detection frame) in the image to be processed, together with the length and width of the face detection frame. Based on the coordinates of the center point and the length and width of the face detection frame, the position of the face detection frame, and thus of the face, in the image to be processed can be determined.
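As an illustration only (the patent does not specify any implementation), the two detection-frame representations described above can be converted into each other; the function names below are hypothetical:

```python
# Hypothetical sketch: converting between the two face-detection-frame
# representations described above (four-vertex form, given here via the
# top-left and bottom-right corners, versus center point plus width/height).

def vertices_to_center(x1, y1, x2, y2):
    """Top-left/bottom-right vertex coordinates -> (cx, cy, width, height)."""
    width = x2 - x1
    height = y2 - y1
    cx = x1 + width / 2.0
    cy = y1 + height / 2.0
    return cx, cy, width, height

def center_to_vertices(cx, cy, width, height):
    """(cx, cy, width, height) -> top-left/bottom-right vertex coordinates."""
    x1 = cx - width / 2.0
    y1 = cy - height / 2.0
    return x1, y1, x1 + width, y1 + height
```

Either form suffices to locate the face detection frame, and hence the face, in the image to be processed.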
- the number of faces included in the image to be processed meets the first preset requirement when, for example, the number of faces included in the image to be processed is greater than or equal to 2;
- the number of documents meets the second preset requirement when, for example, the number of documents included in the image to be processed is one;
- the number of faces in the document meets the third preset requirement when, for example, the number of faces in the document is one.
- if the number of faces in the image to be processed is greater than 2, it indicates that, in addition to the face of the authenticated user and the face in the document, the image to be processed may also include the faces of onlookers.
- if the number of faces in the image to be processed is less than 2, if the number of documents is not unique, or if the position relationship between the face and the document is incorrect (the criterion for a correct position relationship is that the number of faces in the document is unique and there is at least one face outside the document area), the image is considered illegal and is not a valid image to be processed.
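The validity criteria above can be sketched as a simple check over the detection counts; this is a minimal illustration under the example preset requirements (at least two faces, exactly one document, exactly one face inside the document region), not the patent's implementation:

```python
def is_image_valid(num_faces, num_documents, num_faces_in_document):
    """Hypothetical validity check reflecting the example preset requirements:
    >= 2 faces in the image (holder's face plus the document photo),
    exactly one document, exactly one face inside the document region,
    and at least one face outside the document region."""
    if num_faces < 2:                # first preset requirement
        return False
    if num_documents != 1:           # second preset requirement
        return False
    if num_faces_in_document != 1:   # third preset requirement
        return False
    # correct position relationship: at least one face outside the document
    return num_faces - num_faces_in_document >= 1
```

An image failing any of these checks would be filtered out before forged clue detection.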
- face detection and document detection are performed on the image to be processed to obtain the face detection result and the document detection result, and whether the image to be processed is valid is determined based on these results. Unqualified images can thus be quickly filtered out, improving the efficiency of user identity authentication. Users are authenticated based on valid to-be-processed images without manual review, which saves costs, improves work efficiency and processing speed, avoids errors that may occur during manual review, and improves the accuracy of the authentication results. If it is determined that the image to be processed is valid, forged clue detection is then performed on the image to be processed and on the face area and document area therein, which improves the efficiency of anti-counterfeit detection.
- operation 1020 may include: collecting a video sequence through a visible light camera of a terminal device; and selecting a to-be-processed image from the video sequence based on a preset frame selection condition.
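The frame-selection step can be illustrated as follows; the quality function and threshold are hypothetical stand-ins for whatever preset frame selection condition is used (the patent does not specify one):

```python
def select_frame(video_sequence, quality_fn, min_quality=0.5):
    """Hypothetical frame selection: return the first frame whose quality
    score meets a preset condition; otherwise fall back to the best-scoring
    frame in the sequence."""
    best_frame, best_score = None, float("-inf")
    for frame in video_sequence:
        score = quality_fn(frame)
        if score >= min_quality:
            return frame              # first frame meeting the condition
        if score > best_score:
            best_frame, best_score = frame, score
    return best_frame                 # fallback: best-scoring frame
```

In practice `quality_fn` might score sharpness, exposure, or face visibility; any such choice is an assumption here.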
- operation 1020 may include: using a visible light camera of the terminal device to collect a to-be-detected image or a to-be-detected video including a face and a document, and obtaining the to-be-processed image from the to-be-detected image or the to-be-detected video collected by the visible light camera.
- FIG. 8 is another flowchart of an identity authentication method according to an embodiment of the present disclosure. As shown in FIG. 8, the method includes:
- face detection is performed on the image to be processed through the first neural network to obtain a face detection result.
- operation 2080 is performed. Otherwise, if it is determined that the image to be processed is invalid, the subsequent process of this embodiment is not performed, or a prompt message indicating that the image to be processed is invalid is output.
- the image of the area where the document is located may be obtained from the image to be processed according to the position information of the document included in the document detection result, and the image of that area may be determined as the document region image; and
- based on the position information of the faces included in the face detection result and the position information of the document included in the document detection result, the second face located outside the document in the image to be processed is determined; based on the position information of the second face included in the face detection result, an image of the region where the second face is located is obtained from the image to be processed and determined as the face region image.
- the face area image and the document area image may be obtained from the image to be processed such that: the proportion of the face included in the face area image within the face area image satisfies a fourth preset requirement; and/or the proportion of the document included in the document area image within the document area image satisfies the fourth preset requirement.
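One hypothetical way to satisfy such a proportion requirement is to expand the detection box symmetrically around its center so that the detected object occupies roughly a target fraction of the crop, clipped to the image bounds. This is an illustrative sketch, not the patent's method:

```python
def expand_box(x1, y1, x2, y2, target_proportion, img_w, img_h):
    """Expand a detection box so the detected object occupies roughly
    `target_proportion` of the cropped region's area (hypothetical scheme);
    the crop is clipped to the image bounds."""
    box_w, box_h = x2 - x1, y2 - y1
    scale = (1.0 / target_proportion) ** 0.5   # per-side scale factor
    new_w, new_h = box_w * scale, box_h * scale
    cx, cy = x1 + box_w / 2.0, y1 + box_h / 2.0
    nx1 = max(0.0, cx - new_w / 2.0)
    ny1 = max(0.0, cy - new_h / 2.0)
    nx2 = min(float(img_w), cx + new_w / 2.0)
    ny2 = min(float(img_h), cy + new_h / 2.0)
    return nx1, ny1, nx2, ny2
```

For example, with a target proportion of 0.25 each side of the box doubles, so a centered face fills about a quarter of the resulting region image.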
- 2100 Perform feature extraction on the to-be-processed image, the face area image, and the document area image, respectively, to obtain the features of the to-be-processed image, the features of the face area image, and the features of the document area image.
- whether the features of the image to be processed, the features of the face region image, and the features of the document region image contain forged clue information may be detected as follows: the features of the image to be processed are detected to determine whether they contain forged clue information; the features of the face region image are detected to determine whether they contain forged clue information; and the features of the document region image are detected to determine whether they contain forged clue information.
- whether the features of the image to be processed, the features of the face area image, and the features of the document area image contain forged clue information may be detected by three binary classifiers in the neural network, each outputting its detection result. That is, the neural network includes three binary classifiers: one classifier determines whether the features of the image to be processed contain forged clue information and outputs a detection result; another determines whether the features of the area where the face is located contain forged clue information and outputs a detection result; and the third determines whether the features of the area where the document is located contain forged clue information and outputs a detection result.
- the result of the forged clue detection is determined according to the detection results output by the three binary classifiers. If none of the detection results output by the three binary classifiers contains forged clue information, it is determined that the forged clue detection is passed; otherwise, if the detection result output by any one or more of the three binary classifiers contains forged clue information, it is determined that the forged clue detection is not passed.
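The decision rule for combining the three binary classifiers reduces to a logical OR over their per-branch verdicts; a minimal sketch (classifier outputs here are simple booleans standing in for the network's outputs):

```python
def forged_clue_detection_passed(whole_image_forged,
                                 face_region_forged,
                                 doc_region_forged):
    """Combine the three binary classifiers' outputs as described above:
    the forged clue detection passes only if none of the three branches
    reports forged clue information."""
    return not (whole_image_forged or face_region_forged or doc_region_forged)
```

A single positive branch (for example, forged clues in the document region only) is enough to fail the detection.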
- whether the features of the image to be processed, the features of the face region image, and the features of the document region image contain forged clue information may also be detected in the following way: the features of the image to be processed, the features of the face region image, and the features of the document region image are connected to obtain connected features, and it is determined whether the connected features contain forged clue information.
- a binary classifier in the neural network may be used to detect whether the connected features contain forged clue information and output the detection result.
- the result of the forged clue detection is determined according to the detection result output by the binary classifier. If the detection result output by the binary classifier does not contain forged clue information, it is determined that the forged clue detection is passed; otherwise, if the detection result output by the binary classifier contains forged clue information, it is determined that the forged clue detection is not passed.
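The connected-features variant can be sketched as follows; the linear classifier here is a toy stand-in for the neural network's binary classifier, and all names and weights are hypothetical:

```python
def connect_features(*feature_vectors):
    """Connect (concatenate) the per-branch feature vectors - whole image,
    face region, document region - into a single feature vector."""
    connected = []
    for vec in feature_vectors:
        connected.extend(vec)
    return connected

def binary_classifier(features, weights, bias=0.0, threshold=0.0):
    """Toy linear stand-in for the network's binary classifier; returns
    True when forged clue information is deemed present."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return score > threshold
```

A single pass/fail verdict then follows from this one classifier, instead of from three per-branch classifiers.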
- forged clue detection may be performed on the to-be-processed image, the face area image, and the document area image through a neural network. That is, operations 2100 to 2120 may be implemented in the following manner: the to-be-processed image, the face area image, and the document area image are input into a neural network, and the neural network outputs a forged clue detection result indicating whether the features of the image to be processed, the features of the face area image, and the features of the document area image contain forged clue information, wherein the neural network is pre-trained based on a training image set including forged clue information.
- the neural network of the embodiments of the present disclosure may be a deep neural network.
- the deep neural network refers to a multilayer neural network, such as a multilayer convolutional neural network.
- the training image set may include a plurality of first images, each including a face and a document, that can be used as positive training samples, and a plurality of second images that can be used as negative training samples.
- the training image set with forged clue information can be obtained by the following methods:
- image processing for simulating forged clue information is performed on at least one of: at least a part of the first image, at least a part of a face area in the first image, and at least a part of a document area in the first image, to generate at least one second image that can be used as a negative training sample.
- modeling is performed using the powerful descriptive capability of deep neural networks, and training is performed on large-scale image data sets to learn the differences between authentic and forged faces and credentials in multiple dimensions observable by the human eye, so as to determine whether the face is a living body. If the face part is a photo-type forgery attack, it can be judged as a fake face through photo reflection or photo edge characteristics; at the same time, the differences between normal and fake documents are learned.
- the neural network includes a third neural network located in the terminal device; that is, the third neural network located in the terminal device performs the forged clue detection on the to-be-processed image, the face area image, and the document area image in the foregoing embodiments.
- the terminal device may determine the anti-counterfeit detection result of the image to be processed according to the result of the forged clue detection output by the third neural network.
- by training the third neural network in advance, the forged clue information contained in the features extracted in the embodiments of the present disclosure can be learned by the third neural network; thereafter, any input image containing such forged clue information can be judged as a fake image after being detected by the third neural network, and otherwise as a real image.
- the method may include: the server receives a to-be-processed image sent by the terminal device.
- the neural network includes a fourth neural network located in the server; that is, the fourth neural network located in the server performs the forged clue detection on the to-be-processed image, the face area image, and the document area image in the foregoing embodiments.
- by training the fourth neural network in advance, the forged clue information contained in the features extracted in the embodiments of the present disclosure can be learned by the fourth neural network; thereafter, any image containing such forged clue information can be judged as a fake image after being detected by the fourth neural network, and otherwise as a real image.
- operation 1080 may include: the server determines the anti-counterfeit detection result of the image to be processed according to the result of the forged clue detection output by the fourth neural network and returns the anti-counterfeit detection result of the image to be processed to the terminal device; or the server returns the result of the forged clue detection output by the fourth neural network to the terminal device, and the terminal device determines the anti-counterfeit detection result of the image to be processed according to that result.
- the neural network may further include: a third neural network located in the terminal device, wherein the size of the third neural network is smaller than that of the fourth neural network.
- the third neural network is smaller than the fourth neural network in the number of network layers and/or the number of parameters.
- FIG. 9 is a flowchart of an identity authentication method according to another embodiment of the present disclosure.
- a neural network includes a third neural network located in a terminal device and a fourth neural network located in a server. The method includes:
- the third neural network may use the operations of the foregoing embodiments of the present disclosure to extract the features of the image to be processed, the features of the face region image, and the features of the document region image, detect whether the extracted features contain forged clue information, and obtain the result of the forged clue detection.
- if none of the extracted features contains forged clue information, operation 3080 is performed; otherwise, if any of the extracted features contains forged clue information, operation 3120 is performed.
- the terminal device sends the image to be processed, the face area image, and the document area image to the server.
- the server inputs the to-be-processed image, the face area image, and the document area image into the fourth neural network on the server, and the fourth neural network outputs a forged clue detection result indicating whether the features of the image to be processed, the features of the face area image, and the features of the document area image contain forged clue information.
- the fourth neural network may use the operations of the foregoing embodiments of the present disclosure to extract the features of the image to be processed, the features of the face region image, and the features of the document region image, detect whether the extracted features contain forged clue information, and obtain the result of the forged clue detection.
- if the extracted features do not contain forged clue information, it is determined that the image to be processed passes the anti-counterfeit detection. If, according to the forged clue detection results output by the third neural network and/or the fourth neural network, the extracted features contain forged clue information, it is determined that the image to be processed fails the anti-counterfeit detection.
- if, according to the forged clue detection result output by the third neural network, the extracted features contain forged clue information, it is determined that the image to be processed fails the anti-counterfeit detection. If, according to the forged clue detection result output by the third neural network, the extracted features do not contain forged clue information, and, according to the forged clue detection result output by the fourth neural network, the extracted features do not contain forged clue information either, it is determined that the image to be processed passes the anti-counterfeit detection. If, according to the forged clue detection result output by the third neural network, the extracted features do not contain forged clue information, but, according to the forged clue detection result output by the fourth neural network, the extracted features do contain forged clue information, it is determined that the image to be processed fails the anti-counterfeit detection.
- after the fourth neural network outputs the result of the forged clue detection, the server may return that result to the terminal device, and the terminal device performs the foregoing operation 3120; that is, the terminal device determines whether the image to be processed passes the anti-counterfeit detection based on the result of the forged clue detection output by the fourth neural network.
- alternatively, the server may determine whether the image to be processed passes the anti-counterfeit detection according to the forged clue detection result output by the fourth neural network, and send the anti-counterfeit detection result of the image to be processed to the terminal device.
- the terminal device sends the image to be processed to the server only when, according to the result of the forged clue detection output by the third neural network, the extracted features do not contain forged clue information, and the fourth neural network then performs operation 3100. Therefore, in the foregoing embodiment, whether the image to be processed passes the anti-counterfeit detection can be determined directly according to the forged clue detection result output by the fourth neural network.
- if, according to the forged clue detection result output by the fourth neural network, the extracted features do not contain forged clue information, it is determined that the image to be processed passes the anti-counterfeit detection; if, according to the forged clue detection result output by the fourth neural network, the extracted features contain forged clue information, it is determined that the image to be processed fails the anti-counterfeit detection.
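The two-stage terminal/server cascade described above can be sketched as follows; the boolean inputs stand in for the networks' forged-clue verdicts, and the server stage is a callable so that it is only invoked when the on-device network finds no forgery:

```python
def cascaded_anti_counterfeit(terminal_detects_forgery,
                              server_detects_forgery_fn):
    """Sketch of the cascade: the small third neural network on the terminal
    screens first; only when it finds no forged clues is the image sent to
    the larger fourth neural network on the server for the final decision.
    Returns True when the image passes anti-counterfeit detection."""
    if terminal_detects_forgery:
        return False   # rejected on the device; the server is never consulted
    return not server_detects_forgery_fn()
```

This mirrors the efficiency argument in the text: an on-device rejection skips the server round trip entirely, while an on-device pass defers to the more accurate server-side network.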
- neural networks that perform more feature extraction and detection require more computing and storage resources, while the computing and storage resources of terminal devices are relatively limited compared with those of cloud servers; reducing the computing and storage resources occupied by the on-device network while still ensuring effective face anti-counterfeit detection is therefore desirable.
- a smaller third neural network (a shallower network and/or fewer network parameters) is set in the terminal device, which fuses fewer features, for example extracting only LBP features and small-scale face features from the image to be processed to detect the corresponding forged clue information; a larger fourth neural network (a deeper network and/or more network parameters) is set on the cloud server, which has better hardware performance, and integrates comprehensive anti-counterfeiting clue features, making the fourth neural network more robust and giving it better detection performance.
- when the third neural network outputs a detection result that does not contain forged clue information, a more accurate and comprehensive anti-counterfeit detection is performed through the fourth neural network, which improves the accuracy of the detection result; when the third neural network outputs a detection result containing forged clue information, there is no need to perform anti-counterfeit detection through the fourth neural network, which improves the efficiency of anti-counterfeit detection.
- the embodiment of the present disclosure focuses on detecting whether there is a forged clue (i.e., forged clue information) in the image to be processed, and liveness is authenticated in a nearly non-interactive manner, which is called silent liveness detection.
- the silent liveness detection involves almost no interaction during the whole process, which greatly simplifies the liveness detection flow.
- the subject only needs to face the video or image acquisition device (such as a visible light camera) of the device where the neural network is located and adjust the lighting and position, without being required to perform any action-based interaction.
- the neural network in the embodiment of the present disclosure learns in advance, through learning and training, the forged clue information that the human eye can "observe" in multiple dimensions, so that in subsequent applications it can judge whether a face image originates from a real living body.
- if the image to be processed contains any forged clue information, these clues will be captured by the neural network, and the user's face image will be flagged as a fake face image.
- the face in the image can be judged as a non-living body by judging the characteristics of the screen reflection or the edge of the screen.
- the method may further include: determining an identity authentication result of the image to be processed according to an anti-counterfeit detection result of the image to be processed.
- if the to-be-processed image passes the anti-counterfeit detection, an identity check is performed on the to-be-processed image; based on the result of the identity check, the identity authentication result of the to-be-processed image is determined.
- before performing user identity authentication according to the face detection result and the document detection result, the second human face may be obtained in the following manner:
- the face, among the two faces included in the image to be processed, that is located outside the document is directly determined as the second face.
- if the number of faces included in the image to be processed is greater than 2, this may be because, in addition to the face of the authenticated user, the faces of onlookers are included in the image to be processed. It can be assumed that the authenticated user is closest to the image acquisition device, so that user's face is the largest, while onlookers are farther from the image acquisition device and their faces are relatively smaller than the face of the authenticated user.
- the embodiment of the present disclosure uses a neural network to perform feature extraction and similarity comparison on the face image in the document and the largest face image outside the document, which can effectively identify whether the two belong to the same user, thereby quickly and accurately determining whether the two faces are of the same person. This has a short response time and high accuracy, can effectively improve work efficiency and user experience, and avoids visual recognition errors.
- performing the identity check on the image to be processed may include: determining, based on the face detection result of the image to be processed and the document detection result of the image to be processed, the similarity between the first face included in the document and the second face located outside the document in the image to be processed; and obtaining the result of the identity check based on the similarity between the first face and the second face.
- an image of the first face and an image of the second face can be obtained from the image to be processed; feature extraction is performed on the first face to obtain a first feature; feature extraction is performed on the second face to obtain a second feature; and the similarity between the first face and the second face is determined based on the first feature and the second feature.
- a third neural network may be used to perform feature extraction on the first face to obtain the first feature and on the second face to obtain the second feature; the similarity between the first feature and the second feature is determined based on the first feature and the second feature; and according to whether the similarity between the first feature and the second feature is greater than a preset threshold, it is determined whether the image to be processed passes the identity check, thereby obtaining the result of the identity check.
- the preset threshold can be set according to actual requirements, such as the rigor of user identity authentication for the current business, the performance of the third neural network, and the acquisition environment of the image to be processed, and can be adjusted as actual needs change.
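The similarity comparison and threshold decision can be sketched as follows. Cosine similarity is one common choice for comparing face feature vectors; the patent does not fix a particular measure, and the threshold value here is purely illustrative:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (a common choice;
    the specific similarity measure is an assumption here)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def identity_check(first_feature, second_feature, preset_threshold=0.8):
    """Pass the identity check when the similarity between the document-face
    feature (first) and the holder's-face feature (second) exceeds the
    preset threshold."""
    return cosine_similarity(first_feature, second_feature) > preset_threshold
```

In practice the threshold would be tuned to the business's required rigor and the feature extractor's performance, as the text notes.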
- when the third neural network is used to perform feature extraction on the first and second faces and to compare the similarity between the extracted first feature and second feature, the third neural network may be trained in advance, so that the trained third neural network can effectively extract the features of the first face in the document and the second face outside the document and accurately compare their similarity, thereby determining whether the first face in the document and the second face outside the document belong to the same person.
- feature extraction and comparison can be performed on the first face in the document and the largest face outside the document, so as to quickly and accurately determine whether the two are the same person's face, with a short response time and high accuracy, which can effectively improve work efficiency and user experience and avoid visual recognition errors.
- the face detection result includes at least one of the following: the number of faces included in the image to be processed and the position information of the face in the image to be processed; and / or, the document detection result includes the following At least one of: the number of documents included in the image to be processed and the position information of the documents in the image to be processed.
- performing the identity check on the image to be processed may further include: in response to determining that the similarity between the first face and the second face is greater than the preset threshold, performing text recognition on the document using an optical character recognition (OCR) algorithm to obtain the text information of the document. The text information may include, but is not limited to, any one or more of the following: name, document number, address, validity period, and the like. The text information is then authenticated based on a user information database, and the result of the identity check is obtained.
- the user information database may be, for example, a user information database provided by the Ministry of Public Security or another authoritative certification authority, in which user information is stored to ensure the authority of the user information source and the correctness of the user information. If the text information of the document is consistent with the user information stored in the user information database, the result of the identity check is that identity authentication passes; otherwise, if the text information of the document is inconsistent with the user information stored in the user information database, the result of the identity check is that identity authentication does not pass.
- the OCR algorithm is used to perform text recognition on the document so that the text information on the document can be quickly read; the text information can then be authenticated based on the user information database, and the result of identity authentication is quickly obtained, thereby improving the efficiency of identity authentication.
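The consistency check between OCR-recognized text and the user information database can be sketched as a field-by-field comparison; the database shape (a dict keyed by document number) and field names are hypothetical:

```python
def verify_text_info(ocr_fields, user_database):
    """Hypothetical check of OCR-recognized document text against a user
    information database keyed by document number: the identity check
    passes only when every recognized field matches the stored record."""
    record = user_database.get(ocr_fields.get("document_number"))
    if record is None:
        return False   # no authoritative record for this document number
    return all(record.get(key) == value for key, value in ocr_fields.items())
```

Any mismatch (for example, a name that differs from the stored record) or an unknown document number fails the check, matching the consistency rule stated above.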
- anti-counterfeit detection and user identity verification can be performed based on the embodiments of the present disclosure; after the anti-counterfeit detection and the user identity verification pass, the requested service can be used, thereby improving the security of service usage.
- the embodiments of the present disclosure can be applied to any service that requires real-name authentication, for example, a payment service, an application (APP) use service, an access control service, and the like.
- the embodiments of the present disclosure can be applied to any scenario that requires a user to hold a certificate (such as an ID card) for identity authentication, for example:
- Scenario 1: when a user performs identity authentication through handheld-document detection, he or she opens an application (APP) implementing the embodiments of the present disclosure on a mobile phone terminal and faces the camera of the mobile phone terminal, ensuring that the face and the document appear on the screen at the same time; the anti-counterfeit detection of the handheld document can be completed and passed within a few seconds;
- Scenario 2: a user attempts identity authentication using a prepared video of a face holding a document, plays the video on a display screen, and faces the screen toward the camera of the mobile phone terminal; the anti-counterfeit detection will fail.
- any of the identity authentication methods provided by the embodiments of the present disclosure may be executed by any appropriate device having data processing capabilities, including, but not limited to, a terminal device and a server.
- any of the identity authentication methods provided in the embodiments of the present disclosure may be executed by a processor.
- the processor executes any of the identity authentication methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory; details are not repeated below.
- the foregoing program may be stored in a computer-readable storage medium.
- when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
- FIG. 10 is a schematic structural diagram of an identity authentication apparatus according to an embodiment of the present disclosure.
- the device in this embodiment may be configured to implement the foregoing identity authentication method embodiments of the present disclosure.
- the apparatus of this embodiment includes a first detection module 4010, a second detection module 4020, a first acquisition module 4030, a third detection module 4040, and a third determination module 4050, where:
- a first detection module 4010 configured to perform face detection on an image to be processed through a first neural network to obtain a face detection result
- a second detection module 4020 configured to perform credential detection on the image to be processed through a second neural network to obtain a credential detection result
- the face detection result may include, but is not limited to, at least one of the following: the number of faces included in the image to be processed and the position information of the faces in the image to be processed; and/or, the document detection result may include, but is not limited to, at least one of the following: the number of documents included in the image to be processed and the position information of the documents in the image to be processed.
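As an illustration, the two detection results described above can be held in simple containers; the field names and the (x1, y1, x2, y2) box layout below are assumptions for the sketch, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical box layout: (x1, y1, x2, y2) in image coordinates.
Box = Tuple[int, int, int, int]

@dataclass
class FaceDetectionResult:
    boxes: List[Box] = field(default_factory=list)  # one box per detected face

    @property
    def count(self) -> int:
        # number of faces included in the image to be processed
        return len(self.boxes)

@dataclass
class DocumentDetectionResult:
    boxes: List[Box] = field(default_factory=list)  # one box per detected document

    @property
    def count(self) -> int:
        # number of documents included in the image to be processed
        return len(self.boxes)
```

With these containers, a valid handheld-document image would typically report at least two faces (one in the document, one outside) and exactly one document.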
- the first obtaining module 4030 is configured to obtain a face area image from an image to be processed based on a face detection result, and obtain a document area image from the to-be-processed image based on a result of the document detection.
- the third detection module 4040 is configured to perform forged clue detection on the image to be processed, the face area image and the document area image.
- the third determination module 4050 is configured to determine an anti-counterfeit detection result of the image to be processed according to a result of the fake clue detection.
- according to the embodiments of the present disclosure, a face region image and a document region image are obtained from an image to be processed that includes a face and a document; forged clue detection is performed on the image to be processed, the face region image, and the document region image; and the anti-counterfeit detection result of the image to be processed is determined according to the result of the forged clue detection.
- the embodiment of the present disclosure proposes a new anti-counterfeiting detection scheme for an image to be processed, in which a human face and a document appear in the image at the same time, so that anti-counterfeiting detection of the face and the document can be performed simultaneously and the authenticity of both can be verified. This ensures that a real person holds an authentic document, prevents various forgery situations such as a real face holding a fake document or a fake face holding a real document, and improves the reliability of identity authentication.
- FIG. 11 is another schematic structural diagram of an identity authentication device according to an embodiment of the present disclosure. As shown in FIG. 11, compared with the embodiment shown in FIG. 10, the device in this embodiment may further include a first determination module 4060, where:
- a first determining module 4060 is configured to determine whether the to-be-processed image is valid according to the face detection result and the document detection result; the third detection module 4040 may be configured to, in response to determining that the to-be-processed image is valid, perform forged clue detection on the image to be processed, the face area image, and the document area image.
- the device may further include a second acquisition module, which may be configured to: collect a video sequence; and select a to-be-processed image from the video sequence based on a preset frame selection condition.
- the preset frame selection conditions may include, but are not limited to, any one or more of the following: whether the face and the document are located in the center of the image, whether the edges of the face are completely contained in the image, whether the edges of the document are completely contained in the image, the proportion of the face in the image, the proportion of the document in the image, the angle of the face, the sharpness of the image, the exposure of the image, and so on.
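A few of the frame selection conditions above can be combined into a single predicate for choosing a frame from the video sequence. This is only a sketch: the threshold values, the (x1, y1, x2, y2) box layout, and the externally supplied sharpness score are all assumptions, not values from the disclosure.

```python
def frame_ok(face_box, doc_box, img_w, img_h,
             min_face_ratio=0.05, min_sharpness=0.5, sharpness=1.0):
    """Illustrative frame-selection check. Boxes are (x1, y1, x2, y2)."""
    def fully_inside(box):
        x1, y1, x2, y2 = box
        return 0 <= x1 < x2 <= img_w and 0 <= y1 < y2 <= img_h

    # edges of the face and the document must be completely contained in the image
    if not (fully_inside(face_box) and fully_inside(doc_box)):
        return False

    # the face must occupy a minimum proportion of the image (threshold assumed)
    fx1, fy1, fx2, fy2 = face_box
    if (fx2 - fx1) * (fy2 - fy1) / (img_w * img_h) < min_face_ratio:
        return False

    # the image must be sharp enough (sharpness score supplied by the caller)
    return sharpness >= min_sharpness
```

A real implementation would also score centering, face angle, and exposure, and could pick the best-scoring frame instead of the first passing one.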
- the device of the above embodiment may further include a preprocessing module configured to preprocess the image to be processed to obtain a preprocessed image to be processed.
- the first detection module 4010 is configured to perform face detection on the preprocessed to-be-processed image through a first neural network to obtain a face detection result
- the second detection module 4020 is configured to perform document detection on the preprocessed to-be-processed image through a second neural network to obtain the document detection result.
- the first obtaining module 4030 may be configured to obtain a face area image from a preprocessed to-be-processed image based on a face detection result, and obtain a document area image from a pre-processed to-be-processed image based on a document detection result.
- the pre-processing may include, but is not limited to, any one or more of the following: size adjustment, image cropping, normalization, brightness adjustment, and so on.
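A minimal sketch of two of these pre-processing steps, assuming a 2D grayscale image given as a list of lists of 0–255 ints: a nearest-neighbour resize followed by normalization to [0, 1]. The target size and normalization scheme are assumptions; a real pipeline might also crop and adjust brightness.

```python
def preprocess(image, size=4):
    """Resize a 2D grayscale image to size x size (nearest neighbour),
    then normalize pixel values from 0-255 ints to floats in [0, 1]."""
    h, w = len(image), len(image[0])
    resized = [[image[i * h // size][j * w // size] for j in range(size)]
               for i in range(size)]
    return [[pixel / 255.0 for pixel in row] for row in resized]
```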
- the first obtaining module 4030 may include: a third determining unit configured to determine, according to the position information of the face included in the face detection result and the position information of the document included in the document detection result, a second face in the image to be processed that is located outside the document; and an acquisition unit configured to obtain, based on the position information of the second face included in the face detection result, an image of the area where the second face is located from the image to be processed, and determine the image of the area where the second face is located as the face area image.
- the first obtaining module 4030 may further include a fourth determining unit configured to obtain an image of the area where the document is located from the image to be processed according to the position information of the document included in the document detection result, and to determine the image of the area where the document is located as the document area image.
- the proportion of the face included in the face region image in the face region image satisfies a fourth preset requirement; and/or, the proportion of the document included in the document region image in the document region image satisfies the fourth preset requirement.
- the fourth preset requirement may include, for example, a ratio greater than or equal to 1/4 and less than or equal to 9/10.
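The fourth preset requirement above can be sketched as a simple area-ratio check; the (x1, y1, x2, y2) box layout is an assumption, while the 1/4 and 9/10 bounds come from the text.

```python
def ratio_in_range(object_box, region_box, lo=0.25, hi=0.9):
    """True when the face (or document) box occupies between 1/4 and 9/10
    of its cropped region image. Boxes are (x1, y1, x2, y2)."""
    def area(box):
        x1, y1, x2, y2 = box
        return max(0, x2 - x1) * max(0, y2 - y1)
    return lo <= area(object_box) / area(region_box) <= hi
```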
- the third detection module 4040 may include: a forged clue feature extraction unit configured to perform feature extraction on the image to be processed, the face region image, and the document region image, respectively, to obtain the features of the image to be processed, the features of the face region image, and the features of the document region image; and a detection unit configured to detect whether the features of the image to be processed, the features of the face region image, and the features of the document region image contain forged clue information.
- the extracted features may include, but are not limited to, any one or more of the following: local binary pattern features, sparse-coding histogram features, panorama features, face features, face detail features, and so on.
- the fake clue information has human eye observability under visible light conditions.
- the fake clue information includes any one or more of the following: fake clue information of an imaging medium, fake clue information of an imaging instrument, and clue information of a fake face that actually exists.
- the fake clue information of the imaging medium includes: edge information, reflective information, and/or material information of the imaging medium; and/or, the fake clue information of the imaging instrument includes: a screen edge, a screen reflection, and/or screen moiré; and/or, the clue information of a fake face that actually exists includes: the characteristics of a masked face, the characteristics of a model face, and the characteristics of a sculpture face.
- the detection unit may be configured to: detect the features of the image to be processed to determine whether they contain forged clue information; detect the features of the face region image to determine whether they contain forged clue information; and detect the features of the document region image to determine whether they contain forged clue information.
- the detection unit may be configured to: concatenate the features of the image to be processed, the features of the face area image, and the features of the document area image to obtain a concatenated feature; and determine whether the concatenated feature contains forged clue information.
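The concatenation variant just described can be sketched as follows. `classifier` stands in for whatever decides whether the joint feature contains forged clue information (in practice the final layers of a neural network); its callable form is an assumption for illustration.

```python
def contains_forged_clues(img_feat, face_feat, doc_feat, classifier):
    """Join the three feature vectors into one concatenated feature and
    run a single forged-clue classifier over it. `classifier` is any
    callable returning True when forged clue information is found."""
    joint = list(img_feat) + list(face_feat) + list(doc_feat)
    return classifier(joint)
```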
- the third detection module 4040 may be configured to perform fake clue detection on the image to be processed, the face area image, and the document area image through a third neural network, respectively.
- the third determining module may be configured to: when the result of the forged clue detection indicates that none of the to-be-processed image, the face region image, and the document region image contains forged clues, determine that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is passed; and/or, when any one or more of the to-be-processed image, the face area image, and the document area image contain forged clues, determine that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is not passed.
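The decision rule above reduces to a conjunction over the three per-image results; a minimal sketch:

```python
def anti_counterfeit_passed(image_forged, face_forged, document_forged):
    """The image passes anti-counterfeit detection only when no forged
    clue is found in the whole image, the face region image, or the
    document region image."""
    return not (image_forged or face_forged or document_forged)
```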
- the first detection module is provided in a server and may be configured to receive a to-be-processed image sent by a terminal device.
- the device of the above embodiment may further include a fourth determination module configured to determine an identity authentication result of the image to be processed according to an anti-counterfeit detection result of the image to be processed.
- the fourth determining module includes: an identity authentication unit configured to perform identity verification on the image to be processed if the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is passed; and a fifth determining unit configured to determine the identity authentication result of the image to be processed based on the result of the identity check.
- the identity authentication unit may be configured to: determine, based on the face detection result of the image to be processed and the document detection result of the image to be processed, the similarity between the first face included in the document and the second face in the image to be processed that is located outside the document; and obtain the identity check result based on the similarity between the first face and the second face.
- the identity authentication unit may be configured to: obtain an image of the first face and an image of the second face from the image to be processed; perform feature extraction on the image of the first face to obtain the first feature, Feature extraction is performed on the image of the second face to obtain the second feature; based on the first feature and the second feature, the similarity between the first face and the second face is determined.
- the face detection result includes at least one of the following: the number of faces included in the image to be processed and the position information of the face in the image to be processed; and / or, the document detection result includes the following At least one of: the number of documents included in the image to be processed and the position information of the documents in the image to be processed.
- the third determining module includes a third determining unit configured to, when the number of faces included in the image to be processed is greater than 2, determine, according to the position information of the faces in the to-be-processed image included in the face detection result and the position information of the document in the to-be-processed image included in the document detection result, the largest face among the at least two faces in the to-be-processed image that are located outside the document as the second face.
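The largest-face selection just described can be sketched as follows; the (x1, y1, x2, y2) box layout and the full-containment test for "inside the document" are assumptions for the sketch.

```python
def pick_second_face(face_boxes, document_box):
    """Among faces located outside the document, pick the largest one as
    the second face (the person holding the document). Returns None when
    no face lies outside the document."""
    def area(box):
        x1, y1, x2, y2 = box
        return (x2 - x1) * (y2 - y1)

    def inside(inner, outer):
        # a face counts as "in the document" when its box lies entirely
        # within the document box (an assumption)
        return (outer[0] <= inner[0] and outer[1] <= inner[1] and
                inner[2] <= outer[2] and inner[3] <= outer[3])

    candidates = [b for b in face_boxes if not inside(b, document_box)]
    return max(candidates, key=area) if candidates else None
```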
- the identity authentication unit is further configured to: in response to determining that the similarity between the first face and the second face is greater than a preset threshold, perform text recognition on the document to obtain text information of the document, the text information including at least one of a name and a document number; and authenticate the text information based on a user information database to obtain the result of the identity check.
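The final text check can be sketched as a database lookup on the recognized fields; the dict-based schema keyed by document number is purely an illustrative assumption about how a user information database might be exposed.

```python
def verify_text_info(text_info, user_db):
    """After OCR, verify the recognized name and document number against
    a user information database. `user_db` maps document numbers to
    records such as {"name": ...} (an assumed schema)."""
    record = user_db.get(text_info.get("id_number"))
    return record is not None and record.get("name") == text_info.get("name")
```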
- another electronic device provided by an embodiment of the present disclosure includes:
- a memory configured to store a computer program
- the processor is configured to execute a computer program stored in a memory, and when the computer program is executed, implements the identity authentication method of any one of the foregoing embodiments of the present disclosure.
- FIG. 12 illustrates a schematic structural diagram of an electronic device suitable for implementing a terminal or a server of an embodiment of the present disclosure.
- the electronic device includes one or more processors, a communication unit, and the like.
- the one or more processors are, for example, one or more central processing units (CPUs) and/or one or more graphics processing units (GPUs); the processor can execute various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) or loaded from a storage part into a random access memory (RAM).
- the communication unit may include, but is not limited to, a network card.
- the network card may include, but is not limited to, an IB (Infiniband) network card.
- the processor may communicate with the read-only memory and/or the random access memory to execute executable instructions, is connected to the communication unit through a bus, and communicates with other target devices via the communication unit, thereby completing operations corresponding to any of the identity authentication methods provided in the embodiments of the present disclosure, for example: performing face detection on an image to be processed through a first neural network to obtain a face detection result, and performing document detection on the image to be processed through a second neural network to obtain a document detection result; determining whether the image to be processed is a valid identity authentication image according to the face detection result and the document detection result; and in response to determining that the image to be processed is a valid identity authentication image, performing identity authentication according to the face detection result and the document detection result to obtain the identity authentication result of the image to be processed.
- various programs and data required for the operation of the device can be stored in the RAM.
- the CPU, ROM, and RAM are connected to each other through a bus.
- ROM is an optional module.
- the RAM stores executable instructions, or executable instructions are written into the RAM at runtime, and the executable instructions cause the processor to perform operations corresponding to any of the above-mentioned identity authentication methods of the present disclosure.
- Input / output (I / O) interfaces are also connected to the bus.
- the communication unit can be integrated, or can be set to have multiple sub-modules (for example, multiple IB network cards) respectively connected to the bus.
- the following components are connected to the I/O interface: an input part including a keyboard, a mouse, and the like; an output part including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage part including a hard disk and the like; and a communication part including a network interface card such as a LAN card or a modem.
- the communication section performs communication processing via a network such as the Internet.
- a drive is also connected to the I/O interface as required. A removable medium, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive as needed, so that a computer program read therefrom can be installed into the storage part as needed.
- FIG. 12 shows only an optional implementation manner. In practice, the number and types of the components in FIG. 12 may be selected, deleted, added, or replaced according to actual needs. Different functional components may also be implemented separately or in an integrated manner; for example, the GPU and the CPU can be set separately, or the GPU can be integrated on the CPU, and the communication unit can be set separately or integrated on the CPU or GPU, and so on. These alternative embodiments all fall within the protection scope of the present disclosure.
- embodiments of the present disclosure include a computer program product including a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the operations of the identity authentication method provided by any embodiment of the present disclosure.
- the computer program may be downloaded and installed from a network through a communication section, and / or installed from a removable medium.
- an embodiment of the present disclosure also provides a computer program including computer instructions.
- when the computer instructions are run in a processor of a device, the identity authentication method of any one of the foregoing embodiments of the present disclosure is implemented.
- the computer program may be a software product, such as an SDK, or the like.
- an embodiment of the present disclosure further provides a computer program product for storing computer-readable instructions; when the instructions are executed, the computer executes the identity authentication method in any one of the foregoing possible implementation manners.
- the computer program product may be implemented by hardware, software, or a combination thereof.
- the computer program product may be embodied as a computer storage medium.
- the computer program product may be embodied as a software product, such as an SDK or the like.
- an embodiment of the present disclosure further provides an identity authentication method and a corresponding device and electronic device thereof, a computer storage medium, a computer program, and a computer program product.
- the method includes: a first device sends an identity authentication instruction to a second device, the instruction causing the second device to execute the identity authentication method in any of the foregoing possible embodiments; and the first device receives the identity authentication result sent by the second device.
- the identity authentication instruction may be a call instruction.
- the first device may instruct the second device to perform the identity authentication method by means of a call; accordingly, in response to receiving the call instruction, the second device may perform the steps and/or processes in any embodiment of the above-mentioned identity authentication method.
- the embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the identity authentication method of any one of the foregoing embodiments of the present disclosure is implemented.
- the methods and apparatuses and devices of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
- the above-mentioned order of operations for the method is for illustration only, and the operations of the method of the present disclosure are not limited to the order described above unless specifically stated otherwise.
- the present disclosure may also be implemented as programs recorded in a recording medium, which programs include machine-readable instructions for implementing the method according to the present disclosure.
- the present disclosure also covers a recording medium storing a program for executing a method according to the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Computer Hardware Design (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Collating Specific Patterns (AREA)
- Character Input (AREA)
Abstract
Description
Claims (52)
- An identity authentication method, comprising: performing face detection on an image to be processed through a first neural network to obtain a face detection result, and performing document detection on the image to be processed through a second neural network to obtain a document detection result; determining, according to the face detection result and the document detection result, whether the image to be processed is a valid identity authentication image; and in response to determining that the image to be processed is a valid identity authentication image, performing identity authentication according to the face detection result and the document detection result to obtain an identity authentication result of the image to be processed.
- The method according to claim 1, wherein the valid identity authentication image comprises: a handheld document image.
- The method according to claim 1 or 2, wherein the face detection result comprises at least one of the following: the number of faces included in the image to be processed and position information of the faces in the image to be processed; and/or the document detection result comprises at least one of the following: the number of documents included in the image to be processed and position information of the documents in the image to be processed.
- The method according to any one of claims 1 to 3, wherein determining whether the image to be processed is a valid identity authentication image according to the face detection result and the document detection result comprises: determining document face information based on the face detection result and the document detection result; and determining whether the image to be processed is a valid identity authentication image based on the document face information, the face detection result, and the document detection result.
- The method according to claim 4, wherein the document face information comprises at least one of the following: the number of faces included in a document detected in the image to be processed, and position information of the faces included in the document.
- The method according to claim 4, wherein determining the document face information based on the face detection result and the document detection result comprises: determining the number and/or position information of the faces included in the document according to the position information, in the image to be processed, of the faces included in the face detection result and the position information, in the image to be processed, of the document included in the document detection result.
- The method according to claim 4, wherein determining whether the image to be processed is a valid identity authentication image based on the document face information, the face detection result, and the document detection result comprises: in response to the number of documents in the document detection result satisfying a first preset requirement, the number of faces in the face detection result satisfying a second preset requirement, and the number of faces in the document included in the document face information satisfying a third preset requirement, determining that the image to be processed is a valid identity authentication image.
- The method according to claim 7, wherein at least one of the following holds: the first preset requirement comprises: the number of documents included in the document detection result is 1; the second preset requirement comprises: the number of faces included in the face detection result is greater than or equal to 2; the third preset requirement comprises: the number of faces in the document is 1.
- The method according to any one of claims 1 to 8, wherein performing identity authentication according to the face detection result and the document detection result comprises: determining, based on the face detection result and the document detection result, a similarity between a first face included in the document and a second face in the image to be processed that is located outside the document; and obtaining an identity check result according to the similarity between the first face and the second face.
- The method according to claim 9, wherein determining, based on the face detection result and the document detection result, the similarity between the first face included in the document and the second face in the image to be processed that is located outside the document comprises: obtaining an image of the first face and an image of the second face from the image to be processed based on the face detection result and the document detection result; performing feature extraction on the image of the first face to obtain a first feature, and performing feature extraction on the image of the second face to obtain a second feature; and determining the similarity between the first face and the second face based on the first feature and the second feature.
- The method according to claim 9 or 10, before determining the similarity between the first face included in the document and the second face in the image to be processed that is located outside the document, further comprising: when the number of faces included in the image to be processed is greater than 2, determining the largest face among at least two faces in the image to be processed that are located outside the document as the second face.
- The method according to any one of claims 9 to 11, wherein obtaining the identity check result according to the similarity between the first face and the second face comprises: in response to determining that the similarity between the first face and the second face is greater than a preset threshold, performing text recognition on the document to obtain text information of the document, the text information including at least one of a name and a document number; and authenticating the text information based on a user information database to obtain the identity check result.
- The method according to any one of claims 9 to 12, further comprising: in response to determining that the identity check result is that identity authentication is passed, storing user information in a service database, the user information including any one or more of the following: the text information, the image to be processed, the image of the second face, and feature information of the second face.
- The method according to claim 13, further comprising: in response to receiving an identity authentication request, obtaining an image including a face to be authenticated; querying whether user information matching the image of the face to be authenticated exists in the service database; and determining an authentication result of the face to be authenticated according to a result of the query.
- The method according to any one of claims 9 to 14, wherein performing identity authentication according to the face detection result and the document detection result to obtain the identity authentication result of the image to be processed further comprises: performing anti-counterfeit detection according to the face detection result and the document detection result to obtain an anti-counterfeit detection result; and determining the identity authentication result of the image to be processed based on the anti-counterfeit detection result and the identity check result.
- The method according to any one of claims 1 to 14, wherein performing identity authentication according to the face detection result and the document detection result to obtain the identity authentication result of the image to be processed comprises: performing anti-counterfeit detection according to the face detection result and the document detection result to obtain an anti-counterfeit detection result.
- The method according to claim 15 or 16, wherein performing anti-counterfeit detection according to the face detection result and the document detection result to obtain the anti-counterfeit detection result comprises: obtaining a face region image and a document region image from the image to be processed based on the face detection result and the document detection result; performing forged clue detection on the image to be processed, the face region image, and the document region image, respectively; and obtaining the anti-counterfeit detection result of the image to be processed based on a result of the forged clue detection.
- The method according to claim 17, wherein the proportion of the face included in the face region image in the face region image satisfies a fourth preset requirement; and/or the proportion of the document included in the document region image in the document region image satisfies the fourth preset requirement.
- The method according to claim 18, wherein the fourth preset requirement comprises: the proportion is greater than or equal to 1/4 and less than or equal to 9/10.
- The method according to any one of claims 17 to 19, wherein performing forged clue detection on the image to be processed, the face region image, and the document region image respectively comprises: performing feature extraction on the image to be processed, the face region image, and the document region image, respectively, to obtain features of the image to be processed, features of the face region image, and features of the document region image; and detecting whether the features of the image to be processed, the features of the face region, and the features of the document region contain forged clue information.
- The method according to claim 20, wherein the forged clue information is observable to the human eye under visible light conditions.
- The method according to claim 20 or 21, wherein detecting whether the features of the image to be processed, the features of the face region, and the features of the document region contain forged clue information comprises: detecting the features of the image to be processed to determine whether the features of the image to be processed contain forged clue information; detecting the features of the face region image to determine whether the features of the face region image contain forged clue information; and detecting the features of the document region image to determine whether the features of the document region image contain forged clue information.
- The method according to claim 20 or 21, wherein detecting whether the features of the image to be processed, the features of the face region, and the features of the document region contain forged clue information comprises: concatenating the features of the image to be processed, the features of the face region image, and the features of the document region image to obtain a concatenated feature; and determining whether the concatenated feature contains forged clue information.
- The method according to any one of claims 20 to 23, wherein performing forged clue detection on the image to be processed, the face region image, and the document region image respectively comprises: performing forged clue detection on the image to be processed, the face region image, and the document region image respectively through a third neural network.
- The method according to any one of claims 17 to 24, wherein obtaining the anti-counterfeit detection result of the image to be processed based on the result of the forged clue detection comprises: in response to the result of the forged clue detection indicating that none of the image to be processed, the face region image, and the document region image contains forged clues, determining that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is passed; and/or, in response to the result of the forged clue detection indicating that any one or more of the image to be processed, the face region image, and the document region image contain forged clues, determining that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is not passed.
- An identity authentication apparatus, comprising: a first detection module configured to perform face detection on an image to be processed through a first neural network to obtain a face detection result; a second detection module configured to perform document detection on the image to be processed through a second neural network to obtain a document detection result; a first determination module configured to determine, according to the face detection result and the document detection result, whether the image to be processed is a valid identity authentication image; and an authentication module configured to, in response to determining that the image to be processed is a valid identity authentication image, perform identity authentication according to the face detection result and the document detection result to obtain an identity authentication result of the image to be processed.
- The apparatus according to claim 26, wherein the valid identity authentication image comprises: a handheld document image.
- The apparatus according to claim 26 or 27, wherein the face detection result comprises at least one of the following: the number of faces included in the image to be processed and position information of the faces in the image to be processed; and/or the document detection result comprises at least one of the following: the number of documents included in the image to be processed and position information of the documents in the image to be processed.
- The apparatus according to any one of claims 26 to 28, wherein the first determination module comprises: a document determining unit configured to determine document face information based on the face detection result and the document detection result; and an identity authentication determining unit configured to determine whether the image to be processed is a valid identity authentication image based on the document face information, the face detection result, and the document detection result.
- The apparatus according to claim 29, wherein the document face information comprises at least one of the following: the number of faces included in a document detected in the image to be processed, and position information of the faces included in the document.
- The apparatus according to claim 29, wherein the document determining unit is configured to determine the number and/or position information of the faces included in the document according to the position information, in the image to be processed, of the faces included in the face detection result and the position information, in the image to be processed, of the document included in the document detection result.
- The apparatus according to any one of claims 29 to 31, wherein the identity authentication determining unit is configured to, in response to the number of documents in the document detection result satisfying a first preset requirement, the number of faces in the face detection result satisfying a second preset requirement, and the number of faces in the document included in the detected document face information satisfying a third preset requirement, determine that the image to be processed is a valid identity authentication image.
- The apparatus according to claim 32, wherein at least one of the following holds: the first preset requirement comprises: the number of documents included in the document detection result is 1; the second preset requirement comprises: the number of faces included in the face detection result is greater than or equal to 2; the third preset requirement comprises: the number of faces in the document is 1.
- The apparatus according to any one of claims 26 to 33, wherein the authentication module is configured to: determine, based on the face detection result and the document detection result, a similarity between a first face included in the document and a second face in the image to be processed that is located outside the document; and obtain an identity check result according to the similarity between the first face and the second face.
- The apparatus according to claim 34, wherein the authentication module comprises: a first acquisition unit configured to obtain an image of the first face and an image of the second face from the image to be processed based on the face detection result and the document detection result; a feature extraction unit configured to perform feature extraction on the image of the first face to obtain a first feature, and perform feature extraction on the image of the second face to obtain a second feature; a first determining unit configured to determine the similarity between the first face and the second face based on the first feature and the second feature; and an authentication unit configured to obtain the identity check result according to the similarity between the first face and the second face.
- The apparatus according to claim 34 or 35, further comprising: a second determination module configured to, when the number of faces included in the image to be processed is greater than 2, determine the largest face among at least two faces in the image to be processed that are located outside the document as the second face.
- The apparatus according to claim 35 or 36, wherein the authentication module further comprises: a text recognition unit configured to, in response to determining that the similarity between the first face and the second face is greater than a preset threshold, perform text recognition on the document to obtain text information of the document, the text information including at least one of a name and a document number; and the authentication unit is further configured to authenticate the text information based on a user information database to obtain the identity check result.
- The apparatus according to any one of claims 34 to 37, wherein the authentication module further comprises: a storage processing unit configured to, in response to determining that the identity authentication result is that identity authentication is passed, store user information in a service database, the user information including any one or more of the following: the text information, the image to be processed, the image of the second face, and feature information of the second face.
- The apparatus according to claim 38, wherein the authentication module further comprises a query unit; the first acquisition unit is further configured to, in response to receiving an identity authentication request, obtain an image including a face to be authenticated; the query unit is configured to query whether user information matching the image of the face to be authenticated exists in the service database; and the first determining unit is further configured to determine an authentication result of the face to be authenticated according to a result of the query.
- The apparatus according to any one of claims 26 to 39, wherein the authentication module is further configured to: perform anti-counterfeit detection according to the face detection result and the document detection result to obtain an anti-counterfeit detection result; and determine the identity authentication result of the image to be processed based on the anti-counterfeit detection result and the identity check result.
- The apparatus according to any one of claims 26 to 39, wherein the authentication module is further configured to perform anti-counterfeit detection according to the face detection result and the document detection result to obtain an anti-counterfeit detection result.
- The apparatus according to claim 40 or 41, wherein the authentication module comprises: a second acquisition unit configured to obtain a face region image and a document region image from the image to be processed based on the face detection result and the document detection result; a forged clue detection unit configured to perform forged clue detection on the image to be processed, the face region image, and the document region image, respectively; and a second determining unit configured to obtain the anti-counterfeit detection result of the image to be processed based on a result of the forged clue detection.
- The apparatus according to claim 42, wherein the proportion of the face included in the face region image in the face region image satisfies a fourth preset requirement; and/or the proportion of the document included in the document region image in the document region image satisfies the fourth preset requirement.
- The apparatus according to claim 43, wherein the fourth preset requirement comprises: the proportion is greater than or equal to 1/4 and less than or equal to 9/10.
- The apparatus according to any one of claims 42 to 44, wherein the forged clue detection unit is configured to: perform feature extraction on the image to be processed, the face region image, and the document region image, respectively, to obtain features of the image to be processed, features of the face region image, and features of the document region image; and detect whether the features of the image to be processed, the features of the face region, and the features of the document region contain forged clue information.
- The apparatus according to claim 45, wherein the forged clue information is observable to the human eye under visible light conditions.
- The apparatus according to any one of claims 44 to 46, wherein the forged clue detection unit being configured to detect whether the features of the image to be processed, the features of the face region, and the features of the document region contain forged clue information comprises: the forged clue detection unit being configured to detect the features of the image to be processed to determine whether the features of the image to be processed contain forged clue information; detect the features of the face region image to determine whether the features of the face region image contain forged clue information; and detect the features of the document region image to determine whether the features of the document region image contain forged clue information.
- The apparatus according to any one of claims 44 to 46, wherein the forged clue detection unit being configured to detect whether the features of the image to be processed, the features of the face region, and the features of the document region contain forged clue information comprises: the forged clue detection unit being configured to concatenate the features of the image to be processed, the features of the face region image, and the features of the document region image to obtain a concatenated feature, and determine whether the concatenated feature contains forged clue information.
- The apparatus according to any one of claims 44 to 46, wherein the forged clue detection unit being configured to perform forged clue detection on the image to be processed, the face region image, and the document region image respectively comprises: the forged clue detection unit being configured to perform forged clue detection on the image to be processed, the face region image, and the document region image respectively through a third neural network.
- The apparatus according to any one of claims 42 to 49, wherein the second determining unit is configured to: in response to the result of the forged clue detection indicating that none of the image to be processed, the face region image, and the document region image contains forged clues, determine that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is passed; and/or, in response to the result of the forged clue detection indicating that any one or more of the image to be processed, the face region image, and the document region image contain forged clues, determine that the anti-counterfeit detection result of the image to be processed is that the anti-counterfeit detection is not passed.
- An electronic device, comprising: a memory configured to store a computer program; and a processor configured to execute the computer program stored in the memory, wherein when the computer program is executed, the identity authentication method according to any one of claims 1 to 24 is implemented.
- A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the identity authentication method according to any one of claims 1 to 24 is implemented.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020550841A JP7165746B2 (ja) | 2018-08-13 | 2019-06-04 | Id認証方法および装置、電子機器並びに記憶媒体 |
SG11202008549SA SG11202008549SA (en) | 2018-08-13 | 2019-06-04 | Identity authentication method and apparatus, electronic device, and storage medium |
KR1020207025865A KR102406432B1 (ko) | 2018-08-13 | 2019-06-04 | 신원 인증 방법 및 장치, 전자 기기 및 저장 매체 |
US17/015,509 US20200410074A1 (en) | 2018-08-13 | 2020-09-09 | Identity authentication method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810918697.9 | 2018-08-13 | ||
CN201810918699.8 | 2018-08-13 | ||
CN201810918697.9A CN109255299A (zh) | 2018-08-13 | 2018-08-13 | 身份认证方法和装置、电子设备和存储介质 |
CN201810918699.8A CN109359502A (zh) | 2018-08-13 | 2018-08-13 | 防伪检测方法和装置、电子设备、存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/015,509 Continuation US20200410074A1 (en) | 2018-08-13 | 2020-09-09 | Identity authentication method and apparatus, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020034733A1 true WO2020034733A1 (zh) | 2020-02-20 |
Family
ID=69525080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/090034 WO2020034733A1 (zh) | 2018-08-13 | 2019-06-04 | 身份认证方法和装置、电子设备和存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20200410074A1 (zh) |
JP (1) | JP7165746B2 (zh) |
KR (1) | KR102406432B1 (zh) |
SG (1) | SG11202008549SA (zh) |
WO (1) | WO2020034733A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220179976A1 (en) * | 2020-12-03 | 2022-06-09 | Capital One Services, Llc | Systems and methods for processing requests for access |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI725443B (zh) * | 2019-06-03 | 2021-04-21 | 銓鴻資訊有限公司 | Registration and access control method of identity for third-party authentication |
CN111401407B (zh) * | 2020-02-25 | 2021-05-14 | 浙江工业大学 | Adversarial example defense method based on feature remapping and application thereof |
IL299752A (en) * | 2020-06-22 | 2023-03-01 | ID Metrics Group Incorporated | Data processing and transaction decision system |
KR102502631B1 (ko) * | 2020-11-16 | 2023-02-23 | 고큐바테크놀로지 주식회사 | Technique for authenticating a user |
KR102561460B1 (ko) * | 2020-12-09 | 2023-07-28 | 도시공유플랫폼 주식회사 | Abnormal behavior detection system using an artificial intelligence camera for identity verification |
KR102677846B1 (ko) * | 2021-05-10 | 2024-06-21 | 도시공유플랫폼 주식회사 | Unmanned store management system based on a boss main central server |
CN113656843B (zh) * | 2021-08-18 | 2022-08-12 | 北京百度网讯科技有限公司 | Information verification method, apparatus, device, and medium |
KR102524163B1 (ko) * | 2021-09-16 | 2023-04-21 | 국민대학교산학협력단 | Identification card recognition method and apparatus |
KR102445257B1 (ko) * | 2022-02-23 | 2022-09-23 | 주식회사 룰루랩 | Method and apparatus for detecting pores based on an artificial neural network and visualizing the detected pores |
JP7239047B1 (ja) | 2022-07-19 | 2023-03-14 | 凸版印刷株式会社 | Authentication system, authentication method, and program |
WO2024044185A1 (en) * | 2022-08-23 | 2024-02-29 | SparkCognition, Inc. | Face image matching based on feature comparison |
CN115375998B (zh) * | 2022-10-24 | 2023-03-17 | 成都新希望金融信息有限公司 | Document recognition method and apparatus, electronic device, and storage medium |
US11961315B1 (en) * | 2023-12-05 | 2024-04-16 | Daon Technology | Methods and systems for enhancing detection of a fraudulent identity document in an image |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104504321A (zh) * | 2015-01-05 | 2015-04-08 | 湖北微模式科技发展有限公司 | Method and *** for remote user identity verification based on a camera |
CN105844206A (zh) * | 2015-01-15 | 2016-08-10 | 北京市商汤科技开发有限公司 | Identity authentication method and device |
CN107844748A (zh) * | 2017-10-17 | 2018-03-27 | 平安科技(深圳)有限公司 | Identity verification method and apparatus, storage medium, and computer device |
CN108229120A (zh) * | 2017-09-07 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face unlocking and information registration method and apparatus, device, program, and medium |
CN108229499A (zh) * | 2017-10-30 | 2018-06-29 | 北京市商汤科技开发有限公司 | Document recognition method and apparatus, electronic device, and storage medium |
CN109255299A (zh) * | 2018-08-13 | 2019-01-22 | 北京市商汤科技开发有限公司 | Identity authentication method and apparatus, electronic device, and storage medium |
CN109359502A (zh) * | 2018-08-13 | 2019-02-19 | 北京市商汤科技开发有限公司 | Anti-counterfeiting detection method and apparatus, electronic device, and storage medium |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5892838A (en) * | 1996-06-11 | 1999-04-06 | Minnesota Mining And Manufacturing Company | Biometric recognition using a classification neural network |
KR20080025243A (ko) * | 2006-09-15 | 2008-03-20 | 주식회사 닷위저드 | ID card issuance application system and method therefor |
JP2009211381A (ja) * | 2008-03-04 | 2009-09-17 | Nec Corp | User authentication system, user authentication method, and user authentication program |
JP2010079393A (ja) * | 2008-09-24 | 2010-04-08 | Japan Tobacco Inc | Data processing apparatus, computer program therefor, and data processing method |
WO2010106587A1 (ja) * | 2009-03-18 | 2010-09-23 | パナソニック株式会社 | Neural network system |
RU2427911C1 (ru) * | 2010-02-05 | 2011-08-27 | Фирма "С1 Ко., Лтд." | Method for detecting faces in an image using a cascade of classifiers |
CA3024995A1 (en) * | 2016-05-24 | 2017-11-30 | Morphotrust Usa, Llc | Document image quality assessment |
KR102324468B1 (ko) * | 2017-03-28 | 2021-11-10 | 삼성전자주식회사 | Apparatus and method for face verification |
GB201908530D0 (en) * | 2019-06-13 | 2019-07-31 | Microsoft Technology Licensing Llc | Robustness against manipulations in machine learning |
US11449714B2 (en) * | 2019-10-30 | 2022-09-20 | Google Llc | Efficient convolutional neural networks and techniques to reduce associated computational costs |
2019
- 2019-06-04 JP JP2020550841A patent/JP7165746B2/ja active Active
- 2019-06-04 SG SG11202008549SA patent/SG11202008549SA/en unknown
- 2019-06-04 WO PCT/CN2019/090034 patent/WO2020034733A1/zh active Application Filing
- 2019-06-04 KR KR1020207025865A patent/KR102406432B1/ko active IP Right Grant

2020
- 2020-09-09 US US17/015,509 patent/US20200410074A1/en not_active Abandoned
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220179976A1 (en) * | 2020-12-03 | 2022-06-09 | Capital One Services, Llc | Systems and methods for processing requests for access |
US11972003B2 (en) * | 2020-12-03 | 2024-04-30 | Capital One Services, Llc | Systems and methods for processing requests for access |
Also Published As
Publication number | Publication date |
---|---|
KR20200118842A (ko) | 2020-10-16 |
SG11202008549SA (en) | 2020-10-29 |
US20200410074A1 (en) | 2020-12-31 |
KR102406432B1 (ko) | 2022-06-08 |
JP2021516819A (ja) | 2021-07-08 |
JP7165746B2 (ja) | 2022-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020034733A1 (zh) | Identity authentication method and apparatus, electronic device, and storage medium | |
KR102324706B1 (ko) | Face recognition unlocking method and apparatus, device, and medium | |
US20210200995A1 (en) | Face anti-counterfeiting detection methods and systems, electronic devices, programs and media | |
US11030752B1 (en) | System, computing device, and method for document detection | |
US11669607B2 (en) | ID verification with a mobile device | |
US11354917B2 (en) | Detection of fraudulently generated and photocopied credential documents | |
WO2018086543A1 (zh) | Living body discrimination method, identity authentication method, terminal, server, and storage medium | |
WO2019153739A1 (zh) | Face recognition-based identity authentication method, apparatus, device, and storage medium | |
CN109255299A (zh) | Identity authentication method and apparatus, electronic device, and storage medium | |
US11263441B1 (en) | Systems and methods for passive-subject liveness verification in digital media | |
WO2016131083A1 (en) | Identity verification. method and system for online users | |
WO2016084072A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
CN109359502A (zh) | Anti-counterfeiting detection method and apparatus, electronic device, and storage medium | |
WO2019200872A1 (zh) | Identity verification method and apparatus, electronic device, computer program, and storage medium | |
US11373449B1 (en) | Systems and methods for passive-subject liveness verification in digital media | |
CN112200136A (zh) | Document authenticity recognition method and apparatus, computer-readable medium, and electronic device | |
US20240005691A1 (en) | Validating identification documents | |
US11798305B1 (en) | Methods and systems for determining the authenticity of an identity document | |
US11900755B1 (en) | System, computing device, and method for document detection and deposit processing | |
US11842573B1 (en) | Methods and systems for enhancing liveness detection of image data | |
US20240046709A1 (en) | System and method for liveness verification | |
EP4266264A1 (en) | Unconstrained and elastic id document identification in an rgb image | |
US20240021016A1 (en) | Method and system for identity verification | |
Forczmański | Web System for Biometric Verification of Facial Portraits |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 19849428; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 20207025865; Country of ref document: KR; Kind code of ref document: A |
ENP | Entry into the national phase | Ref document number: 2020550841; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: PCT application non-entry in European phase | Ref document number: 19849428; Country of ref document: EP; Kind code of ref document: A1 |