CN112115866A - Face recognition method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN112115866A
Authority
CN
China
Prior art keywords
face
local
face image
image
global
Prior art date
Legal status
Pending
Application number
CN202010987418.1A
Other languages
Chinese (zh)
Inventor
程禹
申省梅
谢佩博
马原
Current Assignee
Beijing Pengsi Technology Co., Ltd.
Original Assignee
Beijing Pengsi Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Pengsi Technology Co., Ltd.
Priority to CN202010987418.1A
Publication of CN112115866A
Legal status: Pending

Classifications

    • G06V 40/171 (Human faces; feature extraction): Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/161 (Human faces): Detection; localisation; normalisation
    • G06F 18/214 (Pattern recognition; analysing): Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 (Pattern recognition; analysing): Matching criteria, e.g. proximity measures


Abstract

The disclosure provides a face recognition method, a face recognition apparatus, an electronic device, and a computer-readable storage medium. One embodiment of the method comprises: acquiring a face image to be recognized; determining whether the face image to be recognized is an occluded face image; in response to determining that it is, extracting the face features of the non-occluded region of the face image to be recognized using a local face feature extraction model to obtain target local face features, and comparing the target local face features against a pre-established local face feature library to obtain and output a face identity that meets a preset comparison condition; in response to determining that it is not, extracting the global face features of the face image to be recognized using a global face feature extraction model to obtain target global face features, and comparing the target global face features against a pre-established global face feature library to obtain and output a face identity that meets a preset comparison condition. The embodiment improves the accuracy and robustness of face recognition.

Description

Face recognition method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to a face recognition method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Face recognition is a biometric technique that identifies a person based on facial feature information. As an important computer vision technology, face recognition plays a significant role in the field of artificial intelligence.
At present, face recognition technology is widely applied in scenarios such as access control, identity verification, smart retail, and social entertainment.
Disclosure of Invention
The present disclosure provides a face recognition method, a face recognition apparatus, an electronic device, and a computer-readable storage medium.
In a first aspect, the present disclosure provides a face recognition method, including: acquiring a face image to be recognized; determining whether the face image to be recognized is an occluded face image; in response to determining that it is, extracting the face features of the non-occluded region of the face image to be recognized using a local face feature extraction model to obtain target local face features, and comparing the target local face features against a pre-established local face feature library to obtain and output a face identity that meets a preset comparison condition, where the local face feature library stores at least one local face feature and a corresponding face identity; in response to determining that it is not, extracting the global face features of the face image to be recognized using a global face feature extraction model to obtain target global face features, and comparing the target global face features against a pre-established global face feature library to obtain and output a face identity that meets a preset comparison condition, where the global face feature library stores at least one global face feature and a corresponding face identity.
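For illustration only, the branching logic of this first aspect can be sketched in Python as follows. All names here (occlusion_model, local_model, extract, and so on) are hypothetical placeholders rather than identifiers from the disclosure, and the feature libraries are shown as simple in-memory lists of (identity, feature) pairs:

```python
import numpy as np

def recognize(face_image, occlusion_model, local_model, global_model,
              local_library, global_library, threshold):
    """Route the image to the local or global branch according to the
    occlusion determination, then compare against the matching library."""
    if occlusion_model.is_occluded(face_image):
        # Occluded face: extract features of the non-occluded region only.
        target = local_model.extract(face_image)
        library = local_library
    else:
        # Non-occluded face: extract global face features.
        target = global_model.extract(face_image)
        library = global_library
    best_id, best_sim = None, -np.inf
    for identity, feature in library:
        sim = float(np.dot(target, feature))  # one possible similarity measure
        if sim > best_sim:
            best_id, best_sim = identity, sim
    # Output the identity only when the preset comparison condition is met.
    return best_id if best_sim > threshold else None
```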
In some optional embodiments, before extracting the face features of the non-occluded region of the face image to be recognized using the local face feature extraction model, the method further includes: cropping the acquired face image to be recognized to obtain a face image containing the non-occluded face region, where the cropped image matches the input size of the local face feature extraction model; or setting the pixel values of the occluded face region in the acquired face image to a preset value and then cropping the image, where the cropped image matches the input size of the local face feature extraction model.
In some optional embodiments, the occluded face image is an upper-half-face occlusion image and the local face feature library is a lower-half-face feature library; or the occluded face image is a lower-half-face occlusion image and the local face feature library is an upper-half-face feature library.
In some optional embodiments, the local face feature extraction model is pre-trained through the following steps: acquiring a first sample set, where each first sample includes an occluded face image and the corresponding non-occluded-region face features; and training a first initial deep learning model with the occluded face image as input and the corresponding non-occluded-region face features as expected output, to obtain the local face feature extraction model.
In some optional embodiments, the local face feature library is pre-established through the following steps: acquiring at least one face identity and a corresponding face image, where the face image is an occluded face image and/or a non-occluded face image; obtaining a corresponding non-occluded-region face image from each face image; extracting the local face features of each non-occluded-region face image using the local face feature extraction model; and storing each local face feature in the local face feature library in association with the corresponding face identity and face image.
In some optional embodiments, obtaining a corresponding non-occluded-region face image from each face image includes: cropping each face image to obtain a non-occluded-region face image that matches the input size of the local face feature extraction model; or setting the pixel values of the occluded region in each face image to a preset value and then cropping the image, where the cropped image matches the input size of the local face feature extraction model.
In some optional embodiments, before extracting the global face features of the face image to be recognized using the global face feature extraction model, the method further includes: processing the acquired face image to be recognized so that it matches the input size of the global face feature extraction model.
In some optional embodiments, the global face feature extraction model is pre-trained through the following steps: acquiring a second sample set, where each second sample includes a non-occluded face image and the corresponding global face features; and training a second initial deep learning model with the non-occluded face image as input and the corresponding global face features as expected output, to obtain the global face feature extraction model.
In some optional embodiments, the global face feature library is pre-established through the following steps: acquiring at least one face identity and a corresponding non-occluded face image; extracting the global face features of each non-occluded face image using the global face feature extraction model; and storing each global face feature in the global face feature library in association with the corresponding face identity and face image.
In some optional embodiments, after acquiring at least one face identity and a corresponding non-occluded face image, the method further includes: preprocessing each non-occluded face image so that it matches the input size of the global face feature extraction model.
In some optional embodiments, determining whether the face image to be recognized is an occluded face image includes: determining whether the face image to be recognized is an occluded face image using a pre-trained face occlusion determination model.
In some optional embodiments, the face occlusion determination model is pre-trained through the following steps: acquiring a third sample set, where each third sample includes a corresponding non-occluded face image and occluded face image; and training a third initial deep learning model with the non-occluded face images as positive samples and the occluded face images as negative samples, to obtain the face occlusion determination model.
In a second aspect, the present disclosure provides a face recognition apparatus, comprising: an acquisition unit configured to acquire a face image to be recognized; a determination unit configured to determine whether the face image to be recognized is an occluded face image; a first output unit configured to, in response to determining that it is, extract the face features of the non-occluded region of the face image to be recognized using the local face feature extraction model to obtain target local face features, and compare the target local face features against a pre-established local face feature library to obtain and output a face identity that meets a preset comparison condition, where the local face feature library stores at least one local face feature and a corresponding face identity; and a second output unit configured to, in response to determining that it is not, extract the global face features of the face image to be recognized using the global face feature extraction model to obtain target global face features, and compare the target global face features against a pre-established global face feature library to obtain and output a face identity that meets a preset comparison condition, where the global face feature library stores at least one global face feature and a corresponding face identity.
In some optional embodiments, before the face features of the non-occluded region of the face image to be recognized are extracted using the local face feature extraction model, the acquired face image to be recognized is cropped to obtain a face image containing the non-occluded face region, where the cropped image matches the input size of the local face feature extraction model; or the pixel values of the occluded face region in the acquired face image are set to a preset value and the image is then cropped, where the cropped image matches the input size of the local face feature extraction model.
Correspondingly, the face recognition apparatus further comprises: a first processing unit configured to crop the acquired face image to be recognized to obtain a face image containing the non-occluded face region before the non-occluded-region face features are extracted, where the cropped image matches the input size of the local face feature extraction model; or a second processing unit configured to set the pixel values of the occluded face region in the acquired face image to a preset value and crop the image before the non-occluded-region face features are extracted, where the cropped image matches the input size of the local face feature extraction model.
In some optional embodiments, the occluded face image is an upper-half-face occlusion image and the local face feature library is a lower-half-face feature library; or the occluded face image is a lower-half-face occlusion image and the local face feature library is an upper-half-face feature library.
In some optional embodiments, the local face feature extraction model is pre-trained through the following steps: acquiring a first sample set, where each first sample includes an occluded face image and the corresponding non-occluded-region face features; and training a first initial deep learning model with the occluded face image as input and the corresponding non-occluded-region face features as expected output, to obtain the local face feature extraction model.
Correspondingly, the face recognition apparatus further comprises a local face feature extraction model pre-training unit, including: a first acquisition unit configured to acquire the first sample set; and a first training unit configured to train the first initial deep learning model with the occluded face image as input and the corresponding non-occluded-region face features as expected output, to obtain the local face feature extraction model.
In some optional embodiments, the local face feature library is pre-established through the following steps: acquiring at least one face identity and a corresponding face image, where the face image is an occluded face image and/or a non-occluded face image; obtaining a corresponding non-occluded-region face image from each face image; extracting the local face features of each non-occluded-region face image using the local face feature extraction model; and storing each local face feature in the local face feature library in association with the corresponding face identity and face image.
Correspondingly, the face recognition apparatus further comprises a local face feature library pre-establishment unit, including: a second acquisition unit configured to acquire at least one face identity and a corresponding face image, where the face image is an occluded face image and/or a non-occluded face image; a third acquisition unit configured to obtain a corresponding non-occluded-region face image from each face image; a first extraction unit configured to extract the local face features of each non-occluded-region face image using the local face feature extraction model; and a first storage unit configured to store each local face feature in the local face feature library in association with the corresponding face identity and face image.
In some optional embodiments, obtaining a corresponding non-occluded-region face image from each face image includes: cropping each face image to obtain a non-occluded-region face image that matches the input size of the local face feature extraction model; or setting the pixel values of the occluded region in each face image to a preset value and then cropping the image, where the cropped image matches the input size of the local face feature extraction model.
Correspondingly, the local face feature library pre-establishment unit further comprises a third processing unit configured to crop a non-occluded-region face image from each face image, where the cropped image matches the input size of the local face feature extraction model; or configured to set the pixel values of the occluded region in each face image to a preset value and crop the image, where the cropped image matches the input size of the local face feature extraction model.
In some optional embodiments, before extracting the global face features of the face image to be recognized using the global face feature extraction model, the method further includes: processing the acquired face image to be recognized so that it matches the input size of the global face feature extraction model.
Correspondingly, the face recognition apparatus may further comprise a fourth processing unit configured to process the acquired face image to be recognized, before the global face features are extracted, so that it matches the input size of the global face feature extraction model.
In some optional embodiments, the global face feature extraction model is pre-trained through the following steps: acquiring a second sample set, where each second sample includes a non-occluded face image and the corresponding global face features; and training a second initial deep learning model with the non-occluded face image as input and the corresponding global face features as expected output, to obtain the global face feature extraction model.
Correspondingly, the face recognition apparatus may further comprise a global face feature extraction model pre-training unit, including: a fourth acquisition unit configured to acquire the second sample set; and a second training unit configured to train the second initial deep learning model with the non-occluded face image as input and the corresponding global face features as expected output, to obtain the global face feature extraction model.
In some optional embodiments, the global face feature library is pre-established through the following steps: acquiring at least one face identity and a corresponding non-occluded face image; extracting the global face features of each non-occluded face image using the global face feature extraction model; and storing each global face feature in the global face feature library in association with the corresponding face identity and face image.
Correspondingly, the face recognition apparatus may further comprise a global face feature library pre-establishment unit, including: a fifth acquisition unit configured to acquire at least one face identity and a corresponding non-occluded face image; a second extraction unit configured to extract the global face features of each non-occluded face image using the global face feature extraction model; and a second storage unit configured to store each global face feature in the global face feature library in association with the corresponding face identity and face image.
In some optional embodiments, after acquiring at least one face identity and a corresponding non-occluded face image, the method further includes: preprocessing each non-occluded face image so that it matches the input size of the global face feature extraction model.
Correspondingly, the face recognition apparatus may further comprise a fifth processing unit configured to preprocess each non-occluded face image, after at least one face identity and a corresponding non-occluded face image are acquired, so that the preprocessed image matches the input size of the global face feature extraction model.
In some optional embodiments, the determination unit is further configured to determine whether the face image to be recognized is an occluded face image using a pre-trained face occlusion determination model.
In some optional embodiments, the face occlusion determination model is pre-trained through the following steps: acquiring a third sample set, where each third sample includes a corresponding non-occluded face image and occluded face image; and training a third initial deep learning model with the non-occluded face images as positive samples and the occluded face images as negative samples, to obtain the face occlusion determination model.
Correspondingly, the face recognition apparatus may further comprise a face occlusion determination model pre-training unit, including: a sixth acquisition unit configured to acquire the third sample set; and a third training unit configured to train the third initial deep learning model with the non-occluded face images as positive samples and the occluded face images as negative samples, to obtain the face occlusion determination model.
In a third aspect, the present disclosure provides an electronic device, comprising: a memory storing a computer program, and a processor that implements the method described in any implementation of the first aspect when executing the computer program.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by one or more processors, implements the method as described in any of the implementations of the first aspect.
With the face recognition method and apparatus, electronic device, and computer-readable storage medium provided by the present disclosure, a face image to be recognized is first acquired, and it is then determined whether the image is an occluded face image. In response to determining that it is, a local face feature extraction model is used to extract the face features of the non-occluded region to obtain target local face features, which are compared against a pre-established local face feature library to obtain and output a face identity that meets a preset comparison condition. In response to determining that it is not, a global face feature extraction model is used to extract the global face features of the image to obtain target global face features, which are compared against a pre-established global face feature library to obtain and output a face identity that meets a preset comparison condition.
When a face is occluded, the occluding object destroys the inherent structure and geometric characteristics of the face, so that a general face recognition algorithm cannot recognize the face accurately. Occlusion also introduces local features belonging to the occluder, and the influence of these features grows in proportion to the occluded fraction of the face region: as the occluded proportion increases, the accuracy of a face recognition algorithm drops sharply. In short, occluded face images lead to inaccurate recognition because the face features are incomplete.
In an actual scene, the global face features of a non-occluded face image are the most representative; for a lower-half-face occlusion image, the lower-half-face details are lost and the upper-half-face features become the most representative. The present disclosure therefore performs an occlusion determination in advance and carries out a targeted recognition operation according to the determination result. For a partially occluded face, if no occlusion determination is performed and the global face features of the image are used directly for comparison, the local features introduced by the occluder will affect the accuracy of recognition. If the occlusion determination is performed in advance, the more representative non-occluded-region face features can be used for comparison according to the determination result, making full use of the non-occluded region and reducing the influence of the occluder. Throughout the process, the features to be extracted are chosen according to the occlusion determination result, which reduces the blindness of feature extraction and improves the accuracy and robustness of face recognition.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a face recognition method according to the present disclosure;
FIG. 3 is a flow diagram of one embodiment of a method for creating a local face feature library and a global face feature library;
FIG. 4 is a schematic block diagram of one embodiment of a face recognition apparatus according to the present disclosure;
FIG. 5 is a schematic block diagram of a computer system suitable for use in implementing the electronic device of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the face recognition method or face recognition apparatus of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a terminal device 101, a network 102, a network 104, a face recognition server 103, a global face feature library server 105, and a local face feature library server 106. The network 102 serves as a medium for providing a communication link between the terminal device 101 and the face recognition server 103. The network 104 is used to provide a medium for communication links between the face recognition server 103 and the global face feature library server 105 and the local face feature library server 106. Networks 102, 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user can use the terminal device 101 to interact with the face recognition server 103 via the network 102 to receive or send messages or the like. Various communication client applications, such as an image acquisition application, an image processing application, a face recognition application, a search application, and the like, may be installed on the terminal device 101.
The terminal apparatus 101 may be hardware or software. When the terminal device 101 is hardware, it may be various electronic devices having an image acquisition device, for example, a to-be-recognized face image acquisition device operating on a gate, and the to-be-recognized face image acquisition device may include a face acquisition camera, a guide display screen (for guiding a user to align a face with the face acquisition camera), a light supplement lamp, and the like. When the terminal apparatus 101 is software, it can be installed in the electronic apparatuses listed above. It may be implemented as a plurality of software or software modules (for example, for providing a face image capture service), or as a single software or software module. And is not particularly limited herein.
The face recognition server 103 may be a server that provides various services, such as a background server that provides face recognition services for a face image to be recognized sent by the terminal device 101. The background server may perform processing such as analysis on the received face image to be recognized, and feed back a processing result (for example, a face recognition result) to the terminal device 101.
The global face feature library server 105 may be a database server that provides data support to the face recognition server 103. The global face feature library server 105 may be a storage server, and may store at least one identity and corresponding global face features. The face recognition server 103 may obtain global face features matching target global face features of the face image to be recognized and the associated identity from the global face feature library server 105.
The local face feature library server 106 may be a database server that provides data support to the face recognition server 103. The local facial feature library server 106 may be a storage server, and may store at least one identification and corresponding local facial features. The face recognition server 103 may obtain local face features matching target local face features of the face image to be recognized and the associated identity from the local face feature library server 106.
In some cases, the face recognition method provided by the present disclosure may be executed by the face recognition server 103, and accordingly, a face recognition device may also be disposed in the face recognition server 103.
In some cases, the global face feature library and the local face feature library may be stored locally on the face recognition server 103, which may then obtain the matching global face feature and associated identity directly from the locally stored global face feature library, or the matching local face feature and associated identity directly from the locally stored local face feature library. In this case, the exemplary system architecture 100 may omit the network 104, the global face feature library server 105, and the local face feature library server 106.
In some cases, the face recognition method provided by the present disclosure may be executed by the terminal device 101, and accordingly, a face recognition apparatus may also be provided in the terminal device 101.
In some cases, the global face feature library and the local face feature library may be stored locally on the terminal device 101, which may then obtain the matching global face feature and associated identity directly from the locally stored global face feature library, or the matching local face feature and associated identity directly from the locally stored local face feature library. In this case, the exemplary system architecture 100 may omit the network 102, the network 104, the face recognition server 103, the global face feature library server 105, and the local face feature library server 106.
The face recognition server 103 may be hardware or software. When the face recognition server 103 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the face recognition server 103 is software, it may be implemented as a plurality of software or software modules (for example, for providing face recognition service), or may be implemented as a single software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, face recognition servers, global face feature library servers, and local face feature library servers in fig. 1 are merely illustrative. Any number of terminal devices, networks, face recognition servers, global face feature library servers, and local face feature library servers may be provided as desired.
With continued reference to fig. 2, a flow 200 of one embodiment of a face recognition method according to the present disclosure is shown. The face recognition method comprises the following steps:
Step 201: acquiring a face image to be recognized.
in this embodiment, an executing subject (for example, the face recognition server 103 shown in fig. 1) of the face recognition method may remotely acquire a face image to be recognized from another electronic device (for example, the terminal device 101 shown in fig. 1) connected to the executing subject through a network, or the executing subject may acquire the face image to be recognized from an image acquisition device (for example, a camera, a video camera, etc.) connected to the executing subject in communication. The face displayed in the face image to be recognized may be a full face (a face without occlusion on the front side) or a partial face (a face with occlusion on the lower half). In practice, a user may wear a mask, a veil, or other such obstruction, causing the lower half of the face to be obstructed.
In practice, when a user arrives at a gate running a face recognition system, the user needs to pass the gate after performing identity verification through face recognition. Specifically, the user can be guided to aim at the face to be recognized to the camera for shooting through the face image to be recognized collecting equipment running on the gate, so that the face image to be recognized is collected.
Step 202: determining whether the face image to be recognized is an occluded face image.
In this embodiment, the execution subject may determine whether the face image to be recognized is an occluded face image using various face occlusion determination methods. An occluded face image is a face image in which the face is blocked by an occluder (e.g., a mask or sunglasses). The occluded face image may be an upper-half-face occlusion image or a lower-half-face occlusion image.
In an actual scene, people may wear occluders such as sunglasses, blocking the upper half of the face, or wear masks, veils, and other occluders, blocking the lower half of the face.
In some optional implementations, the execution subject may determine whether the face image to be recognized is an occluded face image by checking whether the average pixel value of the face region falls within the pixel range corresponding to skin color. If it does not, the face can be determined to be blocked by an occluder; if it does, the face can be determined to be unoccluded, and the image is not an occluded face image.
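A minimal sketch of this heuristic, assuming the face region is available as an RGB crop; the skin-tone bounds below are illustrative assumptions, since the disclosure does not specify the pixel range:

```python
import numpy as np

# Illustrative per-channel RGB skin-tone bounds; a real system would
# calibrate these, or work in a chroma space such as YCrCb.
SKIN_LOW = np.array([80, 50, 40])
SKIN_HIGH = np.array([255, 220, 200])

def occluded_by_pixel_mean(face_region: np.ndarray) -> bool:
    """face_region: H x W x 3 uint8 crop of the detected face. Returns True
    when the mean pixel falls outside the skin-color range, i.e. the face
    is judged to be blocked by an occluder."""
    mean_pixel = face_region.reshape(-1, 3).mean(axis=0)
    in_range = bool(np.all(mean_pixel >= SKIN_LOW) and np.all(mean_pixel <= SKIN_HIGH))
    return not in_range
```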
In some optional implementations, the execution subject may also determine whether the face image to be recognized is an occluded face image through a face occlusion determination model, which may be, for example, a pre-trained support vector machine. As an example, the face occlusion determination model may be pre-trained through the following steps: acquiring a third sample set, where each third sample includes a corresponding non-occluded face image and occluded face image; and training a third initial deep learning model with the non-occluded face images as positive samples and the occluded face images as negative samples, to obtain the face occlusion determination model.
In this optional implementation, the third initial deep learning model may be any of various pre-built neural network models. As an example, during training, a positive or negative sample is input into the third initial deep learning model to obtain a determination result; a third difference between this result and the label indicating whether the sample is an occluded face image is then computed, and the parameters of the model are adjusted based on the third difference. The face occlusion determination model is obtained when the current model satisfies a preset training completion condition (e.g., the third difference is smaller than a third preset difference threshold, the number of training iterations reaches a third preset number, or the training time reaches a third preset duration).
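As an illustration of these training steps, here is a compressed PyTorch sketch; the backbone, optimizer, and stopping values are assumptions rather than choices fixed by the disclosure:

```python
import torch
import torch.nn as nn

def train_occlusion_model(model: nn.Module, loader, max_epochs=10, loss_eps=0.05):
    """Binary classification: label 1 = non-occluded (positive sample),
    label 0 = occluded (negative sample)."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(max_epochs):                       # preset training count
        for images, labels in loader:                 # the third sample set
            logits = model(images).squeeze(1)         # model outputs (N, 1)
            loss = criterion(logits, labels.float())  # the "third difference"
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if loss.item() < loss_eps:                    # preset completion condition
            break
    return model
```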
If it is determined in step 202 that the face image to be recognized is an occluded face image, steps 203 and 204 are performed; if it is determined that it is not, steps 205 and 206 are performed.
Step 203: in response to determining that the image is an occluded face image, extracting the face features of the non-occluded region of the face image to be recognized using the local face feature extraction model to obtain target local face features.
In this embodiment, the non-occluded-region face features of the face image to be recognized may include the shape, color, position, and scale features of the eyebrow, eye, nose, and mouth regions of the face.
In some optional implementations, the local face feature extraction model may be pre-trained through the following steps: acquiring a first sample set, where each first sample includes an occluded face image and the corresponding non-occluded-region face features; and training a first initial deep learning model with the occluded face image as input and the corresponding non-occluded-region face features as expected output, to obtain the local face feature extraction model.
In this optional implementation, the first initial deep learning model may be any of various pre-built neural network models. As an example, during training, the first initial deep learning model extracts features from the occluded face image in a first sample to obtain actual non-occluded-region face features; a preset loss function is then used to compute a first difference between the actual features and the non-occluded-region face features in the sample, and the network parameters of the model are adjusted accordingly. The local face feature extraction model is obtained when the current model satisfies a preset training completion condition (e.g., the first difference is smaller than a first preset difference threshold, the number of training iterations reaches a first preset number, or the training time reaches a first preset duration).
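A hedged PyTorch sketch of this feature-regression loop; the use of mean-squared error as the "preset loss function" is an assumption:

```python
import torch
import torch.nn as nn

def train_local_extractor(model: nn.Module, loader, max_epochs=10, loss_eps=0.01):
    """Each batch pairs occluded face images with the target
    non-occluded-region face features (the expected output)."""
    criterion = nn.MSELoss()                           # assumed loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(max_epochs):
        for occluded_images, target_features in loader:       # first sample set
            actual_features = model(occluded_images)
            loss = criterion(actual_features, target_features)  # first difference
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if loss.item() < loss_eps:                     # preset completion condition
            break
    return model
```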
In some optional implementations, before extracting the non-occluded-region face features of the face image to be recognized using the local face feature extraction model, the method may further include: cropping the acquired face image to obtain a face image containing the non-occluded face region, where the size of the cropped image is consistent with the input data size of the local face feature extraction model; or setting the pixel values of the occluded face region to a preset value and then cropping the image, where the size of the cropped image is likewise consistent with the input data size of the model.
For example, if the input data size of the local face feature extraction model is 112 × 64, the face image to be recognized is scaled to 112 × 64.
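The two preprocessing variants can be sketched with OpenCV as follows. The 112 × 64 size follows the example above (reading it as width 112 × height 64 is an assumption), and the bounding boxes are assumed to be provided by the earlier detection and occlusion-determination steps:

```python
import cv2
import numpy as np

INPUT_SIZE = (112, 64)  # assumed (width, height) of the local model input

def crop_unoccluded(image: np.ndarray, unoccluded_box) -> np.ndarray:
    """Variant 1: crop the non-occluded face region, then scale it."""
    x, y, w, h = unoccluded_box
    crop = image[y:y + h, x:x + w]
    return cv2.resize(crop, INPUT_SIZE)

def mask_and_crop(image: np.ndarray, occluded_box, preset_value=0) -> np.ndarray:
    """Variant 2: overwrite the occluded region with a preset pixel value,
    then crop/scale the face to the model input size."""
    out = image.copy()
    x, y, w, h = occluded_box
    out[y:y + h, x:x + w] = preset_value
    return cv2.resize(out, INPUT_SIZE)
```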
In some optional implementations, before the cropping or pixel-setting preprocessing described above, the method may further include performing face detection and face alignment on the image to be recognized. Face alignment generally relies on five key points: the left eye center, right eye center, nose tip, left mouth corner, and right mouth corner.
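For illustration, five-point alignment can be implemented as a similarity transform; the template coordinates below are hypothetical reference positions for a 112 × 112 output, not values from the disclosure:

```python
import cv2
import numpy as np

# Hypothetical template: left eye, right eye, nose tip, left/right mouth corner.
TEMPLATE_5PTS = np.float32([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                            [41.5, 92.4], [70.7, 92.2]])

def align_face(image: np.ndarray, landmarks_5pts) -> np.ndarray:
    """Estimate a similarity transform from the five detected key points to
    the template, then warp the face into a canonical 112 x 112 crop."""
    src = np.float32(landmarks_5pts)
    matrix, _ = cv2.estimateAffinePartial2D(src, TEMPLATE_5PTS, method=cv2.LMEDS)
    return cv2.warpAffine(image, matrix, (112, 112))
```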
Step 204: performing feature comparison in the pre-established local face feature library using the target local face features, to obtain and output a face identity that meets the preset comparison condition.
In this embodiment, the local face feature library may store at least one local face feature and a corresponding face identity. If the occluded face image is an upper-half-face occlusion image, the local face feature library may be a lower-half-face feature library; if it is a lower-half-face occlusion image, the library may be an upper-half-face feature library. The preset comparison condition for the local face feature library may be that the first similarity between a stored local face feature and the non-occluded-region face features is greater than a first similarity threshold.
In some optional implementations, the local face feature library may store at least one local face feature together with the corresponding face identity and face image, and the execution subject may obtain and output both the face identity and the corresponding face image that meet the preset comparison condition.
In some optional implementations, the local face feature library may be pre-established through the following steps: acquiring at least one face identity and a corresponding face image, where the face image is an occluded face image and/or a non-occluded face image; obtaining a corresponding non-occluded-region face image from each face image; extracting the local face features of each non-occluded-region face image using the local face feature extraction model; and storing each local face feature in the local face feature library in association with the corresponding face identity and face image.
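A minimal enrollment sketch under the same assumptions; extract_local stands in for the trained local face feature extraction model, and the library is shown as an in-memory list rather than a database:

```python
import numpy as np

def build_local_library(entries, extract_local, to_unoccluded_region):
    """entries: iterable of (face_identity, face_image) pairs. Each local
    feature is stored in association with its identity and source image."""
    library = []
    for identity, image in entries:
        region = to_unoccluded_region(image)         # crop, or mask then crop
        feature = extract_local(region)              # local face feature vector
        feature = feature / np.linalg.norm(feature)  # normalize for comparison
        library.append({"id": identity, "feature": feature, "image": image})
    return library
```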
In some optional implementations, obtaining a corresponding non-occluded-region face image from each face image includes: cropping each face image to obtain a non-occluded-region face image that matches the input size of the local face feature extraction model; or setting the pixel values of the occluded region in each face image to a preset value and then cropping the image, where the cropped image matches the input size of the local face feature extraction model.
In some optional implementations, before obtaining the corresponding non-occluded-region face image from each face image, the method further includes performing face detection and face alignment on each face image.
In some optional implementations, the execution subject may calculate a first similarity between the non-occluded-region face features and each local face feature in the local face feature library, and, in response to the first highest similarity being greater than a first preset similarity threshold, determine the face identity associated with the local face feature corresponding to that highest similarity as the recognition result for the face image to be recognized.
In this optional implementation, the first highest similarity is the maximum among the calculated first similarities. The execution subject may calculate the first similarity using various similarity measures, such as the Euclidean distance or the Manhattan distance. The higher the first similarity, the more similar the non-occluded-region face features are to the corresponding local face feature in the library.
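Continuing the sketch, the comparison of step 204 might look as follows; mapping the Euclidean distance (one of the measures named above) to a similarity in (0, 1] is an illustrative choice:

```python
import numpy as np

def match_local(target_feature, library, first_threshold):
    """Return the identity of the best-matching local feature, or None when
    the first highest similarity does not exceed the preset threshold."""
    best_id, best_sim = None, -np.inf
    for entry in library:
        dist = np.linalg.norm(target_feature - entry["feature"])
        sim = 1.0 / (1.0 + dist)         # Euclidean distance -> similarity
        if sim > best_sim:
            best_id, best_sim = entry["id"], sim
    return best_id if best_sim > first_threshold else None
```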
In practice, identity verification is performed by comparing the non-occluded-region face features of a user against the local face features in the local face feature library: the identity associated with the matched local face feature is used for verification, persons who pass verification are admitted, and persons who fail verification are refused.
Step 205: in response to determining that the image is not an occluded face image, extracting the global face features of the face image to be recognized using the global face feature extraction model to obtain target global face features.
In this embodiment, the global face features of the face image to be recognized may include the facial contour features and the shape, color, position, and scale features of the facial feature regions.
In some optional implementations, the global face feature extraction model is pre-trained through the following steps: acquiring a second sample set, where each second sample includes a non-occluded face image and the corresponding global face features; and training a second initial deep learning model with the non-occluded face image as input and the corresponding global face features as expected output, to obtain the global face feature extraction model.
In this optional implementation, the second initial deep learning model may be any of various pre-built neural network models; the first and second initial deep learning models may have the same or different network structures. As an example, during training, the second initial deep learning model extracts features from the non-occluded face image in a second sample to obtain actual global face features; a preset loss function is then used to compute a second difference between the actual features and the global face features in the sample, and the network parameters of the model are adjusted accordingly. The global face feature extraction model is obtained when the current model satisfies a preset training completion condition (e.g., the second difference is smaller than a second preset difference threshold, the number of training iterations reaches a second preset number, or the training time reaches a second preset duration).
In some optional implementations, before extracting the global face features of the face image to be recognized using the global face feature extraction model, the method may further include: processing the acquired face image so that it matches the input size of the global face feature extraction model.
For example, if the input data size of the global face feature extraction model is 112 × 112, the face image to be recognized is scaled to 112 × 112.
In some optional implementations, before this processing, the method further includes performing face detection and face alignment on the face image to be recognized.
Step 206: performing feature comparison in the pre-established global face feature library using the target global face features, to obtain and output a face identity that meets the preset comparison condition.
In this embodiment, the global face feature library stores at least one global face feature and a corresponding face identity. The preset comparison condition may be that the second similarity between a stored global face feature and the target global face features is greater than a second similarity threshold.
In some optional implementations, the global face feature library may store at least one global face feature together with the corresponding face identity and face image, and the execution subject may obtain and output both the face identity and the corresponding face image that meet the preset comparison condition.
In some optional implementations, the global face feature library is pre-established through the following steps: acquiring at least one face identity and a corresponding non-occluded face image; extracting the global face features of each non-occluded face image using the global face feature extraction model; and storing each global face feature in the global face feature library in association with the corresponding face identity and face image.
In some optional implementations, after acquiring at least one face identity and a corresponding non-occluded face image, the method further includes: preprocessing each non-occluded face image so that it matches the input size of the global face feature extraction model.
In some optional implementations, before this preprocessing, the method further includes performing face detection and face alignment on each non-occluded face image.
In some optional implementations, the execution subject may calculate a second similarity between the target global face features and each global face feature in the global face feature library, and, in response to the second highest similarity being greater than a second preset similarity threshold, determine the face identity associated with the global face feature corresponding to that highest similarity as the recognition result for the face image to be recognized.
In this optional implementation, the second highest similarity is the maximum among the calculated second similarities. The execution subject may calculate the second similarity using various similarity measures, such as the Mahalanobis distance or the cosine distance. The higher the second similarity, the more similar the target global face features are to the corresponding global face feature in the library.
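The comparison of step 206 works the same way with a different measure; for example, cosine similarity against the whole library can be vectorized as a sketch:

```python
import numpy as np

def match_global(target_feature, features, identities, second_threshold):
    """features: N x D matrix of stored global face features (one per row);
    identities: the N face identities associated with those rows."""
    t = target_feature / np.linalg.norm(target_feature)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ t                    # cosine similarity against every entry
    best = int(np.argmax(sims))     # index of the maximum second similarity
    return identities[best] if sims[best] > second_threshold else None
```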
In practice, identity verification is performed by comparing the face features of a user against the global face features in the global face feature library: the identity associated with the matched global face feature is used for verification, persons who pass verification are admitted, and persons who fail verification are refused.
The method provided by the embodiments of the present disclosure performs the occlusion determination in advance and carries out a targeted recognition operation according to the determination result. For an unoccluded face, the more representative global face features of the image to be recognized are used for comparison; for a partially occluded face, the more representative local face features are used, making full use of the non-occluded region and reducing the influence of the occluder. Because the features to be extracted are chosen according to the occlusion determination result, the blindness of feature extraction is reduced and the accuracy and robustness of face recognition are improved. Moreover, the disclosed technique applies both to scenes with partially occluded faces and to scenes with unoccluded faces, broadening the applicable range and scenarios of face recognition.
Under the condition that the face is shielded by a shielding object, in the process of face recognition, the shielding object destroys the inherent structure and geometric features of the face, so that the face cannot be accurately recognized by a general face recognition algorithm, and the accuracy of face recognition is influenced due to incomplete shielding face images. The method aims at the shielded face image, trains a local face feature extraction model in advance and establishes a corresponding local face feature library in advance, can use the local face feature extraction model to purposefully extract the face features of the non-shielded area of the shielded face image, and determines the face identity identification meeting the preset conditions in the local face feature library, thereby improving the accuracy and robustness of face recognition.
With further reference to FIG. 3, a flow 300 of one embodiment of a method for creating the local face feature library and the global face feature library is illustrated. The process 300 includes the following steps:
Step 301, acquire at least one face identity and a corresponding face image.
In this embodiment, the face image may be a partially occluded face image and/or a non-occluded face image. The face identity may be identification information that distinguishes different users, such as the user's mobile phone number, identification number, or name.
Steps 302 to 304 establish the local face feature library, and steps 305 and 306 establish the global face feature library.
Step 302, acquire a corresponding non-occluded region face image from each face image.
In this step, the face image is an occluded face image and/or a non-occluded face image.
In some optional implementations, the executing body may perform face detection and face alignment on each face image.
In some optional implementations, acquiring a corresponding non-occluded region face image from each face image includes: cropping each face image to obtain a non-occluded region face image that matches the input size of the local face feature extraction model; or setting the pixel values of the occluded region in each face image to a preset value and then cropping the face image so that the cropped face image matches the input size of the local face feature extraction model.
For example, if the input data size of the local face feature extraction model is 112 × 64, each cropped non-occluded region face image is scaled to 112 × 64.
In some optional implementations, before either cropping variant described above, the method further includes: performing face detection and face alignment on each face image.
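A minimal sketch of the two cropping variants follows, under stated assumptions: OpenCV and NumPy are available, faces are already detected and aligned, the occluded region is given as a pixel bounding box, and the 112 × 64 input size of the local model is taken from the example above; all function and constant names are illustrative, not the disclosure's:

import cv2
import numpy as np

LOCAL_INPUT_W, LOCAL_INPUT_H = 64, 112  # width x height of the assumed 112 x 64 input

def crop_non_occluded_half(aligned_face: np.ndarray, lower_half_occluded: bool = True) -> np.ndarray:
    """Variant 1: crop the non-occluded half of an aligned face and
    scale it to the input size of the local feature extraction model."""
    h = aligned_face.shape[0]
    half = aligned_face[: h // 2] if lower_half_occluded else aligned_face[h // 2 :]
    return cv2.resize(half, (LOCAL_INPUT_W, LOCAL_INPUT_H))

def mask_then_crop(aligned_face: np.ndarray, occlusion_box: tuple, preset_value: int = 0) -> np.ndarray:
    """Variant 2: overwrite the occluded region with a preset pixel
    value, then scale the whole image to the model input size."""
    x0, y0, x1, y1 = occlusion_box
    out = aligned_face.copy()
    out[y0:y1, x0:x1] = preset_value  # erase the occluded pixels
    return cv2.resize(out, (LOCAL_INPUT_W, LOCAL_INPUT_H))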
Step 303, extract the local face features of each non-occluded region face image with the local face feature extraction model.
In this embodiment, the local face features may be upper half face features or lower half face features. As an example, the upper half face features may include the shape, color, position, and scale features of the eye and eyebrow regions. The lower half face features may include the lower face contour and the shape, color, position, and scale features of the ears, nose, and mouth.
Step 304, store each local face feature in the local face feature library in association with the corresponding face identity.
In some optional implementations, the executing body may store each local face feature, the corresponding face identity, and the corresponding face image in the local face feature library in an associated manner.
In practice, user registration information that satisfies preset conditions, such as a user's local face features and corresponding identity, needs to be stored in advance. The local face features of a user can be obtained from the user's partially occluded and non-occluded face images, and a local face feature library is purpose-built to store the local face features together with the corresponding identities.
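A minimal enrollment sketch follows, assuming the library is an in-memory dictionary and that `get_region` and `extract_local_feature` are placeholders for the cropping step above and the pre-trained local face feature extraction model; none of these names come from the disclosure:

import numpy as np

def build_local_face_feature_library(registrations, get_region, extract_local_feature):
    """Build a local face feature library from (face_identity, face_image)
    pairs, storing feature, identity and source image in association."""
    library = {}
    for face_identity, face_image in registrations:
        region = get_region(face_image)              # non-occluded region image
        feature = extract_local_feature(region)      # local face feature vector
        feature = feature / np.linalg.norm(feature)  # normalize for cosine matching
        library[face_identity] = {"feature": feature, "image": face_image}
    return library

The global face feature library of steps 305 and 306 can be built in the same way, substituting the global face feature extraction model and skipping the region extraction.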
Step 305, extract the global face features of each non-occluded face image with the global face feature extraction model.
In this step, the face image is a non-occluded face image.
In some optional implementations, after acquiring the at least one face identity and the corresponding non-occluded face image, the method further includes: preprocessing each non-occluded face image so that the preprocessed image matches the input size of the global face feature extraction model.
For example, if the input data size of the global face feature extraction model is 112 × 112, each non-occluded face image is scaled to 112 × 112.
In some optional implementations, before the preprocessing, the method further includes: performing face detection and face alignment on each non-occluded face image.
Step 306, store each global face feature in the global face feature library in association with the corresponding face identity and face image.
In some optional implementations, the executing body may store each global face feature, the corresponding face identity, and the corresponding face image in the global face feature library in an associated manner.
In practice, user registration information that satisfies preset conditions, such as a user's global face features and corresponding identity, needs to be stored in advance. The global face features of a user can be obtained from the user's non-occluded face image, and a global face feature library is purpose-built to store the global face features together with the corresponding identities.
The method provided by the above embodiment of the present disclosure builds the local face feature library and the global face feature library separately, which guarantees that both the local and the global face features of the same user are stored. Enriching the stored face features of each user in this way allows the method to serve different face recognition application scenarios.
With further reference to fig. 4, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of a face recognition apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied in various electronic devices.
As shown in fig. 4, the face recognition apparatus 400 of this embodiment includes: an acquisition unit 401 configured to acquire a face image to be recognized; a determination unit 402 configured to determine whether the face image to be recognized is an occluded face image; a first output unit 403 configured to, in response to determining yes, extract the face features of the non-occluded region of the face image to be recognized using the local face feature extraction model to obtain target local face features, and perform feature comparison in a pre-established local face feature library with the target local face features to obtain and output a face identity that satisfies a preset comparison condition, wherein at least one local face feature and a corresponding face identity are stored in the local face feature library; and a second output unit 404 configured to, in response to determining no, extract the global face features of the face image to be recognized using the global face feature extraction model to obtain target global face features, and perform feature comparison in a pre-established global face feature library with the target global face features to obtain and output a face identity that satisfies a preset comparison condition, wherein at least one global face feature and a corresponding face identity are stored in the global face feature library.
In this embodiment, for the specific processing of the acquisition unit 401, the determination unit 402, the first output unit 403, and the second output unit 404 of the face recognition apparatus 400 and their technical effects, reference may be made to the descriptions of steps 201 to 204 in the embodiment corresponding to fig. 2, which are not repeated here.
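For orientation only, the routing performed by units 402-404 can be sketched as below; every callable is a placeholder for the corresponding pre-trained model or pre-established library lookup, not an API defined by the disclosure:

def recognize_face(image, is_occluded, extract_local, extract_global,
                   local_library, global_library, match):
    """Mirror apparatus 400: the occlusion determination selects either
    the local (non-occluded region) or the global feature pipeline."""
    if is_occluded(image):
        feature = extract_local(image)      # target local face features
        return match(feature, local_library)
    feature = extract_global(image)         # target global face features
    return match(feature, global_library)

Here `match` can be the cosine-similarity search sketched earlier; it returns the stored face identity when the highest similarity exceeds the preset threshold, and None otherwise.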
In some optional embodiments, the face recognition apparatus and the corresponding method further include: a first processing unit (not shown in the figure) configured to, before the face features of the non-occluded region of the face image to be recognized are extracted with the local face feature extraction model to obtain the target local face features, crop the acquired face image to be recognized into a face image containing the non-occluded face region, the cropped image matching the input size of the local face feature extraction model; or a second processing unit (not shown in the figure) configured to, before the non-occluded region face features are extracted, set the pixel values of the occluded face region in the acquired face image to a preset value and crop the image, the cropped image matching the input size of the local face feature extraction model.
In some optional embodiments, the occluded face image comprises an upper half face occlusion image and the local face feature library comprises a lower half face feature library; or the occluded face image comprises a lower half face occlusion image and the local face feature library comprises an upper half face feature library.
In some optional embodiments, the face recognition apparatus and the corresponding method further include a local face feature extraction model pre-training unit (not shown in the figure) configured to pre-train the local face feature extraction model, comprising: a first acquisition unit (not shown in the figure) configured to acquire a first sample set, wherein each first sample comprises an occluded face image and the corresponding non-occluded region face features; and a first training unit (not shown in the figure) configured to train a first initial deep learning model, taking the occluded face image as input and the corresponding non-occluded region face features as expected output, to obtain the local face feature extraction model.
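A minimal training sketch under stated assumptions follows: PyTorch is used, the first initial deep learning model is stood in for by a toy convolutional network, and each batch from `loader` pairs occluded face images with their expected non-occluded region feature vectors; the architecture and loss are illustrative choices, not the disclosure's:

import torch
import torch.nn as nn

class LocalFeatureNet(nn.Module):
    """Toy stand-in for the first initial deep learning model: maps an
    occluded face crop to a fixed-dimension local face feature."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.backbone(x)

def train_local_model(model, loader, epochs: int = 10, lr: float = 1e-3):
    """Regress the expected non-occluded region features from occluded
    images, i.e. treat the expected output as a target vector."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for occluded_images, target_features in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(occluded_images), target_features)
            loss.backward()
            optimizer.step()
    return model

The global face feature extraction model of the later embodiments can be trained the same way, pairing non-occluded face images with their expected global features.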
In some optional embodiments, the face recognition apparatus and the corresponding method further include a local face feature library pre-establishing unit (not shown in the figure) configured to pre-establish the local face feature library, comprising: a second acquisition unit (not shown in the figure) configured to acquire at least one face identity and a corresponding face image, the face image being an occluded face image and/or a non-occluded face image; a third acquisition unit (not shown in the figure) configured to acquire a corresponding non-occluded region face image from each face image; a first extraction unit (not shown in the figure) configured to extract the local face features of each non-occluded region face image with the local face feature extraction model; and a first storage unit (not shown in the figure) configured to store each local face feature in the local face feature library in association with the corresponding face identity and face image.
In some optional embodiments, the local face feature library pre-establishing unit (not shown in the figure) further includes a third processing unit (not shown in the figure) configured to crop a non-occluded region face image from each face image, the non-occluded region face image matching the input size of the local face feature extraction model; or configured to set the pixel values of the occluded region in each face image to a preset value and crop the face image, the cropped face image matching the input size of the local face feature extraction model.
In some optional embodiments, the face recognition apparatus and the corresponding method may further include a fourth processing unit (not shown in the figure) configured to: before the global face features of the face image to be recognized are extracted with the global face feature extraction model to obtain the target global face features, process the acquired face image to be recognized so that the processed image matches the input size of the global face feature extraction model.
In some optional embodiments, the face recognition apparatus and the corresponding method may further include a global face feature extraction model pre-training unit (not shown in the figure) configured to pre-train the global face feature extraction model, comprising: a fourth acquisition unit (not shown in the figure) configured to acquire a second sample set, wherein each second sample comprises a non-occluded face image and the corresponding global face features; and a second training unit (not shown in the figure) configured to train a second initial deep learning model, taking the non-occluded face image as input and the corresponding global face features as expected output, to obtain the global face feature extraction model.
In some optional embodiments, the face recognition apparatus and the corresponding method may further include a global face feature library pre-establishing unit (not shown in the figure) configured to pre-establish the global face feature library, comprising: a fifth acquisition unit (not shown in the figure) configured to acquire at least one face identity and a corresponding non-occluded face image; a second extraction unit (not shown in the figure) configured to extract the global face features of each non-occluded face image with the global face feature extraction model; and a second storage unit (not shown in the figure) configured to store each global face feature in the global face feature library in association with the corresponding face identity and face image.
In some optional embodiments, the face recognition apparatus and the corresponding method may further include a fifth processing unit (not shown in the figure) configured to: after the at least one face identity and the corresponding non-occluded face image are acquired, preprocess each non-occluded face image so that the preprocessed image matches the input size of the global face feature extraction model.
In some optional embodiments, the determination unit 402 is further configured to: determine, using a face occlusion determination model obtained by pre-training, whether the face image to be recognized is an occluded face image.
In some optional embodiments, the face recognition apparatus and the corresponding method may further include a face occlusion determination model pre-training unit (not shown in the figure) configured to pre-train the face occlusion determination model, comprising: a sixth acquisition unit (not shown in the figure) configured to acquire a third sample set, wherein each third sample comprises corresponding non-occluded and occluded face images; and a third training unit (not shown in the figure) configured to train a third initial deep learning model, taking the non-occluded face images as positive samples and the occluded face images as negative samples, to obtain the face occlusion determination model.
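By way of illustration, the binary occlusion judgment can be trained as sketched below, again with PyTorch and a toy network standing in for the third initial deep learning model; labels of 1 for non-occluded (positive) and 0 for occluded (negative) samples are an assumed encoding:

import torch
import torch.nn as nn

class OcclusionJudge(nn.Module):
    """Toy stand-in for the third initial deep learning model: one logit
    whose positive class means 'non-occluded face'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(1)

def train_occlusion_judge(model, loader, epochs: int = 5, lr: float = 1e-3):
    """Each batch pairs face images with labels: 1 = non-occluded
    (positive sample), 0 = occluded (negative sample)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels.float())
            loss.backward()
            optimizer.step()
    return model

At inference time, torch.sigmoid(model(image)) < 0.5 would flag the image as occluded and route it to the local feature pipeline.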
It should be noted that details of implementation and technical effects of each unit in the face recognition device provided by the present disclosure may refer to descriptions of other embodiments in the present disclosure, and are not described herein again.
Referring now to FIG. 5, a block diagram of a computer system 500 suitable for use in implementing the electronic device of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the present disclosure.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An Input/Output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a touch screen, a tablet, a keyboard, a mouse, and the like; an output section 507 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 509 performs communication processing via a network such as the Internet.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509. The above-described functions defined in the method of the present disclosure are performed when the computer program is executed by a Central Processing Unit (CPU) 501. It should be noted that the computer readable medium of the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, C++, and Python, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in this disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a determination unit, a first output unit, and a second output unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, the acquisition unit may also be described as a "unit that acquires a face image to be recognized".
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a face image to be recognized; determine whether the face image to be recognized is an occluded face image; in response to determining yes, extract the face features of the non-occluded region of the face image to be recognized using a local face feature extraction model to obtain target local face features, and perform feature comparison in a pre-established local face feature library with the target local face features to obtain and output a face identity that satisfies a preset comparison condition, wherein at least one local face feature and a corresponding face identity are stored in the local face feature library; and in response to determining no, extract the global face features of the face image to be recognized using a global face feature extraction model to obtain target global face features, and perform feature comparison in a pre-established global face feature library with the target global face features to obtain and output a face identity that satisfies a preset comparison condition, wherein at least one global face feature and a corresponding face identity are stored in the global face feature library.
The foregoing description presents only the preferred embodiments of the disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (10)

1. A face recognition method, comprising:
acquiring a face image to be recognized;
determining whether the face image to be recognized is an occluded face image;
in response to determining that the face image to be recognized is an occluded face image, extracting the face features of the non-occluded region of the face image to be recognized by using a local face feature extraction model to obtain target local face features; performing feature comparison in a pre-established local face feature library by using the target local face features to obtain and output a face identity that satisfies a preset comparison condition; wherein at least one local face feature and a corresponding face identity are stored in the local face feature library;
in response to determining that the face image to be recognized is not an occluded face image, extracting the global face features of the face image to be recognized by using a global face feature extraction model to obtain target global face features; performing feature comparison in a pre-established global face feature library by using the target global face features to obtain and output a face identity that satisfies a preset comparison condition; wherein at least one global face feature and a corresponding face identity are stored in the global face feature library.
2. The method of claim 1, wherein the occluded face image comprises an upper half face occlusion image and the local face feature library comprises a lower half face feature library; or,
the occluded face image comprises a lower half face occlusion image and the local face feature library comprises an upper half face feature library.
3. The method of claim 1, wherein the local face feature extraction model is pre-trained by:
acquiring a first sample set, wherein each first sample in the first sample set comprises an occluded face image and corresponding non-occluded region face features;
training a first initial deep learning model by taking the occluded face image as input and the corresponding non-occluded region face features as expected output, to obtain the local face feature extraction model.
4. A method according to claim 1 or 3, wherein the local face feature library is pre-established by:
acquiring at least one face identity and a corresponding face image, wherein the face image is an occluded face image and/or a non-occluded face image;
acquiring a corresponding non-occluded region face image from each face image;
extracting the local face features of each non-occluded region face image by using the local face feature extraction model;
storing each local face feature in the local face feature library in association with the corresponding face identity and face image;
preferably, the acquiring of the corresponding non-occluded region face image from each of the face images comprises:
cropping each face image to obtain a non-occluded region face image, wherein the non-occluded region face image matches the input size of the local face feature extraction model; or,
setting the pixel values of the occluded region in each face image to a preset value, and cropping the face image, wherein the cropped face image matches the input size of the local face feature extraction model.
5. The method of claim 1, wherein the global face feature extraction model is pre-trained by:
acquiring a second sample set, wherein each second sample in the second sample set comprises a non-occluded face image and corresponding global face features;
training a second initial deep learning model by taking the non-occluded face image as input and the corresponding global face features as expected output, to obtain the global face feature extraction model.
6. The method of claim 1 or 5, wherein the global face feature library is pre-established by:
acquiring at least one face identity and a corresponding non-occluded face image;
extracting the global face features of each non-occluded face image by using the global face feature extraction model;
storing each global face feature in the global face feature library in association with the corresponding face identity and face image;
preferably, after the acquiring of the at least one face identity and the corresponding non-occluded face image, the method further comprises:
preprocessing each non-occluded face image so that the preprocessed non-occluded face image matches the input size of the global face feature extraction model.
7. The method according to claim 1, wherein the determining whether the face image to be recognized is an occluded face image comprises:
determining, by using a face occlusion determination model obtained by pre-training, whether the face image to be recognized is an occluded face image;
preferably, the face occlusion determination model is pre-trained by:
acquiring a third sample set, wherein each third sample in the third sample set comprises a corresponding non-occluded face image and occluded face image;
training a third initial deep learning model by taking the non-occluded face image as a positive sample and the occluded face image as a negative sample, to obtain the face occlusion determination model.
8. A face recognition apparatus comprising:
an acquisition unit configured to acquire a face image to be recognized;
a determination unit configured to determine whether the face image to be recognized is an occluded face image;
a first output unit configured to, in response to determining that the face image to be recognized is an occluded face image, extract the face features of the non-occluded region of the face image to be recognized by using a local face feature extraction model to obtain target local face features, and perform feature comparison in a pre-established local face feature library by using the target local face features to obtain and output a face identity that satisfies a preset comparison condition, wherein at least one local face feature and a corresponding face identity are stored in the local face feature library; and
a second output unit configured to, in response to determining that the face image to be recognized is not an occluded face image, extract the global face features of the face image to be recognized by using a global face feature extraction model to obtain target global face features, and perform feature comparison in a pre-established global face feature library by using the target global face features to obtain and output a face identity that satisfies a preset comparison condition, wherein at least one global face feature and a corresponding face identity are stored in the global face feature library.
9. An electronic device, comprising:
a memory storing a computer program; and a processor which, when executing the computer program, implements the method according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.

Priority Applications (1)

Application Number: CN202010987418.1A — Title: Face recognition method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number: CN112115866A — Publication Date: 2020-12-22




Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination