WO2019179295A1 - Method and apparatus for facial recognition - Google Patents

Method and apparatus for facial recognition

Info

Publication number
WO2019179295A1
WO2019179295A1 (application PCT/CN2019/076398)
Authority
WO
WIPO (PCT)
Prior art keywords
recognition result
facial
face
confidence
distance
Prior art date
Application number
PCT/CN2019/076398
Other languages
English (en)
French (fr)
Inventor
李习华
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2020542097A (JP6973876B2)
Priority to EP19770980.1A (EP3757873A4)
Publication of WO2019179295A1
Priority to US16/890,484 (US11138412B2)

Classifications

    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/50 Maintenance of biometric data or enrolment thereof
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present application relates to the field of image processing technologies, and in particular, to a method and apparatus for facial recognition.
  • The face identifier may be an identifier of a person or an identifier of a corresponding face image. Face recognition can be achieved using a pre-trained classifier whose categories are the face identifiers in the database. After the server obtains the facial image to be recognized, it may extract the feature vector of the facial image and input the extracted feature vector into the pre-trained classifier to obtain the facial identifier corresponding to the facial image.
  • Embodiments of the present application provide a method and apparatus for face recognition, which can improve the efficiency of face recognition.
  • Determining, according to the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result
  • An acquiring module configured to acquire a facial image to be identified, and extract at least one target feature vector corresponding to the facial image
  • a first calculating module configured to calculate the distance between each target feature vector and the mean vector corresponding to each face identifier stored in the database, to obtain a first distance set corresponding to each target feature vector; determine the face identifier corresponding to the minimum distance in each first distance set; among the face identifiers corresponding to the minimum distances of the first distance sets, determine the face identifier with the most occurrences as the first recognition result corresponding to the facial image; and determine, according to the minimum distance corresponding to the first recognition result in each first distance set, a first confidence corresponding to the first recognition result;
  • a second calculating module configured to calculate the distance between each target feature vector and each feature vector corresponding to each face identifier stored in the database, to obtain a second distance set corresponding to each target feature vector; determine, in each second distance set, the face identifiers corresponding to the target distances that meet a preset selection condition; determine, for each second distance set, the target face identifier that occurs most among the face identifiers corresponding to the target distances; determine, among the target face identifiers, the target face identifier with the highest number of occurrences as the second recognition result corresponding to the facial image; and determine, according to the minimum distance corresponding to the second recognition result in each second distance set, a second confidence corresponding to the second recognition result;
  • a determining module configured to determine, according to the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result.
  • The server of the embodiments of the present application may include a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method of facial recognition described above.
  • The computer-readable storage medium of the embodiments of the present application may store at least one instruction, at least one program, a code set, or an instruction set, which is loaded by a processor to implement the method of facial recognition described above.
  • The server may extract at least one target feature vector corresponding to the facial image; further, it may calculate the distance between each target feature vector and the mean vector corresponding to each facial identifier, obtain a first distance set corresponding to each target feature vector, and determine, according to each first distance set, the first recognition result corresponding to the facial image and the corresponding first confidence.
  • In addition, the distance between each target feature vector and each feature vector corresponding to each face identifier may be calculated, to obtain a second distance set corresponding to each target feature vector; then, based on each second distance set, the second recognition result corresponding to the facial image and the corresponding second confidence are determined.
  • Thereafter, the facial recognition result corresponding to the facial image and its corresponding confidence may be determined.
  • The facial recognition result corresponding to the facial image can be determined by fusing the first recognition result and the second recognition result, without pre-training a classifier; further, when a new face identifier is added, there is no need to retrain a classifier, thereby improving the efficiency of facial recognition.
  • FIG. 1(a) is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • FIG. 1(b) is a flowchart of a method for facial recognition provided by an embodiment of the present application.
  • FIG. 2(a) is a schematic diagram of a frame provided by an embodiment of the present application.
  • FIG. 2(b) is a schematic diagram of a frame provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for facial recognition provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a device for face recognition according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a device for facial recognition according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
  • the embodiment of the present application provides a method for facial recognition.
  • The execution subject of the method is one or more computing devices, such as a terminal or a server; the method may also be implemented jointly by a terminal and a server, as shown in FIG. 1(a).
  • the terminal may be any terminal having a facial recognition function, such as a terminal such as a personal computer.
  • The server may be a server with a facial recognition function, for example, a background server of an application with a facial recognition function.
  • When the terminal and the server jointly implement the method, after the terminal acquires the facial image to be recognized, the terminal may send the facial image to the server as the facial image to be recognized.
  • the server may determine a face recognition result corresponding to the face image to be recognized.
  • the following takes the execution subject as the server as an example for detailed description. Other situations are similar, and will not be described again.
  • the server may include components such as a processor, a memory, and the like.
  • the processor can be a CPU or the like.
  • The processor may perform processing such as determining a first recognition result corresponding to the facial image according to the mean search algorithm, determining a second recognition result corresponding to the facial image according to the neighbor search algorithm, and determining a facial recognition result corresponding to the facial image according to the first recognition result and the second recognition result.
  • the memory may be RAM, Flash, or the like, and may be used to store received data, data required for processing, data generated during processing, and the like, such as a first recognition result and a second recognition result.
  • the server may perform facial recognition processing on the facial image to be recognized.
  • the server can acquire the face image to be recognized through the image capturing device, and perform facial recognition processing on the face image.
  • An image capturing device may be deployed at each of a plurality of locations, and the server may obtain the facial image transmitted by each image capturing device after that device captures it, and perform facial recognition on it to determine whether the person in the facial image is a person being searched for.
  • The feature vector of each sample face image may be extracted, and a classifier may be trained based on each feature vector and its corresponding face identifier. Each category of the classifier is a face identifier in the database. In this case, whenever a new face identifier needs to be added to the database, the classifier needs to be retrained so that its categories include the added face identifier; such a classifier has poor scalability and high cost.
  • The first recognition result and the second recognition result corresponding to the facial image may be determined based on the mean search algorithm and the neighbor search algorithm, respectively; based on the first recognition result and the second recognition result, the facial recognition result corresponding to the facial image, that is, the facial identifier corresponding to the facial image, is obtained.
  • The server can determine the facial identifier corresponding to the facial image by combining the mean search algorithm and the neighbor search algorithm. This scheme does not need to train a classifier, which avoids retraining a classifier when new face identifiers are added. This scheme can also improve the efficiency of facial recognition.
  • The method provided by the embodiments of the present application has a wide application scope; for example, it can be applied to SIPP (single image per person, i.e., each person has only one training sample facial image) scenarios, to fusion scenarios, and to other scenarios as needed.
  • the processing flow of some embodiments may include the following steps.
  • Step 101 Acquire a facial image to be identified, and extract at least one target feature vector corresponding to the facial image.
  • The facial image may be an image containing a face, for example, an image containing a human face.
  • the server may perform facial recognition processing on the facial image to be recognized.
  • the server can acquire the face image to be recognized through the image capturing device, and further, can perform facial recognition processing on the face image.
  • An image capturing device may be deployed at each of a plurality of locations, and the server may obtain the facial image transmitted by each image capturing device after that device captures it, and perform facial recognition on it to determine whether the person in the facial image is a person being searched for.
  • the server may acquire the facial image to be identified, and then extract at least one feature vector (which may be referred to as a target feature vector) q corresponding to the facial image through the depth network.
  • q may be the feature vector corresponding to the facial image itself, the feature vector corresponding to each of a plurality of image-enhanced facial images, or the mean vector of the feature vectors corresponding to the plurality of image-enhanced facial images.
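As an illustrative sketch of how the target feature vector(s) q described above could be formed (the `extract` and `augment` callables stand in for the depth network and the image enhancement; they are assumptions, not APIs from this application):

```python
import numpy as np

def target_feature_vectors(image, extract, augment, use_mean=False):
    """Build the target feature vector(s) q for a facial image.

    `extract` maps an image to a feature vector; `augment` maps an image
    to a list of enhanced images. With use_mean=True, a single mean
    vector over all feature vectors is returned instead of one q per image.
    """
    images = [image] + list(augment(image))          # original + enhanced images
    vecs = [np.asarray(extract(im), dtype=float) for im in images]
    if use_mean:
        return [np.mean(vecs, axis=0)]               # a single mean vector q
    return vecs                                      # one q per image

# Toy stand-ins: the "image" is a number, the "feature" is a 2-d vector.
extract = lambda im: [float(im), 0.0]
augment = lambda im: [im + 1, im + 2]
qs = target_feature_vectors(1, extract, augment)            # three vectors
q_mean = target_feature_vectors(1, extract, augment, use_mean=True)
```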
  • the server may acquire a facial image to be recognized based on the application scenario.
  • For example, the server may acquire a facial image captured by an image capturing device deployed by an enterprise; in a scenario of determining by facial recognition whether an acquired facial image belongs to a person being searched for, the server may acquire the images captured by the image capturing devices of departments such as traffic management.
  • Step 102: Calculate the distance between each target feature vector and the mean vector corresponding to each face identifier stored in the database, to obtain a first distance set corresponding to each target feature vector; determine the face identifier corresponding to the minimum distance in each first distance set; among the face identifiers corresponding to the minimum distances of the first distance sets, determine the face identifier with the most occurrences as the first recognition result corresponding to the facial image; and determine, according to the minimum distance corresponding to the first recognition result in each first distance set, the first confidence corresponding to the first recognition result.
  • the first confidence level may be used to indicate the possibility and credibility of the face identifier corresponding to the facial image as the first recognition result.
  • a sample facial image corresponding to each facial identifier may be pre-stored in the server.
  • the number of sample facial images corresponding to each facial identifier may be the same or different.
  • the sample face image corresponding to each face identifier may include a directly acquired original face image (the original face image may be a captured face image), and may also include a sample face image obtained by image enhancement according to the original face image.
  • The server may extract, by using a depth network, the feature vector of each sample facial image corresponding to a facial identifier, obtain at least one feature vector corresponding to the facial identifier, and calculate the mean vector of the at least one feature vector corresponding to the facial identifier.
  • For example, if each face identifier k corresponds to m_k sample face images (where m_k is a positive integer), the at least one feature vector corresponding to each face identifier and its mean vector may be as follows: for face identifier 1, the feature vectors p11, p12, ..., p1m1 have the mean vector mp1 = (p11 + p12 + ... + p1m1) / m1, and so on for the other face identifiers.
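A minimal sketch of the mean-vector computation described above (the identifier keys and sample vectors are illustrative):

```python
import numpy as np

def compute_mean_vectors(features_by_id):
    """For each face identifier k, average its m_k sample feature vectors.

    features_by_id maps a face identifier to an array-like of shape (m_k, d);
    the result maps each identifier to its mean vector mp_k of shape (d,).
    """
    return {face_id: np.asarray(vecs, dtype=float).mean(axis=0)
            for face_id, vecs in features_by_id.items()}

# Illustrative database: identifier 1 has three 4-dimensional sample vectors.
db = {1: [[1.0, 0.0, 0.0, 1.0],
          [3.0, 0.0, 0.0, 1.0],
          [2.0, 0.0, 0.0, 1.0]]}
means = compute_mean_vectors(db)   # means[1] is the vector mp1
```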
  • When the target feature vector q is the feature vector corresponding to the facial image itself, or the mean vector corresponding to the plurality of image-enhanced facial images, the server may calculate the distance between the target feature vector q and each of the above mean vectors, and may determine the face identifier corresponding to the minimum distance as the first recognition result id1 corresponding to the facial image to be recognized.
  • a distance threshold may also be preset in the server.
  • The distance threshold may be a threshold on the cosine distance or a threshold on the Euclidean distance. The minimum distance can be compared with the preset distance threshold: if the minimum distance is less than the preset distance threshold, the face identifier corresponding to the minimum distance is determined as the first recognition result id1 corresponding to the facial image to be recognized; otherwise, the recognition fails.
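The two distance choices mentioned above can be sketched as follows (hypothetical helper names; in both cases a smaller distance means the vectors are more similar):

```python
import numpy as np

def euclidean_distance(q, v):
    """Euclidean (L2) distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(q, dtype=float) - np.asarray(v, dtype=float)))

def cosine_distance(q, v):
    """Cosine distance = 1 - cosine similarity."""
    q, v = np.asarray(q, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - float(q.dot(v) / (np.linalg.norm(q) * np.linalg.norm(v)))

d_euc = euclidean_distance([1.0, 0.0], [4.0, 4.0])   # 5.0
d_cos = cosine_distance([1.0, 0.0], [2.0, 0.0])      # 0.0: same direction
```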
  • In the case of a plurality of facial images (where the plurality of facial images include the facial image to be recognized and at least one image-enhanced facial image), that is, when the facial image corresponds to a plurality of target feature vectors:
  • For each target feature vector, the server may calculate the distance between the target feature vector and the mean vector mpi corresponding to each face identifier, to obtain a first distance set corresponding to that target feature vector, and determine the face identifier corresponding to the minimum distance in that set as a face identifier for the facial image.
  • Among the face identifiers so determined, the server may determine the face identifier with the most occurrences as the first recognition result corresponding to the facial image to be recognized. That is, after extracting the at least one target feature vector corresponding to the facial image, the server may determine, by the mean search algorithm, the first recognition result corresponding to the facial image and the first confidence corresponding to the first recognition result.
  • the mean search algorithm may be an algorithm for determining the recognition result by comparing with the mean vector.
  • the server may determine the first confidence s1 corresponding to the first recognition result in addition to the first recognition result corresponding to the facial image. For example, the server may obtain a distance corresponding to the first recognition result in each first distance set, and determine a minimum distance among the distances. The first confidence is determined according to the smallest distance among the distances corresponding to the first recognition result. For example, the reciprocal of the smallest distance among the distances corresponding to the first recognition result may be determined as the first confidence.
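Step 102's mean search, majority vote, threshold check, and reciprocal-of-minimum-distance confidence can be sketched as follows (Euclidean distance and the function name are illustrative assumptions):

```python
import numpy as np
from collections import Counter

def mean_search(target_vectors, mean_vectors, distance_threshold):
    """Compare each target feature vector q with every identifier's mean
    vector, vote across vectors, and derive the first confidence s1 from
    the minimum distance to the winning identifier."""
    # One "first distance set" per target feature vector.
    all_dists = [{fid: float(np.linalg.norm(np.asarray(q, dtype=float)
                                            - np.asarray(mp, dtype=float)))
                  for fid, mp in mean_vectors.items()}
                 for q in target_vectors]
    # Vote: the identifier with the minimum distance in each first distance set.
    votes = [min(d, key=d.get) for d in all_dists]
    id1, _ = Counter(votes).most_common(1)[0]
    # First confidence: reciprocal of the smallest distance to id1 over all sets.
    d_min = min(d[id1] for d in all_dists)
    if d_min >= distance_threshold:
        return None, 0.0                     # recognition fails
    return id1, (1.0 / d_min if d_min > 0 else float("inf"))

means = {1: [0.0, 0.0], 2: [10.0, 0.0]}
result = mean_search([[1.0, 0.0]], means, distance_threshold=5.0)  # (1, 1.0)
```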
  • Step 103: Calculate the distance between each target feature vector and each feature vector corresponding to each face identifier stored in the database, to obtain a second distance set corresponding to each target feature vector; determine the face identifiers corresponding to the target distances in each second distance set that satisfy the preset selection condition; determine, in each second distance set, the target face identifier with the most occurrences among the face identifiers corresponding to the target distances; determine, among the target face identifiers, the target face identifier with the highest number of occurrences as the second recognition result corresponding to the facial image; and determine, according to the minimum distance corresponding to the second recognition result in each second distance set, the second confidence corresponding to the second recognition result.
  • the second confidence level may be used to indicate the possibility and credibility of the face identifier corresponding to the facial image as the second recognition result.
  • When the target feature vector q is the feature vector corresponding to the facial image itself, or the mean vector corresponding to the plurality of image-enhanced facial images, the server may calculate the distance between the target feature vector q and the feature vector of each sample facial image, to obtain a second distance set; in the second distance set, a preset number of the smallest distances, or the distances smaller than the preset distance threshold, are selected as target distances. After selecting the target distances, the server may determine the face identifier corresponding to each target distance; further, the number of occurrences of each distinct face identifier may be counted, and the face identifier with the most occurrences may be determined as the second recognition result corresponding to the facial image to be recognized.
  • When the facial image corresponds to a plurality of target feature vectors, the server may calculate, for each target feature vector, the distance between the target feature vector and each feature vector corresponding to each face identifier, to obtain a second distance set corresponding to that target feature vector. Further, in each second distance set, a preset number of the smallest distances, or the distances smaller than the preset distance threshold, are selected as target distances.
  • The server may determine the face identifier corresponding to each target distance, count the number of occurrences of each distinct face identifier, and determine the target face identifier with the most occurrences for that distance set. After the target face identifier corresponding to each target feature vector is obtained, the target face identifier with the highest number of occurrences among them may be determined as the second recognition result corresponding to the facial image. That is, after extracting the at least one target feature vector corresponding to the facial image, the server may determine, by the neighbor search algorithm, the second recognition result corresponding to the facial image and the second confidence corresponding to the second recognition result.
  • The neighbor search algorithm may be an algorithm that determines the recognition result by comparison with the feature vectors corresponding to the sample facial images, and may be, for example, an LSH (locality-sensitive hashing) algorithm.
  • In addition to the second recognition result corresponding to the facial image, the server may determine the second confidence s2 corresponding to the second recognition result. For example, the server may obtain the distances corresponding to the second recognition result in each second distance set, determine the minimum distance among them, and determine the second confidence according to that minimum distance. For example, the reciprocal of the smallest distance among the distances corresponding to the second recognition result may be determined as the second confidence.
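The neighbor search of step 103 can be sketched with a brute-force nearest-neighbor scan (the text above mentions LSH as one possible algorithm; here `k` plays the role of the preset number of smallest distances, and all names are illustrative):

```python
import numpy as np
from collections import Counter

def neighbor_search(target_vectors, features_by_id, k=3):
    """For each target feature vector, rank every stored sample feature
    vector by Euclidean distance, keep the k smallest as target distances,
    vote per vector, then vote across vectors; the second confidence s2 is
    the reciprocal of the minimum distance to the winning identifier."""
    per_vector_winners, all_dists = [], []
    for q in target_vectors:
        # Second distance set: distance from q to every stored sample vector.
        dists = [(float(np.linalg.norm(np.asarray(q, dtype=float)
                                       - np.asarray(v, dtype=float))), fid)
                 for fid, vecs in features_by_id.items() for v in vecs]
        dists.sort()
        target = dists[:k]                                   # target distances
        winner, _ = Counter(fid for _, fid in target).most_common(1)[0]
        per_vector_winners.append(winner)
        all_dists.append(dists)
    id2, _ = Counter(per_vector_winners).most_common(1)[0]
    d_min = min(d for dists in all_dists for d, fid in dists if fid == id2)
    s2 = 1.0 / d_min if d_min > 0 else float("inf")
    return id2, s2

db = {1: [[0.0, 0.0], [0.5, 0.0]], 2: [[10.0, 0.0]]}
result = neighbor_search([[1.0, 0.0]], db, k=2)   # identifier 1 wins
```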
  • Step 104 Determine, according to the first recognition result, the first confidence level corresponding to the first recognition result, the second recognition result, and the second confidence level corresponding to the second recognition result, the facial recognition result corresponding to the facial image and the facial recognition result. Confidence.
  • The server may determine, according to the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the facial recognition result f_id corresponding to the facial image to be recognized and the confidence f_score corresponding to the facial recognition result.
  • one of the first recognition result and the second recognition result may be determined as a face recognition result corresponding to the face image based on certain conditions, wherein the processing framework of the present scheme is as shown in FIG. 2(a).
  • The overall process of the solution may be as shown in FIG. 2(b), where CNN_fes is a deep network for extracting feature vectors, baseset may be the set of face identifiers whose corresponding sample facial images are larger in number, novelset may be the set of face identifiers whose corresponding sample facial images are smaller in number, augmented-novelset may be the image-enhanced novelset, and Modified SVD augmentation module may be an improved SVD enhancement module.
  • The processing of step 104 can vary based on the manner in which the facial recognition result is determined. Implementations for several cases are given below.
  • Case 1: when the first recognition result is the same as the second recognition result, the first recognition result or the second recognition result is determined as the facial recognition result corresponding to the facial image, and the maximum of the first confidence and the second confidence is determined as the confidence corresponding to the facial recognition result.
  • the server may determine whether the first recognition result and the second recognition result are the same. If the first recognition result is the same as the second recognition result, the server may determine the first recognition result or the second recognition result as the face recognition result corresponding to the face image. Correspondingly, the server can determine the maximum confidence of the first confidence level and the second confidence level, and determine it as the confidence level corresponding to the facial recognition result.
  • That the first recognition result is the same as the second recognition result indicates that the probability of correct recognition is higher; that is, the face identifier corresponding to the facial image is more likely to be the first recognition result or the second recognition result, and determining the first recognition result or the second recognition result as the facial recognition result corresponding to the facial image carries a greater degree of confidence. Therefore, such processing can improve the accuracy of facial recognition and, at the same time, the confidence of the recognition result; that is, it can give a higher confidence to a correct recognition, thereby improving the recall rate of facial recognition.
  • Case 2: when the first recognition result is different from the second recognition result, and the difference between the second confidence and the first confidence is greater than a first preset threshold, the second recognition result is determined as the facial recognition result corresponding to the facial image, and the second confidence is determined as the confidence corresponding to the facial recognition result. The server may obtain the first confidence corresponding to the first recognition result and the second confidence corresponding to the second recognition result determined in the manner described above, and compare their magnitudes; based on their magnitude relationship, one of the first recognition result and the second recognition result may be determined as the facial recognition result corresponding to the facial image.
  • That the second confidence exceeds the first confidence by more than the first preset threshold indicates that the second recognition result determined by the neighbor search algorithm is more credible than the first recognition result determined by the mean search algorithm. Therefore, such processing can improve the accuracy of facial recognition and, at the same time, the confidence of the recognition result; that is, it can give a higher confidence to a correct recognition, thereby improving the recall rate of facial recognition.
  • Case 3: when the first recognition result is different from the second recognition result, and the difference between the first confidence and the second confidence is greater than a second preset threshold, the first recognition result is determined as the facial recognition result corresponding to the facial image, and the first confidence is determined as the confidence corresponding to the facial recognition result.
  • a second preset threshold may be pre-stored in the server.
  • the first preset threshold and the second preset threshold may be the same or different. If the first facial identifier is different from the second facial identifier, and the difference between the first confidence level and the second confidence is greater than the second preset threshold, the server may determine the first facial identifier as the facial corresponding to the facial image Identify results and confidence. Correspondingly, the first confidence can be determined as the confidence level corresponding to the facial recognition result.
  • the second confidence is lower than the first confidence.
  • the second preset threshold indicates that the second facial identifier determined by the neighbor search algorithm is less confident than the first facial identifier determined by the mean search algorithm. Therefore, such processing can improve the correct rate of face recognition, and at the same time, can improve the confidence of the recognition result, that is, can give a higher confidence to the correct recognition, thereby improving the recall rate of the face recognition.
  • the second confidence level is lower than the first confidence level.
  • the second preset threshold indicates that the second recognition result determined by the neighbor search algorithm is not more confident than the first recognition result determined by the mean search algorithm. Therefore, such processing can improve the correct rate of face recognition, and at the same time, can improve the confidence of the recognition result, that is, can give a higher confidence to the correct recognition, thereby improving the recall rate of the face recognition.
  • Case 4: when the first recognition result is different from the second recognition result, the difference between the second confidence and the first confidence is less than the first preset threshold, and the difference between the first confidence and the second confidence is less than the second preset threshold, the first recognition result is determined as the facial recognition result corresponding to the facial image, and the smaller of the first confidence and the second confidence is determined as the confidence corresponding to the facial recognition result.
  • in this case, the server may determine the first recognition result as the face recognition result corresponding to the face image, and determine the smaller of the first confidence and the second confidence as the confidence corresponding to the facial image.
  • in general, the accuracy of the recognition result determined by the mean search algorithm is higher than that of the recognition result determined by the neighbor search algorithm, and a first recognition result that differs from the second recognition result indicates that the probability of a correct recognition is relatively low, so this processing correctly reflects the reliability of the facial recognition.
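As an illustration only, the rule-based fusion in cases one through four above can be sketched as follows; the threshold values, the identifier strings, and the `fuse_results` helper are hypothetical and not part of the disclosed embodiments.

```python
def fuse_results(id1, s1, id2, s2, t1=0.1, t2=0.1):
    """Fuse the mean-search result (id1, s1) with the neighbor-search
    result (id2, s2).

    t1: first preset threshold, t2: second preset threshold
    (illustrative values). Returns the fused (face identifier, confidence).
    """
    # Case 1: both algorithms agree -> keep the label, take the larger confidence.
    if id1 == id2:
        return id1, max(s1, s2)
    # Case 2: neighbor search is markedly more confident -> trust it.
    if s2 - s1 > t1:
        return id2, s2
    # Case 3: mean search is markedly more confident -> trust it.
    if s1 - s2 > t2:
        return id1, s1
    # Case 4: results differ and neither is clearly better -> prefer the
    # mean-search label but report the smaller (more cautious) confidence.
    return id1, min(s1, s2)

print(fuse_results("alice", 0.8, "alice", 0.9))  # ('alice', 0.9)
print(fuse_results("alice", 0.5, "bob", 0.9))    # ('bob', 0.9)
print(fuse_results("alice", 0.9, "bob", 0.5))    # ('alice', 0.9)
print(fuse_results("alice", 0.55, "bob", 0.6))   # ('alice', 0.55)
```

The four calls exercise one branch each; in the last call neither confidence gap exceeds its threshold, so the mean-search label is kept with the smaller confidence.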
  • Case 5: the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result are input into the pre-trained facial recognition model, to obtain the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result.
  • a facial recognition model trained on sample facial images may be pre-stored in the server, where the inputs of the facial recognition model may be the facial recognition result determined by the mean search algorithm with its corresponding confidence and the facial recognition result determined by the neighbor search algorithm with its corresponding confidence. The facial recognition model may be a decision model, a neural network model, or the like.
  • after determining the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the server may input the first recognition result, the first confidence, the second recognition result, and the second confidence into the pre-trained facial recognition model to obtain the output of the facial recognition model, that is, the facial recognition result corresponding to the facial image and its corresponding confidence.
  • the server can support the combination of any of the above cases with case four.
  • the training process of the facial recognition model may be as follows: calculating the distance between the feature vector corresponding to each training facial image and the mean vector corresponding to each facial identifier pre-stored in the database, to obtain a first distance set; determining the facial identifier corresponding to the smallest distance in the first distance set as the first recognition result corresponding to each training facial image; determining, according to the smallest distance corresponding to the first recognition result in the first distance set, the first confidence corresponding to the first recognition result; and calculating the distance between the feature vector corresponding to each training facial image and each feature vector corresponding to each facial identifier pre-stored in the database, to obtain a second distance set.
  • the facial identifier corresponding to a target distance that satisfies a preset selection condition in the second distance set is determined as the second recognition result corresponding to each training facial image, and the second confidence corresponding to the second recognition result is determined according to the smallest distance corresponding to the second recognition result in the second distance set.
  • the first recognition result, the first confidence, the second recognition result, and the second confidence corresponding to each training facial image are input into the facial recognition model, to obtain the recognition result and confidence corresponding to each training facial image; the model parameters of the facial recognition model are adjusted according to the obtained facial recognition result and confidence corresponding to each training facial image and the preset facial recognition result and confidence corresponding to each training facial image, to obtain the trained facial recognition model.
  • training facial images corresponding to a large number of facial identifiers may be pre-stored in the server. The server may perform image enhancement processing on the directly acquired original facial images to obtain a plurality of sample facial images (in this case, the training facial images corresponding to a facial identifier include the original facial image and the training facial images obtained by image enhancement).
  • the server may extract the feature vector of each training facial image through a deep network, and may then calculate the distance between the feature vector corresponding to each training facial image and the mean vector corresponding to each facial identifier pre-stored in the database, to obtain the first distance set.
  • the server can obtain the smallest distance corresponding to the first recognition result in the first distance set; further, the first confidence corresponding to the first recognition result may be determined based on that smallest distance.
  • the server may determine the second recognition result and its corresponding second confidence. For example, the distance between the feature vector corresponding to each training facial image and each feature vector corresponding to each facial identifier pre-stored in the database may be calculated, to obtain a second distance set, and the facial identifiers corresponding to the target distances that satisfy a preset selection condition in the second distance set may be determined. Further, among the facial identifiers corresponding to the target distances, the target facial identifier with the most occurrences may be determined as the second recognition result corresponding to each training facial image. Correspondingly, the server can obtain the smallest distance corresponding to the second recognition result in the second distance set; further, the second confidence corresponding to the second recognition result may be determined based on that smallest distance.
  • the server may input the first recognition result, the first confidence, the second recognition result, and the second confidence corresponding to each training facial image into the facial recognition model, to obtain the facial recognition result and confidence corresponding to each training facial image, where the facial recognition result and confidence output by the facial recognition model depend on the model parameters of the facial recognition model.
  • following the training principle that the obtained facial recognition result should approach the preset facial recognition result corresponding to the training facial image and the obtained confidence should approach the preset confidence corresponding to the training facial image (for example, the difference between the two can be taken as the objective function, and the facial recognition model is then trained by minimizing the objective function), the facial recognition model is trained, that is, the model parameters of the facial recognition model are adjusted (for example, by the gradient descent method), to obtain the trained facial recognition model.
  • the preset facial recognition result corresponding to a training facial image may be the true facial identifier corresponding to that training facial image. For each training facial image, if the corresponding first recognition result and/or second recognition result is the same as the true corresponding facial identifier, the corresponding preset confidence may be set to a higher value; if the corresponding first recognition result and/or second recognition result differs from the true corresponding facial identifier, the corresponding preset confidence may be set to a lower value.
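The training principle described above (model outputs pushed toward the preset recognition results and confidences by minimizing an objective function with gradient descent) might be sketched as below. The two-feature logistic model, the learning rate, and the toy targets are all assumptions standing in for the patent's unspecified decision or neural network model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_fusion_model(samples, epochs=2000, lr=0.5):
    """samples: list of ((agree, s1, s2), target_confidence) pairs, where
    agree is 1.0 if the mean-search and neighbor-search labels match.
    Minimizes the squared difference between the model output and the preset
    target confidence by gradient descent, as in the training principle above."""
    w = [0.0, 0.0, 0.0]  # weights for (agree, s1, s2)
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # gradient of 0.5 * (y - target)^2 w.r.t. z, with y = sigmoid(z)
            grad = (y - target) * y * (1.0 - y)
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy preset targets: agreement with high scores -> high confidence,
# disagreement -> low confidence (mirroring the higher/lower setting above).
data = [((1.0, 0.9, 0.8), 0.95), ((1.0, 0.7, 0.9), 0.9),
        ((0.0, 0.6, 0.5), 0.1), ((0.0, 0.4, 0.7), 0.15)]
w, b = train_fusion_model(data)
print(predict(w, b, (1.0, 0.9, 0.8)))
```

After training, agreeing inputs score well above disagreeing ones, which is the behavior the preset confidences encode.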
  • by fusing the recognition results and confidences determined according to the above two algorithms, the accuracy of face recognition can be improved, and higher confidence can be assigned to correct recognitions, which can increase the recall rate of facial recognition.
  • the solution may be applicable to multiple scenarios, for example to a SIPP (single image per person) scenario or to a mixed scenario (a mixed scenario refers to a scenario in which some facial identifiers in the database correspond to one sample facial image while other facial identifiers correspond to multiple sample facial images).
  • the embodiment of the present application further provides a flow as shown in FIG. 3 .
  • Step 301 Acquire a facial image to be identified, and extract at least one target feature vector corresponding to the facial image.
  • Step 302: Calculate the distance between each target feature vector and the mean vector corresponding to each face identifier pre-stored in the database, to obtain a first distance set corresponding to each target feature vector, and determine the face identifier corresponding to the smallest distance in each first distance set; among the face identifiers corresponding to the smallest distances in the first distance sets, determine the face identifier with the most occurrences as the first recognition result corresponding to the facial image; and determine, according to the smallest distance corresponding to the first recognition result in each first distance set, the first confidence corresponding to the first recognition result.
  • Step 303: Calculate the distance between each target feature vector and each feature vector corresponding to each face identifier pre-stored in the database, to obtain a second distance set corresponding to each target feature vector, and determine the face identifiers corresponding to the target distances that satisfy the preset selection condition in each second distance set; among the face identifiers corresponding to the target distances in each second distance set, determine the target face identifier with the most occurrences; among the target face identifiers, determine the target face identifier with the most occurrences as the second recognition result corresponding to the facial image; and determine, according to the smallest distance corresponding to the second recognition result in each second distance set, the second confidence corresponding to the second recognition result.
  • Step 304: Input the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result into the pre-trained facial recognition model, to obtain the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result.
  • a face recognition model trained on sample face images may be pre-stored in the server.
  • the inputs of the facial recognition model may be the facial recognition result determined by the mean search algorithm with its corresponding confidence and the facial recognition result determined by the neighbor search algorithm with its corresponding confidence.
  • the facial recognition model may be a decision model, may be a neural network model, or the like.
  • after determining the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the server may input the first recognition result, the first confidence, the second recognition result, and the second confidence into the pre-trained facial recognition model to obtain the output of the facial recognition model, that is, the facial recognition result corresponding to the facial image and its corresponding confidence.
  • the server may extract at least one target feature vector corresponding to the facial image, then calculate the distance between each target feature vector and the mean vector corresponding to each facial identifier, to obtain a first distance set corresponding to each target feature vector, and determine, based on each first distance set, the first recognition result corresponding to the facial image and its corresponding first confidence.
  • the distance between each target feature vector and each feature vector corresponding to each face identifier may also be calculated, to obtain a second distance set corresponding to each target feature vector, and then the second recognition result corresponding to the facial image and its corresponding second confidence may be determined based on each second distance set.
  • the facial recognition result corresponding to the facial image and its corresponding confidence may then be determined.
  • the face recognition result corresponding to the face image can be determined by fusing the first recognition result and the second recognition result, without pre-training a classifier; further, retraining the classifier when a new face identifier is added can be avoided, thereby improving the efficiency of facial recognition.
  • the embodiment of the present application further provides a device for facial recognition.
  • the device may be the server, and the device includes:
  • the acquiring module 410 is configured to acquire a facial image to be identified, and extract at least one target feature vector corresponding to the facial image;
  • the first calculating module 420 is configured to calculate the distance between each target feature vector and the mean vector corresponding to each face identifier pre-stored in the database, to obtain a first distance set corresponding to each target feature vector, and determine the face identifier corresponding to the smallest distance in each first distance set; among the face identifiers corresponding to the smallest distances in the first distance sets, determine the face identifier with the most occurrences as the first recognition result corresponding to the facial image; and determine, according to the smallest distance corresponding to the first recognition result in each first distance set, the first confidence corresponding to the first recognition result;
  • the second calculating module 430 is configured to calculate the distance between each target feature vector and each feature vector corresponding to each face identifier pre-stored in the database, to obtain a second distance set corresponding to each target feature vector, and determine the facial identifiers corresponding to the target distances that satisfy a preset selection condition in each second distance set; among the facial identifiers corresponding to the target distances in each second distance set, determine the target facial identifier with the most occurrences; among the target facial identifiers, determine the target facial identifier with the most occurrences as the second recognition result corresponding to the facial image; and determine, according to the smallest distance corresponding to the second recognition result in each second distance set, the second confidence corresponding to the second recognition result;
  • a determining module 440, configured to determine, according to the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the face recognition result corresponding to the face image and the confidence corresponding to the face recognition result.
  • the determining module 440 is configured to:
  • when the first recognition result is the same as the second recognition result, determine the first recognition result or the second recognition result as the face recognition result corresponding to the face image; and determine the greater of the first confidence and the second confidence as the confidence corresponding to the facial recognition result.
  • the determining module 440 is configured to: when the first recognition result is different from the second recognition result and the difference between the second confidence and the first confidence is greater than a first preset threshold, determine the second recognition result as the facial recognition result corresponding to the facial image, and determine the second confidence as the confidence corresponding to the facial recognition result.
  • the determining module 440 is configured to: when the first recognition result is different from the second recognition result and the difference between the first confidence and the second confidence is greater than a second preset threshold, determine the first recognition result as the facial recognition result corresponding to the facial image, and determine the first confidence as the confidence corresponding to the facial recognition result.
  • the determining module 440 is configured to:
  • when the first recognition result is different from the second recognition result, the difference between the second confidence and the first confidence is less than a first preset threshold, and the difference between the first confidence and the second confidence is less than a second preset threshold, determine the first recognition result as the facial recognition result corresponding to the facial image, and determine the smaller of the first confidence and the second confidence as the confidence corresponding to the facial recognition result.
  • the determining module 440 is configured to: input the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result into the pre-trained facial recognition model, to obtain the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result.
  • the device further includes:
  • the third calculating module 450 is configured to calculate the distance between the feature vector corresponding to each training facial image and the mean vector corresponding to each facial identifier pre-stored in the database, to obtain a first distance set; determine the facial identifier corresponding to the smallest distance in the first distance set as the first recognition result corresponding to each training facial image; and determine, according to the smallest distance corresponding to the first recognition result in the first distance set, the first confidence corresponding to the first recognition result;
  • the fourth calculating module 460 is configured to calculate the distance between the feature vector corresponding to each training facial image and each feature vector corresponding to each face identifier pre-stored in the database, to obtain a second distance set; determine the face identifiers corresponding to the target distances that satisfy a preset selection condition in the second distance set; among the face identifiers corresponding to the target distances, determine the target face identifier with the most occurrences as the second recognition result corresponding to each training facial image; and determine, according to the smallest distance corresponding to the second recognition result in the second distance set, the second confidence corresponding to the second recognition result;
  • the input module 470 is configured to input the first recognition result, the first confidence, the second recognition result, and the second confidence corresponding to each training facial image into the facial recognition model, to obtain the facial recognition result and confidence corresponding to each training facial image;
  • the training module 480 is configured to adjust the model parameters of the facial recognition model according to the obtained facial recognition result and confidence corresponding to each training facial image and the preset facial recognition result and confidence corresponding to each training facial image, to obtain a trained facial recognition model.
  • the server may extract at least one target feature vector corresponding to the facial image, then calculate the distance between each target feature vector and the mean vector corresponding to each facial identifier, to obtain a first distance set corresponding to each target feature vector, and determine, based on each first distance set, the first recognition result corresponding to the facial image and its corresponding first confidence.
  • the distance between each target feature vector and each feature vector corresponding to each face identifier may also be calculated, to obtain a second distance set corresponding to each target feature vector, and then the second recognition result corresponding to the facial image and its corresponding second confidence may be determined based on each second distance set.
  • the facial recognition result corresponding to the facial image and its corresponding confidence may then be determined.
  • the face recognition result corresponding to the face image can be determined by fusing the first recognition result and the second recognition result, without pre-training a classifier; further, retraining the classifier when a new face identifier is added can be avoided, thereby improving the efficiency of facial recognition.
  • the facial recognition device provided by the above embodiment is illustrated only by the division into the above functional modules. In actual applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the server may be divided into different functional modules to complete all or part of the functions described above.
  • the facial recognition device provided by the above embodiment belongs to the same concept as the embodiments of the facial recognition method; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
  • FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
  • the computer device 600 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 601 and one or more memories 602, where the memory 602 stores at least one instruction that is loaded and executed by the processor 601 to implement the method of face recognition described above.
  • a non-transitory computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method of facial recognition described above.
  • the server may extract at least one target feature vector corresponding to the facial image, then calculate the distance between each target feature vector and the mean vector corresponding to each facial identifier, to obtain a first distance set corresponding to each target feature vector, and determine, based on each first distance set, the first recognition result corresponding to the facial image and its corresponding first confidence.
  • the distance between each target feature vector and each feature vector corresponding to each face identifier may also be calculated, to obtain a second distance set corresponding to each target feature vector, and then the second recognition result corresponding to the facial image and its corresponding second confidence may be determined based on each second distance set.
  • the facial recognition result corresponding to the facial image and its corresponding confidence may then be determined.
  • the face recognition result corresponding to the face image can be determined by fusing the first recognition result and the second recognition result, without pre-training a classifier; further, retraining the classifier when a new face identifier is added can be avoided, thereby improving the efficiency of facial recognition.
  • a person skilled in the art may understand that all or some of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.


Abstract

The embodiments of the present application disclose a method and apparatus for facial recognition, belonging to the technical field of image processing. The method includes: acquiring a facial image to be recognized, and extracting target feature vectors of the facial image; calculating the distance between each target feature vector and the mean vector corresponding to each facial identifier, to obtain a first distance set, and determining, according to the first distance sets, a first recognition result corresponding to the facial image and its corresponding first confidence; calculating the distance between each target feature vector and each feature vector corresponding to each facial identifier, to obtain a second distance set, and determining, according to the second distance sets, a second recognition result corresponding to the facial image and its corresponding second confidence; and determining, according to the first recognition result, the first confidence, the second recognition result, and the second confidence, the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result. The present application can improve the efficiency and accuracy of facial recognition.

Description

Method and apparatus for facial recognition
This application claims priority to Chinese Patent Application No. 201810239389.3, entitled "Method and apparatus for facial recognition", filed with the Chinese Patent Office on March 22, 2018, which is incorporated herein by reference in its entirety.
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and apparatus for facial recognition.
Background
In some cases, facial recognition needs to be performed on an acquired facial image to determine the facial identifier corresponding to the facial image. A facial identifier may be the identifier of a person, or the identifier of a corresponding face image. Face recognition can be implemented with a pre-trained classifier, each class of the classifier being one facial identifier in the database. Each time the server acquires a facial image to be recognized, it can extract the feature vector of the facial image and then input the extracted feature vector into the pre-trained classifier to obtain the facial identifier corresponding to the facial image.
Summary
The embodiments of the present application provide a method and apparatus for facial recognition, which can improve the efficiency of facial recognition.
The method for facial recognition in the embodiments of the present application may include:
acquiring a facial image to be recognized, and extracting at least one target feature vector corresponding to the facial image;
calculating the distance between each target feature vector and the mean vector corresponding to each facial identifier pre-stored in a database, to obtain a first distance set corresponding to each target feature vector, and determining the facial identifier corresponding to the smallest distance in each first distance set; among the facial identifiers corresponding to the smallest distances in the first distance sets, determining the facial identifier with the most occurrences as the first recognition result corresponding to the facial image; and determining, according to the smallest distance corresponding to the first recognition result in each first distance set, the first confidence corresponding to the first recognition result;
calculating the distance between each target feature vector and each feature vector corresponding to each facial identifier pre-stored in the database, to obtain a second distance set corresponding to each target feature vector, and determining the facial identifiers corresponding to the target distances that satisfy a preset selection condition in each second distance set; among the facial identifiers corresponding to the target distances in each second distance set, determining the target facial identifier with the most occurrences; among the target facial identifiers, determining the target facial identifier with the most occurrences as the second recognition result corresponding to the facial image; and determining, according to the smallest distance corresponding to the second recognition result in each second distance set, the second confidence corresponding to the second recognition result;
determining, according to the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result.
The apparatus for facial recognition in the embodiments of the present application may include:
an acquiring module, configured to acquire a facial image to be recognized, and extract at least one target feature vector corresponding to the facial image;
a first calculating module, configured to calculate the distance between each target feature vector and the mean vector corresponding to each facial identifier pre-stored in the database, to obtain a first distance set corresponding to each target feature vector, and determine the facial identifier corresponding to the smallest distance in each first distance set; among the facial identifiers corresponding to the smallest distances in the first distance sets, determine the facial identifier with the most occurrences as the first recognition result corresponding to the facial image; and determine, according to the smallest distance corresponding to the first recognition result in each first distance set, the first confidence corresponding to the first recognition result;
a second calculating module, configured to calculate the distance between each target feature vector and each feature vector corresponding to each facial identifier pre-stored in the database, to obtain a second distance set corresponding to each target feature vector, and determine the facial identifiers corresponding to the target distances that satisfy a preset selection condition in each second distance set; among the facial identifiers corresponding to the target distances in each second distance set, determine the target facial identifier with the most occurrences; among the target facial identifiers, determine the target facial identifier with the most occurrences as the second recognition result corresponding to the facial image; and determine, according to the smallest distance corresponding to the second recognition result in each second distance set, the second confidence corresponding to the second recognition result;
a determining module, configured to determine, according to the first recognition result, the first confidence corresponding to the first recognition result, the second recognition result, and the second confidence corresponding to the second recognition result, the facial recognition result corresponding to the facial image and the confidence corresponding to the facial recognition result.
The server in the embodiments of the present application may include a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for facial recognition described above.
The computer-readable storage medium in the embodiments of the present application may store at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for facial recognition described above.
In the embodiments of the present application, after acquiring the facial image to be recognized, the server may extract at least one target feature vector corresponding to the facial image, then calculate the distance between each target feature vector and the mean vector corresponding to each facial identifier, to obtain a first distance set corresponding to each target feature vector, and determine, based on the first distance sets, the first recognition result corresponding to the facial image and its corresponding first confidence. After the at least one target feature vector is determined, the distance between each target feature vector and each feature vector corresponding to each facial identifier may also be calculated, to obtain a second distance set corresponding to each target feature vector, and the second recognition result corresponding to the facial image and its corresponding second confidence may be determined based on the second distance sets. After the first recognition result and its corresponding first confidence and the second recognition result and its corresponding second confidence are determined, the facial recognition result corresponding to the facial image and its corresponding confidence can be determined. In this way, the facial recognition result corresponding to the facial image can be determined by fusing the first recognition result and the second recognition result, without pre-training a classifier; retraining the classifier when a new facial identifier is added can thus be avoided, thereby improving the efficiency of facial recognition.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1(a) is a schematic diagram of an implementation environment according to an embodiment of the present application;
FIG. 1(b) is a flowchart of a method for facial recognition according to an embodiment of the present application;
FIG. 2(a) is a schematic diagram of a framework according to an embodiment of the present application;
FIG. 2(b) is a schematic diagram of a framework according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for facial recognition according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an apparatus for facial recognition according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for facial recognition according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed description
To make the objectives, technical solutions, and advantages of the present application clearer, the following further describes the implementations of the present application in detail with reference to the accompanying drawings.
The embodiments of the present application provide a method for facial recognition. The method may be executed by one or more computing devices, for example a terminal or a server, or may be implemented jointly by a terminal and a server, as shown in FIG. 1(a). The terminal may be any terminal with a facial recognition function, for example a personal computer. The server may be a server with a facial recognition function, for example a background server of the facial recognition function. In the case of joint implementation by a terminal and a server, after acquiring a facial image to be recognized, the terminal may send the facial image to the server as the facial image to be recognized, and the server may then determine the facial recognition result corresponding to the facial image to be recognized. The following takes the server as the execution body for detailed description; other cases are similar and are not described again.
The server may include components such as a processor and a memory. The processor may be a CPU or the like, and may perform processing such as determining the first recognition result corresponding to the facial image according to the mean search algorithm, determining the second recognition result corresponding to the facial image according to the neighbor search algorithm, and determining the facial recognition result corresponding to the facial image according to the first recognition result and the second recognition result. The memory may be a RAM, a Flash memory, or the like, and may be used to store received data, data required by the processing, and data generated during the processing, such as the first recognition result and the second recognition result.
In some scenarios, the server may perform facial recognition processing on a facial image to be recognized. For example, in a face check-in scenario, the server may acquire a face image to be recognized through an image capturing device and perform facial recognition processing on it. For another example, in a scenario where facial recognition is used to determine whether an acquired facial image is a person being sought, image capturing devices may be deployed at multiple locations; after acquiring the facial image sent by each image capturing device, the server may perform facial recognition on it to determine whether the person in the facial image is the person being sought.
In related facial recognition techniques, after acquiring the sample facial images corresponding to each facial identifier in the database, the server may extract the feature vector of each sample facial image and train a multi-class classifier based on each feature vector and its corresponding facial identifier, each class of the classifier being one facial identifier in the database. In this case, whenever a new facial identifier needs to be added to the database, the classifier needs to be retrained so that its classes include the added facial identifier; the classifier thus has poor scalability and high cost.
In the present solution, after acquiring the facial image to be recognized, the server may determine the first recognition result and the second recognition result corresponding to the facial image based on the mean search algorithm and the neighbor search algorithm respectively, and obtain, based on the first recognition result and the second recognition result, the facial recognition result corresponding to the facial image, that is, the facial identifier corresponding to the facial image. In this way, the server can determine the facial identifier corresponding to the facial image by fusing the mean search algorithm and the neighbor search algorithm. The present solution does not need to train a classifier, and avoids retraining the classifier when a new facial identifier is added. The present solution can also improve the efficiency of facial recognition. In addition, in the present solution, by fusing the facial recognition result and confidence determined according to the mean search algorithm with the facial recognition result and confidence determined according to the neighbor search algorithm, the accuracy of facial recognition can be enhanced, and higher confidence can be assigned to correct recognitions, thereby improving the recall rate of facial recognition. Moreover, the method provided by the embodiments of the present application has a relatively wide scope of application; for example, it is applicable to SIPP (single image per person, where each person has only one training sample facial image), to mixed scenarios, and to other scenarios requiring facial recognition.
As shown in FIG. 1(b), the processing flow of some embodiments may include the following steps.
Step 101: Acquire a facial image to be recognized, and extract at least one target feature vector corresponding to the facial image.
The facial image may be an image containing a face, for example, a face image containing a human face.
In implementation, in some scenarios, the server may perform facial recognition processing on a facial image to be recognized. For example, in a face check-in scenario, the server may acquire a face image to be recognized through an image capturing device and then perform facial recognition processing on it. For another example, in a scenario where facial recognition is used to determine whether an acquired facial image is a person being sought, image capturing devices may be deployed at multiple locations; after acquiring the facial image sent by each image capturing device, the server may perform facial recognition on it to determine whether the person in the facial image is the person being sought.
The server may acquire the facial image to be recognized and then extract, through a deep network, at least one feature vector (which may be called a target feature vector) q corresponding to the facial image, where q may be the feature vector corresponding to the facial image itself, the feature vectors respectively corresponding to multiple facial images obtained by image enhancement, or the mean vector of the feature vectors corresponding to the multiple facial images obtained by image enhancement. The server may acquire the facial image to be recognized based on the application scenario. For example, for the face check-in scenario, the server may acquire facial images captured by image capturing devices deployed by an enterprise; for the scenario where facial recognition is used to determine whether an acquired facial image is a person being sought, the server may acquire facial images captured by image capturing devices deployed by departments such as traffic management.
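Purely as a sketch of step 101, the following shows how the at least one target feature vector q might be assembled from the original image and its enhanced copies, or collapsed into their mean vector. The `extract_features` stand-in for a deep network and the flip-style enhancement are hypothetical.

```python
def extract_features(image):
    # Hypothetical stand-in for a deep-network feature extractor:
    # here an "image" is simply a small list of numbers that gets normalized.
    s = sum(image)
    return [x / (s + 1e-9) for x in image]

def augment(image):
    # Illustrative image enhancement: the original plus a flipped copy.
    return [image, list(reversed(image))]

def target_vectors(image, use_mean=False):
    """Per the text, q may be the vector of the image itself, the vectors of
    the enhanced images, or the mean of the enhanced images' vectors."""
    feats = [extract_features(img) for img in augment(image)]
    if use_mean:
        n = len(feats)
        return [[sum(f[i] for f in feats) / n for i in range(len(feats[0]))]]
    return feats

print(target_vectors([1.0, 2.0, 3.0], use_mean=True))
```

With `use_mean=True` a single mean vector is produced; otherwise one vector per enhanced image is returned, matching the "at least one target feature vector" wording.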
步骤102,计算每个目标特征向量与数据库中预先存储的每个面部标识对应的 均值向量的距离,得到每个目标特征向量对应的第一距离集合,确定每个第一距离集合中最小的距离对应的面部标识;在各第一距离集合中最小的距离对应的面部标识中,确定出现次数最多的面部标识,作为面部图像对应的第一识别结果;根据第一识别结果在各第一距离集合中对应的最小的距离,确定第一识别结果对应的第一置信度。
其中,第一置信度可以用于表示面部图像对应的面部标识为第一识别结果的可能性、可信度。
在实施中,服务器中可以预先存储有每个面部标识对应的样本面部图像。其中,每个面部标识对应的样本面部图像的数量可以相同,也可以不同。每个面部标识对应的样本面部图像可以包括直接获取到的原始面部图像(原始面部图像可以是拍摄得到的面部图像),也可以包括根据原始面部图像通过图像增强得到的样本面部图像。对于每个面部标识,服务器可以通过深度网络提取该面部标识对应的每个样本面部图像的特征向量,得到该面部标识对应的至少一个特征向量,进而计算该面部标识对应的至少一个特征向量的均值向量。假设数据库中的面部标识的数目为k(其中,k为正整数),第i个面部标识对应有mi张样本面部图像(其中,mi为正整数,i=1…k),每个面部标识对应的至少一个特征向量和均值向量可以如下:
p11,p12,…,p1m1,均值向量:mp1
p21,p22,…,p2m2,均值向量:mp2
…
pk1,pk2,…,pkmk,均值向量:mpk
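上述各面部标识的均值向量的计算可示意如下(一个极简示例,假设特征向量已由深度网络提取为NumPy数组;函数名和数据结构均为示例性假设,并非本申请限定的实现):

```python
import numpy as np

def build_mean_vectors(features_by_id):
    """features_by_id: {面部标识: [特征向量, ...]}。
    对每个面部标识,将其全部样本特征向量按维度取平均,得到均值向量 mp_i。"""
    return {face_id: np.mean(np.stack(vecs), axis=0)
            for face_id, vecs in features_by_id.items()}
```

数据库中新增面部标识时,只需为新标识计算一次均值向量并加入该字典,无需重新训练任何分类器,这正是本方案扩展性好的原因之一。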
确定出至少一个目标特征向量后,针对目标特征向量q是面部图像本身对应的特征向量,或者是图像增强后的多张面部图像对应的均值向量的情况,服务器可以计算目标特征向量q与上述每个面部标识对应的均值向量mpi(其中,i=1…k)的距离(其中,该距离可以是欧式距离,可以是余弦距离,可以是其他类型的向量距离),得到第一距离集合。得到第一距离集合后,服务器可以将最小的距离对应的面部标识确定为待识别的面部图像对应的第一识别结果id1。
一些实施例中,服务器中还可以预先设置有距离阈值。其中,距离阈值可以是余弦距离的距离阈值,可以是欧氏距离的距离阈值。服务器得到第一距离集合后,可以将最小的距离与预设的距离阈值进行比较。如果最小的距离小于预设的距离阈值,则将最小的距离对应的面部标识确定为待识别的面部图像对应的第一识别结果id1;否则,识别失败。
一些实施例中,针对通过图像增强得到待识别的面部图像的多张面部图像(其中,多张面部图像包括待识别的面部图像和图像增强后的至少一张面部图像)的情况,即针对面部图像对应多个目标特征向量的情况,对于每张面部图像,服务器可以计算该面部图像的目标特征向量与上述每个面部标识对应的均值向量mpi的距离,得到该面部图像对应的第一距离集合,进而将最小的距离对应的面部标识确定为该面部图像对应的面部标识。得到每张面部图像对应的面部标识后,服务器可以将出现次数最多的面部标识确定为待识别的面部图像对应的第一识别结果。也就是说,提取到面部图像对应的至少一个目标特征向量后,服务器可以通过均值搜索算法,确定面部图像对应的第一识别结果和第一识别结果对应的第一置信度。其中,均值搜索算法可以是通过与均值向量进行比较,确定识别结果的算法。
获取到待识别的面部图像后,服务器除了确定面部图像对应的第一识别结果之外,还可以确定第一识别结果对应的第一置信度s1。例如,服务器可以获取第一识别结果在各第一距离集合中对应的距离,并确定各距离中最小的距离。根据第一识别结果对应的各距离中最小的距离,确定第一置信度。例如,可以将第一识别结果对应的各距离中最小的距离的倒数,确定为第一置信度。
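步骤102所述的均值搜索及第一置信度的一种示意性实现如下(假设采用欧式距离、置信度取最小距离的倒数,均为上文给出的可选方式之一;函数名、参数名为示例性假设):

```python
import numpy as np
from collections import Counter

def mean_search(target_vectors, mean_vectors, threshold=None):
    """均值搜索:对每个目标特征向量,计算其与各面部标识均值向量的欧式距离
    (即该向量对应的第一距离集合),取最小距离对应的面部标识;再在各目标向量
    的结果中投票,返回第一识别结果 id1 及第一置信度 s1。"""
    voted_ids, best_dists = [], {}
    for q in target_vectors:
        dists = {fid: np.linalg.norm(q - mp) for fid, mp in mean_vectors.items()}
        fid, d = min(dists.items(), key=lambda kv: kv[1])
        if threshold is not None and d >= threshold:
            continue  # 超过预设距离阈值,该目标向量识别失败
        voted_ids.append(fid)
        best_dists[fid] = min(d, best_dists.get(fid, float("inf")))
    if not voted_ids:
        return None, 0.0
    id1 = Counter(voted_ids).most_common(1)[0][0]   # 出现次数最多的面部标识
    s1 = 1.0 / max(best_dists[id1], 1e-12)          # 置信度:最小距离的倒数(示例性取法)
    return id1, s1
```

余弦距离等其他向量距离只需替换距离函数即可,整体流程不变。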
步骤103,计算每个目标特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,确定每个第二距离集合中满足预设选取条件的目标距离对应的面部标识;在每个第二距离集合中各目标距离对应的面部标识中,确定出现次数最多的目标面部标识;在各目标面部标识中,确定出现次数最多的目标面部标识,作为面部图像对应的第二识别结果;根据第二识别结果在各第二距离集合中对应的最小的距离,确定第二识别结果对应的第二置信度。
其中,第二置信度可以用于表示面部图像对应的面部标识为第二识别结果的可能性、可信度。
在实施中,确定出至少一个目标特征向量后,针对目标特征向量q是面部图像本身对应的特征向量,或者是图像增强后的多张面部图像对应的均值向量的情况,服务器可以计算目标特征向量q与每个样本面部图像的特征向量的距离,得到第二距离集合,进而在第二距离集合中,选择距离最小的预设数目个距离或者选择小于预设距离阈值的目标距离。选择出目标距离后,服务器可以确定每个目标距离对应的面部标识,进而,可以统计每个不同面部标识对应的出现次数,将出现次数最多的目标面部标识确定为待识别的面部图像对应的第二识别结果。
一些实施例中,针对通过图像增强得到待识别的面部图像的多张面部图像的情况,即针对面部图像对应多个目标特征向量的情况,对于每张面部图像,服务器均可以计算该面部图像的目标特征向量与上述每个面部标识对应的每个特征向量的距离,得到该面部图像对应的第二距离集合。进而在每个第二距离集合中,选择距离最小的预设数目个距离或者选择小于预设距离阈值的目标距离。选择出每个第二距离集合对应的目标距离后,服务器可以确定每个目标距离对应的面部标识,统计每个不同面部标识对应的出现次数,确定出现次数最多的目标面部标识。得到每个目标特征向量对应的目标面部标识后,可以在各个目标面部标识中,将对应的出现次数最多的目标面部标识,确定为面部图像对应的第二识别结果。也就是说,提取到面部图像对应的至少一个目标特征向量后,服务器可以通过近邻搜索算法,确定面部图像对应的第二识别结果和第二识别结果对应的第二置信度。其中,近邻搜索算法可以是通过与每个样本面部图像对应的特征向量进行比较,确定识别结果的算法,可以是LSH(locality sensitive hashing,局部敏感哈希)算法。
服务器获取到待识别的面部图像后,除了确定面部图像对应的第二识别结果之外,还可以确定第二识别结果对应的第二置信度s2。例如,服务器可以获取第二识别结果在各第二距离集合中对应的各距离,进而,可以确定第二识别结果在各距离中对应的最小的距离,并可以根据第二识别结果对应的各距离中最小的距离,确定第二置信度。例如,可以将第二识别结果对应的各距离中最小的距离的倒数,确定为第二置信度。
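步骤103所述的近邻搜索及第二置信度可示意如下(此处以"选择距离最小的预设数目个距离"作为预设选取条件,并以线性扫描代替LSH索引,仅为示意;函数名、参数名为示例性假设):

```python
import numpy as np
from collections import Counter

def nn_search(target_vectors, sample_features, top_n=3):
    """近邻搜索:sample_features 为 [(面部标识, 样本特征向量), ...]。
    对每个目标向量,在其第二距离集合中取距离最小的 top_n 个目标距离,
    按面部标识出现次数投票得到该向量的目标面部标识;再在各目标面部标识中
    投票,返回第二识别结果 id2 及第二置信度 s2(最小距离的倒数,示例性取法)。"""
    per_vector_ids, min_dists = [], {}
    for q in target_vectors:
        dists = sorted((np.linalg.norm(q - v), fid) for fid, v in sample_features)
        picked = dists[:top_n]                                   # 目标距离
        fid = Counter(f for _, f in picked).most_common(1)[0][0]  # 出现次数最多的目标面部标识
        per_vector_ids.append(fid)
        d_min = min(d for d, f in picked if f == fid)
        min_dists[fid] = min(d_min, min_dists.get(fid, float("inf")))
    id2 = Counter(per_vector_ids).most_common(1)[0][0]
    s2 = 1.0 / max(min_dists[id2], 1e-12)
    return id2, s2
```

样本规模较大时,可用LSH等近似最近邻索引替换上面的线性扫描,选取逻辑不变。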
步骤104,根据第一识别结果、第一识别结果对应的第一置信度和第二识别结果、第二识别结果对应的第二置信度,确定面部图像对应的面部识别结果和面部识别结果对应的置信度。
在实施中,确定出第一识别结果、第一识别结果对应的第一置信度和第二识别结果、第二识别结果对应的第二置信度后,服务器可以根据第一识别结果、第一识别结果对应的第一置信度和第二识别结果、第二识别结果对应的第二置信度,确定待识别的面部图像对应的面部识别结果f_id以及面部识别结果对应的置信度f_score。比如,可以基于某些条件,将第一识别结果和第二识别结果中的一个确定为面部图像对应的面部识别结果,其中,本方案的处理框架如图2(a)所示。
一些实施例中,本方案的整体流程可以如图2(b)所示,其中,CNN_fes为用于提取特征向量的深度网络,baseset可以是对应的样本面部图像的张数较大的面部标识的集合,novelset可以是对应的样本面部图像的张数较小的面部标识的集合,augmented-novelset可以是图像增强后的novelset,Modified SVD augmentation module可以是改进的SVD增强模块,NN module:LSH feature pool可以是基于LSH的近邻搜索模块,Mean search feature pool可以是均值搜索模块,Hyper merge module可以是步骤104对应的融合模块。
各实施例中,基于确定面部识别结果的方式不同,步骤104的处理过程可以多种多样。以下给出了几种情况的实现方式。
情况一,当第一识别结果与第二识别结果相同时,将第一识别结果或第二识别结果确定为面部图像对应的面部识别结果,并将第一置信度和第二置信度中最大的置信度确定为面部识别结果对应的置信度。
在实施中,确定出第一识别结果和第二识别结果后,服务器可以判断第一识别结果与第二识别结果是否相同。如果第一识别结果与第二识别结果相同,服务器可以将第一识别结果或第二识别结果确定为面部图像对应的面部识别结果。相应的,服务器可以确定第一置信度和第二置信度中的最大的置信度,进而将其确定为面部识别结果对应的置信度。
其中,第一识别结果与第二识别结果相同说明此次正确识别的概率较高,即面部图像真实对应的面部标识为第一识别结果或第二识别结果的可能性比较大,将第一识别结果或第二识别结果确定为面部图像对应的面部识别结果的可信度较大。因此,此种处理,可以提高面部识别的正确率,同时可以提高识别结果的置信度,即可以对正确识别赋予较高的置信度,从而可以提高面部识别的召回率。
情况二,当第一识别结果与第二识别结果不同、且第二置信度与第一置信度的差值大于第一预设阈值时,将第二识别结果确定为面部图像对应的面部识别结果,并将第二置信度确定为面部识别结果对应的置信度。
在实施中,当第一识别结果与第二识别结果不相同时,服务器可以获取通过上述方式确定出的第一识别结果对应的第一置信度和第二识别结果对应的第二置信度。基于第一置信度和第二置信度的大小关系,可以将第一识别结果和第二识别结果中的一个确定为面部图像对应的面部识别结果。
例如,如果第一识别结果与第二识别结果不相同,则服务器可以进一步比较第一置信度与第二置信度的大小。如果第二置信度与第一置信度的差值大于第一预设阈值,则服务器可以将第二识别结果确定为面部图像对应的面部识别结果。相应的,服务器可以将第二置信度确定为面部识别结果对应的置信度。
其中,第二置信度超出第一置信度达第一预设阈值以上,说明通过近邻搜索算法确定出的第二识别结果相对于通过均值搜索算法确定出的第一识别结果更置信。因此,此种处理,可以提高面部识别的正确率,同时可以提高识别结果的置信度,即可以对正确识别赋予较高的置信度,从而,可以提高面部识别的召回率。
情况三,当第一识别结果与第二识别结果不相同、且第一置信度与第二置信度的差值大于第二预设阈值时,将第一识别结果确定为面部图像对应的面部识别结果,并将第一置信度确定为面部识别结果对应的置信度。
在实施中,服务器中可以预先存储有第二预设阈值。其中,第一预设阈值与第二预设阈值可以相同,也可以不同。如果第一识别结果与第二识别结果不相同、且第一置信度与第二置信度的差值大于第二预设阈值,则服务器可以将第一识别结果确定为面部图像对应的面部识别结果。相应的,可以将第一置信度确定为面部识别结果对应的置信度。
其中,第二置信度比第一置信度低第二预设阈值以上,说明通过近邻搜索算法确定出的第二识别结果不如通过均值搜索算法确定出的第一识别结果置信。因此,此种处理,可以提高面部识别的正确率,同时可以提高识别结果的置信度,即可以对正确识别赋予较高的置信度,从而,可以提高面部识别的召回率。
情况四,当第一识别结果与第二识别结果不相同、且第二置信度与第一置信度的差值小于第一预设阈值、且第一置信度与第二置信度的差值小于第二预设阈值时,将第一识别结果确定为面部图像对应的面部识别结果,并将第一置信度和第二置信度中最小的置信度确定为面部识别结果对应的置信度。
在实施中,当第一识别结果与第二识别结果不相同、且第二置信度与第一置信度的差值小于第一预设阈值、且第一置信度与第二置信度的差值小于第二预设阈值时,服务器可以将第一识别结果确定为面部图像对应的面部识别结果。相应的,服务器可以将第一置信度和第二置信度中最小的置信度确定为面部图像对应的置信度。
其中,均值搜索算法确定出的识别结果的正确性往往高于近邻搜索算法确定出的识别结果,而第一识别结果与第二识别结果不相同说明此次正确识别的概率比较低,因此,此种处理可以正确反映此次面部识别的情况。
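情况一至情况四的融合逻辑可示意如下(阈值取值为示例性假设,实际可按需设置;识别失败等分支未展开):

```python
def hyper_merge(id1, s1, id2, s2, t1=0.1, t2=0.1):
    """融合第一/第二识别结果及其置信度,t1、t2 分别为第一、第二预设阈值。"""
    if id1 == id2:            # 情况一:两结果相同,取较大置信度
        return id1, max(s1, s2)
    if s2 - s1 > t1:          # 情况二:近邻搜索结果明显更置信
        return id2, s2
    if s1 - s2 > t2:          # 情况三:均值搜索结果明显更置信
        return id1, s1
    return id1, min(s1, s2)   # 情况四:置信度接近,取第一识别结果和较小置信度
```

四个分支互斥且覆盖全部取值组合,对应图2(a)中的Hyper merge module。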
情况五,将第一识别结果、第一识别结果对应的第一置信度和第二识别结果、第二识别结果对应的第二置信度输入到预先训练的面部识别模型中,得到面部图像对应的面部识别结果和面部识别结果对应的置信度。
在实施中,服务器中可以预先存储有根据样本面部图像训练好的面部识别模型,其中,面部识别模型的输入可以是通过均值搜索算法确定出的面部识别结果及其对应的置信度和通过近邻搜索算法确定出的面部识别结果及其对应的置信度。其中,面部识别模型可以是决策模型、可以是神经网络模型等。服务器确定出第一识别结果、第一识别结果对应的第一置信度和第二识别结果、第二识别结果对应的第二置信度后,可以将第一识别结果、第一置信度和第二识别结果、第二置信度输入到预先训练的面部识别模型中,得到面部识别模型的输出,即可得到面部图像对应的面部识别结果及其对应的置信度。
其中,服务器可以支持上述情况一至情况四中任意一种或多种情况组合的处理。
各实施例中,面部识别模型的训练过程可以如下:计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到第一距离集合。确定第一距离集合中最小的距离对应的面部标识作为每个训练面部图像对应的第一识别结果。根据所述第一识别结果在第一距离集合中对应的最小的距离,确定第一识别结果对应的第一置信度。计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到第二距离集合。确定第二距离集合中满足预设选取条件的目标距离对应的面部标识。在各目标距离对应的面部标识中,确定出现次数最多的目标面部标识作为每个训练面部图像对应的第二识别结果。根据第二识别结果在第二距离集合中对应的最小的距离,确定第二识别结果对应的第二置信度。将每个训练面部图像对应的第一识别结果、第一置信度和第二识别结果、第二置信度,输入到面部识别模型中,得到每个训练面部图像对应的识别结果和置信度。根据得到的每个训练面部图像对应的面部识别结果和置信度、以及预设的每个训练面部图像对应的面部识别结果和置信度,对面部识别模型的模型参数进行调整,得到训练后的面部识别模型。
在实施中,服务器中可以预先存储有大量面部标识对应的训练面部图像。其中,如果直接获取到的某面部标识对应的原始面部图像的数量较少,则服务器可以对直接获取到的原始面部图像进行图像增强处理,得到多张样本面部图像(此种情况下,该面部标识对应的训练面部图像包括原始面部图像和通过图像增强得到的训练面部图像)。对于每个训练面部图像,服务器可以通过深度网络提取该训练面部图像的特征向量,进而,可以计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到第一距离集合。确定第一距离集合中最小的距离对应的面部标识,作为每个训练面部图像对应的第一识别结果。相应的,服务器可以获取第一识别结果在第一距离集合中对应的最小的距离。进而,可以基于最小的距离,确定第一识别结果对应的第一置信度。
除了确定第一识别结果和第一置信度之外,服务器还可以确定第二识别结果及其对应的第二置信度。例如,可以计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到第二距离集合。确定第二距离集合中满足预设选取条件的目标距离对应的面部标识。进而,可以在各目标距离对应的面部标识中,确定出现次数最多的目标面部标识作为每个训练面部图像对应的第二识别结果。相应的,服务器可以获取第二识别结果在第二距离集合中对应的最小的距离。进而,可以基于最小的距离确定第二识别结果对应的第二置信度。
确定出每个训练面部图像对应的第一识别结果、第一置信度和第二识别结果、第二置信度后,服务器可以将训练面部图像对应的第一识别结果、第一置信度和第二识别结果、第二置信度输入到面部识别模型中,得到每个训练面部图像对应的面部识别结果和置信度,其中,通过面部识别模型得到的面部识别结果和置信度中包含有面部识别模型中的模型参数。
得到每个训练面部图像对应的面部识别结果和置信度后,可以基于得到的面部识别结果趋近于预设的训练面部图像对应的面部识别结果、得到的置信度趋近于预设的训练面部图像对应的置信度的训练原则(比如,可以将两者的差值确定为目标函数,进而通过计算目标函数的最小值的方法,对面部识别模型进行训练),对面部识别模型进行训练,即对面部识别模型的模型参数进行调整(其中,可以通过梯度下降法对面部识别模型进行训练),得到训练后的面部识别模型。其中,预设的训练面部图像对应的面部识别结果可以是训练面部图像真实对应的面部识别结果,对于每个训练面部图像,如果其对应的第一识别结果和/或第二识别结果与真实对应的面部标识相同,则可以设置其对应的置信度较高,如果其对应的第一识别结果和/或第二识别结果与真实对应的面部标识不同,则可以设置其对应的置信度较低。
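上述训练过程可用一个极简的决策模型示意(此处以逻辑回归形式、梯度下降训练来示意,并非本申请限定的具体模型;输入特征的构造、学习率等均为示例性假设):

```python
import numpy as np

def train_merge_model(samples, lr=0.1, epochs=200):
    """samples: [((id1==id2 是否相同, s1, s2), 预设置信度), ...]。
    通过梯度下降调整参数 w,使 sigmoid(w·x) 输出的置信度趋近预设置信度。"""
    w = np.zeros(4)
    for _ in range(epochs):
        for (same, s1, s2), target in samples:
            x = np.array([1.0, float(same), s1, s2])   # 偏置 + 融合特征
            pred = 1.0 / (1.0 + np.exp(-w @ x))        # sigmoid 输出置信度
            w -= lr * (pred - target) * x              # 交叉熵损失的梯度步
    return w
```

实际实现中也可换用决策树、神经网络等模型,输入输出接口与此一致。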
本方案中,通过融合根据均值搜索算法确定出的面部识别结果和根据近邻搜索算法确定出的面部识别结果,可以提高面部识别的正确率,通过融合根据上述两个算法确定出的置信度,可以对正确识别赋予较高的置信度,从而,可以提升面部识别的召回率。
一些实施例中,本方案可以适用于多种场景,使用范围广,可以适用于SIPP场景,也可以适用于融合场景(其中,融合场景是指数据库中某些面部标识对应有一张样本面部图像,有些面部标识对应有多张样本面部图像的场景)。
一些实施例中,针对通过面部识别模型确定面部图像对应的面部识别结果的情况,本申请实施例还提供了如图3所示的流程。
步骤301,获取待识别的面部图像,并提取面部图像对应的至少一个目标特征向量。
步骤302,计算每个目标特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到每个目标特征向量对应的第一距离集合,确定每个第一距离集合中最小的距离对应的面部标识;在各第一距离集合中最小的距离对应的面部标识中,确定出现次数最多的面部标识,作为面部图像对应的第一识别结果;根据第一识别结果在各第一距离集合中对应的最小的距离,确定第一识别结果对应的第一置信度。
步骤303,计算每个目标特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,确定每个第二距离集合中满足预设选取条件的目标距离对应的面部标识;在每个第二距离集合中各目标距离对应的面部标识中,确定出现次数最多的目标面部标识;在各目标面部标识中,确定出现次数最多的目标面部标识,作为面部图像对应的第二识别结果;根据第二识别结果在各第二距离集合中对应的最小的距离,确定第二识别结果对应的第二置信度。
步骤304,将第一识别结果、第一识别结果对应的第一置信度和第二识别结果、第二识别结果对应的第二置信度输入到预先训练的面部识别模型中,得到面部图像对应的面部识别结果和面部识别结果对应的置信度。
在实施中,服务器中可以预先存储有根据样本面部图像训练好的面部识别模型。其中,面部识别模型的输入可以是通过均值搜索算法确定出的面部识别结果及其对应的置信度和通过近邻搜索算法确定出的面部识别结果及其对应的置信度。其中,面部识别模型可以是决策模型、可以是神经网络模型等。服务器确定出第一识别结果、第一识别结果对应的第一置信度和第二识别结果、第二识别结果对应的第二置信度后,可以将第一识别结果、第一置信度和第二识别结果、第二置信度输入到预先训练的面部识别模型中,得到面部识别模型的输出,即可得到面部图像对应的面部识别结果及其对应的置信度。
本申请实施例中,服务器获取到待识别的面部图像后,可以提取面部图像对应的至少一个目标特征向量,进而,可以计算每个目标特征向量与每个面部标识对应的均值向量的距离,得到每个目标特征向量对应的第一距离集合,基于各第一距离集合,确定面部图像对应的第一识别结果及其对应的第一置信度。确定出至少一个目标特征向量后,还可以计算每个目标特征向量与每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,进而,可以基于各第二距离集合,确定面部图像对应的第二识别结果及其对应的第二置信度。确定出第一识别结果及其对应的第一置信度、第二识别结果及其对应的第二置信度后,可以确定面部图像对应的面部识别结果及其对应的置信度。这样,可以通过融合第一识别结果和第二识别结果,来确定面部图像对应的面部识别结果,无需预先训练分类器,进而,可以避免增加新的面部标识时,重新训练分类器,从而,可以提高面部识别的效率。
基于相同的技术构思,本申请实施例还提供了一种面部识别的装置,如图4所示,该装置可以是上述服务器,该装置包括:
获取模块410,用于获取待识别的面部图像,并提取所述面部图像对应的至少一个目标特征向量;
第一计算模块420,用于计算每个目标特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到每个目标特征向量对应的第一距离集合,确定每个第一距离集合中最小的距离对应的面部标识;在各第一距离集合中最小的距离对应的面部标识中,确定出现次数最多的面部标识,作为所述面部图像对应的第一识别结果;根据所述第一识别结果在各第一距离集合中对应的最小的距离,确定所述第一识别结果对应的第一置信度;
第二计算模块430,用于计算每个目标特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,确定每个第二距离集合中满足预设选取条件的目标距离对应的面部标识;在每个第二距离集合中各目标距离对应的面部标识中,确定出现次数最多的目标面部标识;在各目标面部标识中,确定出现次数最多的目标面部标识,作为所述面部图像对应的第二识别结果;根据所述第二识别结果在各第二距离集合中对应的最小的距离,确定所述第二识别结果对应的第二置信度;
确定模块440,用于根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度。
各实施例中,所述确定模块440,用于:
当所述第一识别结果与所述第二识别结果相同时,将所述第一识别结果或所述第二识别结果确定为所述面部图像对应的面部识别结果;并将所述第一置信度和所述第二置信度中最大的置信度确定为所述面部识别结果对应的置信度。
各实施例中,所述确定模块440,用于:
当所述第一识别结果与所述第二识别结果不同、且所述第二置信度与所述第一置信度的差值大于第一预设阈值时,将所述第二识别结果确定为所述面部图像对应的面部识别结果,并将所述第二置信度确定为所述面部识别结果对应的置信度。
各实施例中,所述确定模块440,用于:
当所述第一识别结果与所述第二识别结果不相同、且所述第一置信度与所述第二置信度的差值大于第二预设阈值时,将所述第一识别结果确定为所述面部图像对应的面部识别结果,并将所述第一置信度确定为所述面部识别结果对应的置信度。
各实施例中,所述确定模块440,用于:
当所述第一识别结果与所述第二识别结果不相同、且所述第二置信度与所述第一置信度的差值小于第一预设阈值、且所述第一置信度与所述第二置信度的差值小于第二预设阈值时,将所述第一识别结果确定为所述面部图像对应的面部识别结果,并将所述第一置信度和所述第二置信度中最小的置信度确定为所述面部识别结果对应的置信度。
各实施例中,所述确定模块440,用于:
将所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度输入到预先训练的面部识别模型中,得到所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度。
各实施例中,如图5所示,所述装置还包括:
第三计算模块450,用于计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到第一距离集合,确定第一距离集合中最小的距离对应的面部标识,作为每个训练面部图像对应的第一识别结果,根据所述第一识别结果在第一距离集合中对应的最小的距离,确定第一识别结果对应的第一置信度;
第四计算模块460,用于计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到第二距离集合,确定第二距离集合中满足预设选取条件的目标距离对应的面部标识;在各目标距离对应的面部标识中,确定出现次数最多的目标面部标识,作为每个训练面部图像对应的第二识别结果,根据第二识别结果在第二距离集合中对应的最小的距离,确定第二识别结果对应的第二置信度;
输入模块470,用于将每个训练面部图像对应的第一识别结果、第一置信度和第二识别结果、第二置信度,输入到面部识别模型中,得到每个训练面部图像对应的面部识别结果和置信度;
训练模块480,用于根据得到的每个训练面部图像对应的面部识别结果和置信度、以及预设的每个训练面部图像对应的面部识别结果和置信度,对所述面部识别模型的模型参数进行调整,得到训练后的面部识别模型。
本申请实施例中,服务器获取到待识别的面部图像后,可以提取面部图像对应的至少一个目标特征向量,进而,可以计算每个目标特征向量与每个面部标识对应的均值向量的距离,得到每个目标特征向量对应的第一距离集合,基于各第一距离集合,确定面部图像对应的第一识别结果及其对应的第一置信度。确定出至少一个目标特征向量后,还可以计算每个目标特征向量与每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,进而,可以基于各第二距离集合,确定面部图像对应的第二识别结果及其对应的第二置信度。确定出第一识别结果及其对应的第一置信度、第二识别结果及其对应的第二置信度后,可以确定面部图像对应的面部识别结果及其对应的置信度。这样,可以通过融合第一识别结果和第二识别结果,来确定面部图像对应的面部识别结果,无需预先训练分类器,进而,可以避免增加新的面部标识时,重新训练分类器,从而,可以提高面部识别的效率。
需要说明的是:上述实施例提供的面部识别的装置在面部识别时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将服务器的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。一些实施例中,上述实施例提供的面部识别的装置与面部识别的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图6是本申请实施例提供的一种计算机设备的结构示意图,该计算机设备600可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(central processing units,CPU)601和一个或一个以上的存储器602,其中,所述存储器602中存储有至少一条指令,所述至少一条指令由所述处理器601加载并执行以实现上述所述的面部识别的方法。
在示例性实施例中,还提供了一种非临时性计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现上述所述的面部识别的方法。
本申请实施例中,服务器获取到待识别的面部图像后,可以提取面部图像对应的至少一个目标特征向量,进而,可以计算每个目标特征向量与每个面部标识对应的均值向量的距离,得到每个目标特征向量对应的第一距离集合,基于各第一距离集合,确定面部图像对应的第一识别结果及其对应的第一置信度。确定出至少一个目标特征向量后,还可以计算每个目标特征向量与每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,进而,可以基于各第二距离集合,确定面部图像对应的第二识别结果及其对应的第二置信度。确定出第一识别结果及其对应的第一置信度、第二识别结果及其对应的第二置信度后,可以确定面部图像对应的面部识别结果及其对应的置信度。这样,可以通过融合第一识别结果和第二识别结果,来确定面部图像对应的面部识别结果,无需预先训练分类器,进而,可以避免增加新的面部标识时,重新训练分类器,从而,可以提高面部识别的效率。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
综上所述,权利要求的范围不应局限于以上描述的例子中的实施方式,而应当将说明书作为一个整体并给予最宽泛的解释。

Claims (15)

  1. 一种面部识别的方法,由至少一个计算设备执行,包括:
    获取待识别的面部图像,并提取所述面部图像对应的至少一个目标特征向量;
    计算每个目标特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到每个目标特征向量对应的第一距离集合,确定每个第一距离集合中最小的距离对应的面部标识;在各第一距离集合中最小的距离对应的面部标识中,确定出现次数最多的面部标识,作为所述面部图像对应的第一识别结果;根据所述第一识别结果在各第一距离集合中对应的最小的距离,确定所述第一识别结果对应的第一置信度;
    计算每个目标特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,确定每个第二距离集合中满足预设选取条件的目标距离对应的面部标识;在每个第二距离集合中各目标距离对应的面部标识中,确定出现次数最多的目标面部标识;在各目标面部标识中,确定出现次数最多的目标面部标识,作为所述面部图像对应的第二识别结果;根据所述第二识别结果在各第二距离集合中对应的最小的距离,确定所述第二识别结果对应的第二置信度;
    根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度。
  2. 根据权利要求1所述的方法,其中,所述根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度,包括:
    当所述第一识别结果与所述第二识别结果相同时,将所述第一识别结果或所述第二识别结果确定为所述面部图像对应的面部识别结果;并将所述第一置信度和所述第二置信度中最大的置信度确定为所述面部识别结果对应的置信度。
  3. 根据权利要求1所述的方法,其中,所述根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度,包括:
    当所述第一识别结果与所述第二识别结果不同、且所述第二置信度与所述第一置信度的差值大于第一预设阈值时,将所述第二识别结果确定为所述面部图像对应的面部识别结果,并将所述第二置信度确定为所述面部识别结果对应的置信度。
  4. 根据权利要求1所述的方法,其中,所述根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度,包括:
    当所述第一识别结果与所述第二识别结果不相同、且所述第一置信度与所述第二置信度的差值大于第二预设阈值时,将所述第一识别结果确定为所述面部图像对应的面部识别结果,并将所述第一置信度确定为所述面部识别结果对应的置信度。
  5. 根据权利要求1所述的方法,其中,所述根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度,包括:
    当所述第一识别结果与所述第二识别结果不相同、且所述第二置信度与所述第一置信度的差值小于第一预设阈值、且所述第一置信度与所述第二置信度的差值小于第二预设阈值时,将所述第一识别结果确定为所述面部图像对应的面部识别结果,并将所述第一置信度和所述第二置信度中最小的置信度确定为所述面部识别结果对应的置信度。
  6. 根据权利要求1所述的方法,其中,所述根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度,包括:
    将所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度输入到预先训练的面部识别模型中,得到所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度。
  7. 根据权利要求6所述的方法,进一步包括:
    计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到第一距离集合,确定第一距离集合中最小的距离对应的面部标识,作为每个训练面部图像对应的第一识别结果,根据第一识别结果在第一距离集合中对应的最小的距离,确定第一识别结果对应的第一置信度;
    计算每个训练面部图像对应的特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到第二距离集合,确定第二距离集合中满足预设选取条件的目标距离对应的面部标识;在各目标距离对应的面部标识中,确定出现次数最多的目标面部标识,作为每个训练面部图像对应的第二识别结果,根据第二识别结果在第二距离集合中对应的最小的距离,确定第二识别结果对应的第二置信度;
    将每个训练面部图像对应的第一识别结果、第一置信度和第二识别结果、第二置信度,输入到面部识别模型中,得到每个训练面部图像对应的面部识别结果和置信度;
    根据得到的每个训练面部图像对应的面部识别结果和置信度、以及预设的每个训练面部图像对应的面部识别结果和置信度,对所述面部识别模型的模型参数进行调整,得到训练后的面部识别模型。
  8. 一种面部识别的装置,包括:
    获取模块,用于获取待识别的面部图像,并提取所述面部图像对应的至少一个目标特征向量;
    第一计算模块,用于计算每个目标特征向量与数据库中预先存储的每个面部标识对应的均值向量的距离,得到每个目标特征向量对应的第一距离集合,确定每个第一距离集合中最小的距离对应的面部标识;在各第一距离集合中最小的距离对应的面部标识中,确定出现次数最多的面部标识,作为所述面部图像对应的第一识别结果;根据所述第一识别结果在各第一距离集合中对应的最小的距离,确定所述第一识别结果对应的第一置信度;
    第二计算模块,用于计算每个目标特征向量与数据库中预先存储的每个面部标识对应的每个特征向量的距离,得到每个目标特征向量对应的第二距离集合,确定每个第二距离集合中满足预设选取条件的目标距离对应的面部标识;在每个第二距离集合中各目标距离对应的面部标识中,确定出现次数最多的目标面部标识;在各目标面部标识中,确定出现次数最多的目标面部标识,作为所述面部图像对应的第二识别结果;根据所述第二识别结果在各第二距离集合中对应的最小的距离,确定所述第二识别结果对应的第二置信度;
    确定模块,用于根据所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度,确定所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度。
  9. 根据权利要求8所述的装置,其中,所述确定模块,用于:
    当所述第一识别结果与所述第二识别结果相同时,将所述第一识别结果或所述第二识别结果确定为所述面部图像对应的面部识别结果;并将所述第一置信度和所述第二置信度中最大的置信度确定为所述面部识别结果对应的置信度。
  10. 根据权利要求8所述的装置,其中,所述确定模块,用于:
    当所述第一识别结果与所述第二识别结果不同、且所述第二置信度与所述第一置信度的差值大于第一预设阈值时,将所述第二识别结果确定为所述面部图像对应的面部识别结果,并将所述第二置信度确定为所述面部识别结果对应的置信度。
  11. 根据权利要求8所述的装置,其中,所述确定模块,用于:
    当所述第一识别结果与所述第二识别结果不相同、且所述第一置信度与所述第二置信度的差值大于第二预设阈值时,将所述第一识别结果确定为所述面部图像对应的面部识别结果,并将所述第一置信度确定为所述面部识别结果对应的置信度。
  12. 根据权利要求8所述的装置,其中,所述确定模块,用于:
    当所述第一识别结果与所述第二识别结果不相同、且所述第二置信度与所述第一置信度的差值小于第一预设阈值、且所述第一置信度与所述第二置信度的差值小于第二预设阈值时,将所述第一识别结果确定为所述面部图像对应的面部识别结果,并将所述第一置信度和所述第二置信度中最小的置信度确定为所述面部识别结果对应的置信度。
  13. 根据权利要求8所述的装置,其中,所述确定模块,用于:
    将所述第一识别结果、所述第一识别结果对应的第一置信度和所述第二识别结果、所述第二识别结果对应的第二置信度输入到预先训练的面部识别模型中,得到所述面部图像对应的面部识别结果和所述面部识别结果对应的置信度。
  14. 一种服务器,包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至7任一所述的面部识别的方法。
  15. 一种计算机可读存储介质,存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至7任一所述的面部识别的方法。
PCT/CN2019/076398 2018-03-22 2019-02-28 面部识别的方法和装置 WO2019179295A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020542097A JP6973876B2 (ja) 2018-03-22 2019-02-28 顔認識方法、顔認識装置及び顔認識方法を実行するコンピュータプログラム
EP19770980.1A EP3757873A4 (en) 2018-03-22 2019-02-28 FACIAL RECOGNITION METHOD AND DEVICE
US16/890,484 US11138412B2 (en) 2018-03-22 2020-06-02 Facial recognition method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810239389.3A CN108491794B (zh) 2018-03-22 2018-03-22 面部识别的方法和装置
CN201810239389.3 2018-03-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/890,484 Continuation US11138412B2 (en) 2018-03-22 2020-06-02 Facial recognition method and apparatus

Publications (1)

Publication Number Publication Date
WO2019179295A1 true WO2019179295A1 (zh) 2019-09-26

Family

ID=63319001

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076398 WO2019179295A1 (zh) 2018-03-22 2019-02-28 面部识别的方法和装置

Country Status (5)

Country Link
US (1) US11138412B2 (zh)
EP (1) EP3757873A4 (zh)
JP (1) JP6973876B2 (zh)
CN (1) CN108491794B (zh)
WO (1) WO2019179295A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108491794B (zh) * 2018-03-22 2023-04-07 腾讯科技(深圳)有限公司 面部识别的方法和装置
CN109766882B (zh) * 2018-12-18 2020-11-20 北京诺亦腾科技有限公司 人体光点的标签识别方法、装置
WO2020124390A1 (zh) * 2018-12-18 2020-06-25 华为技术有限公司 一种面部属性的识别方法及电子设备
CN109815839B (zh) * 2018-12-29 2021-10-08 深圳云天励飞技术有限公司 微服务架构下的徘徊人员识别方法及相关产品
US11074733B2 (en) * 2019-03-15 2021-07-27 Neocortext, Inc. Face-swapping apparatus and method
CN110210307B (zh) * 2019-04-30 2023-11-28 ***股份有限公司 人脸样本库部署方法、基于人脸识别业务处理方法及装置
CN110163175A (zh) * 2019-05-28 2019-08-23 杭州电子科技大学 一种基于改进vgg-16网络的步态识别方法及***
CN110505498B (zh) * 2019-09-03 2021-04-02 腾讯科技(深圳)有限公司 视频的处理、播放方法、装置及计算机可读介质
CN111242230A (zh) * 2020-01-17 2020-06-05 腾讯科技(深圳)有限公司 基于人工智能的图像处理方法及图像分类模型训练方法
CN111507232B (zh) * 2020-04-10 2023-07-21 盛景智能科技(嘉兴)有限公司 多模态多策略融合的陌生人识别方法和***
CN111626193A (zh) * 2020-05-26 2020-09-04 北京嘀嘀无限科技发展有限公司 一种面部识别方法、面部识别装置及可读存储介质

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103810466A (zh) * 2012-11-01 2014-05-21 三星电子株式会社 用于面部识别的装置和方法
CN107403168A (zh) * 2017-08-07 2017-11-28 青岛有锁智能科技有限公司 一种面部识别***
CN108491794A (zh) * 2018-03-22 2018-09-04 腾讯科技(深圳)有限公司 面部识别的方法和装置

Family Cites Families (19)

Publication number Priority date Publication date Assignee Title
CN101178773B (zh) * 2007-12-13 2010-08-11 北京中星微电子有限公司 基于特征提取和分类器的图像识别***及方法
CN101226590B (zh) * 2008-01-31 2010-06-02 湖南创合世纪智能技术有限公司 一种人脸识别方法
CN101561874B (zh) * 2008-07-17 2011-10-26 清华大学 一种人脸虚拟图像生成的方法
CN101719222B (zh) * 2009-11-27 2014-02-12 北京中星微电子有限公司 分类器训练方法和装置以及人脸认证方法和装置
US11074495B2 (en) * 2013-02-28 2021-07-27 Z Advanced Computing, Inc. (Zac) System and method for extremely efficient image and pattern recognition and artificial intelligence platform
KR101475155B1 (ko) * 2013-10-14 2014-12-19 삼성물산 주식회사 전기 비저항을 이용한 터널 막장 전방 지반조건 예측방법
EP3105105B1 (en) * 2014-02-12 2019-04-10 TECNEVA S.r.l. Control system of a self-moving cart, in particular a golf caddie
CN104484650B (zh) * 2014-12-09 2017-09-12 北京信息科技大学 素描人脸识别的方法和装置
CN105844283B (zh) * 2015-01-16 2019-06-07 阿里巴巴集团控股有限公司 用于识别图像类目归属的方法、图像搜索方法及装置
CN104808921A (zh) * 2015-05-08 2015-07-29 三星电子(中国)研发中心 进行信息提醒的方法及装置
US10769255B2 (en) * 2015-11-11 2020-09-08 Samsung Electronics Co., Ltd. Methods and apparatuses for adaptively updating enrollment database for user authentication
CN105956518A (zh) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 一种人脸识别方法、装置和***
CN107423306B (zh) * 2016-05-24 2021-01-29 华为技术有限公司 一种图像检索方法及装置
CN106022317A (zh) * 2016-06-27 2016-10-12 北京小米移动软件有限公司 人脸识别方法及装置
CN106250858B (zh) * 2016-08-05 2021-08-13 重庆中科云从科技有限公司 一种融合多种人脸识别算法的识别方法及***
CN107220614B (zh) * 2017-05-24 2021-08-10 北京小米移动软件有限公司 图像识别方法、装置及计算机可读存储介质
CN107315795B (zh) * 2017-06-15 2019-08-02 武汉大学 联合特定人物和场景的视频实例检索方法及***
CN107545277B (zh) * 2017-08-11 2023-07-11 腾讯科技(上海)有限公司 模型训练、身份验证方法、装置、存储介质和计算机设备
CN107818313B (zh) * 2017-11-20 2019-05-14 腾讯科技(深圳)有限公司 活体识别方法、装置和存储介质

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN103810466A (zh) * 2012-11-01 2014-05-21 三星电子株式会社 用于面部识别的装置和方法
CN107403168A (zh) * 2017-08-07 2017-11-28 青岛有锁智能科技有限公司 一种面部识别***
CN108491794A (zh) * 2018-03-22 2018-09-04 腾讯科技(深圳)有限公司 面部识别的方法和装置

Non-Patent Citations (1)

Title
See also references of EP3757873A4 *

Also Published As

Publication number Publication date
EP3757873A1 (en) 2020-12-30
US20200293761A1 (en) 2020-09-17
JP6973876B2 (ja) 2021-12-01
JP2021513700A (ja) 2021-05-27
CN108491794A (zh) 2018-09-04
EP3757873A4 (en) 2021-03-31
CN108491794B (zh) 2023-04-07
US11138412B2 (en) 2021-10-05

Similar Documents

Publication Publication Date Title
WO2019179295A1 (zh) 面部识别的方法和装置
US10803359B2 (en) Image recognition method, apparatus, server, and storage medium
KR102084900B1 (ko) 사용자 신원 검증 방법, 장치 및 시스템
WO2017166586A1 (zh) 基于卷积神经网络的图片鉴别方法、***和电子设备
US10984225B1 (en) Masked face recognition
WO2016084071A1 (en) Systems and methods for recognition of faces e.g. from mobile-device-generated images of faces
CN109871821B (zh) 自适应网络的行人重识别方法、装置、设备及存储介质
EP3364337A2 (en) Persistent feature descriptors for video
US11126827B2 (en) Method and system for image identification
US11335127B2 (en) Media processing method, related apparatus, and storage medium
KR20160011916A (ko) 얼굴 인식을 통한 사용자 식별 방법 및 장치
US20220147735A1 (en) Face-aware person re-identification system
JP2016062253A (ja) オブジェクト識別装置、オブジェクト識別方法及びプログラム
CN111079816A (zh) 图像的审核方法、装置和服务器
KR20220076398A (ko) Ar장치를 위한 객체 인식 처리 장치 및 방법
KR102261054B1 (ko) 카메라에 연결되는 고속 얼굴 인식 장치
WO2020052275A1 (zh) 图像处理方法、装置、终端设备、服务器及***
CN113591758A (zh) 一种人体行为识别模型训练方法、装置及计算机设备
KR20210058157A (ko) 얼굴 이미지 기반 사용자 식별 장치 및 방법
US20240221426A1 (en) Behavior detection method, electronic device, and computer readable storage medium
CN111753583A (zh) 一种识别方法及装置
CN115984977A (zh) 活体检测方法和***
Zhao et al. Research on face recognition based on embedded system
US11087121B2 (en) High accuracy and volume facial recognition on mobile platforms
KR102224218B1 (ko) 비디오 시간 정보를 활용하는 딥러닝 기반 물체 검출 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19770980

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020542097

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019770980

Country of ref document: EP

Effective date: 20200921