CN107958223B - Face recognition method and device, mobile equipment and computer readable storage medium - Google Patents


Info

Publication number: CN107958223B (application published as CN107958223A)
Application number: CN201711329593.6A
Authority: CN (China)
Inventor: 万韶华
Assignee (original and current): Beijing Xiaomi Mobile Software Co Ltd
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: face image, histogram, target, segmentation, dimensional face


Classifications

    • G06V40/168 Feature extraction; face representation (human faces)
    • G06V40/172 Classification, e.g. identification (human faces)
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06V20/64 Three-dimensional objects (scene-specific elements)


Abstract

The disclosure provides a face recognition method and apparatus, a mobile device, and a computer-readable storage medium. The method comprises the following steps: acquiring a three-dimensional face image; preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image; performing histogram processing on the preprocessed three-dimensional face image at different segmentation granularities to obtain a plurality of target histograms; and matching the target histograms against pre-stored histograms to obtain a face recognition result. Because the target histograms corresponding to different segmentation granularities have different accuracies, matching each target histogram against a pre-stored histogram allows the face recognition result to adapt to different use environments and improves the reliability of the result.

Description

Face recognition method and device, mobile equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a face recognition method and apparatus, a mobile device, and a computer-readable storage medium.
Background
Currently, face recognition technology has been applied in intelligent terminals. However, when the lighting of the user's environment changes, or when a fake face (e.g., a face prosthesis) is presented, the face cannot be accurately recognized and the terminal may be unlocked by mistake.
Disclosure of Invention
The embodiment of the disclosure provides a face recognition method and device, a mobile device and a computer readable storage medium, so as to solve the problems in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided a face recognition method, the method including:
acquiring a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image at different segmentation granularities to obtain a plurality of target histograms;
and matching the target histograms and the pre-stored histograms to obtain a face recognition result.
Optionally, the preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image includes:
detecting the human face characteristic points of the three-dimensional human face image to obtain a first number of designated characteristic points in the three-dimensional human face image;
and normalizing the three-dimensional face image based on the first number of designated feature points to obtain a normalized three-dimensional face image.
Optionally, normalizing the three-dimensional face image based on the first number of designated feature points to obtain a normalized three-dimensional face image, including:
acquiring the position relation among the first number of designated feature points and the position relation of the feature points in the face template; the feature points in the face template correspond to the first number of designated feature points one by one; the position relation comprises the distance and the deflection angle between any two specified characteristic points;
and adjusting the size and the deflection angle of the three-dimensional face image to enable the distance value between the specified feature point in the three-dimensional face image and the corresponding feature point in the face template to be smaller than or equal to a set threshold value, and obtaining the three-dimensional face image after registration and alignment.
Optionally, performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain a plurality of target histograms, including:
sequentially acquiring a segmentation granularity from a plurality of preset segmentation granularities, or acquiring a plurality of segmentation granularities in parallel;
and for each acquired segmentation granularity, acquiring a target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image.
Optionally, obtaining a target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image includes:
acquiring a square area containing a face from the three-dimensional face image;
dividing the square area based on the division granularity to obtain a plurality of division units;
calculating a depth value corresponding to each segmentation unit according to a coordinate value of each pixel point in each segmentation unit on a Z coordinate axis to obtain a target histogram corresponding to the three-dimensional face image under the segmentation granularity;
and the Z coordinate axis is parallel to the optical axis of the equipment shooting module for collecting the three-dimensional face image.
Optionally, performing matching processing based on the target histograms and a pre-stored histogram to obtain a face recognition result, including:
determining a depth value vector of each target histogram and a depth value vector of a pre-stored histogram having the same segmentation granularity as each target histogram based on the depth value corresponding to each segmentation unit in each target histogram;
calculating a distance value of each target histogram and a corresponding pre-stored histogram based on the depth value vector;
calculating a distance identification value of the three-dimensional face image based on the distance value and the weight coefficient of each target histogram; the weight coefficient is positively correlated with a segmentation granularity of the target histogram;
if the distance recognition value is smaller than or equal to a recognition value threshold, determining that the face recognition result is a correct face; otherwise, determining that the face recognition result is an error face.
Optionally, after obtaining the face recognition result, the method further includes:
and controlling the mobile equipment to unlock according to the face recognition result.
According to a second aspect of the embodiments of the present disclosure, there is provided a face recognition apparatus, the apparatus including:
the three-dimensional image acquisition module is used for acquiring a three-dimensional face image;
the preprocessing module is used for preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
the histogram processing module is used for performing histogram processing on the preprocessed three-dimensional face image at different segmentation granularities to obtain a plurality of target histograms;
and the matching processing module is used for performing matching processing based on the target histograms and the pre-stored histogram to obtain a face recognition result.
Optionally, the preprocessing module comprises:
the feature point detection unit is used for detecting the human face feature points of the three-dimensional human face image and acquiring a first number of specified feature points in the three-dimensional human face image;
and the normalization processing unit is used for performing normalization processing on the three-dimensional face image based on the first number of designated feature points to obtain a three-dimensional face image after normalization processing.
Optionally, the normalization processing unit includes:
a position relation obtaining subunit, configured to obtain a position relation between the first number of specified feature points and a position relation of feature points in the face template; the feature points in the face template correspond to the first number of designated feature points one by one; the position relation comprises the distance and the deflection angle between any two specified characteristic points;
and the registration alignment subunit is used for adjusting the size and the deflection angle of the three-dimensional face image so as to enable the distance value between the specified feature point in the three-dimensional face image and the corresponding feature point in the face template to be smaller than or equal to a set threshold value, and obtaining the three-dimensional face image after registration alignment.
Optionally, the histogram processing module includes:
the device comprises a segmentation granularity acquisition unit, a segmentation granularity acquisition unit and a segmentation granularity acquisition unit, wherein the segmentation granularity acquisition unit is used for sequentially acquiring one segmentation granularity from a plurality of preset segmentation granularities or acquiring a plurality of segmentation granularities in parallel;
and the target histogram acquisition unit is used for acquiring a target histogram corresponding to each acquired segmentation granularity based on the preprocessed three-dimensional face image.
Optionally, the target histogram obtaining unit includes:
a square region acquiring subunit, configured to acquire a square region including a face from the three-dimensional face image;
a square region segmentation subunit, configured to segment the square region based on the segmentation granularity to obtain a plurality of segmentation units;
the depth value calculation subunit is used for calculating the depth value corresponding to each segmentation unit according to the coordinate value of each pixel point in each segmentation unit on the Z coordinate axis, to obtain a target histogram corresponding to the three-dimensional face image under the segmentation granularity;
and the Z coordinate axis is parallel to the optical axis of the shooting module for collecting the three-dimensional face image.
Optionally, the matching processing module includes:
the vector determining unit is used for determining a depth value vector of each target histogram and a depth value vector of a pre-stored histogram with the same segmentation granularity as that of each target histogram based on the depth value corresponding to each segmentation unit in each target histogram;
a distance value calculation unit for calculating a distance value of each target histogram and a corresponding pre-stored histogram based on the depth value vector;
the identification value calculating unit is used for calculating a distance identification value of the three-dimensional face image based on the distance value and the weight coefficient of each target histogram; the weight coefficient is positively correlated with a segmentation granularity of the target histogram;
the face recognition unit is used for determining that the face recognition result is a correct face when the distance recognition value is smaller than or equal to a recognition value threshold, and for determining that the face recognition result is an error face when the distance recognition value is greater than the recognition value threshold.
Optionally, the apparatus further comprises:
and the unlocking control module is used for controlling the mobile equipment to unlock according to the face recognition result.
According to a third aspect of the embodiments of the present disclosure, there is provided a mobile device, the mobile device including: a shooting module for collecting a three-dimensional face image, a processor, and a memory for storing instructions executable by the processor; wherein the processor is configured to:
acquiring a three-dimensional face image acquired by the shooting module;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image at different segmentation granularities to obtain a plurality of target histograms;
and matching the target histograms and the pre-stored histograms to obtain a face recognition result.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image at different segmentation granularities to obtain a plurality of target histograms;
and matching the target histograms and the pre-stored histograms to obtain a face recognition result.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiments of the disclosure, a three-dimensional face image is obtained; the three-dimensional face image is then preprocessed to obtain a preprocessed three-dimensional face image; histogram processing is then performed on the preprocessed three-dimensional face image at different segmentation granularities to obtain a plurality of target histograms; finally, matching processing is carried out based on the target histograms and the pre-stored histograms to obtain a face recognition result. Because the target histograms corresponding to different segmentation granularities have different accuracies, matching each target histogram against a pre-stored histogram allows the face recognition result to adapt to different use environments and improves the reliability of the result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a face recognition method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flow diagram illustrating a face recognition method according to another exemplary embodiment of the present disclosure;
FIG. 3 is a flow diagram illustrating a face recognition method according to yet another exemplary embodiment of the present disclosure;
FIGS. 4(a) - (f) are schematic diagrams of three-dimensional face images at various stages in a recognition process shown in the present disclosure according to an exemplary embodiment;
FIG. 5 is a flow diagram illustrating a face recognition method according to yet another exemplary embodiment of the present disclosure;
FIG. 6 is a flow diagram illustrating a face recognition method according to yet another exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram illustrating a face recognition apparatus according to an exemplary embodiment of the present disclosure;
FIG. 8 is a block diagram of a face recognition apparatus according to another exemplary embodiment of the present disclosure;
fig. 9 to 13 are block diagrams of a face recognition apparatus according to still another exemplary embodiment of the present disclosure;
fig. 14 is a block diagram illustrating the structure of a mobile device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may, depending on the context, be interpreted as "when", "upon", or "in response to determining".
Fig. 1 is a flow chart illustrating a face recognition method according to an exemplary embodiment of the present disclosure. The face recognition method is suitable for a mobile device integrated with a 3D shooting module. The mobile device may be a smart phone, a PAD (Portable Android Device), a personal digital assistant, a wearable device, a digital camera, or the like. Referring to fig. 1, the method includes steps 101 to 104:
101, acquiring a three-dimensional face image.
In an embodiment, a three-dimensional face image can be obtained through a 3D shooting module. The three-dimensional face image comprises pixel points in a coordinate system XYZ. Each pixel point includes 3 coordinate values, i.e., P (X, Y, Z), which correspond to an X coordinate axis, a Y coordinate axis, and a Z coordinate axis in the coordinate system XYZ, respectively.
The X coordinate axis and the Y coordinate axis are located on a plane parallel to a lens plane in the 3D shooting module, and the Z coordinate axis is parallel to an optical axis of the 3D shooting module.
In one embodiment, the origin O of the coordinate system XYZ may select the nasal cusp of the face in the three-dimensional face image. Of course, the origin O may also be selected from other position points, which is not limited herein.
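As a minimal sketch of placing the origin O at the nose tip (the function name and array layout are illustrative assumptions, not from the patent), the captured point cloud can simply be translated:

```python
import numpy as np

def center_at_nose_tip(points, nose_tip):
    """Translate a 3D face point cloud (N x 3 array of P(X, Y, Z) values)
    so that the nose tip becomes the origin O of the coordinate system XYZ."""
    return np.asarray(points, dtype=float) - np.asarray(nose_tip, dtype=float)

# the nose tip itself maps to the origin; other points keep their relative offsets
centered = center_at_nose_tip([[1.0, 2.0, 3.0], [1.5, 2.0, 3.5]], [1.0, 2.0, 3.0])
```

Choosing a different origin, as the text allows, only changes the vector subtracted here.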
And 102, preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image.
In one embodiment, facial feature points of the three-dimensional face image are detected to obtain a first number of designated feature points. In this embodiment, the first number is set to be between 3 and 5, the candidate points being the center points of the two eyes, the nose tip point, and the two mouth corner points. Of course, the first number may be set according to the specific scenario and is not limited herein.
After a first number of designated feature points are selected, normalization processing is performed on the three-dimensional face image based on the first number of designated feature points, so that a normalized three-dimensional face image is obtained, and normalization will be described in a subsequent embodiment and will not be described here.
103, performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain a plurality of target histograms.
In an embodiment, the histogram processing may be performed on the preprocessed three-dimensional face image according to different segmentation granularities, so as to obtain a plurality of target histograms. The segmentation granularity refers to the number of squares in a square region where the face is located in the three-dimensional face image after the square region is segmented.
In this embodiment, the segmentation granularities of two adjacent target histograms are in an N-fold relationship. N is a positive integer greater than or equal to 2. In one embodiment, N is 2. For example, the partition particle size may be 8 × 8, 4 × 4, 2 × 2, and 1 × 1. Of course, the value of N may be selected according to a specific scenario, and is not limited herein.
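The N-fold relationship between adjacent segmentation granularities described above can be sketched as follows (the helper name is illustrative, not from the patent):

```python
def segmentation_granularities(coarsest=8, n=2):
    """List segmentation granularities from coarsest to finest, with adjacent
    levels differing by a factor of n; coarsest=8, n=2 gives 8, 4, 2, 1
    as in the example above."""
    out = []
    g = coarsest
    while g >= 1:
        out.append(g)
        g //= n
    return out
```

Other values of N or of the coarsest granularity simply change the two arguments.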
And 104, performing matching processing based on the target histograms and the pre-stored histogram to obtain a face recognition result.
In an embodiment, for each target histogram of the plurality of target histograms, matching each target histogram and a pre-stored histogram having the same segmentation granularity as each target histogram, a distance value between the target histogram and the corresponding pre-stored histogram may be obtained, and then a face recognition result is obtained based on the distance value and the weight coefficient of each target histogram.
It can be understood that the face recognition result can be a correct face or a wrong face.
As can be seen, in the embodiment, by acquiring the plurality of target histograms under different segmentation granularities, because the target histograms corresponding to different segmentation granularities have different accuracies, when each target histogram is matched with a pre-stored histogram, the face recognition result can adapt to different use environments, and the reliability of the face recognition result is improved.
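The matching step can be sketched as below. This is a hedged reconstruction: the patent only specifies that each target histogram is compared with the pre-stored histogram of the same segmentation granularity, and that per-granularity distance values are combined using weight coefficients positively correlated with the granularity; the Euclidean distance and all names here are assumptions.

```python
import numpy as np

def distance_recognition_value(target_hists, stored_hists, weights):
    """target_hists / stored_hists: dicts mapping a segmentation granularity to
    that granularity's depth-value vector; weights: granularity -> weight
    coefficient, positively correlated with the granularity."""
    value = 0.0
    for g, target in target_hists.items():
        d = float(np.linalg.norm(np.asarray(target) - np.asarray(stored_hists[g])))
        value += weights[g] * d
    return value

def recognize(target_hists, stored_hists, weights, threshold):
    # correct face when the distance recognition value does not exceed the threshold
    return distance_recognition_value(target_hists, stored_hists, weights) <= threshold
```

With identical histograms the recognition value is 0, so the face is accepted for any non-negative threshold.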
Fig. 2 is a flow chart diagram illustrating a face recognition method according to another exemplary embodiment of the present disclosure. Referring to fig. 2, the face recognition method includes:
and 201, acquiring a three-dimensional face image.
The specific method and principle of step 201 and step 101 are the same, please refer to fig. 1 and related contents of step 101 for detailed description, which is not repeated herein.
202, performing human face feature point detection on the three-dimensional human face image, and acquiring a first number of designated feature points in the three-dimensional human face image.
In one embodiment, facial feature points of a three-dimensional facial image are detected to obtain a first number of designated feature points. The detection of the facial feature points may be implemented by using a detection algorithm in the related art, which is not limited herein.
In this embodiment, the first number is set to be between 3 and 5, the candidate points being the center points of the two eyes, the nose tip point, and the two mouth corner points. Of course, the first number may be set according to the specific scenario and is not limited herein.
After a first number of designated feature points are selected, normalization processing is carried out on the three-dimensional face image based on the first number of designated feature points, and therefore the normalized three-dimensional face image is obtained. Wherein the three-dimensional face image normalization process comprises step 203 and step 204.
203, obtaining the position relationship among the first number of designated feature points and the position relationship of the feature points in the face template; the feature points in the face template correspond to the first number of designated feature points one by one; the positional relationship includes a distance and a deflection angle between any two specified feature points.
The face template may be set in advance, for example, the mobile terminal obtains a plurality of images at preset angles, such as a front face image, a left side face image, a right side face image, an overhead image, and a bottom image of the face, and then obtains the face template based on the plurality of images at the preset angles and a template generation algorithm. The template generation algorithm may be implemented as an algorithm in the related art, and is not limited herein.
In one embodiment, the position relationship among a first number of specified feature points and the position relationship of the feature points in the face template are obtained. The position relation comprises the distance and the deflection angle between any two appointed characteristic points of the first number of appointed characteristic points.
Wherein, the distance between any two designated feature points can be calculated from their coordinate values on the X coordinate axis and the Y coordinate axis. For two designated feature points (x1, y1) and (x2, y2), the distance is

d = sqrt((x1 - x2)^2 + (y1 - y2)^2)

and the deflection angle is

θ = arctan((y1 - y2) / (x1 - x2))
Understandably, the feature points in the face template correspond to the first number of designated feature points one to one.
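A minimal sketch of the positional relationship between two designated feature points; `math.hypot` and `math.atan2` are used here for numerical robustness, which is an implementation choice rather than something the patent specifies:

```python
import math

def positional_relation(p1, p2):
    """Distance and deflection angle between two designated feature points,
    computed from their coordinate values on the X and Y coordinate axes."""
    dx = p1[0] - p2[0]
    dy = p1[1] - p2[1]
    distance = math.hypot(dx, dy)      # sqrt(dx^2 + dy^2)
    angle = math.atan2(dy, dx)         # deflection angle, in radians
    return distance, angle

d, theta = positional_relation((3.0, 4.0), (0.0, 0.0))  # d = 5.0
```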
And 204, adjusting the size and the deflection angle of the three-dimensional face image to enable the distance value between the specified feature point in the three-dimensional face image and the corresponding feature point in the face template to be smaller than or equal to a set threshold value, and obtaining the three-dimensional face image after registration and alignment.
In an embodiment, the size and the deflection angle of the three-dimensional face image are adjusted so that the distance value between each of the first number of designated feature points in the three-dimensional face image and the corresponding feature point in the face template is smaller than or equal to a set threshold value, thus obtaining the three-dimensional face image after registration and alignment.
It can be understood that, in the registration alignment process, the more closely the first number of designated feature points coincide with the corresponding template feature points, the higher the registration accuracy; the first number and the required registration accuracy can be adjusted according to the specific scenario, which is not limited herein.
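One standard way to realize "adjust the size and the deflection angle" of step 204 is a least-squares similarity fit (a Procrustes-style alignment) of the designated feature points onto the template feature points. The patent does not name a specific algorithm, so this 2D sketch is an assumption:

```python
import numpy as np

def similarity_align(points, template):
    """Fit scale, rotation and translation mapping the designated feature
    points onto the template feature points (2D, least squares); return the
    aligned points and the per-point distances to the template, which can
    then be compared against the set threshold."""
    src = np.asarray(points, dtype=float)
    dst = np.asarray(template, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    # optimal rotation from the SVD of the 2x2 cross-covariance
    u, sv, vt = np.linalg.svd(d.T @ s)
    r = u @ vt
    if np.linalg.det(r) < 0:  # forbid reflections
        u[:, -1] *= -1
        sv = sv * np.array([1.0, -1.0])
        r = u @ vt
    scale = sv.sum() / (s ** 2).sum()
    aligned = scale * s @ r.T + mu_d
    return aligned, np.linalg.norm(aligned - dst, axis=1)
```

For noiseless feature points related by a pure similarity transform, the residual distances are zero, so any positive threshold accepts the registration.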
And 205, performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain a plurality of target histograms.
The specific method and principle of step 205 and step 103 are the same, and please refer to fig. 1 and related contents of step 103 for detailed description, which is not repeated herein.
And 206, performing matching processing based on the target histograms and the pre-stored histogram to obtain a face recognition result.
The specific method and principle of step 206 and step 104 are the same, and please refer to fig. 1 and the related contents of step 104 for detailed description, which is not repeated herein.
Therefore, in this embodiment, based on the plurality of designated detection points in the three-dimensional face image, the size and the deflection angle of the three-dimensional face image are adjusted, so that the three-dimensional face image and the face template are aligned in registration, and normalization processing on the three-dimensional face image is realized, thereby improving the accuracy of subsequently obtained face recognition results and improving the reliability of the face recognition results.
Fig. 3 is a flowchart illustrating a face recognition method according to another exemplary embodiment of the present disclosure. Referring to fig. 3, the face recognition method includes steps 301 to 305:
301, obtaining a three-dimensional face image.
In this embodiment, a three-dimensional face image as shown in fig. 4(a) is acquired. The specific method and principle of step 301 and step 101 are the same, please refer to fig. 1 and related contents of step 101 for detailed description, which is not repeated herein.
And 302, preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image.
In this embodiment, after the three-dimensional face image is preprocessed, the three-dimensional face image shown in fig. 4(b) can be obtained. The specific method and principle of step 302 are consistent with those of steps 202 to 204; please refer to fig. 2 and the related contents of steps 202 to 204 for a detailed description, which is not repeated herein.
303, sequentially obtaining one segmentation granularity from a plurality of preset segmentation granularities, or obtaining a plurality of segmentation granularities in parallel.
In one embodiment, a plurality of segmentation granularities are preset, such as 8 × 8, 4 × 4, 2 × 2 and 1 × 1 as in step 103. In this embodiment, the segmentation granularities may be obtained one at a time or several at once. It can be understood that as the number of segmentation granularities increases, the calculation amount also increases: when computing resources are limited, the segmentation granularities can be acquired in sequence, that is, serial calculation; when computing resources are sufficient, the segmentation granularities can be acquired in parallel, that is, parallel calculation; serial and parallel acquisition can also be combined. The acquisition mode may be selected according to the specific scenario and is not limited herein.
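The serial-versus-parallel acquisition described above can be sketched as follows. The granularity list matches step 103, but the averaging helper and the thread-pool choice are illustrative assumptions (for CPU-bound NumPy work a process pool may serve better than threads):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

GRANULARITIES = [8, 4, 2, 1]  # 8x8, 4x4, 2x2, 1x1 as in step 103

def depth_histogram(depth_map, g):
    """Average depth per cell of a g x g grid over a square depth map.
    Hypothetical helper; the source computes a sum or average per cell."""
    n = depth_map.shape[0]
    cells = depth_map.reshape(g, n // g, g, n // g)
    return cells.mean(axis=(1, 3))

def histograms_serial(depth_map):
    # Acquire one granularity at a time (serial calculation)
    return [depth_histogram(depth_map, g) for g in GRANULARITIES]

def histograms_parallel(depth_map):
    # Acquire all granularities at once (parallel calculation)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda g: depth_histogram(depth_map, g),
                             GRANULARITIES))
```

Both paths produce the same set of target histograms; only the scheduling differs.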
For example, for the segmentation granularity 8 × 8, the square region containing the face (the outer frame of fig. 4(c) to (f)) is divided into 8 × 8 blocks. As shown in fig. 4(c), the square region is divided into 64 segmentation units. The sum or average value of the z coordinate values of the pixel points (x, y, z) falling in each block is calculated as the depth value of that segmentation unit. Continuing to refer to fig. 4(c) and taking the first row as an example, the depth values of its segmentation units are, in order, {1, 5, 20, 25, 26, 25, 20, 1}; the other rows are similar. In this way, the target histogram corresponding to the segmentation granularity 8 × 8 is obtained.
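The per-block depth computation just described can be sketched as follows. This is a hypothetical helper: the source leaves open whether a sum or an average is used (the average is taken here), and the function name and cell-index clipping convention are illustrative.

```python
import numpy as np

def cell_depths(points, g):
    """Average z value of the (x, y, z) points falling in each cell of a
    g x g grid over the points' bounding square in the x/y plane."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x0, x1 = x.min(), x.max()
    y0, y1 = y.min(), y.max()
    # Map each point to a cell index; clip so boundary points land inside
    ix = np.clip(((x - x0) / (x1 - x0) * g).astype(int), 0, g - 1)
    iy = np.clip(((y - y0) / (y1 - y0) * g).astype(int), 0, g - 1)
    depth = np.zeros((g, g))
    count = np.zeros((g, g))
    np.add.at(depth, (iy, ix), z)   # accumulate z per cell
    np.add.at(count, (iy, ix), 1)   # count points per cell
    return np.divide(depth, count, out=np.zeros_like(depth),
                     where=count > 0)
```

Calling `cell_depths(points, 8)` yields the 8 × 8 grid of depth values, i.e. the target histogram at that granularity.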
For the target histograms corresponding to the segmentation granularities 4 × 4, 2 × 2, 1 × 1, as shown in fig. 4(d) - (f), respectively, the specific obtaining method may refer to the obtaining method of the target histogram corresponding to the segmentation granularity 8 × 8, and details thereof are not repeated herein.
And 304, for each acquired segmentation granularity, acquiring a target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image.
In one embodiment, referring to fig. 5, a square region containing a face is obtained from the three-dimensional face image (corresponding to step 501). For example, the maximum and minimum coordinate values of each pixel point in the normalized three-dimensional face image on the X, Y and Z coordinate axes are determined respectively. A square region is then determined based on the maximum value Xmax and the minimum value Xmin on the X coordinate axis and the maximum value Ymax and the minimum value Ymin on the Y coordinate axis; that is, the circumscribed square of the face is determined.
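Step 501 (determining the circumscribed square from the X/Y extrema) might look like the minimal sketch below; the function name and its lower-left-corner return convention are assumptions, not from the source.

```python
import numpy as np

def bounding_square(points):
    """Circumscribed square of the face in the x/y plane.
    Returns (corner_x, corner_y, side) for the axis-aligned square
    centered on the face's x/y bounding box."""
    x, y = points[:, 0], points[:, 1]
    xmin, xmax, ymin, ymax = x.min(), x.max(), y.min(), y.max()
    side = max(xmax - xmin, ymax - ymin)  # square must cover both extents
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    return cx - side / 2.0, cy - side / 2.0, side
```

The returned square is what the subsequent step 502 divides into segmentation units.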
In one embodiment, if one granularity is sequentially obtained, the square area is divided based on the obtained granularity to obtain a plurality of divided units (step 502).
Each segmentation unit comprises a plurality of pixel points, and the depth value of the segmentation unit can be obtained according to the sum or average value of coordinate values of the pixel points on the Z coordinate axis. After obtaining the depth values of all segmentation units, a corresponding target histogram at the segmentation granularity is obtained (corresponding to step 503).
In another embodiment, a plurality of segmentation granularities are acquired in parallel, and a plurality of target histograms are acquired simultaneously based on the above steps, so that the calculation efficiency can be improved.
And 305, performing matching processing based on the target histograms and the pre-stored histogram to obtain a face recognition result.
Referring to fig. 6, in an embodiment, the obtaining of the face recognition result includes the following steps:
First, based on the depth value of each segmentation unit in each target histogram, the target histogram is converted into a corresponding depth value vector (corresponding to step 601). Meanwhile, the depth value vector of the pre-stored histogram having the same segmentation granularity as the target histogram is determined. For example, the target histogram corresponding to the segmentation granularity 2 × 2,

{1, 5; 5, 4},

is written out row by row to obtain the depth value vector {1, 5, 5, 4}; of course, the calculation can also be carried out directly in matrix form. It can be understood that the amount of calculation increases with the dimension of the vector, and the dimension of the depth value vector may be selected according to the calculation speed, the segmentation granularity, and the like, which is not limited herein.
Then, based on the depth value vector of each target histogram and the depth value vector of the corresponding pre-stored histogram, the distance value between the two histograms is calculated according to a vector distance formula (corresponding to step 602). One distance value is calculated per target histogram, yielding distance values D1, D2, … …, Dn, where n represents the number of target histograms.
Then, based on the distance values D1, D2, … …, Dn and the respective weight coefficients a1, a2, … …, an, a distance recognition value S of the three-dimensional face image can be calculated (corresponding to step 603). For example, S = D1 × a1 + D2 × a2 + … … + Dn × an.
It can be understood that the finer the segmentation granularity, the more obvious the details in the three-dimensional face image; a larger weight coefficient can then be set, that is, the weight coefficient is positively correlated with the segmentation granularity of the target histogram. The specific value of the weight coefficient may be selected according to the specific scenario and is not limited herein.
Finally, the distance recognition value is compared with a recognition value threshold. If the distance recognition value is smaller than or equal to the recognition value threshold, the face recognition result is determined to be a correct face; if the distance recognition value is greater than the recognition value threshold, the face recognition result is determined to be an error face (corresponding to step 604).
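Steps 601 to 604 taken together can be sketched as below. The Euclidean norm stands in for the unspecified "vector distance formula", and the function name, weights, and threshold in the example are illustrative assumptions.

```python
import numpy as np

def recognize(targets, stored, weights, threshold):
    """Match target histograms against pre-stored histograms of the same
    segmentation granularities: flatten each histogram row by row into a
    depth value vector, take the Euclidean distance per granularity,
    combine the distances with the weights, compare to the threshold."""
    dists = [np.linalg.norm(t.ravel() - s.ravel())  # ravel = row by row
             for t, s in zip(targets, stored)]
    score = sum(d * a for d, a in zip(dists, weights))
    return ("correct face" if score <= threshold else "error face"), score
```

For the 2 × 2 example above, the histogram {1, 5; 5, 4} flattens to the vector {1, 5, 5, 4} before the distance is taken.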
As can be seen, in this embodiment, a plurality of target histograms are obtained under different segmentation granularities; during the matching of each target histogram against the corresponding pre-stored histogram, a distance value is obtained for each target histogram, and a distance recognition value is calculated from the distance values and the weight coefficients. Since histograms obtained at different segmentation granularities reflect different levels of detail, the distance values obtained when matching them have different accuracies; by adjusting the weight coefficients, the face recognition result can be adapted to different use environments, improving its reliability.
Fig. 7 is a block diagram illustrating a face recognition apparatus according to an example embodiment of the present disclosure. Referring to fig. 7, the apparatus 700 includes:
a three-dimensional image obtaining module 701, configured to obtain a three-dimensional face image;
a preprocessing module 702, configured to preprocess the three-dimensional face image to obtain a preprocessed three-dimensional face image;
a histogram processing module 703, configured to perform histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms;
and the matching processing module 704 is configured to perform matching processing based on the multiple target histograms and a pre-stored histogram to obtain a face recognition result.
Fig. 8 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment of the present disclosure. Referring to fig. 8, based on the embodiment shown in fig. 7, the preprocessing module 702 includes:
a feature point detection unit 801, configured to perform face feature point detection on the three-dimensional face image, and acquire a first number of specified feature points in the three-dimensional face image;
a normalization processing unit 802, configured to perform normalization processing on the three-dimensional face image based on the first number of designated feature points, so as to obtain a normalized three-dimensional face image.
Fig. 9 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment of the present disclosure. Referring to fig. 9, on the basis of the embodiment shown in fig. 8, the normalization processing unit 802 includes:
a position relation obtaining subunit 901, configured to obtain position relations between the first number of specified feature points and position relations of feature points in the face template; the feature points in the face template correspond to the first number of designated feature points one by one; the position relation comprises the distance and the deflection angle between any two specified characteristic points;
a registration alignment subunit 902, configured to adjust the size and the deflection angle of the three-dimensional face image, so that a distance value between a specified feature point in the three-dimensional face image and a corresponding feature point in the face template is smaller than or equal to a set threshold, and a three-dimensional face image after registration alignment is obtained.
Fig. 10 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment of the present disclosure. Referring to fig. 10, on the basis of the embodiment shown in fig. 7, the histogram processing module 703 includes:
a division granularity acquisition unit 1001 configured to sequentially acquire one division granularity from a plurality of preset division granularities, or acquire a plurality of division granularities in parallel;
a target histogram obtaining unit 1002, configured to, for each obtained segmentation granularity, obtain a target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image.
Fig. 11 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment of the present disclosure. Referring to fig. 11, on the basis of the embodiment shown in fig. 10, the target histogram acquisition unit 1002 includes:
a square region acquiring subunit 1101, configured to acquire a square region including a human face from the three-dimensional human face image;
a square region segmentation subunit 1102, configured to segment the square region based on the segmentation granularity to obtain a plurality of segmentation units;
a depth value calculation operator unit 1103, configured to calculate, according to a coordinate value of each pixel point in each partition unit on a Z coordinate axis, a depth value corresponding to each partition unit, so as to obtain a target histogram corresponding to the three-dimensional face image at the partition granularity;
and the Z coordinate axis is parallel to the optical axis of the shooting module for collecting the three-dimensional face image.
Fig. 12 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment of the present disclosure. Referring to fig. 12, on the basis of the embodiment shown in fig. 7, the matching processing module 704 includes:
a vector determining unit 1201, configured to determine, based on the depth value corresponding to each segmentation unit in each target histogram, a depth value vector of each target histogram and a depth value vector of a pre-stored histogram having the same segmentation granularity as that of each target histogram;
a distance value calculating unit 1202, configured to calculate a distance value of each target histogram and a corresponding pre-stored histogram based on the depth value vector;
an identification value calculation unit 1203, configured to calculate a distance identification value of the three-dimensional face image based on the distance value and the weight coefficient of each target histogram; the weight coefficient is positively correlated with a segmentation granularity of the target histogram;
a face recognition unit 1204, configured to determine that the face recognition result is a correct face when the distance recognition value is less than or equal to a recognition value threshold; and the face recognition module is further used for determining that the face recognition result is an error face when the distance recognition value is greater than the recognition value threshold.
Fig. 13 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment of the present disclosure. Referring to fig. 13, on the basis of the embodiment shown in fig. 7, the face recognition apparatus 700 further includes:
and the unlocking control module 1301 is used for controlling the mobile device to unlock according to the face recognition result.
It should be noted that the face recognition apparatus provided in the embodiments of the present disclosure has been described in detail in the embodiments of the face recognition method; for relevant points, reference may be made to the corresponding description of the method embodiments. In addition, as the use scenario changes, the face recognition method can be adjusted accordingly, and the face recognition apparatus can be rearranged with different functional components. This is not described in detail herein.
FIG. 14 is a block diagram illustrating a mobile device in accordance with an example embodiment. For example, the mobile device 1400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 14, mobile device 1400 may include one or more of the following components: a processing component 1402, a memory 1404, a power component 1406, a multimedia component 1408, an audio component 1410, an input/output (I/O) interface 1412, a sensor component 1414, a communication component 1416, and a camera module 1418. The shooting module 1418 collects a three-dimensional face image. The memory 1404 is used to store instructions executable by the processing component 1402. Processing component 1402 reads instructions from memory 1404 to implement:
acquiring a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularity to obtain a plurality of target histograms;
and matching the target histograms and the pre-stored histograms to obtain a face recognition result.
The processing component 1402 generally controls the overall operation of the device 1400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 1402 may include one or more processors 1420 to execute instructions. Further, processing component 1402 can include one or more modules that facilitate interaction between processing component 1402 and other components. For example, the processing component 1402 can include a multimedia module to facilitate interaction between the multimedia component 1408 and the processing component 1402.
The memory 1404 is configured to store various types of data to support operations at the apparatus 1400. Examples of such data include instructions for any application or method operating on device 1400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1404 may be implemented by any type of volatile or non-volatile storage device or combination of devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1406 provides power to the various components of the device 1400. The power components 1406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1400.
The multimedia component 1408 includes a screen that provides an output interface between the device 1400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1408 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1400 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1410 is configured to output and/or input audio signals. For example, the audio component 1410 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1400 is in operating modes, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1404 or transmitted via the communication component 1416. In some embodiments, audio component 1410 further includes a speaker for outputting audio signals.
I/O interface 1412 provides an interface between processing component 1402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1414 includes one or more sensors for providing various aspects of state assessment for the apparatus 1400. For example, the sensor component 1414 may detect an open/closed state of the apparatus 1400, a relative positioning of components, such as a display and keypad of the apparatus 1400, a change in position of the apparatus 1400 or a component of the apparatus 1400, the presence or absence of user contact with the apparatus 1400, an orientation or acceleration/deceleration of the apparatus 1400, and a change in temperature of the apparatus 1400. The sensor assembly 1414 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 1414 may also include a photosensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1416 is configured to facilitate wired or wireless communication between the apparatus 1400 and other devices. The device 1400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the camera module 1418 may be a 3D structured light camera or a 3D camera module.
In an exemplary embodiment, the apparatus 1400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided that includes instructions, such as the memory 1404 that includes instructions, that are executable by the processor 1420 of the apparatus 1400. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A face recognition method, comprising:
acquiring a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularity to obtain a plurality of target histograms; each segmentation unit in each target histogram corresponds to one depth value;
matching processing is carried out on the basis of the target histograms and the pre-stored histogram, and a face recognition result is obtained;
matching processing is carried out based on the target histograms and the pre-stored histogram, and a face recognition result is obtained, wherein the matching processing comprises the following steps:
determining a depth value vector of each target histogram and a depth value vector of a pre-stored histogram having the same segmentation granularity as each target histogram based on the depth value corresponding to each segmentation unit in each target histogram;
calculating a distance value of each target histogram and a corresponding pre-stored histogram based on the depth value vector;
calculating a distance identification value of the three-dimensional face image based on the distance value and the weight coefficient of each target histogram; the weight coefficient is positively correlated with a segmentation granularity of the target histogram;
if the distance recognition value is smaller than or equal to the recognition value threshold, determining the face recognition result as a correct face; and if the distance recognition value is greater than the recognition value threshold, determining that the face recognition result is an error face.
2. The face recognition method of claim 1, wherein the preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image comprises:
detecting the human face characteristic points of the three-dimensional human face image to obtain a first number of designated characteristic points in the three-dimensional human face image;
and normalizing the three-dimensional face image based on the first number of designated feature points to obtain a normalized three-dimensional face image.
3. The face recognition method of claim 2, wherein normalizing the three-dimensional face image based on the first number of designated feature points to obtain a normalized three-dimensional face image comprises:
acquiring the position relation among the first number of designated feature points and the position relation of the feature points in the face template; the feature points in the face template correspond to the first number of designated feature points one by one; the position relation comprises the distance and the deflection angle between any two specified characteristic points;
and adjusting the size and the deflection angle of the three-dimensional face image to enable the distance value between the specified feature point in the three-dimensional face image and the corresponding feature point in the face template to be smaller than or equal to a set threshold value, and obtaining the three-dimensional face image after registration and alignment.
4. The face recognition method of claim 1, wherein performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain a plurality of target histograms comprises:
sequentially acquiring a segmentation granularity from a plurality of preset segmentation granularities, or acquiring a plurality of segmentation granularities in parallel;
and for each acquired segmentation granularity, acquiring a target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image.
5. The method of claim 4, wherein obtaining the target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image comprises:
acquiring a square area containing a face from the three-dimensional face image;
dividing the square area based on the division granularity to obtain a plurality of division units;
calculating a depth value corresponding to each segmentation unit according to a coordinate value of each pixel point in each segmentation unit on a Z coordinate axis to obtain a target histogram corresponding to the three-dimensional face image under the segmentation granularity;
and the Z coordinate axis is parallel to the optical axis of the shooting module for collecting the three-dimensional face image.
6. The face recognition method of claim 1, wherein after obtaining the face recognition result, the method further comprises:
and controlling the mobile equipment to unlock according to the face recognition result.
7. An apparatus for face recognition, the apparatus comprising:
the three-dimensional image acquisition module is used for acquiring a three-dimensional face image;
the preprocessing module is used for preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
the histogram processing module is used for carrying out histogram processing on the preprocessed three-dimensional face image according to different segmentation granularity to obtain a plurality of target histograms; each segmentation unit in each target histogram corresponds to one depth value;
the matching processing module is used for performing matching processing based on the target histograms and the pre-stored histogram to obtain a face recognition result;
wherein the matching processing module comprises:
the vector determining unit is used for determining a depth value vector of each target histogram and a depth value vector of a pre-stored histogram with the same segmentation granularity as that of each target histogram based on the depth value corresponding to each segmentation unit in each target histogram;
a distance value calculation unit for calculating a distance value of each target histogram and a corresponding pre-stored histogram based on the depth value vector;
the identification value calculating unit is used for calculating a distance identification value of the three-dimensional face image based on the distance value and the weight coefficient of each target histogram; the weight coefficient is positively correlated with a segmentation granularity of the target histogram;
the face recognition unit is used for determining the face recognition result as a correct face when the distance recognition value is smaller than or equal to a recognition value threshold; and the face recognition module is further used for determining that the face recognition result is an error face when the distance recognition value is greater than the recognition value threshold.
8. The face recognition device of claim 7, wherein the preprocessing module comprises:
the feature point detection unit is used for detecting the human face feature points of the three-dimensional human face image and acquiring a first number of specified feature points in the three-dimensional human face image;
and the normalization processing unit is used for performing normalization processing on the three-dimensional face image based on the first number of designated feature points to obtain a three-dimensional face image after normalization processing.
9. The face recognition apparatus according to claim 8, wherein the normalization processing unit comprises:
a position relation obtaining subunit, configured to obtain a position relation between the first number of specified feature points and a position relation of feature points in the face template; the feature points in the face template correspond to the first number of designated feature points one by one; the position relation comprises the distance and the deflection angle between any two specified characteristic points;
and the registration alignment subunit is used for adjusting the size and the deflection angle of the three-dimensional face image so as to enable the distance value between the specified feature point in the three-dimensional face image and the corresponding feature point in the face template to be smaller than or equal to a set threshold value, and obtaining the three-dimensional face image after registration alignment.
10. The face recognition apparatus of claim 7, wherein the histogram processing module comprises:
the device comprises a segmentation granularity acquisition unit, a segmentation granularity acquisition unit and a segmentation granularity acquisition unit, wherein the segmentation granularity acquisition unit is used for sequentially acquiring one segmentation granularity from a plurality of preset segmentation granularities or acquiring a plurality of segmentation granularities in parallel;
and the target histogram acquisition unit is used for acquiring a target histogram corresponding to each acquired segmentation granularity based on the preprocessed three-dimensional face image.
11. The face recognition apparatus according to claim 10, wherein the target histogram obtaining unit includes:
a square region acquiring subunit, configured to acquire a square region including a face from the three-dimensional face image;
a square region segmentation subunit, configured to segment the square region based on the segmentation granularity to obtain a plurality of segmentation units;
and a depth value calculation subunit, configured to calculate the depth value corresponding to each segmentation unit from the coordinate values, on the Z coordinate axis, of the pixel points in that segmentation unit, so as to obtain the target histogram corresponding to the three-dimensional face image at the segmentation granularity;
wherein the Z coordinate axis is parallel to the optical axis of the shooting module that collects the three-dimensional face image.
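For illustration only (not part of the claims): the per-granularity target "histogram" of claims 10 and 11 can be sketched as follows. The claims do not state how a segmentation unit's depth value is derived from its pixels' Z coordinates, so taking the mean is an assumption, as are the function and variable names:

```python
import numpy as np

def depth_histogram(depth_map, granularity):
    """Split a square face region into granularity x granularity segmentation
    units and assign each unit one depth value (here: its mean Z value)."""
    n = depth_map.shape[0]
    assert depth_map.shape == (n, n), "expects a square region"
    cell = n // granularity
    hist = np.empty((granularity, granularity))
    for i in range(granularity):
        for j in range(granularity):
            unit = depth_map[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist[i, j] = unit.mean()  # one depth value per segmentation unit
    return hist
```

Running this at several preset granularities (e.g. 4, 8, 16) would yield the plurality of target histograms that claim 10 feeds into the matching step.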
12. The face recognition apparatus of claim 7, wherein the apparatus further comprises:
and an unlocking control module, configured to control the mobile device to unlock according to the face recognition result.
13. A mobile device, comprising: a shooting module for collecting a three-dimensional face image, a processor, and a memory for storing instructions executable by the processor; wherein the processor is configured to perform:
acquiring a three-dimensional face image acquired by the shooting module;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain a plurality of target histograms, wherein each segmentation unit in each target histogram corresponds to one depth value;
performing matching processing based on the target histograms and pre-stored histograms to obtain a face recognition result;
wherein performing the matching processing based on the target histograms and the pre-stored histograms to obtain the face recognition result comprises:
determining, based on the depth value corresponding to each segmentation unit in each target histogram, a depth value vector of that target histogram and a depth value vector of the pre-stored histogram having the same segmentation granularity;
calculating a distance value between each target histogram and the corresponding pre-stored histogram based on the depth value vectors;
calculating a distance identification value of the three-dimensional face image based on the distance values and the weight coefficient of each target histogram, wherein the weight coefficient is positively correlated with the segmentation granularity of the target histogram;
and if the distance identification value is smaller than or equal to an identification value threshold, determining that the face recognition result is a correct face; otherwise, determining that the face recognition result is an incorrect face.
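For illustration only (not part of the claims): the four matching steps above can be sketched as follows. The claims fix neither the distance metric nor the weight values, so the Euclidean distance and all identifiers below are assumptions; the only constraint taken from the claim is that the weight grows with segmentation granularity:

```python
import numpy as np

def distance_identification_value(targets, stored, weights):
    """targets / stored: dicts mapping granularity -> 2-D depth histogram;
    weights: granularity -> weight coefficient, positively correlated with
    granularity per the claim. Returns the weighted sum of distances."""
    total = 0.0
    for g, hist in targets.items():
        v_target = hist.ravel()       # depth value vector of the target histogram
        v_stored = stored[g].ravel()  # vector of the pre-stored histogram, same granularity
        total += weights[g] * np.linalg.norm(v_target - v_stored)
    return total

def recognize(targets, stored, weights, threshold):
    """Correct face iff the distance identification value is <= the threshold."""
    return distance_identification_value(targets, stored, weights) <= threshold
```

A matching pair of histogram sets yields a distance identification value of zero and therefore passes any non-negative threshold.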
14. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements:
acquiring a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain a plurality of target histograms, wherein each segmentation unit in each target histogram corresponds to one depth value;
performing matching processing based on the target histograms and pre-stored histograms to obtain a face recognition result;
wherein performing the matching processing based on the target histograms and the pre-stored histograms to obtain the face recognition result comprises:
determining, based on the depth value corresponding to each segmentation unit in each target histogram, a depth value vector of that target histogram and a depth value vector of the pre-stored histogram having the same segmentation granularity;
calculating a distance value between each target histogram and the corresponding pre-stored histogram based on the depth value vectors;
calculating a distance identification value of the three-dimensional face image based on the distance values and the weight coefficient of each target histogram, wherein the weight coefficient is positively correlated with the segmentation granularity of the target histogram;
and if the distance identification value is smaller than or equal to an identification value threshold, determining that the face recognition result is a correct face; otherwise, determining that the face recognition result is an incorrect face.
CN201711329593.6A 2017-12-13 2017-12-13 Face recognition method and device, mobile equipment and computer readable storage medium Active CN107958223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711329593.6A CN107958223B (en) 2017-12-13 2017-12-13 Face recognition method and device, mobile equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711329593.6A CN107958223B (en) 2017-12-13 2017-12-13 Face recognition method and device, mobile equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN107958223A CN107958223A (en) 2018-04-24
CN107958223B true CN107958223B (en) 2020-09-18

Family

ID=61958825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711329593.6A Active CN107958223B (en) 2017-12-13 2017-12-13 Face recognition method and device, mobile equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107958223B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108632283A (en) * 2018-05-10 2018-10-09 Oppo广东移动通信有限公司 A kind of data processing method and device, computer readable storage medium
CN109994202A (en) * 2019-03-22 2019-07-09 华南理工大学 A method of the face based on deep learning generates prescriptions of traditional Chinese medicine
CN112581357A (en) * 2020-12-16 2021-03-30 珠海格力电器股份有限公司 Face data processing method and device, electronic equipment and storage medium
CN113689402B (en) * 2021-08-24 2022-04-12 北京长木谷医疗科技有限公司 Deep learning-based femoral medullary cavity form identification method, device and storage medium
CN114241590B (en) * 2022-02-28 2022-07-22 深圳前海清正科技有限公司 Self-learning face recognition terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166847A (en) * 2014-08-27 2014-11-26 华侨大学 2DLDA (two-dimensional linear discriminant analysis) face recognition method based on ULBP (uniform local binary pattern) feature sub-spaces
US20160132718A1 (en) * 2014-11-06 2016-05-12 Intel Corporation Face recognition using gradient based feature analysis
CN105760865A (en) * 2016-04-12 2016-07-13 中国民航大学 Facial image recognizing method capable of increasing comparison correct rate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"3D Face Recognition Based on Geometric Features and Depth Data" (基于几何特征与深度数据的三维人脸识别); Chen Lisheng et al.; Computer Knowledge and Technology (电脑知识与技术); 2013-03-31; Vol. 9, No. 8; pp. 1864-1868 *

Also Published As

Publication number Publication date
CN107958223A (en) 2018-04-24

Similar Documents

Publication Publication Date Title
CN107958223B (en) Face recognition method and device, mobile equipment and computer readable storage medium
CN105488527B (en) Image classification method and device
CN107945133B (en) Image processing method and device
CN108470322B (en) Method and device for processing face image and readable storage medium
CN107944367B (en) Face key point detection method and device
CN106845398B (en) Face key point positioning method and device
CN110288716B (en) Image processing method, device, electronic equipment and storage medium
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN106557759B (en) Signpost information acquisition method and device
CN107463903B (en) Face key point positioning method and device
CN104077585B (en) Method for correcting image, device and terminal
CN110909654A (en) Training image generation method and device, electronic equipment and storage medium
CN106503682B (en) Method and device for positioning key points in video data
CN111105454A (en) Method, device and medium for acquiring positioning information
CN112541400B (en) Behavior recognition method and device based on sight estimation, electronic equipment and storage medium
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN110930351A (en) Light spot detection method and device and electronic equipment
CN112202962A (en) Screen brightness adjusting method and device and storage medium
CN107239758B (en) Method and device for positioning key points of human face
CN108154090B (en) Face recognition method and device
CN107729886B (en) Method and device for processing face image
CN109934168B (en) Face image mapping method and device
CN108846321B (en) Method and device for identifying human face prosthesis and electronic equipment
CN113642551A (en) Nail key point detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant