WO2021000832A1 - Method and apparatus for matching faces, computer device, and storage medium - Google Patents

Method and apparatus for matching faces, computer device, and storage medium Download PDF

Info

Publication number
WO2021000832A1
WO2021000832A1 (application PCT/CN2020/098807, CN2020098807W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
matrix
value
tested
human
Prior art date
Application number
PCT/CN2020/098807
Other languages
English (en)
French (fr)
Inventor
张磊
王俊强
李方君
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021000832A1 publication Critical patent/WO2021000832A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • This application relates to the field of computer technology in artificial intelligence, and in particular to a method, device, computer equipment, and storage medium for matching human faces.
  • Existing methods for matching faces mostly use traversal queries: the feature values of the face database must first be cached, and a dynamic comparison search is then performed.
  • Because the feature values are compared one at a time, this takes a long time and matching is slow.
  • the main purpose of this application is to provide a method, device, computer equipment and storage medium for matching human faces, aiming to quickly match human faces and shorten the time for human face matching.
  • this application provides a method for matching human faces, including the following steps:
  • each first face feature value in the first matrix is copied so that its count equals the number of first faces in the second matrix, and all the copied first face feature values are sorted in the order of the first face feature values of the first matrix to form a third matrix;
  • the face corresponding to the minimum value among the total matching values is obtained and searched for in the face database.
  • This application also provides a device for matching human faces, including:
  • An extraction module for extracting multiple designated facial features of the face to be tested
  • the first obtaining module is configured to obtain the corresponding accurate feature value for each of the facial features to obtain multiple first face feature values, where the accurate feature values are calculated from the actual parts of the facial features using the projection method and the template matching method;
  • a sorting module configured to sort all the first face feature values in a specified order to generate a first matrix, where the first matrix is a horizontal row or a vertical row;
  • the second acquisition module is configured to acquire a plurality of designated first faces stored in the face database, and to acquire a second matrix composed of the second face feature values corresponding to each of the first faces, where the designated first faces are obtained by recognizing the gender or age level of the face to be tested and retrieving faces of that gender or age level from the face database, and the arrangement order of all the second face feature values of each first face in the second matrix is the same as the arrangement order of the first face feature values of the first matrix;
  • the first generating module is configured to copy each first face feature value in the first matrix so that its count equals the number of first faces in the second matrix, and to sort all the copied first face feature values in the order of the first face feature values of the first matrix to form a third matrix;
  • a third acquiring module configured to subtract the third matrix from the second matrix to obtain a fourth matrix
  • a fourth obtaining module configured to perform absolute value operations on each value in the fourth matrix to obtain a fifth matrix
  • a fifth acquisition module configured to add the absolute values of all the numerical values corresponding to each of the first faces in the fifth matrix to obtain the total matching value of each first face
  • the sixth obtaining module is configured to compare all the total matching values and obtain the minimum value among the total matching values
  • a judging module for judging whether the minimum value in the total matching value is less than a face threshold
  • the search module is configured to, if yes, obtain the face corresponding to the minimum value in the total matching value, and search for the face corresponding to the minimum value in the total matching value from the face database.
  • the present application also provides a computer device, including a memory and a processor; a computer program is stored in the memory, and when the processor executes the computer program, a method for matching human faces is implemented:
  • each first face feature value in the first matrix is copied so that its count equals the number of first faces in the second matrix, and all the copied first face feature values are sorted in the order of the first face feature values of the first matrix to form a third matrix;
  • the face corresponding to the minimum value among the total matching values is obtained and searched for in the face database.
  • the present application also provides a computer storage medium on which a computer program is stored.
  • a method for matching a human face is implemented, wherein the method for matching a human face includes the following steps :
  • each first face feature value in the first matrix is copied so that its count equals the number of first faces in the second matrix, and all the copied first face feature values are sorted in the order of the first face feature values of the first matrix to form a third matrix;
  • the face corresponding to the minimum value among the total matching values is obtained and searched for in the face database.
  • The method, device, computer equipment, and storage medium for matching human faces provided in this application generate a first matrix from the designated feature values of the face to be tested, and a second matrix from the face feature values of the designated first faces in the face database. The first matrix is expanded into the third matrix, the second matrix is subtracted from it, and the absolute values are summed to obtain the total matching value of each face. The minimum total matching value is compared with the face threshold; if it is less than the face threshold, the face in the face database matching the face to be tested is obtained. This matches faces quickly and shortens face-matching time.
  • FIG. 1 is a schematic diagram of the steps of a method for matching faces in an embodiment of the present application
  • Fig. 2 is a block diagram of a device for matching faces in an embodiment of the present application
  • FIG. 3 is a schematic block diagram of the structure of a computer device according to an embodiment of the present application.
  • a method for matching human faces including the following steps:
  • Step S1 extracting multiple designated facial features of the face to be tested
  • Step S2 Obtain the corresponding accurate feature value for each of the facial features to obtain multiple first face feature values, where the accurate feature values are calculated from the actual parts of the facial features using the projection method and the template matching method;
  • Step S3 Sort all the first face feature values in a specified order to generate a first matrix, where the first matrix is a horizontal row or a vertical row;
  • Step S4 Obtain a plurality of designated first faces stored in the face database, and acquire a second matrix composed of the second face feature values corresponding to each of the first faces, where the designated first faces are obtained by recognizing the gender or age level of the face to be tested and retrieving faces of that gender or age level from the face database, and the arrangement order of all the second face feature values of each first face in the second matrix is the same as the arrangement order of the first face feature values of the first matrix;
  • Step S5 According to the number of first faces in the second matrix, copy each first face feature value in the first matrix so that its count equals the number of first faces in the second matrix, and sort all the copied first face feature values in the order of the first face feature values of the first matrix to form a third matrix;
  • Step S6 Subtract the second matrix from the third matrix to obtain a fourth matrix;
  • Step S7 Perform an absolute value operation on each value in the fourth matrix to obtain a fifth matrix
  • Step S8 adding the absolute values of all the numerical values corresponding to each of the first human faces in the fifth matrix to obtain the total matching value of each first human face;
  • Step S9 comparing all the total matching values to obtain the minimum value among the total matching values
  • Step S10 judging whether the minimum value in the total matching value is less than a face threshold
  • Step S11 If yes, obtain the face corresponding to the minimum value in the total matching value, and search for the face corresponding to the minimum value in the total matching value from the face database.
  • The face to be tested is obtained in advance, and multiple designated facial features of the face to be tested are extracted; the corresponding accurate feature value is then obtained for each extracted facial feature, giving the corresponding face feature value.
  • the face feature value is used to distinguish the difference between different faces.
  • The extraction of facial feature elements is based on the gray-scale characteristics of the face image: an algorithm combining projection maps and feature descriptions initially determines the position of each part of the face, and then the projection method and template matching method are used to accurately locate the pupils and other facial features.
  • For example, take the eyes of the face to be tested: the eyes are labeled 1, and if the eyes of the face to be tested are large and round, the corresponding feature value is 0.123. Each part of the face is measured and labeled in this way, and the feature value corresponding to each label is then determined from the characteristics of the labeled part.
  • The feature value can be set according to the different characteristics of the eyes, nose, eyebrows, texture, skin color, etc. Taking the nose as an example: a high nose bridge corresponds to the feature value 0.9124, a low nose bridge to 0.9125, a wide nose wing to 0.9126, and a narrow nose wing to 0.9127 (high, low, wide, and narrow are used here only for ease of understanding; the specific feature value must be determined from the exact features of the actual part), so the feature value differs with the morphological features of the part. A face to be tested has M feature values, and the specific value of M can be chosen according to the actual process.
  • The corresponding parts are, in order, eyes, nose, mouth, eyebrows, skin color, and so on. Suppose the feature values corresponding to the labels are A = [0.123, 0.269, 0.725, 0.834, 0.537, ..., 0.5569], 512 eigenvalues in total; these 512 eigenvalues form a matrix with 1 row and 512 columns, that is, the first matrix.
  • A specific algorithm can be used to generate the first matrix: in C++ the first matrix can be generated through OpenCV, and in Python through the numpy library; other algorithms can also be used and are not repeated here. The above 512 eigenvalues can also form a first matrix with 512 rows and 1 column.
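As a minimal sketch (assuming Python with numpy, and placeholder feature values since the full 512-value vector is not given in the text), the first matrix can be built like this:

```python
import numpy as np

# Hypothetical first face feature values; the text uses 512, 6 shown here.
feature_values = [0.123, 0.269, 0.725, 0.834, 0.537, 0.5569]

# First matrix: one horizontal row, one column per feature value.
first_matrix = np.array(feature_values).reshape(1, -1)

# The same values can also form a vertical first matrix (rows and columns swapped).
vertical_first_matrix = first_matrix.T
print(first_matrix.shape, vertical_first_matrix.shape)  # (1, 6) (6, 1)
```

The same construction applies unchanged with 512 values.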
  • The face database includes multiple faces, each with different face feature values, so it contains multiple face feature values; each face has M feature values. The feature values of N faces can generate an N-row second matrix. The method of generating the second matrix from the N faces' feature values can refer to the method of forming the first matrix above: in C++ the second matrix can be generated through OpenCV, and in Python through the numpy library; other algorithms can also be used and are not repeated here.
  • The first matrix is a matrix with 1 row and 512 columns, and the second matrix is a matrix with N rows and 512 columns. From the first matrix, generate a matrix of N rows and 512 columns in which each row holds the face feature values of the face to be tested arranged in the specified order; that is, copy the 1-row, 512-column first matrix into an N-row, 512-column matrix to form the third matrix.
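The copying step can be sketched with numpy's tile (small dimensions used for illustration; the text's feature count is 512):

```python
import numpy as np

M = 4   # feature values per face (512 in the text)
N = 3   # number of first faces in the second matrix

first_matrix = np.array([[0.1, 0.2, 0.3, 0.4]])   # 1 x M, face to be tested
second_matrix = np.ones((N, M))                    # N x M, database faces

# Third matrix: the single row of the first matrix copied N times,
# preserving the original feature order in every row.
third_matrix = np.tile(first_matrix, (N, 1))
print(third_matrix.shape)  # (3, 4)
```

Each of the N rows of the third matrix is an identical copy of the first matrix, so it is shape-compatible with the second matrix for element-wise subtraction.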
  • The second matrix is subtracted from the third matrix to obtain a new fourth matrix, whose entries are the face feature difference matching values. Each value in the fourth matrix is the result of computing a feature value of the face to be tested against the corresponding feature value of a face in the face database, and its magnitude represents the degree of matching between the face to be tested and the face corresponding to that row of the second matrix.
  • The absolute value of each face feature difference matching value in the fourth matrix is taken to obtain a fifth matrix in which every value is greater than or equal to zero. The absolute values corresponding to each face in the fifth matrix are then added to obtain the total matching value of each face; the total matching value is the total difference between that face in the face database and the face to be tested.
  • The total matching values comprise N numerical values, each obtained by comparing the feature values of the face to be tested with one of the N faces' feature values in the face database. The minimum of these N values is selected, and it is determined whether that minimum is less than the preset face threshold.
  • If it is, the face in the face database corresponding to the minimum value is the face that best matches the face to be tested.
  • If not, the threshold may have been set too low, or the image of the face to be tested may be unclear, resulting in inaccurate feature values of the face to be tested; a larger threshold can be entered and the matching operation repeated, or the image of the face to be tested can be processed to obtain a clearer image. If only a few values are below the threshold, the screening range is narrowed and the candidates can be checked directly by human eyes; if the minimum value is greater than the threshold, the match fails.
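Steps S5 through S11 can be sketched end to end in numpy; the matrices and the face threshold below are illustrative assumptions, not values from the application:

```python
import numpy as np

# Face to be tested: 1 x M first matrix (M = 4 for illustration).
first_matrix = np.array([[0.1, 0.2, 0.3, 0.4]])

# Face database: N x M second matrix (N = 3 first faces).
second_matrix = np.array([
    [0.9, 0.8, 0.7, 0.6],
    [0.1, 0.2, 0.3, 0.5],   # closest to the face to be tested
    [0.5, 0.5, 0.5, 0.5],
])

third_matrix = np.tile(first_matrix, (len(second_matrix), 1))  # step S5
fourth_matrix = third_matrix - second_matrix                   # step S6
fifth_matrix = np.abs(fourth_matrix)                           # step S7
total_matching = fifth_matrix.sum(axis=1)                      # step S8
best = int(np.argmin(total_matching))                          # steps S9-S10

face_threshold = 0.5          # illustrative, not a value from the source
matched = bool(total_matching[best] < face_threshold)          # step S11
print(best, matched)  # 1 True
```

Here the middle database face differs from the test face only in its last feature (total difference 0.1), so it is selected and passes the threshold.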
  • Step S41 Identify the gender of the face to be tested, where the gender includes male and female;
  • Step S42 According to the gender of the face to be tested, search for a face consistent with the gender of the face to be tested from the face database to form the preset second matrix.
  • the gender of the face to be tested is recognized, where the gender includes male and female, and according to the gender of the face to be tested, from the face database Perform preliminary screening of human faces. For example, in a specific embodiment, when it is recognized that the face to be tested is male, the male face in the face database is extracted, and then the corresponding human face is obtained according to the acquired male face The eigenvalues are used to generate the second matrix, so that the range can be reduced and the time for subsequent operations can be saved.
  • The method for recognizing the gender of a face image is to obtain the gradient features (HOG features) of a large number of face images in advance and input them into an SVM (Support Vector Machine) for training. The face images are trained by establishing a console project and configuring the OpenCV environment; the corresponding HOG features are obtained and presented in the form of a floating-point container.
  • HOG features: Histogram of Oriented Gradients features
  • SVM: Support Vector Machine
  • All faces in the face database can be pre-assembled into two large matrices by gender. After the gender of the face to be tested is recognized, the matrix of the corresponding gender can be extracted directly for calculation. When new faces are subsequently added to the face database, they can be placed in the matrix for their gender.
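One way to maintain the two per-gender matrices is sketched below; the dictionary layout, the helper name, and the 512-column width are assumptions for illustration:

```python
import numpy as np

# Hypothetical per-gender feature matrices (rows = faces, 512 columns).
gender_matrices = {
    "male":   np.zeros((0, 512)),
    "female": np.zeros((0, 512)),
}

def add_face(gender: str, features: np.ndarray) -> None:
    """Append a new face's feature row to its gender matrix."""
    gender_matrices[gender] = np.vstack(
        [gender_matrices[gender], features.reshape(1, -1)])

add_face("male", np.full(512, 0.5))
# Matching a male test face then scans only the male matrix.
second_matrix = gender_matrices["male"]
print(second_matrix.shape)  # (1, 512)
```

Pre-partitioning by gender roughly halves the rows scanned per query, which is the time saving the text describes.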
  • Step S401 Identify the age level at which the face to be tested is located, where the age level includes babies, youth, middle-aged, and old;
  • Step S402 According to the age level of the face to be tested, search for a face consistent with the age level of the face to be tested from the face database to form the preset second matrix.
  • the age level of the face to be tested is identified, wherein the preset age levels are divided into baby, young, middle-aged, and old; according to the age level of the face to be tested, go to the face database Find the faces that are consistent with the age level of the face to be tested to form a preset second matrix.
  • the face to be tested is at a middle-aged age level from the face database. Find all the faces in the middle-aged age, number them according to the specified order and form the corresponding preset second matrix, so as to narrow the scope of calculation, save calculation time, and make the matching of human faces faster.
  • Age recognition can be obtained by neural network training, such as training a multi-layer feedforward neural network (BP network).
  • Image preprocessing makes feature extraction easier. Feature extraction removes a large amount of redundant information from the image, that is, it achieves data compression, reduces the complexity of the neural network structure, and improves the training efficiency and convergence rate of the neural network. This application takes the designated standard face as the research object: after image preprocessing such as image compression, image sampling, and input-vector standardization, the input image is sent to the BP neural network for training, and the recognition result is obtained by competitive selection.
  • all the faces in the face database can be pre-made into multiple matrices based on age levels. After the age level of the face to be tested is recognized, the corresponding age level matrix can be directly extracted for calculation , When new faces are added to the subsequent face database, they can be placed in the corresponding age level matrix according to their age levels.
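A sketch of the age-level bucketing; the boundary ages are illustrative assumptions, since the application names the four levels but not their boundaries:

```python
def age_level(age: int) -> str:
    """Map an estimated age to one of the four levels named in the text.
    Boundary values here are assumptions, not from the source."""
    if age < 3:
        return "baby"
    if age < 40:
        return "young"
    if age < 60:
        return "middle-aged"
    return "old"

# Faces grouped by age level; matching only scans the relevant bucket.
age_matrices = {"baby": [], "young": [], "middle-aged": [], "old": []}
age_matrices[age_level(45)].append([0.1, 0.2, 0.3])  # goes to "middle-aged"
print(age_level(45), len(age_matrices["middle-aged"]))
```

New faces are appended to the bucket for their recognized age level, so each query scans only one of the four matrices.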
  • the step S1 of extracting multiple designated facial features of the face to be tested includes:
  • Step S11 Perform image preprocessing on the acquired image of the face to be tested
  • Step S12 Input the preprocessed image of the face to be tested into an extraction model and extract multiple facial features of the face to be tested, wherein the extraction model is obtained by training a convolutional neural network on known face images.
  • The image of the face to be tested can be collected by a camera; static images, dynamic images, different positions, different expressions, etc. can all be collected well. The capture device automatically searches for and shoots the face image; alternatively, a paper photo of the face to be tested can be obtained directly and uploaded, by scanning or similar processing, to an electronic terminal such as a computer, mobile phone, or processor.
  • Face detection is mainly used in practice for the preprocessing of face recognition, that is, to accurately calibrate the position and size of the face in the image. In the image to be recognized, the proportion of the face in the whole image varies (for example, between a head shot and a standard shot), as does the position of the face within the image, so the position of each facial feature must be located. The face image contains very rich pattern features, such as histogram features, color features, template features, and structural features; face detection picks out the useful information and uses these features to realize detection.
  • The gray-scale distribution characteristics of the face image data can be analyzed, and the projection method and template matching method used to locate the positions of the pupils, so that eye features can be extracted more accurately; that is, image standardization is used for feature recognition. The specific process of image standardization is a coordinate translation that takes the inter-pupil distance as the reference in the horizontal direction and the position of the eyes as the reference in the vertical direction.
  • a known face image is used to obtain a corresponding extraction model based on convolutional neural network training, so that image analysis can be performed on the preprocessed image of the face to be tested to obtain the face to be tested Of each face feature.
  • the method of performing image preprocessing on the acquired image of the face to be tested includes but is not limited to one or more of the following:
  • The preprocessing of the face image is based on the result of face detection; the image is processed so as to finally serve the feature value extraction process.
  • The original image acquired by the system often cannot be used directly because of various conditions and random interference; it must first undergo image preprocessing such as gray-scale correction and noise filtering. The image preprocessing process mainly includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening of the face image.
  • Salt-and-pepper noise is a common noise in digital images: black and white pixels appear randomly on the image. The median filter is a common nonlinear smoothing filter. Its basic principle is to replace the value of a point in a digital image or digital sequence with the median of the values in a neighborhood of that point; its main effect is to change pixels whose gray values differ greatly from the surrounding pixels to a value close to the surrounding values, thereby eliminating isolated noise points. The median filter is therefore very effective at filtering out salt-and-pepper noise while protecting the edges of the image, and the statistical characteristics of the image are not required in the actual calculation.
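A naive median filter along the lines described (pure numpy for self-containment; OpenCV's cv2.medianBlur provides the same operation in practice):

```python
import numpy as np

def median_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Replace each pixel with the median of its k x k neighborhood
    (edges handled by reflection padding)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat gray patch with one isolated salt-noise pixel.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter(img)
print(filtered[2, 2])  # 100: the isolated noise point is removed
```

Because the single bright pixel is a minority in its 3x3 neighborhood, the median ignores it, which is exactly the isolated-noise-removal property the text describes.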
  • the step S11 of obtaining the face corresponding to the minimum value in the total matching value, and searching for the face corresponding to the minimum value in the total matching value from the face database includes:
  • Step S111 judging whether each face feature difference matching value of the fifth matrix corresponding to the minimum value in the total matching value is smaller than the corresponding preset threshold value of the face feature;
  • Step S112 if yes, the face whose face feature value of the preset second matrix corresponding to the minimum value of the total matching value matches is taken as the most matched face.
  • Each face feature difference matching value in the row of the fifth matrix corresponding to the minimum total matching value is obtained, and it is judged whether each such value is less than the corresponding preset threshold for that facial feature. If so, the face of the preset second matrix corresponding to the minimum total matching value is taken as the most matched face; if not, the face corresponding to the minimum total matching value is excluded as the matching face.
  • For example, if the face feature difference matching value for the right eye size is 0.09 and the preset threshold set for the right eye is 0.07, then even if the total matching value is less than the face threshold, the face corresponding to that total matching value still cannot be used as the matching face, because the matching value of the right eye feature difference does not meet the matching requirement.
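The per-feature check of steps S111 and S112 can be sketched as follows, reusing the right-eye numbers from the example (the other values and thresholds are illustrative assumptions):

```python
import numpy as np

# Per-feature absolute differences from the fifth-matrix row of the
# candidate with the minimum total matching value (values illustrative;
# index 2 plays the role of "right eye size" from the example: 0.09).
candidate_diffs = np.array([0.02, 0.05, 0.09, 0.01])
feature_thresholds = np.array([0.10, 0.10, 0.07, 0.10])

# Steps S111/S112: accept only if every difference is under its threshold.
accepted = bool(np.all(candidate_diffs < feature_thresholds))
print("most matched face" if accepted else "candidate excluded")
```

Even though the summed differences are small, the single right-eye difference (0.09) exceeds its threshold (0.07), so the candidate is excluded, mirroring the example above.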
  • The method for matching faces generates a first matrix from the feature values of the face to be tested, and assembles the feature values of the faces in the face database into a multi-row second matrix. After the first matrix is expanded into a third matrix corresponding to the second matrix, the second matrix is subtracted and the absolute values are summed to obtain the total matching value of each face. The minimum total matching value is compared with a preset face threshold, and a face matching the face to be tested is then found in the face database.
  • an embodiment of the present application also provides a device for matching human faces, including:
  • the extraction module 10 is used to extract multiple designated facial features of the face to be tested;
  • the first obtaining module 20 is configured to obtain the corresponding accurate feature value for each of the facial features to obtain multiple first face feature values, where the accurate feature values are calculated from the actual parts of the facial features using the projection method and the template matching method;
  • the sorting module 30 is configured to sort all the first face feature values in a specified order to generate a first matrix, where the first matrix is a horizontal row or a vertical row;
  • the second obtaining module 40 is configured to obtain a plurality of first human faces designated and stored in the face database, and obtain a second matrix composed of second facial feature values corresponding to each of the first human faces, where ,
  • the multiple designated first faces are obtained by recognizing the gender or age level of the face to be tested, and obtaining from the face database according to the gender or age level, and the second The arrangement order of all the second face feature values of each of the first face in the matrix is the same as the arrangement order of the first face feature values of the first matrix;
  • the first generating module 50 is configured to copy each first face feature value in the first matrix so that its count equals the number of first faces in the second matrix, and to sort all the copied first face feature values in the order of the first face feature values of the first matrix to form a third matrix;
  • the third obtaining module 60 is configured to subtract the second matrix from the third matrix to obtain a fourth matrix;
  • the fourth obtaining module 70 is configured to perform an absolute value operation on each value in the fourth matrix to obtain a fifth matrix;
  • the fifth obtaining module 80 is configured to add the absolute values of all the values corresponding to each face in the fifth matrix to obtain the total matching value of each face;
  • the sixth obtaining module 90 is configured to compare all the total matching values and obtain the minimum among the total matching values;
  • the judging module 100 is used to judge whether the minimum among the total matching values is less than a face threshold;
  • the searching module 110 is configured to, if so, obtain the face corresponding to the minimum total matching value and search the face database for that face.
  • the extraction module 10 extracts multiple designated facial features of the face to be tested and assigns each extracted feature a corresponding face feature value; the face feature values are used to distinguish different faces from one another.
  • feature elements are extracted from the gray-scale characteristics of the face image: an algorithm matching the projection map against feature descriptions first roughly locates each part of the face, and the projection method and template matching method then accurately locate the pupils and other facial features.
  • taking the eyes of the face to be tested as an example, the eyes are labeled 1, and if the eyes of the face to be tested are large and round, the corresponding feature value is 0.123; each part of the face is labeled in this way, and the first obtaining module 20 assigns the feature value corresponding to each label according to the features of the labeled part.
  • feature values can be set according to differences in eyes, nose, eyebrows, texture, skin color, and other features. Taking the nose as an example: a high nose bridge corresponds to a feature value of 0.9124, a low nose bridge to 0.9125, wide nostrils to 0.9126, and narrow nostrils to 0.9127 (high, low, wide, and narrow are used here only for ease of understanding; the specific feature value is determined by the exact features of the actual part). Different morphological features of a part thus correspond to different feature values.
  • the number of feature values of a face to be tested is M, and the specific value of M depends on how many parts of the face are selected in practice. For example, a face to be tested may have 512 labels, i.e. M = 512, such as a[1, 2, 3, 4, 5, ..., 512], where the labels correspond in turn to the eyes, nose, mouth, eyebrows, skin color, and so on. Assuming the feature values corresponding to the labels are A[0.123, 0.269, 0.725, 0.834, 0.537, ..., 0.5569], there are 512 feature values in total; these 512 feature values form a matrix with 1 row and 512 columns, i.e. the first matrix.
  • an algorithm can be used to generate the first matrix: for example, c++ can generate the first matrix through opencv, and python can generate it through the numpy library; other algorithms may also be used, which are not repeated here.
  • in other embodiments, the above 512 feature values can instead form a first matrix with 1 column and 512 rows.
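The first-matrix construction described above can be sketched in Python with numpy, the library the text itself names; the 512 feature values below are placeholders, not values produced by a real feature extractor:

```python
import numpy as np

# Hypothetical feature values for the face under test (M = 512 in the example).
M = 512
first_face_values = np.linspace(0.1, 0.9, M)   # stand-in for A[0.123, 0.269, ...]

# First matrix: 1 row x 512 columns, features in the specified order.
first_matrix = first_face_values.reshape(1, M)

# As noted above, the same values could instead form a 512-row, 1-column matrix.
first_matrix_col = first_face_values.reshape(M, 1)
```

Either orientation carries the same information; the row form is convenient for the row-wise replication described next.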
  • the face database includes multiple faces, and each face has different face feature values, so the database includes multiple sets of face feature values: when there are N faces and each face has M feature values, there are N*M feature values in total.
  • taking a face to be tested with 512 feature values as an example, N faces yield N sets of face feature values, which generate a matrix with N rows and 512 columns, i.e. the second matrix; multiple sets of face feature values thus generate a multi-order matrix.
  • the method of generating the second matrix from the N sets of face feature values follows the method of forming the first matrix described above: c++ can generate the second matrix through opencv, python can generate it through the numpy library, and other algorithms may also be used, which are not repeated here.
  • in one embodiment, with a face to be tested having 512 labels, the first matrix is a matrix with 1 row and 512 columns and the second matrix is a matrix with N rows and 512 columns; the first matrix is expanded into a matrix of N rows and 512 columns in which each row contains the face feature values of the face to be tested arranged in the specified order, that is, the 1-row, 512-column first matrix is copied into an N-row, 512-column matrix to form the third matrix.
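Replicating the 1×512 first matrix into the N×512 third matrix is a single tiling step; a minimal sketch with numpy, assuming N database faces and placeholder feature values:

```python
import numpy as np

M, N = 512, 1000                       # feature count and number of database faces
first_matrix = np.random.rand(1, M)    # 1 x 512 first matrix (placeholder values)

# Third matrix: the single row copied N times, one copy per database face,
# keeping the feature order of the first matrix in every row.
third_matrix = np.tile(first_matrix, (N, 1))
```

With numpy the explicit copy could even be skipped, since broadcasting would subtract the 1×512 row from an N×512 matrix directly; the tiled form simply mirrors the third matrix as the patent describes it.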
  • the second obtaining module 40 obtains the multiple designated first faces stored in the face database and the second matrix composed of the second face feature values corresponding to each first face, and the third obtaining module 60 subtracts the second matrix from the third matrix to obtain a new fourth matrix, where the fourth matrix holds the face feature difference matching values between the faces: each value in the fourth matrix is calculated from a feature value of the face to be tested and the corresponding feature value of a face in the face database, and its magnitude represents the degree of matching between the face to be tested and the face corresponding to one of the rows of the second matrix.
  • the fourth obtaining module 70 performs an absolute value operation on each face feature difference matching value in the fourth matrix to obtain a fifth matrix in which every value is greater than or equal to zero; the fifth obtaining module 80 adds the absolute values of the face feature difference matching values corresponding to each element of each face in the fifth matrix to obtain the total matching value of each face, which is the total difference between each face in the face database and the face to be tested; the sixth obtaining module 90 then compares all the total matching values to obtain the minimum among them.
  • the total matching values comprise N values, obtained by computing the feature values of the face to be tested against each of the N sets of face feature values in the face database, and the minimum of these N values is selected.
  • the judgment module 100 judges whether the minimum value is less than the preset face threshold; if it is less than the threshold, the face in the face database corresponding to the minimum value best matches the face to be tested, and the searching module 110 searches the face database for the face corresponding to the minimum total matching value.
  • in other embodiments, if several values are all below the threshold, the threshold may have been set unsuitably, or the image of the face to be tested may be unclear, making its feature values inaccurate; a different threshold can be entered and the matching computation repeated, or the image of the face to be tested can be processed to obtain a clearer one. If only a few values fall below the threshold, the filtering has narrowed the candidates enough for direct recognition by the human eye; if the minimum is greater than the threshold, the matching fails.
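Taken together, the subtraction, absolute-value, row-wise summation, minimum selection, and threshold test performed by modules 60 through 110 reduce to a few array operations. A sketch on synthetic data (the threshold value and the choice of face 42 as the true match are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 512, 1000
second_matrix = rng.random((N, M))          # feature values of N database faces
first_matrix = second_matrix[42:43].copy()  # pretend database face 42 is the match
third_matrix = np.tile(first_matrix, (N, 1))

fourth_matrix = third_matrix - second_matrix   # per-feature differences
fifth_matrix = np.abs(fourth_matrix)           # absolute differences (all >= 0)
total_match = fifth_matrix.sum(axis=1)         # total matching value per face

best = int(np.argmin(total_match))             # face with the minimum total value
FACE_THRESHOLD = 1.0                           # illustrative preset face threshold
matched = bool(total_match[best] < FACE_THRESHOLD)
```

Because row 42 of the second matrix equals the first matrix exactly, its total matching value is zero and it is returned as the best match.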
  • the device for matching human faces further includes:
  • the first recognition module is configured to recognize the gender of the face to be tested, where the gender includes male and female;
  • the third searching module is configured to search the face database, according to the gender of the face to be tested, for faces whose gender is consistent with that of the face to be tested, to form the preset second matrix.
  • after the face to be tested is obtained, its gender is recognized, where the gender includes male and female, and the faces in the database are preliminarily screened according to that gender. For example, in a specific embodiment, when the face to be tested is recognized as male, the male faces in the face database are extracted, the corresponding face feature values are obtained from them, and the second matrix is generated; this narrows the range and saves time in subsequent calculations.
  • the method for recognizing the gender of a face image is as follows: gradient features (HOG features) of a large number of face images are obtained in advance, and the extracted gradient features are input into an SVM (Support Vector Machine) for training. By establishing a console project and configuring the OpenCv environment, the face images are trained to obtain the corresponding HOG features, presented in the form of a container of floating-point numbers; when the image of the face to be tested is obtained, the floating-point value corresponding to each element can then be derived and the gender of the face determined.
  • in other embodiments, all the faces in the face database can be pre-grouped by gender into two large matrices; once the gender of the face to be tested is recognized, the matrix of the corresponding gender can be extracted directly for calculation, and new faces subsequently added to the face database can be placed into the matrix of their corresponding gender.
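Pre-grouping the database into per-gender matrices, as suggested above, can be sketched as a simple partition; the labels and face counts here are illustrative placeholders:

```python
import numpy as np

M = 512
faces = np.random.rand(6, M)                   # six database faces (placeholders)
genders = np.array(["male", "female", "male", "male", "female", "male"])

# One pre-built matrix per gender; matching then uses only the relevant matrix.
gender_matrices = {g: faces[genders == g] for g in ("male", "female")}

# A newly added face is appended to the matrix of its own gender.
new_face, new_gender = np.random.rand(1, M), "female"
gender_matrices[new_gender] = np.vstack([gender_matrices[new_gender], new_face])
```

The same partitioning scheme extends directly to the age-level matrices described later, with one matrix per age level instead of per gender.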
  • the device for matching human faces further includes:
  • the second recognition module is used to recognize the age level of the face to be tested, where the age levels include baby, youth, middle-aged, and elderly;
  • the third recognition module is configured to search the face database, according to the age level of the face to be tested, for faces whose age level is consistent with that of the face to be tested, to form the preset second matrix.
  • the age level of the face to be tested is recognized, the preset age levels being baby, youth, middle-aged, and elderly; according to that age level, the face database is searched for faces of the same age level to form the preset second matrix.
  • for example, in a specific embodiment, if the face to be tested is at the middle-aged level, all middle-aged faces are found in the database, numbered in the specified order, and formed into the corresponding preset second matrix, thereby narrowing the scope of calculation, saving calculation time, and making face matching faster.
  • Age recognition can be obtained through neural network training, for example with a multi-layer feedforward neural network (BP network): the input image first undergoes image preprocessing, facial features are then extracted, the BP neural network is trained, and finally the trained network performs recognition to obtain the result.
  • Image preprocessing makes feature extraction easier; feature extraction removes a large amount of redundant information from the image, achieving data compression, reducing the complexity of the neural network structure, and improving the training efficiency and convergence rate of the network. Taking a designated standard face as the object of study, the input image undergoes image compression, image sampling, input vector standardization, and other preprocessing, is sent to the BP neural network for training, and the recognition result is obtained through competitive selection.
  • in other embodiments, all the faces in the face database can be pre-grouped by age level into multiple matrices; once the age level of the face to be tested is recognized, the matrix of the corresponding age level can be extracted directly for calculation, and new faces subsequently added to the face database can be placed into the matrix of their corresponding age level.
  • the extraction module 10 includes:
  • a preprocessing unit configured to perform image preprocessing on the acquired image of the face to be tested;
  • an extraction unit configured to input the preprocessed image of the face to be tested into an extraction model to extract multiple facial features of the face to be tested, wherein the extraction model is obtained by training a convolutional neural network on known face images.
  • the image of the face to be tested can be captured by a camera; static images, dynamic images, different positions, different expressions, and so on can all be captured well. When the user is within the shooting range of the capture device, the device automatically searches for and shoots the face image; alternatively, a paper photo of the face to be tested can be obtained directly and uploaded by scanning or similar processing to an electronic terminal such as a computer, mobile phone, or processor.
  • in practice, face detection is mainly used in the preprocessing stage of face recognition, that is, to accurately calibrate the position and size of the face in the image. Because the proportion of the face within the image to be recognized varies (the difference between a close-up head shot and a standard shot) and the position of the face within the image differs, the position of each facial feature must be located. A face image contains very rich pattern features, such as histogram features, color features, template features, and structural features; face detection picks out the useful information and uses these features to detect the face.
  • in some embodiments, the gray-scale distribution characteristics of the face image data can be analyzed, and the projection method and template matching method used to locate the pupils, extracting the eye features more accurately; that is, image standardization provides reliable data for feature recognition. The specific process of image standardization is: the distance between the pupils is taken as the horizontal reference, and the position of the eyes as the vertical reference, for coordinate translation.
  • in this embodiment, known face images are used for training based on a convolutional neural network to obtain the corresponding extraction model, so that image analysis can be performed on the preprocessed image of the face to be tested to obtain each facial feature of the face to be tested.
  • the method of performing image preprocessing on the acquired image of the face to be tested includes, but is not limited to, one or more of the following: light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, noise filtering, and sharpening of the face image.
  • preprocessing of the face image is based on the result of face detection: the image is processed so that it ultimately serves feature value extraction. The original image acquired by the system often cannot be used directly because of various constraints and random interference, and must undergo image preprocessing such as gray-scale correction and noise filtering at an early stage of image processing. For face images, the preprocessing process mainly includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
  • taking noise filtering as an example, salt-and-pepper noise is a common noise in digital images in which black and white pixels appear randomly in the image; a median filter is used to correct it. The median filter is a common nonlinear smoothing filter whose basic principle is to replace the value of a point in a digital image or digital sequence with the median of the values of the points in a neighborhood of that point. Its main function is to change pixels whose gray values differ greatly from their surroundings to values close to the surrounding pixel values, thereby eliminating isolated noise points, so the median filter is very effective at filtering out the salt-and-pepper noise of an image; it removes noise while preserving the edges of the image, and the statistical characteristics of the image are not needed in the actual calculation.
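The median-filter principle described above can be shown with a minimal reference implementation (a plain nested loop over 3×3 neighborhoods; an image-processing library routine would normally be used instead):

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel with the median of its size x size neighborhood
    (edges are padded by reflection); a minimal reference implementation."""
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

# A flat gray image with one isolated salt (255) and one pepper (0) pixel.
img = np.full((8, 8), 100, dtype=np.uint8)
img[2, 2], img[5, 5] = 255, 0
clean = median_filter(img)
```

Both isolated noise pixels are replaced by the neighborhood median (100), while a genuine edge, where many neighbors share the new value, would survive, which is exactly the edge-preserving behavior the text describes.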
  • the searching module 110 includes:
  • a judging unit configured to judge whether each face feature difference matching value in the fifth matrix corresponding to the minimum total matching value is less than the corresponding preset face feature threshold;
  • an execution unit configured to, if so, take the face of the preset second matrix whose face feature values correspond to the minimum total matching value as the best-matching face.
  • each face feature difference matching value of the fifth matrix corresponding to the minimum total matching value is obtained, and it is judged whether each such value is less than the corresponding preset face feature threshold; if so, the face of the preset second matrix corresponding to the minimum total matching value is taken as the best-matching face; if not, the face corresponding to the minimum total matching value is excluded as a matching face.
  • for example, in a specific embodiment, if the face feature difference matching value for right-eye size is 0.09 while the preset threshold for the right eye is 0.07, then even if the total matching value is less than the face threshold, the face corresponding to that total matching value still cannot be taken as the matching face, because the right-eye feature difference matching value does not meet the matching requirement.
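The per-feature check performed by the judging unit amounts to comparing the winning row of the fifth matrix against a vector of per-feature thresholds. A sketch echoing the right-eye example (all values illustrative; a real row would have M entries):

```python
import numpy as np

# Absolute per-feature differences for the face with the minimum total value,
# e.g. left-eye size, right-eye size, nose width (hypothetical three-feature row).
best_row_diffs = np.array([0.01, 0.09, 0.02])

# Corresponding preset per-feature thresholds; the right-eye threshold is 0.07.
feature_thresholds = np.array([0.05, 0.07, 0.05])

# The candidate is accepted only if EVERY feature difference is under threshold;
# here the right-eye difference 0.09 exceeds 0.07, so the face is rejected.
is_best_match = bool(np.all(best_row_diffs < feature_thresholds))
```

This second-stage check rejects a face whose total is small but whose single feature deviates too far, exactly as in the 0.09 vs. 0.07 example.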
  • the apparatus for matching human faces provided in this embodiment of the application generates a first matrix from the feature values of the face to be tested and assembles the feature values of the faces in the face database into a multi-order second matrix; after the first matrix is expanded into a third matrix corresponding to the second matrix, the second matrix is subtracted from it and the absolute values are summed to obtain the total matching value of each face; the minimum total matching value is compared with a preset face threshold, and the face matching the face to be tested is then found in the face database.
  • an embodiment of the present application also provides a computer device.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 3.
  • the computer device includes a processor, a memory, a network interface, and a database connected through a system bus, where the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer device is used to store data such as preset thresholds and facial feature values of the face database.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize a method of matching human faces.
  • when the aforementioned processor executes the computer program, the steps of the aforementioned method for matching human faces are implemented, including:
  • replicating each of the first face feature values in the first matrix into the same number of copies as the number of first faces in the second matrix, and sorting all the copied first face feature values in the arrangement order of the first face feature values of the first matrix to form a third matrix;
  • obtaining the face corresponding to the minimum total matching value, and searching the face database for the face corresponding to the minimum total matching value.
  • FIG. 3 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • An embodiment of the present application also provides a computer storage medium on which a computer program is stored.
  • the computer-readable storage medium may be non-volatile or volatile.
  • when the computer program is executed by the processor, a method of matching faces is implemented, specifically:
  • replicating each of the first face feature values in the first matrix into the same number of copies as the number of first faces in the second matrix, and sorting all the copied first face feature values in the arrangement order of the first face feature values of the first matrix to form a third matrix;
  • obtaining the face corresponding to the minimum total matching value, and searching the face database for the face corresponding to the minimum total matching value.
  • the method, apparatus, computer device, and storage medium for matching faces provided in the embodiments of this application generate a first matrix from the designated feature values of the face to be tested and assemble the face feature values of the designated first faces in the face database into a multi-order second matrix; the first matrix is expanded into a third matrix, the second matrix is subtracted from it, and the absolute values are summed to obtain the total matching value of each face; the minimum total matching value is compared with the face threshold, and if it is less than the face threshold, the face in the face database matching the face to be tested is obtained, so that faces are matched quickly and the time of face matching is shortened.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

This application relates to the technical field of artificial intelligence and provides a method, apparatus, computer device, and storage medium for matching human faces. The method includes: sorting multiple first face feature values of a face to be tested in a specified order to generate a first matrix; obtaining a second matrix composed of the second face feature values corresponding to multiple first faces stored in a face database; replicating the first face feature values in the first matrix so that their number equals the number of first faces in the second matrix, and sorting them to form a third matrix; subtracting the second matrix from the third matrix to obtain a fourth matrix, performing an absolute value operation on each value, and adding the corresponding values to obtain the total matching value of each face; comparing all the total matching values and determining whether the minimum among them is less than a face threshold; if so, obtaining the face corresponding to the minimum total matching value and searching the face database for that face. Faces are thereby matched quickly, shortening the time of face matching.

Description

Method, apparatus, computer device, and storage medium for matching human faces
This application claims priority to the Chinese patent application No. 201910598842.4, filed with the China Patent Office on July 3, 2019 and entitled "Method, apparatus, computer device and storage medium for matching human faces", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology in artificial intelligence, and in particular to a method, apparatus, computer device, and storage medium for matching human faces.
Background
At present, existing face matching methods mostly use traversal queries for matching: the database of face feature values in the face database must first be cached and then dynamically compared and retrieved. The inventors found that extracting feature values from the cached face database is time-consuming and matching is slow.
Technical Problem
The main purpose of this application is to provide a method, apparatus, computer device, and storage medium for matching human faces, aiming to match faces quickly and shorten the time of face matching.
Technical Solution
To achieve the above purpose, this application provides a method for matching human faces, including the following steps:
extracting multiple designated facial features of a face to be tested;
obtaining corresponding accurate feature values according to each of the facial features, so as to obtain multiple first face feature values, wherein the accurate feature values are calculated by applying the projection method and template matching method to the actual parts of the facial features;
sorting all the first face feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
obtaining multiple designated first faces stored in a face database, and obtaining a second matrix composed of the second face feature values corresponding to each of the first faces, wherein the multiple designated first faces are obtained from the face database according to the gender or age level recognized from the face to be tested, and the arrangement order of all the second face feature values of each first face in the second matrix is the same as the arrangement order of the first face feature values in the first matrix;
according to the number of first faces in the second matrix, replicating each of the first face feature values in the first matrix into the same number of copies as the number of first faces in the second matrix, and sorting all the copied first face feature values in the arrangement order of the first face feature values of the first matrix to form a third matrix;
subtracting the second matrix from the third matrix to obtain a fourth matrix;
performing an absolute value operation on each value in the fourth matrix to obtain a fifth matrix;
adding the absolute values of all the values corresponding to each of the first faces in the fifth matrix to obtain the total matching value of each first face;
comparing all the total matching values to obtain the minimum among the total matching values;
judging whether the minimum among the total matching values is less than a face threshold;
if so, obtaining the face corresponding to the minimum total matching value, and searching the face database for the face corresponding to the minimum total matching value.
This application also provides an apparatus for matching human faces, including:
an extraction module, configured to extract multiple designated facial features of a face to be tested;
a first obtaining module, configured to obtain corresponding accurate feature values according to each of the facial features, so as to obtain multiple first face feature values, wherein the accurate feature values are calculated by applying the projection method and template matching method to the actual parts of the facial features;
a sorting module, configured to sort all the first face feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
a second obtaining module, configured to obtain multiple designated first faces stored in a face database and to obtain a second matrix composed of the second face feature values corresponding to each of the first faces, wherein the multiple designated first faces are obtained from the face database according to the gender or age level recognized from the face to be tested, and the arrangement order of all the second face feature values of each first face in the second matrix is the same as the arrangement order of the first face feature values in the first matrix;
a first generating module, configured to replicate, according to the number of first faces in the second matrix, each of the first face feature values in the first matrix into the same number of copies as the number of first faces in the second matrix, and to sort all the copied first face feature values in the arrangement order of the first face feature values of the first matrix to form a third matrix;
a third obtaining module, configured to subtract the second matrix from the third matrix to obtain a fourth matrix;
a fourth obtaining module, configured to perform an absolute value operation on each value in the fourth matrix to obtain a fifth matrix;
a fifth obtaining module, configured to add the absolute values of all the values corresponding to each of the first faces in the fifth matrix to obtain the total matching value of each first face;
a sixth obtaining module, configured to compare all the total matching values and obtain the minimum among the total matching values;
a judging module, configured to judge whether the minimum among the total matching values is less than a face threshold;
a searching module, configured to, if so, obtain the face corresponding to the minimum total matching value and search the face database for the face corresponding to the minimum total matching value.
This application also provides a computer device, including a memory and a processor, where a computer program is stored in the memory, and when the processor executes the computer program, a method for matching human faces is implemented:
extracting multiple designated facial features of a face to be tested;
obtaining corresponding accurate feature values according to each of the facial features, so as to obtain multiple first face feature values, wherein the accurate feature values are calculated by applying the projection method and template matching method to the actual parts of the facial features;
sorting all the first face feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
obtaining multiple designated first faces stored in a face database, and obtaining a second matrix composed of the second face feature values corresponding to each of the first faces, wherein the multiple designated first faces are obtained from the face database according to the gender or age level recognized from the face to be tested, and the arrangement order of all the second face feature values of each first face in the second matrix is the same as the arrangement order of the first face feature values in the first matrix;
according to the number of first faces in the second matrix, replicating each of the first face feature values in the first matrix into the same number of copies as the number of first faces in the second matrix, and sorting all the copied first face feature values in the arrangement order of the first face feature values of the first matrix to form a third matrix;
subtracting the second matrix from the third matrix to obtain a fourth matrix;
performing an absolute value operation on each value in the fourth matrix to obtain a fifth matrix;
adding the absolute values of all the values corresponding to each of the first faces in the fifth matrix to obtain the total matching value of each first face;
comparing all the total matching values to obtain the minimum among the total matching values;
judging whether the minimum among the total matching values is less than a face threshold;
if so, obtaining the face corresponding to the minimum total matching value, and searching the face database for the face corresponding to the minimum total matching value.
This application also provides a computer storage medium on which a computer program is stored, where, when the computer program is executed by a processor, a method for matching human faces is implemented, the method including the following steps:
extracting multiple designated facial features of a face to be tested;
obtaining corresponding accurate feature values according to each of the facial features, so as to obtain multiple first face feature values, wherein the accurate feature values are calculated by applying the projection method and template matching method to the actual parts of the facial features;
sorting all the first face feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
obtaining multiple designated first faces stored in a face database, and obtaining a second matrix composed of the second face feature values corresponding to each of the first faces, wherein the multiple designated first faces are obtained from the face database according to the gender or age level recognized from the face to be tested, and the arrangement order of all the second face feature values of each first face in the second matrix is the same as the arrangement order of the first face feature values in the first matrix;
according to the number of first faces in the second matrix, replicating each of the first face feature values in the first matrix into the same number of copies as the number of first faces in the second matrix, and sorting all the copied first face feature values in the arrangement order of the first face feature values of the first matrix to form a third matrix;
subtracting the second matrix from the third matrix to obtain a fourth matrix;
performing an absolute value operation on each value in the fourth matrix to obtain a fifth matrix;
adding the absolute values of all the values corresponding to each of the first faces in the fifth matrix to obtain the total matching value of each first face;
comparing all the total matching values to obtain the minimum among the total matching values;
judging whether the minimum among the total matching values is less than a face threshold;
if so, obtaining the face corresponding to the minimum total matching value, and searching the face database for the face corresponding to the minimum total matching value.
Beneficial Effects
In the method, apparatus, computer device, and storage medium for matching human faces provided in this application, the designated feature values of the face to be tested generate a first matrix, and the face feature values of the designated first faces in the face database are assembled to generate a multi-order second matrix; the first matrix is derived into a third matrix, the second matrix is subtracted from it, and the absolute values are then summed to obtain the total matching value of each face; the minimum total matching value is compared with a face threshold, and if it is less than the face threshold, the face in the face database matching the face to be tested is obtained, so that faces are matched quickly and the time of face matching is shortened.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the steps of the method for matching human faces in an embodiment of this application;
Fig. 2 is a structural block diagram of the apparatus for matching human faces in an embodiment of this application;
Fig. 3 is a schematic structural block diagram of a computer device in an embodiment of this application.
Best Mode for Carrying Out the Invention
In order to make the purpose, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not used to limit it.
Referring to Fig. 1, an embodiment of this application provides a method for matching human faces, including the following steps:
Step S1, extracting multiple designated facial features of the face to be tested;
Step S2, obtaining corresponding accurate feature values according to each of the facial features, so as to obtain multiple first face feature values, wherein the accurate feature values are calculated by applying the projection method and template matching method to the actual parts of the facial features;
Step S3, sorting all the first face feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
Step S4, obtaining multiple designated first faces stored in a face database, and obtaining a second matrix composed of the second face feature values corresponding to each of the first faces, wherein the multiple designated first faces are obtained from the face database according to the gender or age level recognized from the face to be tested, and the arrangement order of all the second face feature values of each first face in the second matrix is the same as the arrangement order of the first face feature values in the first matrix;
Step S5, according to the number of first faces in the second matrix, replicating each of the first face feature values in the first matrix into the same number of copies as the number of first faces in the second matrix, and sorting all the copied first face feature values in the arrangement order of the first face feature values of the first matrix to form a third matrix;
Step S6, subtracting the second matrix from the third matrix to obtain a fourth matrix;
Step S7, performing an absolute value operation on each value in the fourth matrix to obtain a fifth matrix;
Step S8, adding the absolute values of all the values corresponding to each of the first faces in the fifth matrix to obtain the total matching value of each first face;
Step S9, comparing all the total matching values to obtain the minimum among the total matching values;
Step S10, judging whether the minimum among the total matching values is less than a face threshold;
Step S11, if so, obtaining the face corresponding to the minimum total matching value, and searching the face database for the face corresponding to the minimum total matching value.
In the above steps, the face to be tested that requires matching is obtained in advance, and multiple designated facial features of the face to be tested are extracted, with corresponding accurate feature values obtained from the extracted features to give the face feature values; the face feature values are used to distinguish different faces from one another. Feature elements are extracted from the gray-scale characteristics of the face image: an algorithm matching the projection map against feature descriptions first roughly locates each part of the face, and the projection method and template matching method then accurately locate the pupils and other facial features. For example, in a specific embodiment, taking the eyes of the face to be tested as an example, the eyes are labeled 1, and if the eyes are large and round, the corresponding feature value is 0.123; each part of the face to be tested is labeled in this way, and the feature value corresponding to each label is determined according to the features of the labeled part. Feature values can be set according to differences in eyes, nose, eyebrows, texture, skin color, and other features. Taking the nose as an example: a high nose bridge corresponds to a feature value of 0.9124, a low nose bridge to 0.9125, wide nostrils to 0.9126, and narrow nostrils to 0.9127 (high, low, wide, and narrow are used here only for ease of understanding; the specific feature value is determined by the exact features of the actual part); different morphological features of a part correspond to different feature values. The number of feature values of a face to be tested is M, and the specific value of M depends on how many parts of the face are selected in practice. In this embodiment, a face to be tested may have 512 labels, i.e. M = 512, for example a[1, 2, 3, 4, 5, ..., 512], where the labels correspond in turn to the eyes, nose, mouth, eyebrows, skin color, and so on; assuming the feature values corresponding to the labels are A[0.123, 0.269, 0.725, 0.834, 0.537, ..., 0.5569], there are 512 feature values in total. These 512 feature values form a matrix with 1 row and 512 columns, i.e. the first matrix; an algorithm can be used to generate the first matrix, for example c++ through opencv or python through the numpy library, and other algorithms may also be used, which are not repeated here. In other embodiments, the above 512 feature values can instead form a first matrix with 1 column and 512 rows.
In this embodiment, the face database includes multiple faces, each with different face feature values, so it includes multiple sets of face feature values: when there are N faces and each face has M feature values, there are N*M feature values in total. Taking a face to be tested with 512 feature values as an example, N faces yield N sets of face feature values, which generate a matrix with N rows and 512 columns, i.e. the second matrix; multiple sets of face feature values thus generate a multi-order matrix. The method of generating the second matrix from the N sets of face feature values follows the method of forming the first matrix described above: c++ can generate the second matrix through opencv, python can generate it through the numpy library, and other algorithms may also be used, which are not repeated here.
In one embodiment, with a face to be tested having 512 labels, the first matrix is a matrix with 1 row and 512 columns and the second matrix is a matrix with N rows and 512 columns; the first matrix is expanded into a matrix of N rows and 512 columns in which each row contains the face feature values of the face to be tested arranged in the specified order, that is, the 1-row, 512-column first matrix is copied into an N-row, 512-column matrix to form the third matrix.
The second matrix is subtracted from the third matrix to obtain a new fourth matrix, which holds the face feature difference matching values between the faces: each value in the fourth matrix is the result of computing a feature value of the face to be tested against the corresponding feature value of a face in the face database, and its magnitude represents the degree of matching between the face to be tested and the face corresponding to one of the rows of the second matrix.
In the above steps, an absolute value operation is performed on each face feature difference matching value in the fourth matrix to obtain a fifth matrix in which every value is greater than or equal to zero; the absolute values of the face feature difference matching values corresponding to the elements of each face in the fifth matrix are added to obtain the total matching value of each face, which is the total difference between each face in the face database and the face to be tested. In a specific embodiment, the total matching values comprise N values, obtained by computing the feature values of the face to be tested against each of the N sets of face feature values in the face database; the minimum of these N values is selected, and it is judged whether the minimum is less than the preset face threshold. If it is less than the threshold, the face in the face database corresponding to the minimum best matches the face to be tested. In other embodiments, if several values are all below the threshold, the threshold may have been set unsuitably, or the image of the face to be tested may be unclear, making its feature values inaccurate; a different threshold can be entered and the matching computation repeated, or the image of the face to be tested can be processed to obtain a clearer one. If only a few values fall below the threshold, the filtering range is narrowed enough for direct recognition by the human eye; if the minimum is greater than the threshold, the matching fails.
In one embodiment, before step S4 of obtaining the multiple designated first faces stored in the face database and obtaining the second matrix composed of the second face feature values corresponding to each first face, the method includes:
Step S41, recognizing the gender of the face to be tested, where the gender includes male and female;
Step S42, according to the gender of the face to be tested, searching the face database for faces whose gender is consistent with that of the face to be tested to form the preset second matrix.
In the above steps, after the face to be tested is obtained, its gender is recognized, where the gender includes male and female, and the faces in the database are preliminarily screened according to that gender. For example, in a specific embodiment, when the face to be tested is recognized as male, the male faces in the face database are extracted, the corresponding face feature values are obtained from them, and the second matrix is generated; this narrows the range and saves time in subsequent calculations.
In this embodiment, the method for recognizing the gender of a face image is: gradient features (HOG features) of a large number of face images are obtained in advance, and the extracted gradient features are input into an SVM (Support Vector Machine) for training; by establishing a console project and configuring the OpenCv environment, the face images are trained to obtain the corresponding HOG features, presented in the form of a container of floating-point numbers. When the image of the face to be tested is obtained, the floating-point value corresponding to each element can then be derived and the gender of the face determined.
In other embodiments, all the faces in the face database can be pre-grouped by gender into two large matrices; once the gender of the face to be tested is recognized, the matrix of the corresponding gender can be extracted directly for calculation, and new faces subsequently added to the face database can be placed into the matrix of their corresponding gender.
In one embodiment, before step S4 of obtaining the multiple designated first faces stored in the face database and obtaining the second matrix composed of the second face feature values corresponding to each first face, the method includes:
Step S401, recognizing the age level of the face to be tested, where the age levels include baby, youth, middle-aged, and elderly;
Step S402, according to the age level of the face to be tested, searching the face database for faces whose age level is consistent with that of the face to be tested to form the preset second matrix.
In the above steps, the age level of the face to be tested is recognized, the preset age levels being baby, youth, middle-aged, and elderly; according to that age level, the face database is searched for faces of the same age level to form the preset second matrix. For example, in a specific embodiment, if the face to be tested is at the middle-aged level, all middle-aged faces are found in the face database, numbered in the specified order, and formed into the corresponding preset second matrix, thereby narrowing the scope of calculation, saving calculation time, and making face matching faster.
Age recognition can be obtained through neural network training, for example with a multi-layer feedforward neural network (BP network): the input image first undergoes image preprocessing, facial features are then extracted, the BP neural network is trained, and finally the trained network performs recognition to obtain the result. Image preprocessing makes feature extraction easier; feature extraction removes a large amount of redundant information from the image, achieving data compression, reducing the complexity of the neural network structure, and improving the training efficiency and convergence rate of the network. Taking a designated standard face as the object of study, the input image undergoes image compression, image sampling, input vector standardization, and other preprocessing, is sent to the BP neural network for training, and the recognition result is obtained through competitive selection.
In other embodiments, all the faces in the face database can be pre-grouped by age level into multiple matrices; once the age level of the face to be tested is recognized, the matrix of the corresponding age level can be extracted directly for calculation, and new faces subsequently added to the face database can be placed into the matrix of their corresponding age level.
In one embodiment, step S1 of extracting the multiple designated facial features of the face to be tested includes:
Step S11, performing image preprocessing on the acquired image of the face to be tested;
Step S12, inputting the preprocessed image of the face to be tested into an extraction model to extract the multiple facial features of the face to be tested, wherein the extraction model is obtained by training a convolutional neural network on known face images.
In the above steps, the image of the face to be tested can be captured by a camera; static images, dynamic images, different positions, different expressions, and so on can all be captured well. When the user is within the shooting range of the capture device, the device automatically searches for and shoots the face image; alternatively, a paper photo of the face to be tested can be obtained directly and uploaded by scanning or similar processing to an electronic terminal such as a computer, mobile phone, or processor. In practice, face detection is mainly used in the preprocessing stage of face recognition, that is, to accurately calibrate the position and size of the face in the image: because the proportion of the face within the image to be recognized varies (the difference between a close-up head shot and a standard shot), and the position of the face within the image differs, the position of each facial feature must be located. A face image contains very rich pattern features, such as histogram features, color features, template features, and structural features; face detection picks out the useful information and uses these features to detect the face. In some embodiments, the gray-scale distribution characteristics of the face image data can be analyzed and the projection method and template matching method used to locate the pupils, extracting the eye features more accurately; that is, image standardization provides reliable data for feature recognition. The specific process of image standardization is: the distance between the pupils is taken as the horizontal reference and the position of the eyes as the vertical reference for coordinate translation.
In this embodiment, known face images are used for training based on a convolutional neural network to obtain the corresponding extraction model, so that image analysis can be performed on the preprocessed image of the face to be tested to obtain each facial feature of the face to be tested.
In one embodiment, the method of performing image preprocessing on the acquired image of the face to be tested includes, but is not limited to, one or more of the following:
light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, noise filtering, and sharpening of the face image.
Preprocessing of the face image is based on the result of face detection: the image is processed so that it ultimately serves feature value extraction. The original image acquired by the system often cannot be used directly because of various constraints and random interference, and must undergo image preprocessing such as gray-scale correction and noise filtering at an early stage of image processing. For face images, the preprocessing process mainly includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening. Taking noise filtering as an example, salt-and-pepper noise is a common noise in digital images in which black and white pixels appear randomly on the image; the image is corrected with a median filter. The median filter is a common nonlinear smoothing filter whose basic principle is to replace the value of a point in a digital image or digital sequence with the median of the values of the points in a neighborhood of that point; its main function is to change pixels whose gray values differ greatly from their surroundings to values close to the surrounding pixel values, eliminating isolated noise points. The median filter is therefore very effective at filtering out the salt-and-pepper noise of an image; it both removes noise and preserves the edges of the image, and the statistical characteristics of the image are not needed in the actual calculation.
在一实施例中,所述获得到所述匹配总值中的最小值对应的人脸,并从所述人脸库中查找与所述匹配总值中的最小值对应的人脸的步骤S11,包括:
步骤S111,判断所述匹配总值中的最小值所对应的所述第五矩阵的每一人脸特征差异匹配值是否小于所对应的人脸特征预设阈值;
步骤S112,若是,则将所述匹配总值中的最小值对应的所述预设第二矩阵的人脸特征值吻合的人脸作为最匹配的人脸。
以上步骤中,获取匹配总值中的最小值所对应的第五矩阵的每一人脸特征差异匹配值,并判断每一人脸特征差异匹配值是否小于所对应的人脸特征预设阈值,若是,则将所述匹配总值中的最小值对应的所述预设第二矩阵的人脸特征值吻合的人脸作为最匹配的人脸,若否,则排除该匹配总值中最小值所对应的人脸作为匹配人脸。如在一具体实施例中,若右眼睛大小的人脸特征差异匹配值为0.09,而在设置的人脸特征预设阈值中,右眼的预设阈值为0.07,即使匹配总值小于人脸阈值,但由于右眼的特征差异匹配值不符合匹配要求,该匹配总值对应的人脸依然不能作为对应的匹配人脸。
In summary, in the face matching method provided in the embodiments of this application, the feature values of the face to be tested are formed into a first matrix, and the facial feature values in the face database are gathered into a multi-order second matrix. After the first matrix is expanded into a third matrix corresponding to the second matrix, subtraction with the second matrix, followed by taking absolute values and summing, yields the total match value for each face. The minimum of the total match values is compared with the preset face threshold, and the face matching the face to be tested is thereby found in the face database.
Referring to FIG. 2, an embodiment of this application further provides a face matching apparatus, including:
an extraction module 10, configured to extract multiple specified facial features of the face to be tested;
a first acquisition module 20, configured to obtain the corresponding accurate feature value for each facial feature to obtain multiple first facial feature values, where the accurate feature values are computed by matching the actual parts of the facial features using projection and template matching;
a sorting module 30, configured to sort all the first facial feature values in a specified order to generate a first matrix, where the first matrix is a single-row or single-column matrix;
a second acquisition module 40, configured to obtain multiple specified first faces stored in a face database and obtain a second matrix composed of the second facial feature values corresponding to each first face, where the multiple specified first faces are obtained from the face database by identifying the gender or age bracket of the face to be tested, and where, for each first face in the second matrix, all of its second facial feature values are arranged in the same order as the first facial feature values in the first matrix;
a first generation module 50, configured to replicate each first facial feature value in the first matrix into as many copies as there are first faces in the second matrix, and to sort all the replicated first facial feature values in the order of the first facial feature values in the first matrix to form a third matrix;
a third acquisition module 60, configured to subtract the second matrix from the third matrix to obtain a fourth matrix;
a fourth acquisition module 70, configured to take the absolute value of each value in the fourth matrix to obtain a fifth matrix;
a fifth acquisition module 80, configured to add up the absolute values of all the values corresponding to each face in the fifth matrix to obtain the total match value of each face;
a sixth acquisition module 90, configured to compare all the total match values and obtain the minimum of the total match values;
a determination module 100, configured to determine whether the minimum of the total match values is smaller than a face threshold;
a lookup module 110, configured to, if so, obtain the face corresponding to the minimum of the total match values and look up, from the face database, the face corresponding to the minimum of the total match values.
In this embodiment, after obtaining the face to be tested that needs matching, the extraction module 10 extracts multiple specified facial features of the face to be tested, and corresponding facial feature values are assigned according to the extracted features; these feature values serve to distinguish different faces from one another. The facial feature elements are extracted by first roughly determining the positions of the parts of the face from the gray-level characteristics of the face image, using an algorithm that matches projection maps with feature descriptions, and then accurately determining the pupil positions and other facial features using projection and template matching. For example, in a specific embodiment, take the eyes of the face to be tested: the eyes are labeled 1, and the eyes of the face to be tested are big and round, which corresponds to a feature value of 0.123. Each part of the face to be tested is labeled in this way, and the first acquisition module 20 assigns a feature value to each label according to the characteristics of the labeled part. Feature values can be set according to differences in features such as the eyes, nose, eyebrows, texture, and skin color. Taking the nose as an example: a high nose bridge corresponds to a feature value of 0.9124, a low nose bridge to 0.9125, wide nostrils to 0.9126, and narrow nostrils to 0.9127 (high/low and wide/narrow here are only for ease of understanding; the specific feature values are determined by the accurate characteristics of the actual part). Different morphological characteristics of a part thus correspond to different feature values. A face to be tested has M feature values, where the specific value of M depends on the number of parts of the face selected in practice. In this embodiment, for example, a face to be tested has 512 labeled entries, i.e., M = 512, e.g., a[1, 2, 3, 4, 5, ..., 512], where [1, 2, 3, 4, 5, ..., 512] correspond in turn to the eyes, nose, mouth, eyebrows, skin color, and so on. Suppose the feature values corresponding to the labels are A[0.123, 0.269, 0.725, 0.834, 0.537, ..., 0.5569], 512 values in all. These 512 feature values form a matrix of 1 row and 512 columns, i.e., the first matrix. The first matrix can be generated with an algorithm: in C++ it can be generated with OpenCV, and in Python with the numpy library; other algorithms can also be used and are not described here. In other embodiments, the above 512 feature values can instead form a first matrix of 512 rows and 1 column.
In this embodiment, the face database includes multiple faces, and the facial feature values of each face differ, so there are multiple sets of facial feature values. With N faces and M feature values per face, there are N*M feature values in total. Taking a face to be tested with 512 feature values as an example, N faces give N sets of facial feature values, which can be formed into a matrix of N rows and 512 columns, i.e., the second matrix; multiple sets of facial feature values thus generate a multi-order matrix. The method of generating the second matrix from the N sets of facial feature values can follow the method of forming the first matrix above: in C++ it can be generated with OpenCV, and in Python with the numpy library; other algorithms can also be used and are not described here.
In an embodiment, taking a face to be tested with 512 labeled entries as an example, the first matrix is a matrix of 1 row and 512 columns and the second matrix is a matrix of N rows and 512 columns. The first matrix is expanded into a matrix of N rows and 512 columns in which each row consists of the facial feature values of the face to be tested arranged in the specified order; that is, the single row of 512 columns of the first matrix is replicated into N rows of 512 columns to form the third matrix.
The second acquisition module 40 obtains the multiple specified first faces stored in the face database and the second matrix composed of the second facial feature values corresponding to each first face, and the third acquisition module 60 subtracts the second matrix from the third matrix to obtain a new fourth matrix. The fourth matrix contains the facial-feature difference match values between the faces: each value in the fourth matrix is the result of computing the feature value of the face to be tested against the corresponding feature value of a face in the face database, and the magnitude of the value represents the degree of match between the face to be tested and the face corresponding to that row of the second matrix.
In this embodiment, the fourth acquisition module 70 takes the absolute value of each facial-feature difference match value in the fourth matrix to obtain a fifth matrix in which every value is greater than or equal to zero. The fifth acquisition module 80 adds up the absolute values of the difference match values corresponding to the elements of each face in the fifth matrix to obtain the total match value of each face, which is the total difference between that face in the face database and the face to be tested. The sixth acquisition module 90 compares all the total match values and obtains the minimum among them. In a specific embodiment, the total match values comprise N values, computed between the feature values of the face to be tested and the N sets of facial feature values in the face database. The minimum of the N values is selected, and the determination module 100 determines whether the minimum is smaller than the preset face threshold. If it is smaller than the threshold, the face in the face database corresponding to the minimum best matches the face to be tested, and the lookup module 110 looks up, from the face database, the face corresponding to the minimum of the total match values. In other embodiments, if multiple values are all smaller than the threshold, the threshold may have been set too low, or the image of the face to be tested may be unclear, making its feature values inaccurate; a larger threshold can be input and the matching computation run again, or the image of the face to be tested can be processed to obtain a clearer image. If the number of values below the threshold is small, the screening range has been narrowed and the result can be identified directly by eye. If the minimum is greater than the threshold, the match fails.
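The first-through-fifth matrix computation described in this and the preceding paragraphs can be sketched with numpy. The sizes and feature values below are illustrative placeholders (the patent's example uses 512 features), as is the threshold.

```python
import numpy as np

# Illustrative sizes: 4 database faces, 5 feature values per face (made-up values).
first = np.array([0.123, 0.269, 0.725, 0.834, 0.537])    # first matrix: 1 x M
second = np.array([[0.120, 0.270, 0.730, 0.830, 0.540],  # second matrix: N x M
                   [0.500, 0.100, 0.900, 0.200, 0.300],
                   [0.123, 0.269, 0.725, 0.834, 0.537],
                   [0.400, 0.600, 0.100, 0.900, 0.800]])

third = np.tile(first, (second.shape[0], 1))  # third matrix: first row copied N times
fourth = third - second                       # fourth matrix: per-feature differences
fifth = np.abs(fourth)                        # fifth matrix: absolute differences
totals = fifth.sum(axis=1)                    # total match value per database face

best = int(np.argmin(totals))                 # index of the smallest total
face_threshold = 0.1                          # illustrative preset face threshold
matched = totals[best] < face_threshold       # match succeeds only below the threshold
```

Here the third database face is identical to the face to be tested, so its total match value is zero and the comparison against the threshold succeeds; in numpy the explicit `tile` could also be replaced by broadcasting `first - second` directly.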
In an embodiment, the face matching apparatus further includes:
a first recognition module, configured to identify the gender of the face to be tested, where the gender includes male and female;
a third lookup module, configured to look up, from the face database according to the gender of the face to be tested, the faces whose gender matches that of the face to be tested and form them into the preset second matrix.
In this embodiment, after the face to be tested is obtained, its gender is identified, where the gender includes male and female, and the faces in the face database are preliminarily screened according to the gender of the face to be tested. For example, in a specific embodiment, when the face to be tested is identified as male, the male faces in the face database are extracted, and the corresponding facial feature values are obtained from the male faces to generate the second matrix. This narrows the scope and saves time in the subsequent computation.
In this embodiment, the method of identifying the gender of a face image is as follows: gradient features (HOG features) of a large number of face images are obtained in advance, and the extracted HOG features of the face images are input into an SVM (support vector machine) for training. The face images are trained by setting up a console project and configuring the OpenCV environment, yielding the corresponding HOG features presented as a container of floating-point numbers. When the image of the face to be tested is obtained, the floating-point value corresponding to each element can then be obtained, and the gender of the face is determined accordingly.
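As a simplified illustration of the gradient-feature (HOG) idea referenced here, the following Python sketch computes a magnitude-weighted histogram of gradient orientations for an image. A full HOG descriptor additionally divides the image into cells and normalizes over blocks; the 8-bin layout and function name are assumptions for illustration.

```python
import numpy as np

def gradient_histogram(img, bins=8):
    """Histogram of gradient orientations weighted by magnitude (simplified HOG cell)."""
    gy, gx = np.gradient(img.astype(float))   # vertical and horizontal gradients
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx) % np.pi        # unsigned orientation in [0, pi)
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, np.pi), weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist  # L2-normalized feature vector

# A vertical step edge: all gradients are horizontal, so one orientation bin dominates.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
feat = gradient_histogram(img)
```

Feature vectors like this, computed for many labeled face images, are the kind of floating-point containers that would then be fed to an SVM classifier for gender training.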
In other embodiments, all faces in the face database may be pre-assembled into two large matrices by gender. Once the gender of the face to be tested is identified, the matrix for the corresponding gender can be retrieved directly for computation; when new faces are later added to the face database, they can be placed into the matrix for their corresponding gender.
In an embodiment, the face matching apparatus further includes:
a second recognition module, configured to identify the age bracket of the face to be tested, where the age brackets include infant, youth, middle age, and old age;
a third recognition module, configured to look up, from the face database according to the age bracket of the face to be tested, the faces whose age bracket matches that of the face to be tested and form them into the preset second matrix.
In this embodiment, the age bracket of the face to be tested is identified, where the preset age brackets are infant, youth, middle age, and old age. According to the age bracket of the face to be tested, faces in the face database whose age bracket matches that of the face to be tested are looked up and formed into the preset second matrix. For example, in a specific embodiment, the face to be tested is in the middle-age bracket; all middle-age faces are retrieved from the face database, numbered in a specified order, and formed into the corresponding preset second matrix. This narrows the scope of the computation, saves computation time, and makes face matching faster.
Age recognition can be obtained by training a neural network, for example a multilayer feedforward neural network (BP network): the input image is first preprocessed, facial features are then extracted, the BP neural network is trained, and finally the trained network performs recognition to obtain the recognition result. Image preprocessing makes feature extraction easier; feature extraction removes the large amount of redundant information in the image, i.e., it compresses the data, which reduces the complexity of the neural network structure and improves the training efficiency and convergence rate of the network. Taking a specified standard face as the object of study, the input image undergoes preprocessing such as image compression, image sampling, and input-vector normalization, and is then fed into the BP neural network for training; the recognition result is obtained through competitive selection.
In other embodiments, all faces in the face database may be pre-assembled into multiple matrices by age bracket. Once the age bracket of the face to be tested is identified, the matrix for the corresponding age bracket can be retrieved directly for computation; when new faces are later added to the face database, they can be placed into the matrix for their corresponding age bracket.
In an embodiment, the extraction module 10 includes:
a preprocessing unit, configured to perform image preprocessing on the acquired image of the face to be tested;
an extraction unit, configured to input the preprocessed image of the face to be tested into an extraction model for extraction, to extract the multiple facial features of the face to be tested, where the extraction model is trained on known face images based on a convolutional neural network.
In this embodiment, the image of the face to be tested can be captured by a camera; static images, dynamic images, different positions, and different expressions can all be captured well. When a user is within the shooting range of the capture device, the device automatically searches for and photographs the face image; alternatively, a paper photograph of the face to be tested is obtained directly, then scanned or otherwise processed and uploaded to an electronic terminal such as a computer, mobile phone, or processor. In practice, face detection is mainly used as preprocessing for face recognition, i.e., accurately marking the position and size of the face in the image. Because the proportion of the face within the image varies (there are close-up portraits and standard photos) and the position of the face within the image differs, the positions of the facial features must be located. A face image contains rich pattern features, such as histogram features, color features, template features, and structural features; face detection picks out the useful information among them and uses these features to detect the face. In some embodiments, the gray-level distribution of the face image data can be analyzed, and the pupil positions can be located by projection and template matching to extract the eye features fairly accurately; that is, image normalization provides reliable data for feature recognition. The specific normalization procedure is: the coordinates are translated using the distance between the pupils as the horizontal reference and the position of the eyes as the vertical reference.
In this embodiment, a convolutional neural network is trained on known face images to obtain the corresponding extraction model, so that image analysis can be performed on the preprocessed image of the face to be tested to obtain each facial feature of the face to be tested.
In an embodiment, the method of performing image preprocessing on the acquired image of the face to be tested includes, but is not limited to, one or more of the following:
light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, noise filtering, and sharpening of the face image.
Face image preprocessing is the process of processing the image based on the face detection result, ultimately serving feature value extraction. The original image acquired by the system often cannot be used directly because of various constraints and random interference; it must undergo gray-level correction, noise filtering, and other preprocessing in the early stage of image processing. For face images, preprocessing mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening. Take noise filtering as an example: salt-and-pepper noise is a common noise in digital images, appearing as randomly scattered black and white pixels, and a median filter can be used to correct the image. The median filter is a commonly used nonlinear smoothing filter whose basic principle is to replace the value at a point in a digital image or sequence with the median of the values in a neighborhood of that point. Its main function is to make pixels whose gray values differ greatly from their surroundings take values close to the surrounding pixels, thereby eliminating isolated noise points; median filtering is therefore very effective for removing salt-and-pepper noise from an image. A median filter removes noise while preserving image edges, and the statistical characteristics of the image are not needed during the computation.
In an embodiment, the fifth acquisition module 80 includes:
a determination unit, configured to determine whether each facial-feature difference match value in the fifth matrix corresponding to the minimum of the total match values is smaller than the corresponding preset facial-feature threshold;
an execution unit, configured to, if so, take the face whose feature values in the preset second matrix correspond to the minimum of the total match values as the best-matching face.
In this embodiment, each facial-feature difference match value in the fifth matrix corresponding to the minimum of the total match values is obtained, and whether each such value is smaller than the corresponding preset facial-feature threshold is determined. If so, the face whose feature values in the preset second matrix correspond to the minimum of the total match values is taken as the best-matching face; if not, the face corresponding to the minimum of the total match values is excluded as a match. For example, in a specific embodiment, if the difference match value for right-eye size is 0.09 while the preset threshold for the right eye is 0.07, then even if the total match value is below the face threshold, the face corresponding to that total match value still cannot be taken as the matching face, because the right-eye difference match value does not meet the matching requirement.
In summary, in the face matching apparatus provided in the embodiments of this application, the feature values of the face to be tested are formed into a first matrix, and the facial feature values in the face database are gathered into a multi-order second matrix. After the first matrix is expanded into a third matrix corresponding to the second matrix, subtraction with the second matrix, followed by taking absolute values and summing, yields the total match value for each face. The minimum of the total match values is compared with the preset face threshold, and the face matching the face to be tested is thereby found in the face database.
Referring to FIG. 3, an embodiment of this application further provides a computer device, which may be a server and whose internal structure may be as shown in FIG. 3. The computer device includes a processor, a memory, a network interface, and a database connected via a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores data such as the preset thresholds and the facial feature values of the face database. The network interface of the computer device communicates with an external terminal via a network connection. When executed by the processor, the computer program implements a face matching method.
The processor executes the steps of the above face matching method:
extracting multiple specified facial features of the face to be tested;
obtaining the corresponding accurate feature value for each facial feature to obtain multiple first facial feature values, where the accurate feature values are computed by matching the actual parts of the facial features using projection and template matching;
sorting all the first facial feature values in a specified order to generate a first matrix, where the first matrix is a single-row or single-column matrix;
obtaining multiple specified first faces stored in a face database, and obtaining a second matrix composed of the second facial feature values corresponding to each first face, where the multiple specified first faces are obtained from the face database by identifying the gender or age bracket of the face to be tested, and where, for each first face in the second matrix, all of its second facial feature values are arranged in the same order as the first facial feature values in the first matrix;
replicating each first facial feature value in the first matrix into as many copies as there are first faces in the second matrix, and sorting all the replicated first facial feature values in the order of the first facial feature values in the first matrix to form a third matrix;
subtracting the second matrix from the third matrix to obtain a fourth matrix;
taking the absolute value of each value in the fourth matrix to obtain a fifth matrix;
adding up the absolute values of all the values corresponding to each first face in the fifth matrix to obtain the total match value of each first face;
comparing all the total match values to obtain the minimum of the total match values;
determining whether the minimum of the total match values is smaller than a face threshold;
if so, obtaining the face corresponding to the minimum of the total match values, and looking up, from the face database, the face corresponding to the minimum of the total match values.
Those skilled in the art can understand that the structure shown in FIG. 3 is merely a block diagram of a portion of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution of this application is applied.
An embodiment of this application further provides a computer storage medium on which a computer program is stored; the computer-readable storage medium may be non-volatile or volatile. When executed by a processor, the computer program implements a face matching method, specifically:
extracting multiple specified facial features of the face to be tested;
obtaining the corresponding accurate feature value for each facial feature to obtain multiple first facial feature values, where the accurate feature values are computed by matching the actual parts of the facial features using projection and template matching;
sorting all the first facial feature values in a specified order to generate a first matrix, where the first matrix is a single-row or single-column matrix;
obtaining multiple specified first faces stored in a face database, and obtaining a second matrix composed of the second facial feature values corresponding to each first face, where the multiple specified first faces are obtained from the face database by identifying the gender or age bracket of the face to be tested, and where, for each first face in the second matrix, all of its second facial feature values are arranged in the same order as the first facial feature values in the first matrix;
replicating each first facial feature value in the first matrix into as many copies as there are first faces in the second matrix, and sorting all the replicated first facial feature values in the order of the first facial feature values in the first matrix to form a third matrix;
subtracting the second matrix from the third matrix to obtain a fourth matrix;
taking the absolute value of each value in the fourth matrix to obtain a fifth matrix;
adding up the absolute values of all the values corresponding to each first face in the fifth matrix to obtain the total match value of each first face;
comparing all the total match values to obtain the minimum of the total match values;
determining whether the minimum of the total match values is smaller than a face threshold;
if so, obtaining the face corresponding to the minimum of the total match values, and looking up, from the face database, the face corresponding to the minimum of the total match values.
In summary, in the face matching method, apparatus, computer device, and storage medium provided in the embodiments of this application, the specified feature values of the face to be tested are formed into a first matrix, and the facial feature values of the specified first faces in the face database are gathered into a multi-order second matrix. The first matrix is expanded into a third matrix, which is subtracted from the second matrix; taking absolute values and summing then yields the total match value for each face. The minimum of the total match values is compared with the face threshold, and if it is smaller than the face threshold, the face in the face database matching the face to be tested is obtained, so that faces are matched quickly and face matching time is shortened.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be accomplished by instructing relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, the program may include the processes of the embodiments of the methods above. Any reference to memory, storage, databases, or other media provided by this application or used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), memory-bus (Rambus) direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above are only preferred embodiments of this application and do not limit the patent scope of this application. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A face matching method, comprising the following steps:
    extracting multiple specified facial features of a face to be tested;
    obtaining a corresponding accurate feature value for each of the facial features to obtain multiple first facial feature values, wherein the accurate feature values are computed by matching the actual parts of the facial features using projection and template matching;
    sorting all the first facial feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
    obtaining multiple specified first faces stored in a face database, and obtaining a second matrix composed of second facial feature values corresponding to each of the first faces, wherein the multiple specified first faces are obtained from the face database by identifying the gender or age bracket of the face to be tested, and for each first face in the second matrix, all of its second facial feature values are arranged in the same order as the first facial feature values in the first matrix;
    replicating each first facial feature value in the first matrix into as many copies as there are first faces in the second matrix, and sorting all the replicated first facial feature values in the order of the first facial feature values in the first matrix to form a third matrix;
    subtracting the second matrix from the third matrix to obtain a fourth matrix;
    taking the absolute value of each value in the fourth matrix to obtain a fifth matrix;
    adding up the absolute values of all the values corresponding to each first face in the fifth matrix to obtain a total match value for each first face;
    comparing all the total match values to obtain the minimum of the total match values;
    determining whether the minimum of the total match values is smaller than a face threshold; and
    if so, obtaining the face corresponding to the minimum of the total match values, and looking up, from the face database, the face corresponding to the minimum of the total match values.
  2. The face matching method according to claim 1, wherein before the step of obtaining the multiple specified first faces stored in the face database and obtaining the second matrix composed of the second facial feature values corresponding to each first face:
    identifying the gender of the face to be tested, wherein the gender includes male and female; and
    looking up, from the face database according to the gender of the face to be tested, the first faces whose gender matches that of the face to be tested to form the second matrix.
  3. The face matching method according to claim 1, wherein before the step of obtaining the multiple specified first faces stored in the face database and obtaining the second matrix composed of the second facial feature values corresponding to each first face:
    identifying the age bracket of the face to be tested, wherein the age brackets include infant, youth, middle age, and old age; and
    looking up, from the face database according to the age bracket of the face to be tested, the faces whose age bracket matches that of the face to be tested to form the second matrix.
  4. The face matching method according to claim 1, wherein the step of extracting the multiple specified facial features of the face to be tested comprises:
    performing image preprocessing on the acquired image of the face to be tested; and
    inputting the preprocessed image of the face to be tested into an extraction model for extraction, to extract the multiple facial features of the face to be tested, wherein the extraction model is trained on known face images based on a convolutional neural network.
  5. The face matching method according to claim 4, wherein the method of performing image preprocessing on the acquired image of the face to be tested includes, but is not limited to, one or more of the following:
    light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, noise filtering, and sharpening of the face image.
  6. The face matching method according to claim 1, wherein the step of obtaining the face corresponding to the minimum of the total match values and looking up, from the face database, the face corresponding to the minimum of the total match values comprises:
    determining whether the absolute value of each facial feature value in the fifth matrix corresponding to the minimum of the total match values is smaller than the corresponding preset facial-feature threshold; and
    if so, taking the face corresponding to the minimum of the total match values as the best-matching face.
  7. A face matching apparatus, comprising:
    an extraction module, configured to extract multiple specified facial features of a face to be tested;
    a first acquisition module, configured to obtain a corresponding accurate feature value for each of the facial features to obtain multiple first facial feature values, wherein the accurate feature values are computed by matching the actual parts of the facial features using projection and template matching;
    a sorting module, configured to sort all the first facial feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
    a second acquisition module, configured to obtain multiple specified first faces stored in a face database and obtain a second matrix composed of second facial feature values corresponding to each of the first faces, wherein the multiple specified first faces are obtained from the face database by identifying the gender or age bracket of the face to be tested, and for each first face in the second matrix, all of its second facial feature values are arranged in the same order as the first facial feature values in the first matrix;
    a first generation module, configured to replicate each first facial feature value in the first matrix into as many copies as there are first faces in the second matrix, and to sort all the replicated first facial feature values in the order of the first facial feature values in the first matrix to form a third matrix;
    a third acquisition module, configured to subtract the second matrix from the third matrix to obtain a fourth matrix;
    a fourth acquisition module, configured to take the absolute value of each value in the fourth matrix to obtain a fifth matrix;
    a fifth acquisition module, configured to add up the absolute values of all the values corresponding to each first face in the fifth matrix to obtain a total match value for each first face;
    a sixth acquisition module, configured to compare all the total match values to obtain the minimum of the total match values;
    a determination module, configured to determine whether the minimum of the total match values is smaller than a face threshold; and
    a first lookup module, configured to, if so, obtain the face corresponding to the minimum of the total match values and look up, from the face database, the face corresponding to the minimum of the total match values.
  8. The face matching apparatus according to claim 7, wherein the extraction module comprises:
    a preprocessing unit, configured to perform image preprocessing on the acquired image of the face to be tested; and
    an extraction unit, configured to input the preprocessed image of the face to be tested into an extraction model for extraction, to extract the multiple facial features of the face to be tested, wherein the extraction model is trained on known face images based on a convolutional neural network.
  9. The face matching apparatus according to claim 7, wherein the fifth acquisition module comprises:
    a determination unit, configured to determine whether the absolute value of each facial feature value in the fifth matrix corresponding to the minimum of the total match values is smaller than the corresponding preset facial-feature threshold; and
    an execution unit, configured to, if so, take the face corresponding to the minimum of the total match values as the best-matching face.
  10. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein when the processor executes the computer program, a face matching method is implemented:
    extracting multiple specified facial features of a face to be tested;
    obtaining a corresponding accurate feature value for each of the facial features to obtain multiple first facial feature values, wherein the accurate feature values are computed by matching the actual parts of the facial features using projection and template matching;
    sorting all the first facial feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
    obtaining multiple specified first faces stored in a face database, and obtaining a second matrix composed of second facial feature values corresponding to each of the first faces, wherein the multiple specified first faces are obtained from the face database by identifying the gender or age bracket of the face to be tested, and for each first face in the second matrix, all of its second facial feature values are arranged in the same order as the first facial feature values in the first matrix;
    replicating each first facial feature value in the first matrix into as many copies as there are first faces in the second matrix, and sorting all the replicated first facial feature values in the order of the first facial feature values in the first matrix to form a third matrix;
    subtracting the second matrix from the third matrix to obtain a fourth matrix;
    taking the absolute value of each value in the fourth matrix to obtain a fifth matrix;
    adding up the absolute values of all the values corresponding to each first face in the fifth matrix to obtain a total match value for each first face;
    comparing all the total match values to obtain the minimum of the total match values;
    determining whether the minimum of the total match values is smaller than a face threshold; and
    if so, obtaining the face corresponding to the minimum of the total match values, and looking up, from the face database, the face corresponding to the minimum of the total match values.
  11. The computer device according to claim 10, wherein before the step of obtaining the multiple specified first faces stored in the face database and obtaining the second matrix composed of the second facial feature values corresponding to each first face:
    identifying the gender of the face to be tested, wherein the gender includes male and female; and
    looking up, from the face database according to the gender of the face to be tested, the first faces whose gender matches that of the face to be tested to form the second matrix.
  12. The computer device according to claim 10, wherein before the step of obtaining the multiple specified first faces stored in the face database and obtaining the second matrix composed of the second facial feature values corresponding to each first face:
    identifying the age bracket of the face to be tested, wherein the age brackets include infant, youth, middle age, and old age; and
    looking up, from the face database according to the age bracket of the face to be tested, the faces whose age bracket matches that of the face to be tested to form the second matrix.
  13. The computer device according to claim 10, wherein the step of extracting the multiple specified facial features of the face to be tested comprises:
    performing image preprocessing on the acquired image of the face to be tested; and
    inputting the preprocessed image of the face to be tested into an extraction model for extraction, to extract the multiple facial features of the face to be tested, wherein the extraction model is trained on known face images based on a convolutional neural network.
  14. The computer device according to claim 13, wherein the method of performing image preprocessing on the acquired image of the face to be tested includes, but is not limited to, one or more of the following:
    light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, noise filtering, and sharpening of the face image.
  15. The computer device according to claim 10, wherein the step of obtaining the face corresponding to the minimum of the total match values and looking up, from the face database, the face corresponding to the minimum of the total match values comprises:
    determining whether the absolute value of each facial feature value in the fifth matrix corresponding to the minimum of the total match values is smaller than the corresponding preset facial-feature threshold; and
    if so, taking the face corresponding to the minimum of the total match values as the best-matching face.
  16. A computer storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, a face matching method is implemented, the face matching method comprising the following steps:
    extracting multiple specified facial features of a face to be tested;
    obtaining a corresponding accurate feature value for each of the facial features to obtain multiple first facial feature values, wherein the accurate feature values are computed by matching the actual parts of the facial features using projection and template matching;
    sorting all the first facial feature values in a specified order to generate a first matrix, wherein the first matrix is a single-row or single-column matrix;
    obtaining multiple specified first faces stored in a face database, and obtaining a second matrix composed of second facial feature values corresponding to each of the first faces, wherein the multiple specified first faces are obtained from the face database by identifying the gender or age bracket of the face to be tested, and for each first face in the second matrix, all of its second facial feature values are arranged in the same order as the first facial feature values in the first matrix;
    replicating each first facial feature value in the first matrix into as many copies as there are first faces in the second matrix, and sorting all the replicated first facial feature values in the order of the first facial feature values in the first matrix to form a third matrix;
    subtracting the second matrix from the third matrix to obtain a fourth matrix;
    taking the absolute value of each value in the fourth matrix to obtain a fifth matrix;
    adding up the absolute values of all the values corresponding to each first face in the fifth matrix to obtain a total match value for each first face;
    comparing all the total match values to obtain the minimum of the total match values;
    determining whether the minimum of the total match values is smaller than a face threshold; and
    if so, obtaining the face corresponding to the minimum of the total match values, and looking up, from the face database, the face corresponding to the minimum of the total match values.
  17. The computer storage medium according to claim 16, wherein before the step of obtaining the multiple specified first faces stored in the face database and obtaining the second matrix composed of the second facial feature values corresponding to each first face:
    identifying the gender of the face to be tested, wherein the gender includes male and female; and
    looking up, from the face database according to the gender of the face to be tested, the first faces whose gender matches that of the face to be tested to form the second matrix.
  18. The computer storage medium according to claim 16, wherein before the step of obtaining the multiple specified first faces stored in the face database and obtaining the second matrix composed of the second facial feature values corresponding to each first face:
    identifying the age bracket of the face to be tested, wherein the age brackets include infant, youth, middle age, and old age; and
    looking up, from the face database according to the age bracket of the face to be tested, the faces whose age bracket matches that of the face to be tested to form the second matrix.
  19. The computer storage medium according to claim 16, wherein the step of extracting the multiple specified facial features of the face to be tested comprises:
    performing image preprocessing on the acquired image of the face to be tested; and
    inputting the preprocessed image of the face to be tested into an extraction model for extraction, to extract the multiple facial features of the face to be tested, wherein the extraction model is trained on known face images based on a convolutional neural network.
  20. The computer storage medium according to claim 16, wherein the step of obtaining the face corresponding to the minimum of the total match values and looking up, from the face database, the face corresponding to the minimum of the total match values comprises:
    determining whether the absolute value of each facial feature value in the fifth matrix corresponding to the minimum of the total match values is smaller than the corresponding preset facial-feature threshold; and
    if so, taking the face corresponding to the minimum of the total match values as the best-matching face.
PCT/CN2020/098807 2019-07-03 2020-06-29 Face matching method and apparatus, computer device and storage medium WO2021000832A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910598842.4A CN110458007B (zh) 2019-07-03 2019-07-03 Face matching method and apparatus, computer device and storage medium
CN201910598842.4 2019-07-03

Publications (1)

Publication Number Publication Date
WO2021000832A1 true WO2021000832A1 (zh) 2021-01-07

Family

ID=68482084

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098807 WO2021000832A1 (zh) 2019-07-03 2020-06-29 Face matching method and apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN110458007B (zh)
WO (1) WO2021000832A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505765A (zh) * 2021-09-09 2021-10-15 北京轻松筹信息技术有限公司 Age prediction method and apparatus based on user avatar, and electronic device
CN114140861A (zh) * 2021-12-13 2022-03-04 中电云数智科技有限公司 Face detection and deduplication method and apparatus

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458007B (zh) 2019-07-03 2023-10-27 Ping An Technology (Shenzhen) Co., Ltd. Face matching method and apparatus, computer device and storage medium
CN111079718A (zh) * 2020-01-15 2020-04-28 中云智慧(北京)科技有限公司 Rapid face comparison method
CN111552828A (zh) * 2020-04-26 2020-08-18 上海天诚比集科技有限公司 A 1-to-N face comparison method
CN113743354A (zh) * 2021-09-15 2021-12-03 Yu Honghai A method for implementing face recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016110244A (ja) * 2014-12-03 2016-06-20 Fuji Xerox Co., Ltd. Face recognition device
CN106954075A (zh) * 2016-01-06 2017-07-14 睿致科技股份有限公司 Image processing device and image compression method thereof
CN109389015A (zh) * 2017-08-10 2019-02-26 丽宝大数据股份有限公司 Facial similarity evaluation method and electronic device
CN110458007A (zh) * 2019-07-03 2019-11-15 Ping An Technology (Shenzhen) Co., Ltd. Face matching method and apparatus, computer device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740838A (zh) * 2016-02-06 2016-07-06 Hebei University Recognition method for face images of different scales
CN108985232A (zh) * 2018-07-18 2018-12-11 Ping An Technology (Shenzhen) Co., Ltd. Face image comparison method and apparatus, computer device and storage medium
CN109214273A (zh) * 2018-07-18 2019-01-15 Ping An Technology (Shenzhen) Co., Ltd. Face image comparison method and apparatus, computer device and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505765A (zh) * 2021-09-09 2021-10-15 北京轻松筹信息技术有限公司 Age prediction method and apparatus based on user avatar, and electronic device
CN113505765B (zh) * 2021-09-09 2022-02-08 北京轻松筹信息技术有限公司 Age prediction method and apparatus based on user avatar, and electronic device
CN114140861A (zh) * 2021-12-13 2022-03-04 中电云数智科技有限公司 Face detection and deduplication method and apparatus

Also Published As

Publication number Publication date
CN110458007B (zh) 2023-10-27
CN110458007A (zh) 2019-11-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20835025

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20835025

Country of ref document: EP

Kind code of ref document: A1