CN108710823B - Face similarity comparison method - Google Patents

Face similarity comparison method

Info

Publication number
CN108710823B
CN108710823B
Authority
CN
China
Prior art keywords
feature
blocks
points
feature points
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810311000.1A
Other languages
Chinese (zh)
Other versions
CN108710823A (en)
Inventor
郭婧
刘尉
陈祖希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinling Institute of Technology filed Critical Jinling Institute of Technology
Priority to CN201810311000.1A
Publication of CN108710823A
Application granted
Publication of CN108710823B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G06F18/24147 - Distances to closest patterns, e.g. nearest neighbour classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a face similarity comparison method that extracts feature points from face images, divides the images into feature blocks containing those points, and matches the feature points hierarchically, first within the feature blocks and then within further-divided blocks, while computing the proportions of similar blocks and of matched feature points, so as to compare the similarity of two face images. The method is highly practical: it treats the features of the face images in finer detail, and its comparison calculation is correspondingly more rigorous.

Description

Face similarity comparison method
Technical Field
The invention belongs to the field of artificial intelligence and particularly relates to a novel face similarity comparison method.
Background
With the rapid development of computer networks and multimedia technology, image-based face detection, recognition and retrieval have become particularly active research areas. One important research subject among them is face similarity measurement, a key foundation of face detection, recognition and retrieval; research on face similarity therefore has substantial practical value and research significance.
Disclosure of Invention
In view of the above, the present invention provides a novel face similarity comparison method to solve, or partially solve, the problem of face similarity evaluation.
Specifically, the invention adopts the following technical scheme:
a face similarity comparison method, characterized in that the method comprises: 1) setting characteristic points: collecting and photographing two face images, and setting characteristic points on the face images, wherein the characteristic points are points with symbolic characteristics on the face images; 2) dividing the characteristic blocks: dividing two face images into a plurality of feature blocks respectively, wherein each feature block comprises at least two feature points, the shape of each feature block on each face is indefinite, but the feature blocks on the two faces are in one-to-one correspondence, and the corresponding feature blocks have the same feature points; 3) comparing the characteristic blocks: comparing similarity of a pair of corresponding feature blocks on two faces, wherein the feature blocks are amplified at the same expansion rate, the amplification times are the same, the corresponding feature points are matched after amplification, a connection line is connected between the corresponding feature points, two feature points of which the connected line is a horizontal line are matched feature points, the number of the matched feature points is recorded as m, if all the corresponding feature points in a pair of feature blocks are matched feature points, the pair of feature blocks are similar blocks, the number of the similar blocks is recorded as n, the proportion of the similar blocks to all the feature blocks is calculated, if the proportion is more than 50%, the matched feature points in other feature blocks except the similar blocks are further counted, the proportion of the matched feature points in the feature blocks to all the feature points is calculated as the weight of other feature blocks, and the feature blocks are arranged in a descending order according to the weight, performing further division on the feature blocks ranked in the top 50% if more than one feature point is contained in the feature blocks, obtaining divided feature blocks, wherein the number of the feature points contained in the divided feature blocks is only 50% of the number of the feature points in the feature blocks before the division, performing further feature point matching, if all the feature points in a pair of the divided feature blocks are matched feature points, the divided feature blocks are secondary similar blocks, recording the proportion of the secondary similar blocks to the number of the divided feature blocks, then counting the number of the matched feature points in other divided feature blocks, and performing face similarity measurement through the following formula:
[Formula (1), rendered only as an image in the original publication: the face similarity measure w as a function of the quantities defined below]

wherein m is the number of matched feature points in a feature block, n is the number of similar blocks, j is the number of matched feature points in a divided feature block, c is the number of secondary similar blocks, N is the number of divided feature blocks, the two adjustment coefficients of the feature blocks and the divided feature blocks (also rendered as images in the original; written here as α and β) may be any real numbers, and w is the value of the face similarity measure.
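Because the published equation survives only as an image reference, its exact form cannot be reproduced here. Purely as an illustration, one plausible shape consistent with the variable definitions above, stated explicitly as an assumption rather than the published formula, is:

```latex
% Hypothetical reconstruction; the published formula exists only as an image.
% B: total number of feature blocks, M: total number of feature points,
% J: total number of feature points in the divided blocks.
w = \alpha\left(\frac{n}{B} + \frac{\sum_i m_i}{M}\right)
  + \beta\left(\frac{c}{N} + \frac{\sum_k j_k}{J}\right)
```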
Preferably, the points with landmark features on the face taken as feature points include points related to the five sense organs. Further, the feature points include the two edge points of each eyebrow, the middle point of each eyebrow, the eyeball point of each eye, the nose tip point on the nose, the two corner points of the mouth, and the middle point of the mouth.
In addition, when the feature points are matched, if a matched feature point is not clear enough, it is locally magnified; secondary feature points are then taken from the locally magnified region for further matching, and the matched secondary feature points obtained are used as matched feature points.
The beneficial effects of the invention are as follows: the novel face similarity comparison method compares the similar feature points of the face images, encloses the feature points in feature blocks, extracts the feature blocks hierarchically, and thereby compares the similarity of the two face images.
Detailed Description
Face recognition is currently a research hotspot in computer vision and machine learning, with broad application prospects. Obtaining an effective facial feature representation and designing a powerful classifier have become the key research questions, and uncontrollable factors in real environments make them harder. With the emergence and development of compressive sensing theory, face recognition methods based on sparse coding models have attracted wide attention and great interest from researchers. The first of these was a face recognition method based on sparse representation classification (SRC), which performs well in robust face recognition and handles illumination changes, noise and occlusion. Its basic idea is as follows: if training samples of known but different classes are vectorized in the spatial domain or a feature domain to form a representation dictionary, then a test image belonging to one of those classes can, after the same vectorization, be represented by a sparse code over that dictionary; the resulting non-zero coefficients concentrate on the representation coefficients of the samples of the same class, so the error of representing the test image linearly by the training samples of the correct class is minimized, and the class of the test image is determined accordingly.
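As background context only, and not the patented method, the class-wise residual rule described above can be sketched as follows, assuming vectorized training faces form the dictionary columns and using orthogonal matching pursuit from scikit-learn as the sparse solver:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(D, labels, y, n_nonzero=30):
    """Classify test vector y by sparse coding over dictionary D.

    D: (d, n) matrix whose columns are vectorized training faces.
    labels: (n,) array of class labels, one per column of D.
    """
    D = D / np.linalg.norm(D, axis=0)                    # unit-norm atoms
    x = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)   # sparse code of y
    best_class, best_err = None, np.inf
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)              # keep class-c coefficients
        err = np.linalg.norm(y - D @ x_c)                # class-wise residual
        if err < best_err:
            best_class, best_err = c, err
    return best_class
```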
Many experts and scholars at home and abroad have carried out a great deal of research on SRC-based face recognition. To address the excessive dimensionality caused by using an occlusion dictionary, Yang et al. proposed an occlusion dictionary based on the Gabor transform to reduce the computational complexity of the system. Since SRC solves for the coding coefficients under l1-norm regularization, which is computationally expensive, Zhang et al. proposed coding with a regularized l2 norm instead of the regularized l1 norm, introducing the concept of collaborative representation classification (CRC). SRC can be regarded as a generalization of nearest-neighbor and nearest-feature-subspace classification; when the feature dimension is sufficiently high, the choice of feature representation does not significantly affect the final recognition performance of SRC-based face recognition. When a low feature dimension is chosen, however, the degrees of freedom of the sparse representation increase, and the classification performance based on sparse representation degrades considerably. Wang et al. proposed locality-constrained linear coding (LLC), which exploits the locality constraints between samples to make the regularized coding coefficients nearly as sparse as sparse-coding coefficients, and which can also be used effectively for image classification.
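The l2-regularized (collaborative) code has a closed-form solution, which is what makes CRC cheaper than l1 minimization; a minimal sketch, with the regularization weight lam as an assumed value:

```python
import numpy as np

def crc_code(D, y, lam=1e-3):
    """Collaborative representation: x = (D^T D + lam*I)^{-1} D^T y."""
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
```

Classification then uses the same class-wise residual rule as SRC; in Zhang et al.'s formulation each class residual is additionally normalized by the norm of that class's coefficients.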
Chao et al propose an SRC face recognition method based on location constraint and group sparsity constraint. Likewise, Lu et al and Guo et al propose SRC (Weighted SRC, WSRC) methods based on location (similarity) weighting, respectively. Timofte et al and Waqas et al propose weighted CRC, etc. for face recognition, respectively. Therefore, position (or similarity) information between the test sample and the training sample is embedded in the linear/sparse coding representation classification process, so that the discrimination capability of the coding coefficient is effectively improved, and the classification performance is enhanced. However, in an actual face recognition application, because the face image to be recognized acquired under an uncontrolled scene may have expression changes and intentional partial occlusion and disguise, the image global similarity measurement is difficult to truly reflect the position relationship of each other, so that the weighted coding based on the image global similarity represents that the face classification performance is reduced. The method is a problem which is worthy of being explored in the research of a position constraint framework-based weighted coding representation face recognition method. Therefore, the face recognition based on the maximum similarity embedded sparse representation of the image blocks is discussed aiming at the problems of expression change, partial occlusion and camouflage of the uncontrolled face image. The similarity between corresponding blocks is calculated by carrying out non-overlapping blocking on a training image and a test image, the similarity between the images is measured by using the maximum value of the similarity, and the extracted maximum block similarity information is embedded into sparse code representation classification, so that the stability of low-dimensional feature selection and sparse code removal and the recognition performance of a system are effectively improved.
To make the technical problems addressed by the present invention, its technical solutions and its advantageous effects clearer, the invention is described in detail below with reference to embodiments. It should be noted that the specific embodiments described here only illustrate the invention and do not limit it, and products that achieve the same functions fall within the scope of the invention. The specific method comprises the following steps:
the method comprises the steps of collecting and photographing two face images through camera shooting collection equipment, setting feature points on the face images, wherein the feature points are points with symbolic features on the face images and comprise two points of the edges of eyebrows, middle points of the eyebrows, the points of eyeballs in the eyes, the points of nose tips on the nose, two points of the edges of mouths and the middle points of the mouths, carrying out block division on the two face images and dividing the two face images into a plurality of feature blocks, wherein the shape of each feature block is indefinite and at least comprises two feature points, and the feature blocks are divided on the two face images in a one-to-one correspondence mode, namely the feature blocks appear in pairs, and each pair of feature blocks are respectively arranged on one face image and have the same feature points. The same feature points are a combination of any several points, all of which are two points of the edge of the eyebrow, the middle point of the eyebrow, the point of the eyeball in the eye, the point of the tip of the nose, the two points of the edge of the mouth, and the middle point of the mouth.
A pair of feature blocks is matched to compare similarity as follows:
A pair of feature blocks is expanded at the same expansion rate, the expansion rate being the speed at which a feature block is enlarged, which equals the magnification speed, so both blocks are magnified equally. After expansion, the feature points contained in the expanded feature blocks are matched, and corresponding feature points are connected; corresponding feature points are, for example, the two edge points of an eyebrow or the middle points of the eyebrows. Two feature points whose connecting line is horizontal are matched feature points, and their number is m, a positive integer greater than 0. If a matched feature point is not clear enough, it must be locally magnified; after local magnification, secondary feature points, i.e. points selected from the locally magnified region, are used for further matching. The corresponding further-matched points are connected, and if all the connecting lines are horizontal, the feature block containing the matched feature points is set as a similar block. The similar blocks are counted; their number n is a positive integer, and dividing n by the number of feature blocks gives the proportion of similar blocks. If this proportion exceeds 50%, the matched feature points in the feature blocks other than the similar blocks are counted further, and the ratio of matched feature points to all feature points in each such block is taken as that block's weight. The feature blocks are sorted by weight in descending order, and each block in the top 50% that contains more than one feature point is divided into further feature blocks, the divided feature blocks each retaining only 50% of the feature points of the original block before division. Feature point matching is then repeated to check which points match; a divided feature block in which the full set of feature points is matched is a secondary similar block. The proportion of secondary similar blocks among the divided feature blocks is recorded, the matched feature points in the divided feature blocks other than the secondary similar blocks are counted along with their proportion of all feature points, and all data of the whole process are recorded in one table. Finally, the face similarity is measured from these statistics using the following formula:
[Formula (1), as above, rendered only as an image in the original publication]

wherein m is the number of matched feature points in a pair of feature blocks, n is the number of similar blocks, j is the number of matched feature points in a divided feature block, c is the number of secondary similar blocks, and N is the number of divided feature blocks; the adjustment coefficients of the feature blocks and the divided feature blocks (rendered as images in the original; written here as α and β) may be any real numbers, k is a real number, and all data of the whole process are recorded in one table. w is the value of the face similarity measure: the higher the value, the more similar the two face images.
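Putting the matching rule and the measurement together, the following sketch reuses the structures from the earlier layout sketch; the horizontal-line test is implemented as equal y coordinates within a tolerance, and the scoring follows the hypothetical formula shape given earlier, so both the tolerance and the scoring shape are assumptions:

```python
def count_matches(pts_a, pts_b, scale=2.0, tol=1.0):
    """Count point pairs whose connecting line is horizontal after both
    blocks are magnified by the same factor (tol is an assumed tolerance)."""
    m = 0
    for name in pts_a:
        if abs(pts_a[name][1] * scale - pts_b[name][1] * scale) <= tol:
            m += 1
    return m

def compare_faces(face_a, face_b, alpha=1.0, beta=1.0):
    """face_*: dict mapping point name -> (x, y). Returns the similarity w."""
    m_per_block, similar = {}, []
    for blk in FEATURE_BLOCKS:
        pa, pb = block_points(face_a, blk), block_points(face_b, blk)
        m_per_block[blk] = count_matches(pa, pb)
        if m_per_block[blk] == len(pa):          # every point matched
            similar.append(blk)
    n, B = len(similar), len(FEATURE_BLOCKS)
    M = sum(len(v) for v in FEATURE_BLOCKS.values())
    first = n / B + sum(m_per_block.values()) / M

    second = 0.0
    if n / B > 0.5:                              # proceed to subdivision
        rest = [b for b in FEATURE_BLOCKS if b not in similar]
        # weight = matched points / all points in the block, descending
        rest.sort(key=lambda b: m_per_block[b] / len(FEATURE_BLOCKS[b]),
                  reverse=True)
        top = rest[: max(1, len(rest) // 2)]     # top 50% by weight
        c = j_total = N_sub = J = 0
        for blk in top:
            names = FEATURE_BLOCKS[blk]
            if len(names) <= 1:
                continue
            half = len(names) // 2               # divided blocks keep 50% of points
            for part in (names[:half], names[half:half * 2]):
                pa = {k: face_a[k] for k in part}
                pb = {k: face_b[k] for k in part}
                j = count_matches(pa, pb)
                N_sub += 1
                J += len(part)
                if j == len(part):
                    c += 1                       # secondary similar block
                else:
                    j_total += j
        if N_sub:
            second = c / N_sub + j_total / J
    return alpha * first + beta * second         # higher w means more similar
```

Given two landmark dictionaries face_a and face_b, compare_faces(face_a, face_b) returns w, with larger values indicating greater similarity between the two face images.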
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.
While the embodiments of the present invention have been described in detail with reference to the specific embodiments, the present invention is not limited to the embodiments described above, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.

Claims (4)

1. A face similarity comparison method, characterized in that the method comprises: 1) setting feature points: two face images are captured, and feature points are set on each image, the feature points being points with landmark features on the face image; 2) dividing feature blocks: each of the two face images is divided into a plurality of feature blocks, each feature block containing at least two feature points, wherein the shape of a feature block on each face is not fixed, but the blocks on the two faces correspond one to one and corresponding blocks contain the same feature points; 3) comparing feature blocks: each pair of corresponding feature blocks on the two faces is compared for similarity, wherein the feature blocks are magnified at the same expansion rate so that the magnification factor is identical; after magnification, corresponding feature points are matched by drawing a connecting line between them, two feature points whose connecting line is horizontal being matched feature points, the number of matched feature points being recorded as m; if all corresponding feature points in a pair of feature blocks are matched, that pair is a similar block, the number of similar blocks being recorded as n, and the proportion of similar blocks among all feature blocks is calculated; if this proportion exceeds 50%, the matched feature points in the remaining feature blocks are counted further, the proportion of matched feature points to all feature points in each such block being taken as that block's weight; the blocks are sorted by weight in descending order, and each block in the top 50% that contains more than one feature point is divided further, the divided feature blocks each containing only 50% of the feature points of the block before division; feature point matching is then repeated, and if all feature points in a pair of divided feature blocks are matched, that pair is a secondary similar block; the proportion of secondary similar blocks among the divided feature blocks is recorded, the matched feature points in the other divided blocks are counted, and the face similarity is measured by the following formula:
[Formula (1), rendered only as an image in the original publication]

wherein m is the number of matched feature points in a feature block, n is the number of similar blocks, j is the number of matched feature points in a divided feature block, c is the number of secondary similar blocks, N is the number of divided feature blocks, the two adjustment coefficients of the feature blocks and the divided feature blocks (also rendered as images in the original; written here as α and β) may be any real numbers, and w is the value of the face similarity measure.
2. The face similarity comparison method according to claim 1, wherein the points with landmark features on the face taken as the feature points include points related to the five sense organs.
3. The face similarity comparison method according to claim 2, wherein the feature points include the two edge points of each eyebrow, the middle point of each eyebrow, the eyeball point of each eye, the nose tip point, the two corner points of the mouth, and the middle point of the mouth.
4. The face similarity comparison method according to claim 1, wherein, when matching the feature points, if a matched feature point is not clear enough, it is locally magnified, secondary feature points are then taken from the locally magnified region for further matching, and the matched secondary feature points obtained are used as matched feature points.
CN201810311000.1A 2018-04-09 2018-04-09 Face similarity comparison method Active CN108710823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810311000.1A CN108710823B (en) 2018-04-09 2018-04-09 Face similarity comparison method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810311000.1A CN108710823B (en) 2018-04-09 2018-04-09 Face similarity comparison method

Publications (2)

Publication Number Publication Date
CN108710823A CN108710823A (en) 2018-10-26
CN108710823B true CN108710823B (en) 2022-04-19

Family

ID=63866534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810311000.1A Active CN108710823B (en) 2018-04-09 2018-04-09 Face similarity comparison method

Country Status (1)

Country Link
CN (1) CN108710823B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183653A1 (en) * 2006-01-31 2007-08-09 Gerard Medioni 3D Face Reconstruction from 2D Images
CN101833672A (en) * 2010-04-02 2010-09-15 清华大学 Sparse representation face identification method based on constrained sampling and shape feature
CN105740808A (en) * 2016-01-28 2016-07-06 北京旷视科技有限公司 Human face identification method and device
CN106980819A (en) * 2017-03-03 2017-07-25 竹间智能科技(上海)有限公司 Similarity judgement system based on human face five-sense-organ
CN107729855A (en) * 2017-10-25 2018-02-23 成都尽知致远科技有限公司 Mass data processing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DISCRIMINATIVE SPARSITY PRESERVING EMBEDDING FOR FACE RECOGNITION; Jian Lai et al.; 《IEEE》; 20140213; full text *
Sparse discriminative multi-manifold embedding for one-sample; Pengyue Zhang; 《IEEE》; 20160430; full text *

Also Published As

Publication number Publication date
CN108710823A (en) 2018-10-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant