CN107273817A - Face recognition method and system based on sparse representation and average hash - Google Patents

Face recognition method and system based on sparse representation and average hash

Info

Publication number
CN107273817A
CN107273817A
Authority
CN
China
Prior art keywords
face
sample
test sample
average
Prior art date
Legal status
Granted
Application number
CN201710379451.4A
Other languages
Chinese (zh)
Other versions
CN107273817B (en)
Inventor
刘治
曹丽君
许建中
朱耀文
辛阳
朱洪亮
曹艳坤
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201710379451.4A priority Critical patent/CN107273817B/en
Publication of CN107273817A publication Critical patent/CN107273817A/en
Application granted granted Critical
Publication of CN107273817B publication Critical patent/CN107273817B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/513 Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method and system based on sparse representation and average hash, comprising the following steps: preprocessing the face test sample and all face training samples: converting color face samples to grayscale images, and normalizing the face test sample and all face training samples; extracting the sparsity feature between samples using a sparse representation model: encoding the face test sample as a linear combination of all face training samples, and computing the sparse coefficient matrix of the face test sample over all face training samples; extracting the spatial structure feature inside each face sample: computing the average hash feature of the face test sample and of each face training sample; fusing the inter-sample sparsity feature with the internal spatial structure feature; and using the reconstruction error between the average hash feature of the face test sample and the reconstructed face test sample to decide the final class of the test sample.

Description

Face recognition method and system based on sparse representation and average hash
Technical field
The present invention relates to a face recognition method and system based on sparse representation and average hash.
Background technology
Face recognition is one of the core topics in computer vision. It is the process of extracting features from face images and classifying them: once the image features are obtained, further algorithmic analysis is applied to verify the identity shown in the image. Face recognition is a complex, integrated task that draws on pattern recognition, computer vision, and image processing, and it is widely applicable to security inspection, surveillance, criminal investigation and tracking, and identity verification. However, because of limited sample sizes, intra-class variability, inter-class similarity, and illumination changes, efficiently extracting face features from a face dataset and classifying faces accurately, so as to improve both the robustness and the accuracy of recognition, remains a central problem in the field. Research on robust, high-accuracy face recognition algorithms for small-sample face databases is therefore of great importance.
Sparse representation originates from compressed sensing theory. Its main idea is to build a dictionary from the training samples during classification and to represent a test sample sparsely as a linear combination over that dictionary; if the test sample belongs to a certain class, it can in theory be linearly represented by the training samples of that class. However, the sparse representation algorithm considers only the sparsity between samples and ignores the spatial structure information inside each sample. When face images are affected by strong illumination changes, traditional sparse representation becomes less robust and recognition accuracy drops markedly.
Summary of the invention
To solve the above problems, the present invention provides a face recognition method and system based on sparse representation and average hash. It improves the robustness of face feature extraction, helps to raise face recognition accuracy, and also speeds up recognition.
To achieve these goals, the present invention adopts the following technical scheme:
A face recognition method based on sparse representation and average hash, comprising the following steps:
Step (1): Preprocess the face test sample and all face training samples: convert color face samples to grayscale images, and normalize the face test sample and all face training samples;
Step (2): Extract the sparsity feature between samples using a sparse representation model: encode the face test sample as a linear combination of all face training samples, and compute the sparse coefficient matrix of the face test sample over all face training samples;
Step (3): Extract the spatial structure feature inside each face sample: compute the average hash feature of the face test sample and of each face training sample;
Step (4): Fuse the inter-sample sparsity feature with the internal spatial structure feature: take the average hash features of all face training samples as new training samples, and reconstruct the face test sample from the new training samples and the sparse coefficient matrix obtained in step (2);
Step (5): Use the reconstruction error between the average hash feature of the face test sample and the reconstructed face test sample to decide the final class of the test sample.
Step (1) normalizes the face test sample and all face training samples according to formulas (1) and (2):
$y = y/\|y\|$;  (1)
$x_j^c = x_j^c/\|x_j^c\|$;  (2)
where $y \in R^s$ is the column-vector representation of the face test sample, $s = w \times h$ is the size of the grayscale image, $w$ is the length of the grayscale image and $h$ its width; $x_j^c$ is the column-vector representation of the $j$-th face training sample of class $c$; $R$ denotes the real numbers; $\|y\|$ is the norm of the face test sample and $\|x_j^c\|$ the norm of the face training sample.
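As a concrete illustration of this preprocessing step, a minimal NumPy sketch (function and variable names are illustrative, not from the patent): each grayscale image is flattened to a column vector of length $s = w \times h$ and scaled to unit norm per formulas (1) and (2).

```python
import numpy as np

def normalize_sample(img):
    """Flatten a grayscale image of size w x h to a unit-norm vector of length s = w * h."""
    v = img.astype(np.float64).reshape(-1)  # column-vector representation
    return v / np.linalg.norm(v)            # y = y / ||y||, as in formulas (1)-(2)
```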
Step (2) comprises:
Step (21): Encode the face test sample as a linear combination of all face training samples:
$y = XA = [X_1, X_2, \ldots, X_n][A_1, A_2, \ldots, A_n]^T$;  (3)
where $A = [A_1, A_2, \ldots, A_c, \ldots, A_n]$ is the sparse coefficient matrix of all face training samples under the sparse representation, and $A_c$ is the sparse coefficient vector corresponding to the face training samples of class $c$;
$X = [X_1, X_2, \ldots, X_c, \ldots, X_n]$ is the set of column vectors of all face training samples, with $X_c \in R^{s \times m}$; $R^{s \times m}$ denotes the set of column vectors of $m$ face training samples of grayscale size $s$, and $X_c$ is the set of column vectors of the $m$ face training samples of class $c$.
Step (22): Solve the sparse coefficient matrix from the objective function (4) via the closed-form solution (5):
$\hat{A} = \arg\min_A \{\|y - XA\|_2^2 + \mu\|A\|_2^2\}$;  (4)
$\hat{A} = (X^T X + \mu I)^{-1} X^T y$;  (5)
where $\mu$ is a preset positive constant, $I$ is the identity matrix, $\hat{A}$ is the solution for the sparse coefficient matrix, and $X^T$ is the transpose of the matrix of training-sample column vectors.
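Formula (5) is the closed-form (ridge-regression) minimizer of objective (4). A sketch under the assumption that X stacks all normalized training vectors as columns (names are illustrative):

```python
import numpy as np

def sparse_coefficients(X, y, mu=0.01):
    """Solve A_hat = (X^T X + mu I)^(-1) X^T y, formulas (4)-(5).

    X  : (s, n*m) matrix of normalized training samples as columns.
    y  : (s,) normalized test sample.
    mu : small positive regularization constant.
    """
    G = X.T @ X + mu * np.eye(X.shape[1])  # X^T X + mu I
    return np.linalg.solve(G, X.T @ y)     # solve G A = X^T y rather than forming the inverse
```

Solving the linear system instead of explicitly inverting the matrix is numerically preferable but yields the same $\hat{A}$ as formula (5).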
Step (3) comprises:
Step (31): Compute the average of the $w \times h$ pixels of each face sample:
$\bar{k} = \frac{1}{w \times h}\sum_{p=1}^{w \times h} k_p^{c,j}$;  (6)
where $k_p^{c,j}$ is the $p$-th pixel value of the $j$-th ($j = 1, 2, \ldots, m$) face training sample of class $c$.
Step (32): Compare each pixel value of a face sample with that sample's average pixel value: if the pixel value is greater than or equal to the average, it is binarized to 1; if it is below the average, it is binarized to 0, as given by formulas (7) and (8):
if $k_p^{c,j} \geq \bar{k}$, then $h_p^{c,j} = 1$;  (7)
if $k_p^{c,j} < \bar{k}$, then $h_p^{c,j} = 0$;  (8)
Step (33):
Comparing every pixel value against the average in this way generates the average hash feature that characterizes the face sample.
The average hash feature of the $i$-th face training sample of class $c$ is denoted $h_i$ and consists of 0s and 1s; $H_c = [h_{m(c-1)+1}, \ldots, h_{mc}]$ and $H = [H_1, H_2, \ldots, H_c, \ldots, H_n]$.
The average hash feature of the face test sample is denoted $y_h$ and likewise consists of 0s and 1s. $H_c$ is the set of average hash features of the face training samples of class $c$, $H$ is the set of average hash features of all face training samples, $H_n$ is the set for class $n$, and $h_{m(c-1)+1}$ is the average hash feature of the $(m(c-1)+1)$-th face training sample, i.e., the first training sample of class $c$.
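A sketch of the average hash of steps (31)-(33), assuming each sample is a w × h grayscale array (names illustrative):

```python
import numpy as np

def average_hash(img):
    """Binarize each pixel against the image mean, formulas (6)-(8)."""
    k_bar = img.mean()                    # formula (6): average of the w*h pixels
    bits = (img >= k_bar)                 # formulas (7)-(8): 1 if >= mean, else 0
    return bits.astype(np.float64).reshape(-1)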
Step (4) proceeds as follows:
Reconstruct the face test sample $\hat{y}$ from the sparse coefficient vector $\hat{A}_c$ of each class and that class's new training samples $H_c$:
$\hat{y} = H_c\hat{A}_c$;  (9)
Step (5) proceeds as follows:
Measure the reconstruction error between the average hash feature $y_h$ of the face test sample and the reconstructed face test sample $\hat{y}$; if the reconstruction error $e_c$ of class $c$ is the smallest, the face test sample is judged to belong to class $c$. The reconstruction error is computed as:
$e_c = \|y_h - H_c\hat{A}_c\|_2$;  (10)
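Combining formulas (9) and (10), classification reduces to picking the class with the smallest reconstruction error. A sketch, assuming H_list holds the per-class hash matrices $H_c$ of shape (s, m) and A_hat is the stacked coefficient vector from formula (5) (names illustrative):

```python
import numpy as np

def classify(y_h, H_list, A_hat, m):
    """Return the class c minimizing e_c = ||y_h - H_c A_c||_2, formulas (9)-(10)."""
    errors = []
    for c, H_c in enumerate(H_list):
        A_c = A_hat[c * m:(c + 1) * m]              # coefficients of class c's m samples
        y_rec = H_c @ A_c                           # formula (9): reconstructed test sample
        errors.append(np.linalg.norm(y_h - y_rec))  # formula (10)
    return int(np.argmin(errors))
```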
A face recognition system based on sparse representation and average hash, comprising:
A preprocessing module, which preprocesses the face test sample and all face training samples: converts color face samples to grayscale images, and normalizes the face test sample and all face training samples;
An inter-sample sparsity feature extraction module, which uses the sparse representation model to encode the face test sample as a linear combination of all face training samples and computes the sparse coefficient matrix of the face test sample over all face training samples;
An internal spatial structure feature extraction module, which computes the average hash feature of the face test sample and of each face training sample;
A feature fusion module, which fuses the inter-sample sparsity feature with the internal spatial structure feature: takes the average hash features of all face training samples as new training samples and reconstructs the face test sample from the new training samples and the sparse coefficient matrix;
A decision module, which uses the reconstruction error between the average hash feature of the face test sample and the reconstructed face test sample to decide the final class of the test sample.
Beneficial effects of the present invention:
1. The face recognition method fuses the inter-sample sparsity feature of the sparse representation model with the internal structure feature of the average hash algorithm. For small-sample face datasets, it effectively overcomes the drawback that the sparse representation algorithm considers only inter-sample sparsity while ignoring the spatial structure information inside each sample; it also removes redundant information from the face feature vectors and better extracts the discriminative information of the face image. The robustness of face feature extraction is thereby improved, which helps to raise recognition accuracy while also speeding up recognition.
2. The method is highly robust to strong changes in illumination or contrast: when brightness or contrast varies, the average hash feature changes little, so the influence of brightness and gamma-correction adjustments is effectively avoided and face recognition accuracy improves.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is the flow chart of the algorithm for computing the average hash feature.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings and embodiments.
The method of the invention builds on a sparse representation model and uses the average hash to extract the spatial structure information inside each face sample. The inter-sample sparsity feature of the sparse representation model is fused with the internal structure feature of the average hash algorithm to reconstruct the face test sample, and the test sample is finally classified by reconstruction error. The overall flow, shown in Fig. 1, comprises the following steps:
Step 1: Preprocess the face test sample and all face training samples, i.e., convert color face samples to grayscale images, and normalize the face test sample and all face training samples.
The normalization formulas are:
$y = y/\|y\|$;
$x_j^i = x_j^i/\|x_j^i\|$;
where $y \in R^s$ is the face test sample, $s = w \times h$ is the size of the grayscale image, and $x_j^i$ denotes the $j$-th face training sample of class $i$.
Step 2: To extract the sparsity feature between samples, use the sparse representation model to encode the face test sample as a linear combination of all face training samples, and compute the sparse coefficient matrix of the test sample over all face training samples.
The procedure for obtaining the sparse coefficient matrix from the sparse representation model is:
1) Encode the face test sample as a linear combination of all face training samples:
$y = XA = [X_1, X_2, \ldots, X_n][A_1, A_2, \ldots, A_n]^T$
where $A = [A_1, A_2, \ldots, A_n]$ is the sparse coefficient matrix of all face training samples under the sparse representation and $A_j$ is the sparse coefficient vector corresponding to the face training samples of class $j$; $X = [X_1, X_2, \ldots, X_n]$ is the set of column vectors of all face training samples, and $X_c \in R^{s \times m}$ is the set of the $m$ training samples of class $c$.
2) Solve the sparse coefficient matrix with the following objective function and closed-form solution:
$\hat{A} = \arg\min_A \{\|y - XA\|_2^2 + \mu\|A\|_2^2\}$;
$\hat{A} = (X^T X + \mu I)^{-1} X^T y$;
where $\mu$ is a small positive constant and $I$ is the identity matrix.
Step 3: To extract the spatial structure feature inside each face sample, compute the average hash feature of the face test sample and of each face training sample.
The algorithm for computing the average hash feature, shown in Fig. 2, is as follows:
1) Compute the average of the $w \times h$ pixels of each face sample:
$\bar{k} = \frac{1}{w \times h}\sum_{p=1}^{w \times h} k_p^{c,j}$
where $k_p^{c,j}$ is the $p$-th pixel value of the $j$-th ($j = 1, 2, \ldots, m$) face training sample of class $c$.
2) Compare each pixel value of a face sample with that sample's average pixel value: if the pixel value is greater than or equal to the average, it is binarized to 1; if it is below the average, it is binarized to 0:
if $k_p^{c,j} \geq \bar{k}$, then $h_p^{c,j} = 1$;
if $k_p^{c,j} < \bar{k}$, then $h_p^{c,j} = 0$;
3) Comparing every pixel value against the average in this way generates the average hash feature of the face sample. The average hash feature of the $i$-th face training sample of class $c$ is denoted $h_i$ and consists of 0s and 1s; $H_c = [h_{m(c-1)+1}, \ldots, h_{mc}]$ and $H = [H_1, H_2, \ldots, H_n]$. The average hash feature of the face test sample is denoted $y_h$ and likewise consists of 0s and 1s.
Step 4: Fuse the inter-sample sparsity feature with the internal spatial structure feature: take the average hash features of all face training samples as new training samples, and reconstruct the face test sample from the new training samples and the sparse coefficient matrix obtained in step 2.
Specifically, the face test sample $\hat{y}$ is reconstructed from the sparse coefficient vector $\hat{A}_c$ of each class and that class's new training samples $H_c$:
$\hat{y} = H_c\hat{A}_c$
Step 5: Use the reconstruction error between the average hash feature of the face test sample and the reconstructed face test sample to decide the final class of the test sample.
Specifically, measure the reconstruction error between the average hash feature $y_h$ of the face test sample and the reconstructed face test sample $\hat{y}$; if the reconstruction error $e_c$ of class $c$ is the smallest, the face test sample belongs to class $c$. The reconstruction error is computed as:
$e_c = \|y_h - H_c\hat{A}_c\|_2$
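Tying steps 1 through 5 together, a minimal end-to-end sketch reusing the helper functions sketched above (all names illustrative; train is assumed to be a list of n classes, each a list of m grayscale arrays):

```python
import numpy as np

def recognize(test_img, train, mu=0.01):
    """Steps 1-5: normalize, solve sparse coefficients on pixel vectors,
    hash every sample, then classify by per-class reconstruction error."""
    m = len(train[0])                                     # samples per class
    X = np.stack([normalize_sample(img) for cls in train for img in cls], axis=1)
    y = normalize_sample(test_img)                        # step 1
    A_hat = sparse_coefficients(X, y, mu)                 # step 2
    H_list = [np.stack([average_hash(img) for img in cls], axis=1) for cls in train]
    y_h = average_hash(test_img)                          # step 3
    return classify(y_h, H_list, A_hat, m)                # steps 4-5
```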
Although the embodiments of the invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the invention. Those skilled in the art should understand that various modifications or variations that can be made without creative effort on the basis of the technical scheme of the invention still fall within its scope of protection.

Claims (7)

1. A face recognition method based on sparse representation and average hash, characterized by comprising the following steps:
Step (1): preprocessing the face test sample and all face training samples: converting color face samples to grayscale images, and normalizing the face test sample and all face training samples;
Step (2): extracting the sparsity feature between samples using a sparse representation model: encoding the face test sample as a linear combination of all face training samples, and computing the sparse coefficient matrix of the face test sample over all face training samples;
Step (3): extracting the spatial structure feature inside each face sample: computing the average hash feature of the face test sample and of each face training sample;
Step (4): fusing the inter-sample sparsity feature with the internal spatial structure feature: taking the average hash features of all face training samples as new training samples, and reconstructing the face test sample from the new training samples and the sparse coefficient matrix obtained in step (2);
Step (5): using the reconstruction error between the average hash feature of the face test sample and the reconstructed face test sample to decide the final class of the test sample.
2. The method as claimed in claim 1, characterized in that step (1) normalizes the face test sample and all face training samples according to formulas (1) and (2):
$y = y/\|y\|$;  (1)
$x_j^c = x_j^c/\|x_j^c\|$;  (2)
where $y \in R^s$ is the column-vector representation of the face test sample, $s = w \times h$ is the size of the grayscale image, $w$ is the length of the grayscale image and $h$ its width; $x_j^c$ is the column-vector representation of the $j$-th face training sample of class $c$; $R$ denotes the real numbers; $\|y\|$ is the norm of the face test sample and $\|x_j^c\|$ the norm of the face training sample.
3. The method as claimed in claim 1, characterized in that step (2) comprises:
step (21): encoding the face test sample as a linear combination of all face training samples:
$y = XA = [X_1, X_2, \ldots, X_n][A_1, A_2, \ldots, A_n]^T$;  (3)
where $A = [A_1, A_2, \ldots, A_c, \ldots, A_n]$ is the sparse coefficient matrix of all face training samples under the sparse representation, and $A_c$ is the sparse coefficient vector corresponding to the face training samples of class $c$;
$X = [X_1, X_2, \ldots, X_c, \ldots, X_n]$ is the set of column vectors of all face training samples, with $X_c \in R^{s \times m}$; $R^{s \times m}$ denotes the set of column vectors of $m$ face training samples of grayscale size $s$, and $X_c$ is the set of column vectors of the $m$ face training samples of class $c$;
step (22): solving the sparse coefficient matrix from the objective function (4) via the closed-form solution (5):
$\hat{A} = \arg\min_A \{\|y - XA\|_2^2 + \mu\|A\|_2^2\}$;  (4)
$\hat{A} = (X^T X + \mu I)^{-1} X^T y$;  (5)
where $\mu$ is a preset positive constant, $I$ is the identity matrix, $\hat{A}$ is the solution for the sparse coefficient matrix, and $X^T$ is the transpose of the matrix of training-sample column vectors.
4. The method as claimed in claim 1, characterized in that step (3) comprises:
step (31): computing the average of the $w \times h$ pixels of each face sample:
$\bar{k} = \frac{1}{w \times h}\sum_{p=1}^{w \times h} k_p^{c,j}$;  (6)
where $k_p^{c,j}$ is the $p$-th pixel value of the $j$-th face training sample of class $c$, $j = 1, 2, \ldots, m$;
step (32): comparing each pixel value of a face sample with that sample's average pixel value: if the pixel value is greater than or equal to the average, it is binarized to 1; if it is below the average, it is binarized to 0, as given by formulas (7) and (8):
if $k_p^{c,j} \geq \bar{k}$, then $h_p^{c,j} = 1$;  (7)
if $k_p^{c,j} < \bar{k}$, then $h_p^{c,j} = 0$;  (8)
step (33): comparing every pixel value against the average in this way generates the average hash feature of the face sample;
the average hash feature of the $i$-th face training sample of class $c$ is denoted $h_i$ and consists of 0s and 1s; $H_c = [h_{m(c-1)+1}, \ldots, h_{mc}]$ and $H = [H_1, H_2, \ldots, H_c, \ldots, H_n]$;
the average hash feature of the face test sample is denoted $y_h$ and consists of 0s and 1s; $H_c$ is the set of average hash features of the face training samples of class $c$, $H$ is the set of average hash features of all face training samples, $H_n$ is the set for class $n$, and $h_{m(c-1)+1}$ is the average hash feature of the $(m(c-1)+1)$-th face training sample, i.e., the first training sample of class $c$.
5. The method as claimed in claim 1, characterized in that step (4) consists of:
reconstructing the face test sample $\hat{y}$ from the sparse coefficient vector $\hat{A}_c$ of each class and that class's new training samples $H_c$, computed as:
$\hat{y} = H_c\hat{A}_c$;  (9)
6. The method as claimed in claim 5, characterized in that step (5) consists of:
measuring the reconstruction error between the average hash feature $y_h$ of the face test sample and the reconstructed face test sample $\hat{y}$; if the reconstruction error $e_c$ of class $c$ is the smallest, the face test sample is judged to belong to class $c$, the reconstruction error being computed as:
$e_c = \|y_h - H_c\hat{A}_c\|_2$;  (10)
7. A face recognition system based on sparse representation and average hash, characterized by comprising:
a preprocessing module, which preprocesses the face test sample and all face training samples: converts color face samples to grayscale images, and normalizes the face test sample and all face training samples;
an inter-sample sparsity feature extraction module, which uses the sparse representation model to encode the face test sample as a linear combination of all face training samples and computes the sparse coefficient matrix of the face test sample over all face training samples;
an internal spatial structure feature extraction module, which computes the average hash feature of the face test sample and of each face training sample;
a feature fusion module, which fuses the inter-sample sparsity feature with the internal spatial structure feature: takes the average hash features of all face training samples as new training samples and reconstructs the face test sample from the new training samples and the sparse coefficient matrix;
a decision module, which uses the reconstruction error between the average hash feature of the face test sample and the reconstructed face test sample to decide the final class of the test sample.
CN201710379451.4A 2017-05-25 2017-05-25 Face recognition method and system based on sparse representation and average hash Active CN107273817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710379451.4A CN107273817B (en) Face recognition method and system based on sparse representation and average hash


Publications (2)

Publication Number Publication Date
CN107273817A true CN107273817A (en) 2017-10-20
CN107273817B CN107273817B (en) 2019-09-13

Family

ID=60065394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710379451.4A Active CN107273817B (en) Face recognition method and system based on sparse representation and average hash

Country Status (1)

Country Link
CN (1) CN107273817B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463148A (en) * 2014-12-31 2015-03-25 南京信息工程大学 Human face recognition method based on image reconstruction and Hash algorithm
CN105160312A (en) * 2015-08-27 2015-12-16 南京信息工程大学 Recommendation method for star face make up based on facial similarity match
CN106250811A (en) * 2016-06-15 2016-12-21 南京工程学院 Unconfinement face identification method based on HOG feature rarefaction representation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LEI ZHANG et al.: "Sparse Representation or Collaborative Representation: Which Helps Face Recognition?", IEEE *
M_ZHANGJB: "Three similar-image retrieval techniques based on perceptual hashing algorithms", CSDN Blog *
KAN Liang: "Research on face recognition algorithms based on sparse representation", Wanfang Data *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633399A (en) * 2020-12-30 2021-04-09 郑州轻工业大学 Sparse collaborative joint representation pattern recognition method

Also Published As

Publication number Publication date
CN107273817B (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN106611169B (en) A kind of dangerous driving behavior real-time detection method based on deep learning
CN104866829B (en) A kind of across age face verification method based on feature learning
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
CN110399821B (en) Customer satisfaction acquisition method based on facial expression recognition
CN111639558B (en) Finger vein authentication method based on ArcFace Loss and improved residual error network
CN106651830A (en) Image quality test method based on parallel convolutional neural network
CN110133610A (en) ULTRA-WIDEBAND RADAR action identification method based on time-varying distance-Doppler figure
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN108446601A (en) A kind of face identification method based on sound Fusion Features
CN105117708A (en) Facial expression recognition method and apparatus
CN112329721B (en) Remote sensing small target detection method for model lightweight design
CN103246874B (en) Face identification method based on JSM (joint sparsity model) and sparsity preserving projection
CN107798308B (en) Face recognition method based on short video training method
CN106778512A (en) Face identification method under the conditions of a kind of unrestricted based on LBP and depth school
CN114783034B (en) Facial expression recognition method based on fusion of local sensitive features and global features
CN107767416A (en) The recognition methods of pedestrian&#39;s direction in a kind of low-resolution image
CN109977887A (en) A kind of face identification method of anti-age interference
CN105574489A (en) Layered stack based violent group behavior detection method
CN109919921B (en) Environmental impact degree modeling method based on generation countermeasure network
CN106529586A (en) Image classification method based on supplemented text characteristic
CN112766283B (en) Two-phase flow pattern identification method based on multi-scale convolution network
CN104463148B (en) Face identification method based on Image Reconstruction and hash algorithm
CN114332942A (en) Night infrared pedestrian detection method and system based on improved YOLOv3
CN105868711A (en) Method for identifying human body behaviors based on sparse and low rank
CN107832753A (en) A kind of face feature extraction method based on four value weights and multiple classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant