CN105740838A - Recognition method for facial images of different scales

Recognition method for facial images of different scales

Info

Publication number
CN105740838A
CN105740838A CN201610083936.4A
Authority
CN
China
Prior art keywords
image
facial image
eigenvalue
matrix
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610083936.4A
Other languages
Chinese (zh)
Inventor
张欣
刘海
于红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University
Original Assignee
Hebei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University filed Critical Hebei University
Priority to CN201610083936.4A priority Critical patent/CN105740838A/en
Publication of CN105740838A publication Critical patent/CN105740838A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 — Detection; Localisation; Normalisation
    • G06V 40/168 — Feature extraction; Face representation
    • G06V 40/169 — Holistic features and representations, i.e. based on the facial image taken as a whole

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a recognition method for facial images of different scales. The method comprises the following steps: (1) apply the discrete cosine transform (DCT) to the facial images so that they share the same number of DCT coefficients: the unknown facial image to be recognized and the set of known facial images used for matching together form a sample set; each two-dimensional grayscale facial image in the sample set is DCT-transformed, and 56×46 components are selected and retained from the low-frequency part of the two-dimensional DCT coefficient matrix; (2) apply principal component analysis (PCA) to the transformed sample set so that the dimensionality of the facial-image eigenvalue matrix is reduced to 20; and (3) compute the feature matching value between each pair of facial images with a normalized correlation coefficient algorithm, and select the known facial image with the maximum matching value as the match for the unknown facial image. The method eliminates the influence that scale differences between facial images would otherwise exert on the recognition result, thereby improving recognition accuracy.

Description

Recognition method for facial images of different scales
Technical field
The present invention relates to methods for recognizing facial images, and in particular to a recognition method for facial images of different scales.
Background art
Face recognition is an important component of biometric recognition, with wide applications in fields such as public security, real-time surveillance, authorization and authentication, and human-computer interaction. Systems such as access-control authentication for sensitive areas, face-based attendance, and target tracking in crowded areas have already been put into practical use.
The main face recognition methods include template matching, neural networks, methods based on hidden Markov models (HMM), face recognition algorithms based on AdaBoost, methods based on geometric features, and methods based on algebraic features. These methods are typically built on a standard face database in which all digital face images share the same scale; consequently, they achieve their designed performance only when the person being identified cooperates fully. In practice, however, whether in video or in captured photographs, the collected facial images vary in scale.
When the scale of the facial image to be recognized differs from that of the known facial images, conventional image-scaling techniques are used to normalize the face scale. When an image is reduced by downsampling, part of its pixels are lost; when an image is enlarged, typically by interpolation, redundant pixels are introduced. Therefore, when ordinary scaling operations are applied to facial images of different scales, image quality inevitably suffers, which in turn degrades face recognition accuracy.
Summary of the invention
The invention provides a recognition method for facial images of different scales, aiming to eliminate the influence of image-scale differences on the recognition result and to improve recognition accuracy.
The present invention is implemented as follows:
The recognition method for facial images of different scales comprises the following steps:
a. Apply the discrete cosine transform (DCT) to the facial images so that they share the same number of DCT coefficients: the unknown facial image to be recognized and the set of known facial images used for matching together form a sample set; each two-dimensional grayscale facial image in the sample set is DCT-transformed, and 56 × 46 components are selected and retained from the low-frequency part of the two-dimensional DCT coefficient matrix.
b. Apply principal component analysis (PCA) to the transformed sample set to obtain a lower feature dimensionality: for the sample set, construct with PCA a feature subspace spanned by the eigenvectors of the 20 largest eigenvalues; project the sample set transformed in step a into this subspace, obtaining the feature vector of each facial image.
c. Compute the feature matching value between facial images with the normalized correlation coefficient method: compute the normalized correlation coefficient between the feature vector of the unknown facial image and the feature vector of each known facial image, yielding a set of matching values; the known facial image with the maximum matching value is selected as the match for the unknown facial image.
In the described recognition method, for a digital grayscale face image of size M × N in step a, the two-dimensional discrete cosine transform formula is:
$$F(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\cos\frac{\pi(2x+1)u}{2M}\,\cos\frac{\pi(2y+1)v}{2N}\qquad(1)$$
where M and N are respectively the numbers of rows and columns of the two-dimensional pixel matrix of the grayscale face image; x and y are spatial-domain image coordinates; f(x, y) is the pixel value at (x, y); for the generalized frequency variable u, $\alpha(u)=\sqrt{1/M}$ when u = 0 and $\alpha(u)=\sqrt{2/M}$ when u = 1, 2, …, M−1; likewise, $\alpha(v)=\sqrt{1/N}$ when v = 0 and $\alpha(v)=\sqrt{2/N}$ when v = 1, 2, …, N−1. After the DCT, the low-frequency components of the image are concentrated in the upper-left corner and the high-frequency components are distributed toward the lower-right corner. The low-frequency components carry the main information of the original image, while the information in the high-frequency components is comparatively unimportant. The high-frequency part is discarded and the 56 × 46 low-frequency DCT coefficients are retained, so that facial images of different scales acquire the same scale.
In the described recognition method for facial images of different scales, the facial images after the DCT in step b are reduced in dimensionality by PCA.
The facial image sample set in vector form is X = [X₁, X₂, …, X_m], where Xᵢ (i = 1, 2, …, m) is the i-th facial image unfolded into an n × 1 column vector, n is the image dimension, and m is the number of images in the sample set. The mean-face vector is $\psi=\frac{1}{m}\sum_{i=1}^{m}X_i$; subtracting the mean face from each facial image gives $\Phi_i=X_i-\psi$, which together form a new matrix $A=[\Phi_1,\Phi_2,\ldots,\Phi_m]$. The covariance matrix is constructed as $C=\frac{1}{m}AA^{\mathsf T}$. The eigenvalues and eigenvectors of the covariance matrix C are computed, and the eigenvectors corresponding to the largest k = 20 eigenvalues span a feature subspace. The matrix corresponding to this subspace is the final eigenvalue matrix of the facial image sample set.
In the described recognition method for facial images of different scales, the feature matching value of the two facial images being matched is calculated in step c according to formula (2), establishing a facial-image matching technique based on the normalized correlation coefficient:
$$\gamma(x,y)=\frac{\sum_{s}\sum_{t}\omega(s,t)\,\bar{f}(x+s,\,y+t)}{\left[\sum_{s}\sum_{t}\omega(s,t)^{2}\;\sum_{s}\sum_{t}\bar{f}(x+s,\,y+t)^{2}\right]^{1/2}}\qquad(2)$$
where $\bar{f}$ is the eigenvalue matrix of one image, x and y being its row and column indices, and ω(s, t) is the eigenvalue matrix of a subimage of size B × A, s and t being its row and column indices.
In the recognition method for facial images of different scales provided by the present invention, the facial images of different scales are first DCT-transformed and the 56 × 46 low-frequency coefficients are extracted from the two-dimensional coefficient matrix; PCA is then applied to the transformed sample set, reducing the dimensionality of the facial-image eigenvalue matrix to 20; finally, the facial image to be recognized is matched against the known facial images by the normalized correlation coefficient method.
Brief description of the drawings
Fig. 1 is a flow diagram of the method of the present invention.
Fig. 2 shows part of the facial images in the ORL face database.
In Fig. 3, (a) is a facial image and (b) is the result of applying the DCT to (a).
Fig. 4 shows the facial images of different scales collected in the experiment.
Fig. 5 shows the face regions detected in the facial images of Fig. 4.
Detailed description of the invention
The hardware and software used to implement the invention were: an ASUS notebook with an Intel Core i5-3230M CPU at 2.6 GHz, an NVIDIA GeForce 720M graphics card, and 4 GB of memory, running Windows 7; the programming language was Matlab 6.5.
The present invention is explained with reference to the accompanying drawings:
In Fig. 1: step S1 obtains the facial image by face-region detection; step S2 applies the DCT to the facial image so that facial images of different scales have the same number of DCT coefficients; step S3 applies PCA to the transform result, extracting the principal features of the image and reducing its dimensionality; step S4 performs matching recognition by computing matching values between images.
Step S1: obtain the facial images.
The experimental samples include images from the international ORL (Olivetti Research Laboratory) face database and facial images collected for the experiment. The ORL sample comprises 40 people with 10 images each, 400 facial images in total; part of them is shown in Fig. 2. The grayscale facial images of different scales collected for the experiment total 200; part of them is shown in Fig. 4.
Introduction of the experimental samples:
ORL face database: the ORL face database consists of a series of facial images taken at the Olivetti laboratory in Cambridge, UK, between April 1992 and April 1994, covering 40 subjects of different ages, sexes, and ethnicities. With 10 images per subject, it comprises 400 grayscale images of size 112 × 92, with a black background and 256 gray levels. Facial expressions and details vary: smiling or not, eyes open or closed, glasses on or off; head pose also varies, with depth rotation and in-plane rotation of up to 20 degrees, and face size varies by up to 10%. This database is currently the most widely used standard database, and a large number of comparative results are available for it.
Actually collected face image data: the 8-megapixel rear camera of an Apple iPhone 4S was used, under uniform illumination, to photograph 20 people in total from multiple angles against a white background. Face regions were detected in the collected images by a skin-color detection method; the subimages in which the detected face occupies at least 50% of the whole image were converted to grayscale and used as the experimental facial images, 10 per person, 200 facial images of different scales in total; part of them is shown in Fig. 5.
Step S2: DCT. The detected images containing the face region are DCT-transformed and the same coefficients are extracted.
The DCT is a common image-data compression method. For a digital image of size M × N, the two-dimensional DCT formula is:
$$F(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\cos\frac{\pi(2x+1)u}{2M}\,\cos\frac{\pi(2y+1)v}{2N}\qquad(1)$$
where M and N are respectively the numbers of rows and columns of the two-dimensional pixel matrix of the grayscale face image; x and y are spatial-domain image coordinates; f(x, y) is the pixel value at (x, y); for the generalized frequency variable u, $\alpha(u)=\sqrt{1/M}$ when u = 0 and $\alpha(u)=\sqrt{2/M}$ when u = 1, 2, …, M−1; likewise, $\alpha(v)=\sqrt{1/N}$ when v = 0 and $\alpha(v)=\sqrt{2/N}$ when v = 1, 2, …, N−1. After the DCT, the low-frequency components of the image are concentrated in the upper-left corner and the high-frequency components are distributed toward the lower-right corner, as shown in Fig. 3(b). The low-frequency components carry the main information of the original image, while the information in the high-frequency components is comparatively unimportant.
The 600 grayscale facial images of 60 people in total, from the ORL face database and the experimental collection, are mixed at random to form the sample set. From the sample set, 5 facial images per person (300 in total) are arbitrarily chosen as the unknown facial images to be recognized; the remaining 300 serve as the known facial image set. Each two-dimensional facial image in the sample set is DCT-transformed according to formula (1); the high-frequency part is discarded and the X × Y low-frequency DCT coefficients are retained, so that facial images of different scales acquire the same scale.
Through the DCT and the discarding of the high-frequency part, this step gives facial images of different scales the same number of DCT coefficients, laying the foundation for the subsequent matching recognition.
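The patent reports a Matlab 6.5 implementation but reproduces no code; the coefficient-retention idea of step S2 can be sketched in Python as follows. The function name and the use of SciPy's orthonormal 2-D DCT (matching formula (1)) are my own choices, not part of the patent.

```python
import numpy as np
from scipy.fft import dctn

def dct_low_freq(img, rows=56, cols=46):
    """DCT-transform a grayscale face image and keep only the
    low-frequency (upper-left) rows x cols coefficient block."""
    if img.shape[0] < rows or img.shape[1] < cols:
        raise ValueError("image smaller than the retained block")
    coeffs = dctn(img.astype(float), norm="ortho")  # orthonormal 2-D DCT, eq. (1)
    return coeffs[:rows, :cols]

# two face images of different scales end up with identical feature shape
a = dct_low_freq(np.random.rand(112, 92))   # ORL-sized image
b = dct_low_freq(np.random.rand(200, 160))  # a larger captured image
assert a.shape == b.shape == (56, 46)
```

Because only the upper-left 56 × 46 block is kept, the two images contribute feature matrices of identical size regardless of their original scales, which is exactly what makes the later PCA and matching steps applicable.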
Step S3: for the sample set, construct with PCA a feature subspace spanned by the eigenvectors of the K largest eigenvalues; project the sample set transformed in S2 into this subspace, obtaining the feature vector of each facial image and thereby achieving the dimensionality reduction of the feature space.
The facial image sample set in vector form is X = [X₁, X₂, …, X_m], where Xᵢ (i = 1, 2, …, m) is the i-th facial image unfolded into an n × 1 column vector, n is the image dimension, and m is the number of images in the sample set. The mean-face vector is $\psi=\frac{1}{m}\sum_{i=1}^{m}X_i$; subtracting the mean face from each facial image gives $\Phi_i=X_i-\psi$, which together form a new matrix $A=[\Phi_1,\Phi_2,\ldots,\Phi_m]$. The covariance matrix is constructed as $C=\frac{1}{m}AA^{\mathsf T}$. The eigenvalues and eigenvectors of the covariance matrix C are computed, and the eigenvectors corresponding to the largest k = 20 eigenvalues span a feature subspace. The matrix corresponding to this subspace is the final eigenvalue matrix of the facial image sample set.
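Step S3 is the classical eigenface construction. A minimal Python sketch is given below; it uses the standard "snapshot" trick (eigen-decomposing the small m × m matrix AᵀA instead of the n × n covariance), which the patent does not spell out, and all names are assumptions.

```python
import numpy as np

def pca_subspace(X, k=20):
    """Build an eigenface subspace from column-stacked samples X (n x m);
    return (mean face, matrix W whose k columns span the subspace)."""
    mean = X.mean(axis=1, keepdims=True)            # mean face psi
    A = X - mean                                    # mean-subtracted samples
    # snapshot trick: eigen-decompose the small m x m matrix A^T A / m
    evals, evecs = np.linalg.eigh(A.T @ A / X.shape[1])
    order = np.argsort(evals)[::-1][:k]             # k largest eigenvalues
    W = A @ evecs[:, order]                         # lift back to image space
    W /= np.linalg.norm(W, axis=0)                  # unit-norm eigenfaces
    return mean, W

rng = np.random.default_rng(0)
X = rng.random((56 * 46, 60))                       # 60 flattened 56x46 DCT blocks
mean, W = pca_subspace(X, k=20)
features = W.T @ (X - mean)                         # 20-dim feature vectors
assert features.shape == (20, 60)
```

Projecting each DCT block onto the 20 eigenfaces yields the 20-dimensional feature vectors that step S4 compares.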
Step S4: recognition by the normalized correlation coefficient method: the feature vector of the facial image to be recognized is correlated, using formula (2), with the feature vector of each known facial image; each matching value is computed, and the maximum one identifies the matching facial image.
For the eigenvalue matrix $\bar{f}$ of an image of size M × N and the eigenvalue matrix ω(s, t) of a subimage of size B × A, the correlation of $\bar{f}$ with ω can be expressed as $c(x,y)=\sum_{s}\sum_{t}\omega(s,t)\,\bar{f}(x+s,\,y+t)$, where x and y are the row and column indices of the eigenvalue matrix of $\bar{f}$, and s and t are the row and column indices of the eigenvalue matrix ω.
Computing c(x, y) amounts to moving the template matrix ω(s, t) point by point over the eigenvalue matrix $\bar{f}$, aligning the origin of ω with the point (x, y), and computing the sum of products of ω with the region of $\bar{f}$ it covers; this result is the response of the correlation c at the point (x, y).
The correlation is obtained by multiplying image elements with the corresponding sub-pattern elements and accumulating the products. The subimage ω can be regarded as a vector $\vec{w}$ stored row-wise or column-wise, and the image region covered by ω during the computation as another vector $\vec{f}$ stored in the same way; the correlation computation then becomes a dot product between vectors.
The dot product of two vectors is $\vec{w}\cdot\vec{f}=\|\vec{w}\|\,\|\vec{f}\|\cos\theta$, where θ is the angle between them. Clearly, when $\vec{w}$ and $\vec{f}$ point in exactly the same direction (are parallel), cos θ = 1 and the dot product attains its maximum $\|\vec{w}\|\,\|\vec{f}\|$: when a local region of the image resembles the subimage pattern, the correlation produces a maximum response. However, the final value of the dot product also depends on the magnitudes of the vectors themselves, which makes the correlation response overly sensitive to the vector values: in regions where $\vec{f}$ has large values, a very high response can arise even though the content there is not close to that of ω. Normalizing each vector by its modulus solves this problem; the correlation is computed as $\frac{\vec{w}\cdot\vec{f}}{\|\vec{w}\|\,\|\vec{f}\|}$.
The improved formula, i.e. the normalized correlation coefficient, is:
$$\gamma(x,y)=\frac{\sum_{s}\sum_{t}\omega(s,t)\,\bar{f}(x+s,\,y+t)}{\left[\sum_{s}\sum_{t}\omega(s,t)^{2}\;\sum_{s}\sum_{t}\bar{f}(x+s,\,y+t)^{2}\right]^{1/2}}\qquad(2)$$
The normalized correlation coefficient method in fact measures the similarity of two vectors (image feature vectors) by the cosine of the angle between them: the closer to 1, the better the match. The feature vector of the unknown facial image is correlated with the feature vector of each image in the known facial image set, giving a set of feature matching values; the one with the maximum matching value is selected as the known facial image matching the unknown facial image.
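In vector form, the matching rule of step S4 reduces to cosine similarity between feature vectors followed by an arg-max over the known set. A hypothetical Python sketch (all names are mine, not the patent's):

```python
import numpy as np

def norm_corr(u, v):
    """Normalized correlation coefficient: cosine of the angle
    between two feature vectors (vector form of formula (2))."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match(unknown, known):
    """Index of the known feature vector with the maximum
    matching value against the unknown one."""
    scores = [norm_corr(unknown, k) for k in known]
    return int(np.argmax(scores))

gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
probe = np.array([0.9, 1.1])
assert match(probe, gallery) == 2   # closest in angle to [1, 1]
```

Because each vector is divided by its own modulus, a gallery vector with a large magnitude gains no advantage: only the angle between the vectors matters, which is exactly the defect of the raw dot product that the normalization removes.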
For the experimental sample set described in step S1: the effect on the recognition rate of the number X × Y of coefficients retained after the DCT in step S2 is shown in Table 1; the effect on the recognition rate of the number of PCA eigenvalues in step S3 is shown in Table 2.
Combined experiments on facial images of different scales, drawn from the international ORL (Olivetti Research Laboratory) face database and from the experimental collection, verify that when at least 56 × 46 DCT coefficients are retained and at least 20 PCA eigenvalues are used, the present invention performs matching recognition of facial images of different scales effectively and with a good recognition rate.
Table 1. Recognition accuracy with the number of PCA eigenvalues fixed
Table 2. Recognition accuracy with the number of extracted DCT coefficients fixed

Claims (4)

1. A recognition method for facial images of different scales, characterized in that it comprises the following steps:
a. apply the discrete cosine transform (DCT) to the facial images so that they share the same number of DCT coefficients: the unknown facial image to be recognized and the set of known facial images used for matching together form a sample set; each two-dimensional grayscale facial image in the sample set is DCT-transformed, and 56 × 46 components are selected and retained from the low-frequency part of the two-dimensional DCT coefficient matrix;
b. apply principal component analysis (PCA) to the transformed sample set to obtain a lower feature dimensionality: for the sample set, construct with PCA a feature subspace spanned by the eigenvectors of the 20 largest eigenvalues; project the sample set transformed in step a into this subspace, obtaining the feature vector of each facial image;
c. compute the feature matching value between facial images with the normalized correlation coefficient method: compute the normalized correlation coefficient between the feature vector of the unknown facial image and the feature vector of each known facial image, yielding a set of matching values; the known facial image with the maximum matching value is selected as the match for the unknown facial image.
2. The recognition method for facial images of different scales according to claim 1, characterized in that, for a digital grayscale face image of size M × N in step a, the two-dimensional discrete cosine transform formula is:
$$F(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\cos\frac{\pi(2x+1)u}{2M}\,\cos\frac{\pi(2y+1)v}{2N}\qquad(1)$$
where M and N are respectively the numbers of rows and columns of the two-dimensional pixel matrix of the grayscale face image; x and y are spatial-domain image coordinates; f(x, y) is the pixel value at (x, y); for the generalized frequency variable u, $\alpha(u)=\sqrt{1/M}$ when u = 0 and $\alpha(u)=\sqrt{2/M}$ when u = 1, 2, …, M−1; likewise, $\alpha(v)=\sqrt{1/N}$ when v = 0 and $\alpha(v)=\sqrt{2/N}$ when v = 1, 2, …, N−1; after the DCT, the low-frequency components of the image are concentrated in the upper-left corner and the high-frequency components are distributed toward the lower-right corner; the high-frequency part is discarded and the 56 × 46 low-frequency DCT coefficients are retained, so that facial images of different scales acquire the same scale.
3. The recognition method for facial images of different scales according to claim 1, characterized in that the facial-image eigenvalue matrices after the DCT in step b are reduced in dimensionality by PCA:
The facial image sample set in vector form is X = [X₁, X₂, …, X_m], where Xᵢ (i = 1, 2, …, m) is the i-th facial image unfolded into an n × 1 column vector, n is the image dimension, and m is the number of images in the sample set; the mean-face vector is $\psi=\frac{1}{m}\sum_{i=1}^{m}X_i$; subtracting the mean face from each facial image gives $\Phi_i=X_i-\psi$, which together form a new matrix $A=[\Phi_1,\Phi_2,\ldots,\Phi_m]$; the covariance matrix is constructed as $C=\frac{1}{m}AA^{\mathsf T}$; the eigenvalues and eigenvectors of the covariance matrix C are computed, and the eigenvectors corresponding to the largest k = 20 eigenvalues span a feature subspace; the matrix corresponding to this subspace is the final eigenvalue matrix of the facial image sample set.
4. The recognition method for facial images of different scales according to claim 1, characterized in that the feature matching value of the two facial images being matched is calculated in step c according to formula (2), establishing a facial-image matching technique based on the normalized correlation coefficient:
$$\gamma(x,y)=\frac{\sum_{s}\sum_{t}\omega(s,t)\,\bar{f}(x+s,\,y+t)}{\left[\sum_{s}\sum_{t}\omega(s,t)^{2}\;\sum_{s}\sum_{t}\bar{f}(x+s,\,y+t)^{2}\right]^{1/2}}\qquad(2)$$
where $\bar{f}$ is the eigenvalue matrix of one image, x and y being its row and column indices, and ω(s, t) is the eigenvalue matrix of a subimage of size B × A, s and t being its row and column indices. The feature vector of the unknown facial image is correlated, by the normalized correlation coefficient, with the feature vector of each image in the known facial image set, giving a set of feature matching values; the one with the maximum matching value is selected as the known facial image matching the unknown facial image.
CN201610083936.4A 2016-02-06 2016-02-06 Recognition method in allusion to facial images with different dimensions Pending CN105740838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610083936.4A CN105740838A (en) 2016-02-06 2016-02-06 Recognition method in allusion to facial images with different dimensions


Publications (1)

Publication Number Publication Date
CN105740838A true CN105740838A (en) 2016-07-06

Family

ID=56245994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610083936.4A Pending CN105740838A (en) 2016-02-06 2016-02-06 Recognition method in allusion to facial images with different dimensions

Country Status (1)

Country Link
CN (1) CN105740838A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size
CN101604376A (en) * 2008-10-11 2009-12-16 大连大学 Face identification method based on the HMM-SVM mixture model
CN102982322A (en) * 2012-12-07 2013-03-20 大连大学 Face recognition method based on PCA (principal component analysis) image reconstruction and LDA (linear discriminant analysis)
CN103729625A (en) * 2013-12-31 2014-04-16 青岛高校信息产业有限公司 Face identification method
CN104036254A (en) * 2014-06-20 2014-09-10 成都凯智科技有限公司 Face recognition method


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709598A (en) * 2016-12-15 2017-05-24 全球能源互联网研究院 One-class sample-based voltage stability prediction judgment method
CN106709598B (en) * 2016-12-15 2022-02-15 全球能源互联网研究院 Voltage stability prediction and judgment method based on single-class samples
CN108921043A (en) * 2018-06-08 2018-11-30 新疆大学 A kind of Uygur nationality's face identification method of new blending algorithm
CN110210340A (en) * 2019-05-20 2019-09-06 深圳供电局有限公司 Face characteristic value comparison method and system and readable storage medium
CN110458007A (en) * 2019-07-03 2019-11-15 平安科技(深圳)有限公司 Match method, apparatus, computer equipment and the storage medium of face
CN110458007B (en) * 2019-07-03 2023-10-27 平安科技(深圳)有限公司 Method, device, computer equipment and storage medium for matching human faces
CN112070913A (en) * 2020-07-17 2020-12-11 盛威时代科技集团有限公司 Ticket checking processing method based on Internet of things technology
CN112070913B (en) * 2020-07-17 2022-05-10 盛威时代科技集团有限公司 Ticket checking processing method based on Internet of things technology

Similar Documents

Publication Publication Date Title
CN112418074B (en) Coupled posture face recognition method based on self-attention
Li et al. Overview of principal component analysis algorithm
Vageeswaran et al. Blur and illumination robust face recognition via set-theoretic characterization
CN105740838A (en) Recognition method in allusion to facial images with different dimensions
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN107292299B (en) Side face recognition methods based on kernel specification correlation analysis
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
Ng et al. Hybrid ageing patterns for face age estimation
Wang et al. Robust head pose estimation via supervised manifold learning
CN107784284B (en) Face recognition method and system
Lakshmi et al. Off-line signature verification using Neural Networks
KR20130059212A (en) Robust face recognition method through statistical learning of local features
CN111325275A (en) Robust image classification method and device based on low-rank two-dimensional local discriminant map embedding
Kare et al. Using bidimensional regression to assess face similarity
Tao et al. Illumination-insensitive image representation via synergistic weighted center-surround receptive field model and weber law
CN109919056B (en) Face recognition method based on discriminant principal component analysis
Si et al. Age-invariant face recognition using a feature progressing model
Liu et al. Combining dissimilarity measures for image classification
Asad et al. Low complexity hybrid holistic–landmark based approach for face recognition
Li et al. Shadow determination and compensation for face recognition
Ghorbel et al. Hybrid approach for face recognition from a single sample per person by combining VLC and GOM
Tun et al. Gait based Human Identification through Intra-Class Variations
Fan et al. A feature extraction algorithm based on 2D complexity of gabor wavelets transform for facial expression recognition
Huang et al. Mixture of deep regression networks for head pose estimation
Hahmann et al. Combination of facial landmarks for robust eye localization using the Discriminative Generalized Hough Transform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160706