CN105678265B - Data dimension reduction method and device based on manifold learning - Google Patents

Data dimension reduction method and device based on manifold learning

Info

Publication number
CN105678265B
CN105678265B
Authority
CN
China
Prior art keywords
sub
training set
detected
face image
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610011619.1A
Other languages
Chinese (zh)
Other versions
CN105678265A (en)
Inventor
廖晨钢
钱广麟
严君
张吉
孙刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU HONGSEN TECHNOLOGY Co Ltd
Original Assignee
GUANGZHOU HONGSEN TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGZHOU HONGSEN TECHNOLOGY Co Ltd filed Critical GUANGZHOU HONGSEN TECHNOLOGY Co Ltd
Priority to CN201610011619.1A priority Critical patent/CN105678265B/en
Publication of CN105678265A publication Critical patent/CN105678265A/en
Application granted granted Critical
Publication of CN105678265B publication Critical patent/CN105678265B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/06 - Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a data dimension reduction method and device based on manifold learning. A face image to be detected is divided into sub-images according to an equal-stripe rule and converted into corresponding sub-patterns; the divided face image is then subjected to data dimension reduction, and the resulting low-dimensional image vectors are classified against the K sub-pattern sets of a training set to obtain K recognition results. Finally, the K recognition results are combined by a weighting method to compute the final recognition result for the face image to be detected, that is, to determine which face image in the training set the image to be detected belongs to.

Description

Data dimension reduction method and device based on manifold learning
Technical Field
The present invention relates to a data dimension reduction method and device, and in particular, to a data dimension reduction method and device based on manifold learning.
Background
In recent years, with the rapid development of science and technology, the amount of data obtained from various channels has grown greatly compared with the past, so processing such high-dimensional data with data dimension reduction techniques has become an essential component of data processing. Conventional dimensionality reduction methods (e.g., principal component analysis, independent component analysis, linear discriminant analysis) can efficiently process datasets with linear structure. However, when a dataset has a nonlinear structure, these methods have difficulty finding the inherent low-dimensional information hidden in the high-dimensional data. Data dimension reduction methods based on manifold learning assume that the high-dimensional observations lie on a low-dimensional manifold embedded in a high-dimensional Euclidean space, so the inherent geometric structure that appears distorted in the high-dimensional space can be effectively discovered and preserved. As a linearized version of the Laplacian eigenmap, the locality preserving projection (LPP) algorithm has achieved some success in face recognition, mainly because it can effectively preserve the manifold structure of the face when dealing with high-dimensional, warped face datasets.
However, the LPP algorithm has the following disadvantages in actual face recognition systems, especially when applied to complex environments with heavy flows of people:
First, the conventional LPP algorithm treats the whole face image as a single entity. Recent research shows that changes in a face caused by factors such as lighting conditions and facial expressions are often reflected only in partial regions of the image; that is, the local changes are scattered, while other parts change little or not at all. If the whole face image is treated as a single entity in the LPP algorithm, these local changes inevitably have a large influence on the recognition result.
Second, when the LPP algorithm represents image data with high-dimensional vectors, the computational complexity increases exponentially with the image dimension when singular matrices are encountered; this inevitably consumes a large amount of computing resources and reduces the running speed of the algorithm, thereby degrading the performance of the whole system.
Although the above disadvantages were recognized in the document "Data dimension reduction method based on manifold learning and its application in face recognition" (Wang Jian, June 2010), which divides the face into equal-sized parts, applies the LPP algorithm to reduce the dimensionality of the face image data, classifies the face images with a nearest neighbor classifier, and recognizes them with a weighting method, that approach still has shortcomings, and the present application further improves the method on that basis.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a data dimension reduction method and device based on manifold learning, so that more key feature information of the face image is retained; applied in a face recognition system, the method and device improve recognition accuracy and reduce the consumption of the whole system.
In order to solve the problems, the technical scheme adopted by the invention is as follows:
the invention discloses a data dimension reduction method based on manifold learning, which comprises the following steps:
S101: dividing a face image X to be detected into K sub-images according to a certain rule, converting the K sub-images into corresponding sub-patterns, and recording the vector of each sub-pattern as X_i (i = 1, 2, …, K);
S102: according to formula Yi=Wi TXiDetermining Yi(ii) a Said Y isiIs XiIs represented by a low-dimensional vector of Wi TThe method is derived through a maximum distance criterion and a local preserving projection algorithm;
s103: according to the nearest neighbor classification algorithm, performing low-dimensional vector Y on the sub-mode of the face image X to be detectediClassifying, and obtaining K recognition results according to K sub-mode sets in the training set; the training set is a preset facial image set which is divided into K sub-mode sets according to the rule;
s104: according to a weighting method, the probability that the face image X to be detected belongs to the c-th person is obtained by calculating K recognition results:
wherein
Then, the identification result of the face image X to be detected is obtained as follows: identity (x) ═ argmax (p)c);
Wherein said WgiThe identification weight of the ith sub-pattern set is as follows:wherein KijIs for each sample X of the ith set of sub-patterns in the training setijOf the neighbor of (a) and the number of samples it is in the same class, said XijIs a sample of the ith sub-pattern set in the training set, wherein the sample refers to the coordinate representation of the vector of each sub-pattern set; j is between 1 and N, wherein N is the total number of the preset face images in the training set.
Further, the rule is to divide the face image into equal stripes.
Further, said W_i^T is derived through the maximum margin criterion and the locality preserving projection algorithm, specifically as follows:
The objective formula of the new locality preserving projection algorithm is obtained by combining the maximum margin criterion with the locality preserving projection algorithm, wherein D is a diagonal matrix and both L and D are known; m is the average vector of all samples in the training set, m_i is the average vector of all samples of the i-th class of sub-patterns in the training set, S_b is the between-class scatter matrix, S_w is the within-class scatter matrix, and n_i is the number of samples belonging to the i-th class of sub-patterns in the training set; i takes values between 1 and K. From this formula, W_i^T is uniquely determined.
The invention also discloses a data dimension reduction device based on manifold learning, comprising the following modules:
a dividing module, used for dividing the face image X to be detected into K sub-images according to a certain rule, then converting the K sub-images into corresponding sub-patterns, and recording the vector of each sub-pattern as X_i (i = 1, 2, …, K);
a data dimension reduction module, used for determining Y_i through the formula Y_i = W_i^T X_i; said Y_i is the low-dimensional vector representation of X_i, and W_i^T is derived through the maximum margin criterion and the locality preserving projection algorithm;
a classification and recognition module, used for classifying the low-dimensional vectors Y_i of the sub-patterns of the face image X to be detected according to the nearest neighbor classification algorithm, and obtaining K recognition results according to the K sub-pattern sets in a training set; the training set is a preset set of face images that is divided into K sub-pattern sets according to the rule;
a recognition result obtaining module, which calculates from the K recognition results, according to a weighting method, the probability p_c that the face image X to be detected belongs to the c-th person, where the contribution of each sub-pattern is weighted by its recognition weight W_gi;
then the recognition result of the face image X to be detected is obtained as: Identity(X) = argmax_c(p_c);
wherein said W_gi is the recognition weight of the i-th sub-pattern set, computed from K_ij, where K_ij is, for each sample X_ij of the i-th sub-pattern set in the training set, the number of its neighbors that belong to the same class as X_ij; said X_ij is a sample of the i-th sub-pattern set in the training set, where a sample refers to the coordinate representation of a vector of the sub-pattern set; j ranges from 1 to N, where N is the total number of preset face images in the training set.
Further, the rule is to divide the face image into equal stripes.
Compared with the prior art, the invention has the following beneficial effects: the face image is divided into equal stripes, a division that largely preserves the texture structure of each part of the face and thus retains the key feature information. Meanwhile, the maximum margin criterion is applied within the locality preserving projection algorithm, and the recognition result of the face image is obtained in combination with a nearest neighbor classification algorithm, so that the recognition accuracy is higher than that of the conventional LPP algorithm, the computational efficiency of the conventional LPP algorithm is improved, and system consumption is reduced. The recognition accuracy of the system reaches more than 90%, meeting recognition requirements in complex environments.
Drawings
FIG. 1 is a flow chart of a data dimension reduction method based on manifold learning according to the present invention;
FIG. 2 is a data processing flow chart of an intelligent security system for face recognition provided by the invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and the detailed description below:
as shown in FIG. 1, the invention provides a data dimension reduction method based on manifold learning, comprising the following steps:
S101: dividing the face image X to be detected into K sub-images according to a certain rule, and then converting the K sub-images into corresponding sub-patterns, the vector of each sub-pattern being recorded as X_i (i = 1, 2, …, K).
The division according to a certain rule means dividing the face image into equal stripes; this division largely preserves the texture structure of each part of the face, so that more key feature information is retained. Meanwhile, the lower dimensionality of the divided sub-image data reduces the amount of computation compared with other manifold-based data dimension reduction methods. An illustrative sketch of this equal-stripe division follows.
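For illustration only (not part of the patent text), the following is a minimal sketch of such an equal-stripe division; the horizontal stripe orientation, the NumPy representation, and the function name divide_into_stripes are assumptions rather than the patent's specification.

```python
import numpy as np

def divide_into_stripes(image, K):
    """Divide an H1 x H2 face image into K equal horizontal stripes and
    vectorize each stripe into a sub-pattern vector of length H1*H2/K."""
    H1, H2 = image.shape
    if H1 % K != 0:
        raise ValueError("image height must be divisible by K for equal stripes")
    stripe_height = H1 // K
    sub_patterns = []
    for i in range(K):
        stripe = image[i * stripe_height:(i + 1) * stripe_height, :]
        sub_patterns.append(stripe.reshape(-1).astype(np.float64))  # vector X_i
    return sub_patterns

# Example: a 112 x 92 image divided into K = 4 stripes gives four
# sub-pattern vectors of dimension 112*92/4 = 2576 each.
face = np.random.rand(112, 92)
X = divide_into_stripes(face, K=4)
print([x.shape for x in X])
```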
S102: according to formula Yi=Wi TXiDetermining Yi(ii) a Said Y isiIs XiIs represented by a low-dimensional vector of (1), wherein W isi TIs derived through a maximum spacing criterion and a local preserving projection algorithm.
In this step, the formula Y_i = W_i^T X_i maps high-dimensional data to low-dimensional data. The objective function of the locality preserving projection (LPP) algorithm is expressed in terms of the similarity S_jk between X_ij and X_ik.
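The displayed LPP objective is not reproduced in the source; for reference, the standard form of the LPP objective that the surrounding text describes can be written as below, where X_i denotes the matrix whose columns are the samples X_ij of the i-th sub-pattern set, and the relation L = D - S is an assumption consistent with the later statement that D is diagonal and L, D are known.

```latex
\min_{W_i}\ \tfrac{1}{2}\sum_{j,k}\bigl\|W_i^{T}X_{ij}-W_i^{T}X_{ik}\bigr\|^{2}S_{jk}
  \;=\;\min_{W_i}\ \operatorname{tr}\!\bigl(W_i^{T}X_i L X_i^{T}W_i\bigr),
\qquad L = D - S,\quad D_{jj}=\sum_{k}S_{jk}.
```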
The maximum margin criterion (MMC) projects the original samples into a low-dimensional subspace such that samples of the same class become more compact and samples of different classes are separated as far as possible; it requires the between-class distance to be maximized in the low-dimensional space after projection. The objective function of MMC is then:
J = max tr(S_b - S_w)   (b),
wherein m is the average vector of all samples in the training set, m_i is the average vector of all samples of the i-th class of sub-patterns in the training set, S_b is the between-class scatter matrix, S_w is the within-class scatter matrix, and n_i is the number of samples belonging to the i-th class of sub-patterns in the training set; a sample is one sample in a sub-pattern of the training set, that is, the coordinate representation of a vector in that sub-pattern, so that each sub-pattern is a sample set; i takes values between 1 and K.
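The displayed definitions of S_b and S_w are likewise not reproduced in the source; the standard between-class and within-class scatter matrices consistent with the symbols named above are written below, on the assumption that the patent uses the usual definitions.

```latex
S_b=\sum_{i} n_i\,(m_i-m)(m_i-m)^{T},
\qquad
S_w=\sum_{i}\sum_{x\in\text{class } i}(x-m_i)(x-m_i)^{T}.
```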
In the method of the present invention, MMC is applied to the LPP algorithm; that is, formula (b) is combined with the LPP formula (a) to obtain a new objective formula (c) for LPP, in which D is a diagonal matrix and both L and D are known. The derivation is well within the skill of the art and is not described in detail herein.
From formula (c), the value of W_i^T can be uniquely determined;
To uniquely determine W_i^T, formula (c) is solved by the Lagrange multiplier method; W_i is then obtained as an eigenvector of the eigenproblem formed by the two matrices in formula (c), and λ_i is the eigenvalue corresponding to the sub-pattern X_i.
Therefore, W_i^T can be uniquely determined, and the low-dimensional mapping function can then be determined: Y_i = W_i^T X_i.
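For illustration only: formula (c) is shown in the source as a displayed equation and is not reproduced here, so the exact combined objective cannot be recovered from this text. The sketch below assumes one common way of combining MMC with LPP, solving the generalized eigenvalue problem (S_b - S_w) w = λ (X_i L X_i^T) w and taking the leading eigenvectors as the columns of W_i; the formulation, the regularization term, and the function name lpp_mmc_projection are assumptions rather than the patent's definitive construction.

```python
import numpy as np
from scipy.linalg import eigh

def lpp_mmc_projection(Xi, labels, S, d, reg=1e-6):
    """Compute a d-dimensional projection W_i for one sub-pattern set.

    Xi     : (D, N) matrix, one column per sub-pattern sample X_ij
    labels : (N,) class label of each sample
    S      : (N, N) similarity matrix S_jk between samples
    d      : target dimensionality
    """
    D_mat = np.diag(S.sum(axis=1))            # diagonal degree matrix D
    L = D_mat - S                             # graph Laplacian L = D - S

    m = Xi.mean(axis=1, keepdims=True)        # overall mean vector m
    Sb = np.zeros((Xi.shape[0], Xi.shape[0]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(labels):
        Xc = Xi[:, labels == c]
        mc = Xc.mean(axis=1, keepdims=True)   # class mean m_i
        Sb += Xc.shape[1] * (mc - m) @ (mc - m).T   # between-class scatter
        Sw += (Xc - mc) @ (Xc - mc).T               # within-class scatter

    A = Sb - Sw                                          # MMC term
    B = Xi @ L @ Xi.T + reg * np.eye(Xi.shape[0])        # LPP term (regularized)

    # Generalized eigenproblem A w = lambda B w; keep the d largest eigenvalues.
    eigvals, eigvecs = eigh(A, B)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:d]]
    return W                                             # W_i, so Y_i = W.T @ X_i
```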
S103: according to the nearest neighbor classification algorithm, representing Y by the low-dimensional vector of the sub-mode of the face image X to be detectediClassifying, and obtaining K recognition results according to K sub-mode sets in the training set;
the training set is a preset set of face images, wherein N face images of P persons are shared, each face image in the training set is divided into K sub-images according to the equal stripe division rule, the size of each image is constant, and the dimension of each sub-image is converted into a vector of H1H 2/K if H1H 2 pixels are assumed; after all images in the training set are divided into sub-images, combining the sub-images located at the same position in different images into corresponding sub-mode sets, so as to obtain K different sub-mode sets, wherein each sub-mode set is composed of a plurality of samples, each sub-mode set is represented by a vector, and the sample is represented by data of each coordinate in the vector.
S104: according to the weighted voting method, the probability that the face image X to be detected belongs to the c-th person is obtained by calculating K recognition results:
whereinWherein said WgiIs the identification weight of the ith sub-pattern set;
then the recognition result for the face image X to be detected is:
Identity(X)=argmax(pc) That is, the category with the highest probability is taken as the recognition result of X.
The final recognition weight of each sub-pattern is calculated from the class labels of each sample in the sub-pattern set and its K nearest neighbor samples; a K nearest neighbor sample is a sample in the same sub-pattern set, found by computing Euclidean distances.
Then, for the i-th sub-pattern set, its recognition weight W_gi is calculated from the quantities K_ij, wherein K_ij is, for each sample X_ij of the i-th sub-pattern set in the training set, the number of its neighbors that belong to the same class as X_ij; said X_ij is a sample of the i-th sub-pattern set in the training set, where a sample refers to the coordinate representation of a vector of the sub-pattern set; j ranges from 1 to N, where N is the total number of preset face images in the training set. An illustrative sketch of this weighting and the final weighted vote follows.
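For illustration only: the formulas for W_gi and p_c appear in the source as displayed equations that are not reproduced here, so the normalization used below (averaging the same-class-neighbor fraction over the N training samples, then weighting each sub-pattern's nearest-neighbor vote by W_gi) is an assumption consistent with the surrounding text, not the patent's exact definition; the function names and the reuse of the earlier hypothetical helpers are likewise assumptions.

```python
import numpy as np

def recognition_weights(sub_pattern_sets, labels, W_list, k_neighbors=3):
    """Assumed weight W_gi: average fraction of a sample's k nearest neighbors
    (Euclidean distance, within the same sub-pattern set) sharing its class."""
    weights = []
    for Xi, Wi in zip(sub_pattern_sets, W_list):
        Yi = Wi.T @ Xi                                    # low-dimensional training samples
        dists = np.linalg.norm(Yi[:, :, None] - Yi[:, None, :], axis=0)
        np.fill_diagonal(dists, np.inf)
        agree = []
        for j in range(Yi.shape[1]):
            nn = np.argsort(dists[j])[:k_neighbors]
            agree.append(np.mean(labels[nn] == labels[j]))    # K_ij / k
        weights.append(np.mean(agree))                        # assumed W_gi
    return np.asarray(weights)

def classify(face, K, sub_pattern_sets, labels, W_list, Wg):
    """Nearest-neighbor vote per sub-pattern, then weighted vote over persons."""
    stripes = divide_into_stripes(face, K)                    # hypothetical helper above
    scores = {}
    for Xi, Wi, wg, x in zip(sub_pattern_sets, W_list, Wg, stripes):
        y = Wi.T @ x
        Yi = Wi.T @ Xi
        vote = labels[np.argmin(np.linalg.norm(Yi - y[:, None], axis=0))]
        scores[vote] = scores.get(vote, 0.0) + wg             # accumulate weighted votes
    total = sum(scores.values())
    p = {c: s / total for c, s in scores.items()}             # assumed p_c
    return max(p, key=p.get)                                  # Identity(X) = argmax_c p_c
```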
The invention uses the Yale face database as the training set and classifies and recognizes the face images to be detected; the results obtained are clearly superior to the existing LPP (locality preserving projection) and SpLPP (sub-pattern locality preserving projection) algorithms, improving the computational efficiency of conventional LPP and outperforming other sub-pattern algorithms in recognition accuracy. The Yale face database was created by the Yale University Center for Computational Vision and Control. For a detailed description, reference may be made to the document cited in the background.
As shown in FIG. 2, the invention further provides an intelligent face recognition security system that applies the above data dimension reduction method based on manifold learning. The system comprises the following modules:
a search module: the method is used for face search, real-time control and face comparison.
A detection module: the method is used for face detection based on the hsv model space.
A processing module: the method is used for correcting the human face, preprocessing the human face image and extracting the human face characteristics.
A data dimension reduction module: the method is used for performing data dimension reduction on the face image.
An identification module: for deriving the recognition result.
The system recognizes faces using the data dimension reduction method based on manifold learning, so its recognition accuracy can reach more than 90%, basically meeting recognition requirements in complex environments. Meanwhile, the system has good real-time performance and is portable.
The invention also provides a device corresponding to the data dimension reduction method based on manifold learning, comprising the following modules:
a dividing module, used for dividing the face image X to be detected into K sub-images according to a certain rule, then converting the K sub-images into corresponding sub-patterns, and recording the vector of each sub-pattern as X_i (i = 1, 2, …, K);
a data dimension reduction module, used for determining Y_i through the formula Y_i = W_i^T X_i; said Y_i is the low-dimensional vector representation of X_i, and W_i^T is derived through the maximum margin criterion and the locality preserving projection algorithm;
a classification and recognition module, used for classifying the low-dimensional vectors Y_i of the sub-patterns of the face image X to be detected according to the nearest neighbor classification algorithm, and obtaining K recognition results according to the K sub-pattern sets in a training set; the training set is a preset set of face images that is divided into K sub-pattern sets according to the rule;
a recognition result obtaining module, which calculates from the K recognition results, according to a weighting method, the probability p_c that the face image X to be detected belongs to the c-th person, where the contribution of each sub-pattern is weighted by its recognition weight W_gi;
then the recognition result of the face image X to be detected is obtained as: Identity(X) = argmax_c(p_c);
wherein said W_gi is the recognition weight of the i-th sub-pattern set, computed from K_ij, where K_ij is, for each sample X_ij of the i-th sub-pattern set in the training set, the number of its neighbors that belong to the same class as X_ij; said X_ij is a sample of the i-th sub-pattern set in the training set, where a sample refers to the coordinate representation of a vector of the sub-pattern set; j ranges from 1 to N, where N is the total number of preset face images in the training set.
Further, the rule is to divide the face image into equal stripes.
Various other modifications and changes may be made by those skilled in the art based on the above-described technical solutions and concepts, and all such modifications and changes should fall within the scope of the claims of the present invention.

Claims (5)

1. A data dimension reduction method based on manifold learning, characterized by comprising the following steps:
S101: dividing a face image X to be detected into K sub-images according to a certain rule, converting the K sub-images into corresponding sub-patterns, and recording the vector of each sub-pattern as X_i (i = 1, 2, …, K);
S102: according to formula Yi=Wi TXiDetermining Yi(ii) a Said Y isiIs XiIs represented by a low-dimensional vector of Wi TThe method is derived through a maximum distance criterion and a local preserving projection algorithm;
s103: according to the nearest neighbor classification algorithm, performing low-dimensional vector Y on the sub-mode of the face image X to be detectediClassifying, and obtaining K recognition results according to K sub-mode sets in the training set; the training set is a preset facial image set which is divided into K sub-mode sets according to the rule;
s104: according to a weighting method, the probability that the face image X to be detected belongs to the c-th person is obtained by calculating K recognition results:
wherein
Then, the identification result of the face image X to be detected is obtained as follows: identity (x) ═ argmax (p)c);
Wherein said WgiThe identification weight of the ith sub-pattern set is as follows:wherein KijIs for each sample X of the ith set of sub-patterns in the training setijOf the neighbor of (a) and the number of samples it is in the same class, said XijIs a sample of the ith sub-pattern set in the training set, wherein the sample refers to the coordinate representation of the vector of each sub-pattern set; j is between 1 and N, wherein N is the total number of the preset face images in the training set.
2. The manifold learning-based data dimension reduction method according to claim 1, wherein the rule is to divide the face image into equal stripes.
3. The manifold learning-based data dimension reduction method according to claim 1, characterized in that said W_i^T is derived through the maximum margin criterion and the locality preserving projection algorithm, specifically as follows:
the objective formula of the new locality preserving projection algorithm is obtained by combining the maximum margin criterion with the locality preserving projection algorithm, wherein D is a diagonal matrix and both L and D are known; m is the average vector of all samples in the training set, m_i is the average vector of all samples of the i-th class of sub-patterns in the training set, S_b is the between-class scatter matrix, S_w is the within-class scatter matrix, and n_i is the number of samples belonging to the i-th class of sub-patterns in the training set; i takes values between 1 and K;
from this formula, W_i^T is uniquely determined.
4. A data dimension reduction device based on manifold learning, characterized by comprising the following modules:
a dividing module, used for dividing the face image X to be detected into K sub-images according to a certain rule, then converting the K sub-images into corresponding sub-patterns, and recording the vector of each sub-pattern as X_i (i = 1, 2, …, K);
a data dimension reduction module, used for determining Y_i through the formula Y_i = W_i^T X_i; said Y_i is the low-dimensional vector representation of X_i, and W_i^T is derived through the maximum margin criterion and the locality preserving projection algorithm;
a classification and recognition module, used for classifying the low-dimensional vectors Y_i of the sub-patterns of the face image X to be detected according to the nearest neighbor classification algorithm, and obtaining K recognition results according to the K sub-pattern sets in a training set; the training set is a preset set of face images that is divided into K sub-pattern sets according to the rule;
a recognition result obtaining module, which calculates from the K recognition results, according to a weighting method, the probability p_c that the face image X to be detected belongs to the c-th person, where the contribution of each sub-pattern is weighted by its recognition weight W_gi;
then the recognition result of the face image X to be detected is obtained as: Identity(X) = argmax_c(p_c);
wherein said W_gi is the recognition weight of the i-th sub-pattern set, computed from K_ij, where K_ij is, for each sample X_ij of the i-th sub-pattern set in the training set, the number of its neighbors that belong to the same class as X_ij; said X_ij is a sample of the i-th sub-pattern set in the training set, where a sample refers to the coordinate representation of a vector of the sub-pattern set; j ranges from 1 to N, where N is the total number of preset face images in the training set.
5. The manifold learning-based data dimension reduction device according to claim 4, wherein the rule is to divide the face image into equal stripes.
CN201610011619.1A 2016-01-06 2016-01-06 Data dimension reduction method and device based on manifold learning Active CN105678265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610011619.1A CN105678265B (en) 2016-01-06 2016-01-06 Data dimension reduction method and device based on manifold learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610011619.1A CN105678265B (en) 2016-01-06 2016-01-06 Data dimension reduction method and device based on manifold learning

Publications (2)

Publication Number Publication Date
CN105678265A CN105678265A (en) 2016-06-15
CN105678265B true CN105678265B (en) 2019-08-20

Family

ID=56299550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610011619.1A Active CN105678265B (en) 2016-01-06 2016-01-06 Data dimension reduction method and device based on manifold learning

Country Status (1)

Country Link
CN (1) CN105678265B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503633A (en) * 2016-10-10 2017-03-15 上海电机学院 The method for building up in face characteristic storehouse in a kind of video image
CN107451538A (en) * 2017-07-13 2017-12-08 西安邮电大学 Human face data separability feature extracting method based on weighting maximum margin criterion
CN107657214B (en) * 2017-09-04 2021-02-26 重庆大学 Electronic tongue taste recognition method for local discrimination and retention projection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006103240A1 (en) * 2005-03-29 2006-10-05 France Telecom Method of identifying faces from face images and corresponding device and computer program
CN1908960A (en) * 2005-08-02 2007-02-07 中国科学院计算技术研究所 Feature classification based multiple classifiers combined people face recognition method
CN101079105A (en) * 2007-06-14 2007-11-28 上海交通大学 Human face identification method based on manifold learning
CN103336960A (en) * 2013-07-26 2013-10-02 电子科技大学 Human face identification method based on manifold learning
CN103745200A (en) * 2014-01-02 2014-04-23 哈尔滨工程大学 Facial image identification method based on word bag model
CN105184281A (en) * 2015-10-12 2015-12-23 上海电机学院 Face feature library building method based on high-dimensional manifold learning


Also Published As

Publication number Publication date
CN105678265A (en) 2016-06-15

Similar Documents

Publication Publication Date Title
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
WO2019134327A1 (en) Facial expression recognition feature extraction method employing edge detection and sift
Si et al. Learning hybrid image templates (hit) by information projection
Cheng et al. Gait analysis for human identification through manifold learning and HMM
Abusham et al. Face recognition using local graph structure (LGS)
Li et al. Overview of principal component analysis algorithm
Aykut et al. Developing a contactless palmprint authentication system by introducing a novel ROI extraction method
Wang et al. Video-based face recognition: A survey
CN107220627B (en) Multi-pose face recognition method based on collaborative fuzzy mean discrimination analysis
Yi et al. Motion keypoint trajectory and covariance descriptor for human action recognition
CN104966075B (en) A kind of face identification method and system differentiating feature based on two dimension
CN104268507A (en) Manual alphabet identification method based on RGB-D image
CN105678265B (en) Data dimension reduction method and device based on manifold learning
Lin et al. A study of real-time hand gesture recognition using SIFT on binary images
Whitehill et al. A discriminative approach to frame-by-frame head pose tracking
Yunqi et al. 3D face recognition by SURF operator based on depth image
CN103942572A (en) Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction
CN109740429A (en) Smiling face's recognition methods based on corners of the mouth coordinate mean variation
Kocjan et al. Face recognition in unconstrained environment
Hu et al. An effective head pose estimation approach using Lie Algebrized Gaussians based face representation
Zhang et al. Combining weighted adaptive CS-LBP and local linear discriminant projection for gait recognition
Sahoo et al. Multi-feature-based facial age estimation using an incomplete facial aging database
Rajalakshmi et al. A review on classifiers used in face recognition methods under pose and illumination variation
Shen et al. A novel distribution-based feature for rapid object detection
Viswanathan et al. Recognition of hand gestures of English alphabets using HOG method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 510000 room 414, room 413, No. 662, Huangpu Avenue middle, Tianhe District, Guangzhou City, Guangdong Province (office only)

Patentee after: GUANGZHOU HONGSEN TECHNOLOGY Co.,Ltd.

Address before: 510665, room 12, No. 301, Yun Yun Road, Guangzhou, Guangdong, Tianhe District

Patentee before: GUANGZHOU HONGSEN TECHNOLOGY Co.,Ltd.

CP02 Change in the address of a patent holder
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Data dimension reduction method and device based on Manifold Learning

Effective date of registration: 20200729

Granted publication date: 20190820

Pledgee: Zhujiang Branch of Guangzhou Bank Co.,Ltd.

Pledgor: GUANGZHOU HONGSEN TECHNOLOGY Co.,Ltd.

Registration number: Y2020980004530

PE01 Entry into force of the registration of the contract for pledge of patent right
PP01 Preservation of patent right

Effective date of registration: 20220726

Granted publication date: 20190820

PP01 Preservation of patent right