CN106503687B - Surveillance video person identification system fusing multi-angle facial features and method thereof - Google Patents

Surveillance video person identification system fusing multi-angle facial features and method thereof

Info

Publication number
CN106503687B
CN106503687B CN201610984667.9A CN201610984667A CN106503687B CN 106503687 B CN106503687 B CN 106503687B CN 201610984667 A CN201610984667 A CN 201610984667A CN 106503687 B CN106503687 B CN 106503687B
Authority
CN
China
Prior art keywords
face
identity
key frame
picture
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610984667.9A
Other languages
Chinese (zh)
Other versions
CN106503687A (en)
Inventor
孙晓
吕曼
彭晓琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Xinfa Technology Co ltd
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN201610984667.9A priority Critical patent/CN106503687B/en
Publication of CN106503687A publication Critical patent/CN106503687A/en
Application granted granted Critical
Publication of CN106503687B publication Critical patent/CN106503687B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a surveillance video person identification system that fuses multi-angle facial features, and a corresponding method. The system comprises a target detection module, a multi-angle face recognition module and an identity matching module. The target detection module converts a segment of surveillance video into a set of key frames containing people; the multi-angle face recognition module converts that key frame set into a face image sequence labelled with angle values; and the identity matching module performs feature-vector similarity matching between the angle-labelled face image sequence and the image sequences in an identity library, and outputs the closest identity as the recognition result. The invention can exploit the feature information of multiple facial angles present in surveillance video, thereby improving recognition accuracy when the poses of people in the video are highly variable.

Description

Surveillance video person identification system fusing multi-angle facial features and method thereof
Technical field
The invention belongs to the field of intelligent video surveillance and relates to technologies such as pattern recognition and artificial intelligence, and more particularly to a surveillance video person identification system fusing multi-angle facial features and a method thereof.
Background technique
In modern society, more and more places require identity authentication. Face recognition is a technology that uses a biological characteristic possessed by human beings themselves to perform identity authentication. With the growing demands of application fields such as video surveillance, information security and access control, video face recognition systems have enormous application prospects in these areas.
At present, face recognition is mainly applied in fields such as attendance and access-control authentication; no sufficiently complete face recognition surveillance device has yet been applied in surveillance scenarios. The environment in which face recognition must operate in video sequences is far more complex than in still images. For example, surveillance cameras are far from the target, so acquiring high-quality face images is relatively difficult; at the same time, the subject's pose is highly random and the subject is in motion, so the probability of a profile view or of facing away from the camera increases greatly; in addition, body occlusion often appears in surveillance scenes. All of these bring considerable difficulty to face detection and face comparison.
It is therefore necessary to overcome the problems that, in surveillance video, frontal faces appear with low probability, images are blurred and samples are few, and to recognize the identity of a person by combining face images from all angles. Traditional two-dimensional face recognition completes identification by extracting and comparing key facial features. The mainstream feature extraction algorithms currently include the LBP feature describing image texture, the eigenface method (PCA) describing the statistical characteristics of face images, and extraction methods based on local geometric features of the image. Common feature extraction methods are based on static frontal face images; such methods often rely on accurate detection of facial characteristics, and the completeness of the features is a decisive factor in the success of the algorithm. Face matching with features extracted in this way can perform well, but its defect is susceptibility to external interference: once the face is rotated, occluded or partially blurred, some features disappear and the facial feature representation becomes incomplete, the algorithm fails, and the face can no longer be matched against the information in the library. Such methods therefore struggle to adapt to the challenges brought by video surveillance, such as low face image quality, multiple angles and long distance.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention proposes a surveillance video person identification system fusing multi-angle facial features and a method thereof, so as to exploit the feature information of multiple facial angles in surveillance video and thereby improve recognition accuracy when the poses of people in the video are highly variable.
To achieve the above object, the present invention adopts the following technical scheme:
A surveillance video person identification system fusing multi-angle features according to the present invention is characterized in that it comprises a target detection module, a multi-angle face recognition module and an identity matching module;
The target detection module collects single-frame pictures with people and without people, performs size normalization on every picture and then assigns class labels, so as to obtain a training sample data set composed of positive samples and negative samples; it extracts the SIFT features of each sample in the training sample data set, so that each sample is converted into a feature vector; it then trains a support vector machine (SVM) on the feature vectors to obtain a target detection model;
The target detection module converts the surveillance video into a series of single-frame pictures that serve as a test set; the target detection model is used to identify whether each frame in the test set contains a person: if it does, the corresponding single frame is retained as a key frame, otherwise it is discarded, so as to obtain a key frame set;
The multi-angle face recognition module takes the front of the face as the 0° starting collection point and acquires one face image every k degrees clockwise, except at 90° and 270°; each identity thus collects m = 360/k - 2 face images of different angles in total, forming a face image sequence composed of images with a face and images without a face, and the face image sequences of n different identities form a multi-angle face database; every face image is classified into the category set corresponding to its angle value; the multi-angle face database is expanded by partially occluding the upper-left, upper-right, lower-left, lower-right and middle parts of every face image, thereby forming a multi-angle face training set; the SIFT features of every face image in the multi-angle face training set are extracted, so that every face image is converted into a multi-angle feature vector; a support vector machine is then trained on the multi-angle feature vectors to obtain a multi-angle face detection model;
The multi-angle face recognition module detects the key frame set with the multi-angle face detection model, obtains the face angle value of each key frame, and judges from the face angle value whether each key frame contains a face; if it does, the key frame is retained, otherwise it is discarded, so that the key frame set is converted into a face image sequence with angle value labels;
The identity matching module constructs an identity library from the face image sequences with faces in the multi-angle face database, uses the identity library as the input of a neural network and trains it, so as to obtain a neural network model for identity recognition; the face image sequence with angle value labels is then used as a test sample, the test sample is fed into the neural network model for identity recognition, and the output of any intermediate layer of the neural network is extracted as a feature, so that the test sample is converted into a multi-dimensional identity feature vector to be identified;
The identity matching module feeds the face image sequences of the n different identities in the multi-angle face database into the neural network model for identity recognition, so as to obtain n multi-dimensional identity feature vectors used for matching; the multi-dimensional identity feature vector to be identified is then compared with each of the n matching vectors by cosine-distance similarity, the matching vector corresponding to the largest cosine value is taken as the identity matching result of the vector to be identified, and the identity label corresponding to that matching result is taken as its identity recognition result.
A surveillance video person identity recognition method fusing multi-angle features according to the present invention is characterized in that it is carried out as follows:
Step 1: collect single-frame pictures with people and without people, perform size normalization on every picture and assign class labels, so as to obtain a training sample data set composed of positive samples and negative samples;
Step 2: extract the SIFT features of each sample in the training sample data set, so that each sample is converted into a feature vector;
Step 3: train a support vector machine (SVM) on the feature vectors to obtain a target detection model;
Step 4: convert the surveillance video into a series of single-frame pictures that serve as a test set;
Step 5: use the target detection model to identify whether each frame in the test set contains a person; if it does, retain the corresponding single frame as a key frame, otherwise discard it, so as to obtain a key frame set;
Step 6: take the front of the face as the 0° starting collection point and acquire one face image every k degrees clockwise, except at 90° and 270°; each identity collects m = 360/k - 2 face images of different angles in total, forming a face image sequence composed of images with a face and images without a face, and the face image sequences of n different identities form a multi-angle face database;
Step 7: classify every face image into the corresponding category set according to its angle value, and expand the multi-angle face database by partially occluding the upper-left, upper-right, lower-left, lower-right and middle parts of every face image, thereby forming a multi-angle face training set;
Step 8: extract the SIFT features of every face image in the multi-angle face training set, so that every face image is converted into a multi-angle feature vector;
Step 9: train a support vector machine on the multi-angle feature vectors to obtain a multi-angle face detection model;
Step 10: detect the key frame set with the multi-angle face detection model to obtain the face angle value of each key frame;
Step 11: judge from the face angle value whether each key frame contains a face; if it does, retain the key frame, otherwise discard it, so that the key frame set is converted into a face image sequence with angle value labels;
Step 12: construct an identity library from the face image sequences with faces in the multi-angle face database, use the identity library as the input of a neural network and train it, so as to obtain a neural network model for identity recognition;
Step 13: use the face image sequence with angle value labels as a test sample, feed the test sample into the neural network model for identity recognition, and extract the output of any intermediate layer of the neural network as a feature, so that the test sample is converted into a multi-dimensional identity feature vector to be identified;
Step 14: feed the face image sequences of the n different identities in the identity library into the neural network model for identity recognition, so as to obtain n multi-dimensional identity feature vectors used for matching;
Step 15: compare the multi-dimensional identity feature vector to be identified with each of the n matching vectors by cosine-distance similarity, take the matching vector corresponding to the largest cosine value as the identity matching result of the vector to be identified, and take the identity label corresponding to that matching result as its identity recognition result.
Compared with the prior art, the beneficial effects of the present invention are embodied as follows:
1. The present invention proposes a face detection approach suitable for surveillance video, where the poses of the captured subjects are highly variable, and identifies their pose angle values. A convolutional neural network is used to extract features from a sequence of face images of different angles collected within one acquisition period, overcoming the difficulty in the prior art of recognizing identity from features extracted from a single frontal frame, and achieving the fusion of features from multiple angles; at the same time, the deep learning algorithm reduces the feature dimensionality and improves the accuracy of person identification in surveillance video. It can provide early warning of dangerous persons, protect the lives and property of the people, and play an important role in assisting public security.
2. In the image feature extraction stage of both the target detection module and the multi-angle face recognition module, the present invention uses the scale-invariant feature transform (SIFT) algorithm, which detects local image features, to extract features from the single-frame pictures of the surveillance video and from the key frame set. SIFT features are not only scale invariant; good detection results can still be obtained even if the rotation angle, image brightness or shooting viewpoint changes. When the poses of people in surveillance video are highly variable, more invariant image features can be learned, improving the accuracy of target recognition in complex environments.
3. In the multi-angle face recognition module, the present invention expands the collected multi-angle face database using partial occlusion. Including a large number of occluded samples in the training set improves the sample distribution and removes the limitation that facial occlusion imposes on face detection in the prior art; it also effectively addresses the problems of a small number of samples and weak classifier generalization. Such data augmentation enlarges the database, improves algorithm accuracy and feature dimensionality, and can thereby improve identity recognition accuracy.
4. In the identity matching module, the present invention trains a convolutional neural network with the face image sequences in the identity library. The constructed CNN model is regarded as a feature extractor for learning the identity features of a face image sequence of different angles: the frontal and lateral facial features of a person are fused and converted into a high-level multi-dimensional identity feature vector, improving the accuracy of identity representation. The target is matched against the identity feature vectors in the library to retrieve the identity information of the target person, avoiding the complexity of the traditional approach of extracting features for each of these different angles and retraining a classifier for matching. Deep learning is a new field in machine learning research; its motivation is to establish and simulate the neural networks with which the human brain analyses and learns. Compared with general machine learning algorithms, deep learning interprets data by imitating the mechanism of the human brain; it can not only optimize and reduce the dimensionality of the feature values well, but also obtain identity feature values of better quality at the feature level.
Detailed description of the invention
Fig. 1 is a schematic diagram of the identification system of the present invention.
Specific embodiment
In this embodiment, as shown in Fig. 1, a surveillance video person identification system fusing multi-angle features comprises: a target detection module, for converting a segment of surveillance video into a key frame set containing people; a multi-angle face recognition module, for converting the key frame set containing people into a face image sequence with angle value labels; and an identity matching module, for completing the identity matching between the face image sequence with angle value labels and the image sequences in an identity library.
The target detection module comprises two stages: training a person-picture target detection model, and video preprocessing. To train the person-picture target detection model, single-frame pictures with people and without people are first collected, size normalization is performed on every picture and class labels are assigned, so as to obtain a training sample data set composed of positive samples and negative samples. The positive samples are the pictures containing people, with class label set to 1; the negative samples are the pictures without people, with label 0; each class is placed in its own folder. The SIFT features of each sample in the training sample data set are then extracted; a fixed number of feature points is extracted from each sample, so that each sample is converted into a feature vector. A support vector machine is then trained on the feature vectors of the training sample data set to obtain the person-picture target detection model, which is used to complete the surveillance video preprocessing;
In the video preprocessing stage of the target detection module, the surveillance video is first converted into a series of single-frame pictures with the FFmpeg software package, and these pictures serve as the test set; the same number of SIFT features is extracted from every picture. The person-picture target detection model is used to identify whether each frame in the test set contains a person; the SVM output corresponds to the predicted label of each test picture. If the class is 1, indicating that a person is present, the corresponding single frame is retained and the region containing the person is cropped out and kept as a key frame; otherwise the frame is discarded, so as to obtain a key frame set;
The multi-angle face recognition module comprises two stages: training a multi-angle face detection model, and recognizing face angles in the key frame set. To train the multi-angle face detection model, a multi-angle face database is first constructed: with the front of the face as the 0° starting collection point, one face image is acquired every k degrees clockwise, except at 90° and 270°. In this embodiment, k = 10 is taken, so each identity collects m = 360/k - 2 = 34 face images of different angles in total, forming a face image sequence composed of images with a face and images without a face, where the images with a face are the m/2 images in the range [0°, 80°] ∪ [280°, 350°] and the images without a face are the m/2 images in the range [100°, 260°]; the face image sequences of n different identities then form the multi-angle face database. Every face image is assigned a class label according to its angle value and is classified into the corresponding category set. Meanwhile, to enlarge the training set and to handle the detection of occluded faces, data augmentation can be applied: the multi-angle face database is expanded by partially occluding the upper-left, upper-right, lower-left, lower-right and middle parts of every face image, thereby forming the multi-angle face training set. The SIFT features of every face image in the multi-angle face training set are extracted, so that every face image is converted into a multi-angle feature vector; a support vector machine is then trained on the multi-angle feature vectors to obtain the multi-angle face detection model, which can be used to recognize the angle of faces in the key frame set;
In the key frame face-angle recognition stage of the multi-angle face recognition module, the SIFT features of every image in the key frame set are first extracted and converted into vectors, and the key frame set is detected with the multi-angle face detection model; from the SVM output, the face angle value of each key frame is obtained. Whether each key frame contains a face is judged from the face angle value: if it does, the key frame is retained, otherwise it is discarded. For the retained key frames, the face part of the image is detected and cropped, and the cropped images are unified to a fixed size, so that the key frame set is converted into a face image sequence with angle value labels;
The identity matching module comprises three stages: training the neural network model for identity recognition, learning the identity features of all identity-library sequences and of the sequence to be identified, and identity similarity matching. The identity library is constructed from the face image sequences with faces in the multi-angle face database; each identity corresponds to an integer class label, each face image sequence in the identity library is one training sample, and the pictures within a sample are arranged in order of angle. Each sample is converted into a feature matrix of pixel values, which serves as the input of a convolutional neural network, and the network is trained to obtain the neural network model for identity recognition;
The face image sequence with angle value labels is then used as a test sample and fed into the neural network model for identity recognition; the classification output of the model is not used, and instead the network serves as a feature extractor. In this embodiment, the output of the last fully connected layer of the convolutional neural network is used as the high-level feature learned for the test sample, so that the test sample is converted into a multi-dimensional identity feature vector to be identified. In the same way, the face image sequences of the n different identities in the identity library are each fed into the neural network model for identity recognition and the result of the same intermediate layer is extracted as the feature, so as to obtain n multi-dimensional identity feature vectors used for matching;
In the identity similarity matching stage, the multi-dimensional identity feature vector to be identified is compared with each of the n matching vectors by cosine-distance similarity; the matching vector corresponding to the largest cosine value is taken as the identity matching result of the vector to be identified, and the identity label corresponding to that matching result is taken as its identity recognition result.
In this embodiment, a surveillance video person identity recognition method fusing multi-angle features comprises the following steps:
Step 1: collect single-frame pictures with people and without people, build a sample database of person pictures and non-person pictures, perform size normalization on every picture and assign class labels. In this embodiment, the positive samples are the pictures containing people, from which the human-body region of 128*128 pixels is cropped; all positive samples are labelled 1. The negative samples are cropped at random from pictures that contain no human body, also at size 128*128, and all negative samples are labelled 0. The positive and negative samples are each placed in their own folder, so as to obtain a training sample data set composed of positive samples and negative samples;
Step 2: extract the SIFT features of each sample in the training sample data set. Each SIFT keypoint descriptor is a vector of 4*4*8 = 128 dimensions. After feature extraction, each sample is converted into a feature matrix of n*128 dimensions, where n is the number of extracted feature points. In this example, 100 feature points are extracted from each sample and the feature vectors of the feature points are concatenated in order, so that each sample is converted into a feature vector of 100*128 dimensions;
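As an illustration only, a minimal sketch of this step using OpenCV's SIFT implementation is given below; the 100 keypoints and the concatenation into a 100*128 vector follow the embodiment, while the function name, padding behaviour and image path are illustrative assumptions.

```python
import cv2
import numpy as np

def sample_to_vector(image_path, n_keypoints=100):
    """Convert one training picture into a fixed-length SIFT feature vector.

    Each SIFT descriptor has 128 dimensions; the first n_keypoints descriptors
    are concatenated into a single (n_keypoints * 128)-dimensional vector,
    as described in Step 2 of the embodiment.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create(nfeatures=n_keypoints)
    keypoints, descriptors = sift.detectAndCompute(img, None)

    # Pad with zeros if fewer than n_keypoints were found (an assumption; the
    # patent only states that a fixed number of points is extracted per sample).
    feat = np.zeros((n_keypoints, 128), dtype=np.float32)
    if descriptors is not None:
        k = min(n_keypoints, descriptors.shape[0])
        feat[:k] = descriptors[:k]
    return feat.flatten()  # shape: (100 * 128,)
```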
Step 3: train a support vector machine on the feature vectors of the training sample data set to obtain the person-picture target detection model, which detects whether a person is present in a single-frame picture. The SIFT feature vectors of the positive and negative samples, together with their labels, are arranged in the format required by libsvm (Label 1:value 2:value ...). The samples are scaled with svm-scale, normalizing the data to the range [-1, 1]. The optimal parameters c and g are selected by cross validation with grid.py, and svm-train then uses the optimal parameters c and g and the RBF kernel function to train on the feature vectors of the entire training data set, yielding the support vector machine model parameters. After training, the result is saved as a .model file; this classifier is the person-picture target detection model.
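A compact Python sketch of an equivalent workflow follows. The embodiment uses the libsvm command-line tools (svm-scale, grid.py, svm-train); here scikit-learn is used purely as an illustrative stand-in, with scaling to [-1, 1], a grid search over C and gamma, and an RBF-kernel SVM.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def train_target_detector(X, y):
    """Train the person/no-person classifier.

    X: (num_samples, 100 * 128) SIFT vectors from Step 2.
    y: labels (1 = person, 0 = no person).
    """
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X)  # mirrors svm-scale
    Xs = scaler.transform(X)

    # Grid search over C and gamma, mirroring grid.py cross validation.
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [2 ** i for i in range(-5, 11, 2)],
                    "gamma": [2 ** i for i in range(-15, 4, 2)]},
        cv=5)
    grid.fit(Xs, y)
    return scaler, grid.best_estimator_  # the "target detection model"
```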
Step 4: preprocess the surveillance video. A segment of surveillance video to be identified is converted into a series of single-frame pictures with the FFmpeg software package, and these pictures serve as the test set; the test images are likewise of size 128*128. The SIFT features of every picture are extracted, converting every test picture into a feature vector of 100*128 dimensions;
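One way to perform this frame extraction, assuming the video path and output naming pattern below, is to invoke FFmpeg from Python:

```python
import subprocess

def video_to_frames(video_path, out_dir):
    """Split a surveillance video into single-frame pictures with FFmpeg.

    Every frame is written as a numbered PNG and scaled to 128x128,
    matching the test-set size used in Step 4.
    """
    subprocess.run([
        "ffmpeg", "-i", video_path,
        "-vf", "scale=128:128",
        f"{out_dir}/frame_%05d.png",
    ], check=True)
```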
Step 5: use the person-picture target detection model to identify whether each frame in the test set contains a person. The trained model file is loaded and the svmpredict function in libsvm is called to classify the test pictures, producing a .predict file; the class label (1 or 0) of every test picture then indicates whether that frame contains a person. For a test case x, classification is performed according to the SVM decision function f(x) = sign( Σ_i α_i y_i K(x_i, x) + b ), where (x_i, y_i) are the training samples and y_i ∈ {1, 0}.
If the SVM output is 1, indicating that a person is present, the corresponding single frame is retained; the peripheral area of the image is removed, the human-body region is cropped out, and the picture is normalized to 64*64 as a key frame. Otherwise the single frame is discarded, so as to obtain the key frame set;
Step 6: construct the multi-angle face database, which contains images both with and without a face. With the front of the face as the 0° starting collection point, one face image is acquired uniformly every k degrees clockwise, and the 64*64-pixel face region is cropped from every image. In this embodiment, k = 10 is taken: rotating a full circle from 0° to 360°, one image is acquired uniformly every 10 degrees, except at 90° and 270°, so each identity collects m = 360/k - 2 = 34 face images of different angles in total, forming a face image sequence composed of images with a face and images without a face. The images with a face are the 17 images in the range [0°, 80°] ∪ [280°, 350°], and the images without a face are the 17 images in the range [100°, 260°]; the face image sequences of n different identities then form the multi-angle face database.
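For reference, a tiny sketch that enumerates the angle values sampled in this embodiment (k = 10, skipping 90° and 270°) and splits them into the with-face and without-face ranges described above:

```python
K = 10
angles = [a for a in range(0, 360, K) if a not in (90, 270)]  # 34 angles per identity
with_face = [a for a in angles if a <= 80 or a >= 280]        # 17 angles: [0, 80] U [280, 350]
without_face = [a for a in angles if 100 <= a <= 260]         # 17 angles: [100, 260]
assert len(angles) == 360 // K - 2 == 34
assert len(with_face) == len(without_face) == 17
```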
Step 7: assign class labels to every face image in the collected multi-angle face database according to its angle value; for example, an image with face angle 0° is assigned label 0, an image with face angle 280° is assigned label 28, and so on. Every face image is placed into the category set corresponding to its angle value, giving 34 category folders in total. In face detection, an image to be detected may contain a face occluded by another person or an object, or a face occluded by glasses or a mask. If the training samples consist only of unoccluded multi-angle faces, then when a face in a test sample is partially occluded it is easily screened out as non-face, causing missed detections. In addition, because the collected multi-angle face database has few samples, data augmentation can be used to expand the database and improve algorithm accuracy and feature dimensionality. The multi-angle face database is therefore expanded by partially occluding the upper-left, upper-right, lower-left, lower-right and middle parts of every face image, with an occlusion window of size 4*4; every complete face thus corresponds to five occluded faces, each of which is assigned the same class label, thereby forming the multi-angle face training set;
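A minimal numpy sketch of this occlusion-based augmentation, assuming 64*64 grayscale face images and the 4*4 occlusion window stated above; the exact placement of the "upper-left, upper-right, lower-left, lower-right, middle" windows is an assumption, since the patent does not fix their coordinates.

```python
import numpy as np

def occlusion_augment(face, win=4):
    """Return five occluded copies of a 64x64 face image.

    Each copy has a win x win block zeroed out at the upper-left, upper-right,
    lower-left, lower-right or centre of the image; every copy keeps the same
    angle class label as the original face.
    """
    h, w = face.shape
    positions = [(0, 0), (0, w - win), (h - win, 0), (h - win, w - win),
                 ((h - win) // 2, (w - win) // 2)]
    augmented = []
    for r, c in positions:
        occluded = face.copy()
        occluded[r:r + win, c:c + win] = 0
        augmented.append(occluded)
    return augmented
```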
Step 8: extract the SIFT features of every face image in the multi-angle face training set. In this embodiment, 100 feature points are extracted from each image and the feature-point vectors are concatenated in order, so that every face image is converted into a multi-angle feature vector of dimension 100*128;
Step 9: train a support vector machine on the multi-angle feature vectors of the entire multi-angle face training set to obtain the multi-angle face detection model. For an n-class problem, SVM has two multi-class strategies. One is the "one-versus-one" approach, which constructs a classifier for every pair of classes, giving n(n-1)/2 classifiers in total; the other is the "one-versus-rest" approach, which treats the training samples of one class as one category and all remaining classes as the other, giving n classifiers in total. In this embodiment, the "one-versus-one" method is used with samples of n = 34 classes, so 561 binary classifiers must be trained in total. When a test sample is classified, each classifier judges its class and votes, and the class with the most votes is taken as the class of the test sample. The multi-angle face detection model obtained after training can be used to detect the angle of faces in the key frame set, and therefore also to judge whether a face is present in an image.
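A brief sketch of the one-versus-one scheme, again using scikit-learn as an illustrative stand-in for the libsvm tools; with 34 angle classes this matches the 34*33/2 = 561 pairwise classifiers mentioned above.

```python
from sklearn.svm import SVC

def train_angle_classifier(X_faces, y_angles):
    """One-versus-one SVM over the 34 angle classes.

    X_faces: (num_images, 100 * 128) SIFT vectors from Step 8.
    y_angles: angle labels such as 0, 1, ..., 8, 28, ..., 35.
    """
    # SVC trains one binary classifier per class pair (libsvm one-versus-one)
    # and predicts by majority vote over those pairwise classifiers.
    clf = SVC(kernel="rbf", decision_function_shape="ovo")
    clf.fit(X_faces, y_angles)
    # With 34 angle classes, 34 * 33 / 2 = 561 pairwise classifiers are trained internally.
    return clf
```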
Step 10: detect the key frame set with the multi-angle face detection model to obtain the face angle value of each key frame. The SIFT features of every picture in the key frame set are first extracted, converting every test picture into a feature vector of 100*128 dimensions; the picture is then passed through the 561 binary classifiers obtained during training. If the classification function judges that test case x belongs to class i, the vote for class i is increased by 1; if it belongs to class j, the vote for class j is increased by 1. The votes are accumulated and the class with the highest score is taken as the class of test picture x. If the class label is 1, the face angle value of test picture x is 10°; in the same way, the face angle values of the whole key frame set are obtained.
Step 11: judge from the face angle value whether each key frame contains a face; if it does, retain the key frame, otherwise discard it. If the SVM output angle class lies in [0, 8] ∪ [28, 35], a face is present; if the output lies in [10, 26], there is no face. For the retained key frames containing a face, the peripheral area of the image is removed, the face is detected with the Haar classifier in OpenCV, and the image is cropped so that only the face part remains, with the size unified to 64*64 pixels. The key frame set is thereby converted into a face image sequence with angle value labels;
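A minimal OpenCV sketch of the face cropping described above, assuming the stock frontal-face Haar cascade that ships with OpenCV (the embodiment does not specify which cascade file is used):

```python
import cv2

_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(keyframe_bgr, size=64):
    """Detect the face in a retained key frame and return a 64x64 grayscale crop,
    or None if the Haar classifier finds no face."""
    gray = cv2.cvtColor(keyframe_bgr, cv2.COLOR_BGR2GRAY)
    faces = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], (size, size))
```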
Step 12: construct the identity library from the face image sequences with faces in the multi-angle face database, use the identity library as the input of a neural network and train it, so as to obtain the neural network model for identity recognition. The identity library uses, for each identity in the multi-angle face database, the 17 face images that contain a face and discards the 17 images without a face, and is constructed to contain n identities in total. Each image sequence represents one identity category and forms one training sample, and identity labels are assigned in order starting from 0. The samples in the identity library are divided into a training set and a validation set at a ratio of 8:2, and a convolutional neural network is trained with the theano-based keras library; the network comprises three convolutional layers, two pooling layers, two fully connected layers and one softmax layer. The pixel values of every face image are used as input data, each image being a 64*64 array; the with-face image sequence of each identity is arranged in the face-angle order (0, ..., 8, 28, ..., 35), so each training sample is converted into a 17*64*64 feature matrix that serves as the input of the convolutional neural network. The training process comprises forward training and backward training. Forward training is bottom-up unsupervised learning: training proceeds layer by layer from the bottom layer towards the top layer, and after the parameters of layer n-1 have been learned, the output of layer n-1 is used as the input of layer n to train layer n, thereby obtaining the parameters of every layer. Backward training is top-down supervised learning: the training error is propagated from top to bottom and the parameters are fine-tuned. After training is complete, a model.pkl model file is obtained.
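A sketch of a network with the stated layout (three convolutional layers, two pooling layers, two fully connected layers and a softmax output) is given below. The embodiment uses theano-backed keras; this sketch uses the current tf.keras API, and the filter counts, kernel sizes, layer name and the channel-wise interpretation of the 17*64*64 input are illustrative assumptions not specified in the patent.

```python
from tensorflow.keras import layers, models

def build_identity_cnn(n_identities, seq_len=17, img_size=64):
    """CNN over a stacked 17 x 64 x 64 face-image sequence (one sample per identity)."""
    model = models.Sequential([
        layers.Input(shape=(seq_len, img_size, img_size)),
        layers.Permute((2, 3, 1)),  # treat the 17 angle images as channels
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu", name="identity_feature"),  # last fully connected layer
        layers.Dense(n_identities, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```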
Step 13: use the face image sequence with angle value labels as a test sample, likewise arranged in the face-angle order (0, ..., 8, 28, ..., 35); if no image exists for a given face angle value, that image is represented by a 64*64 zero matrix, so that the test sample is converted into a feature matrix of pixel values. The test sample is fed into the neural network model for identity recognition and the output of the last fully connected layer of the convolutional neural network is extracted as the high-level feature learned for the test sample. This output is a k-dimensional feature vector, where k is determined by the number of nodes of the fully connected layer; this feature vector fuses the image features of multiple facial angles and can uniquely characterize one identity. The complex test sample is thereby converted into a k-dimensional identity feature vector to be identified.
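Extracting the last fully connected layer's output as the identity feature can be sketched as follows; it reuses the layer name "identity_feature" assumed in the network sketch after Step 12.

```python
import numpy as np
from tensorflow.keras import models

def make_feature_extractor(trained_cnn):
    """Wrap the trained identity CNN so it outputs the last fully connected layer
    (the k-dimensional identity feature) instead of the softmax probabilities."""
    return models.Model(inputs=trained_cnn.input,
                        outputs=trained_cnn.get_layer("identity_feature").output)

def sequence_to_identity_vector(extractor, face_sequence):
    """face_sequence: (17, 64, 64) array, with zero matrices for missing angles."""
    return extractor.predict(face_sequence[np.newaxis])[0]  # shape: (k,)
```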
Step 14: feed the face image sequences of the n different identities in the identity library into the neural network model for identity recognition. Each face image sequence is arranged in the face-angle order (0, ..., 8, 28, ..., 35) as one sample and converted into a feature matrix, and the output of the last fully connected layer of the convolutional neural network is likewise extracted as the result, so that n k-dimensional identity feature vectors used for matching are learned;
Step 15: to match the identity feature vector to be identified against the identities in the identity library, the k-dimensional identity feature vector to be identified is compared with each of the n matching k-dimensional identity feature vectors by cosine-distance similarity, using the cosine of the angle between two vectors as the measure of similarity. For example, let the m-dimensional identity feature vector to be identified be A = (a_1, a_2, ..., a_m) and some matching m-dimensional identity feature vector be B = (b_1, b_2, ..., b_m); the cosine of the angle between the two vectors is cos θ = (A · B) / (|A| |B|) = Σ_i a_i b_i / ( √(Σ_i a_i²) · √(Σ_i b_i²) ). The larger the value obtained, the more similar the two vectors are. Therefore, the matching multi-dimensional identity feature vector corresponding to the largest cosine value is taken as the identity matching result of the multi-dimensional identity feature vector to be identified, and the identity label corresponding to that matching result is taken as its identity recognition result. If no matching identity is found in the identity library for the identity feature vector to be identified, the face image sequence is treated as a new user and its sample image data are entered into the identity library; while perfecting the identity library, this improves the efficiency of the system's identity matching.
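A short numpy sketch of this cosine-similarity matching step; the acceptance threshold for declaring "no match found" is not specified in the embodiment and is therefore shown as an assumed parameter.

```python
import numpy as np

def match_identity(query_vec, gallery_vecs, identity_labels, min_cos=0.5):
    """Return the identity label whose gallery vector has the largest cosine
    similarity with query_vec, or None if even the best match falls below an
    assumed acceptance threshold (treated as a new user in Step 15)."""
    gallery = np.asarray(gallery_vecs, dtype=np.float64)
    q = np.asarray(query_vec, dtype=np.float64)
    cosines = gallery @ q / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(q))
    best = int(np.argmax(cosines))
    if cosines[best] < min_cos:
        return None
    return identity_labels[best]
```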

Claims (2)

1. A surveillance video person identification system fusing multi-angle features, characterized in that it comprises: a target detection module, a multi-angle face recognition module and an identity matching module;
The target detection module collects single-frame pictures with people and without people, performs size normalization on every picture and then assigns class labels, so as to obtain a training sample data set composed of positive samples and negative samples; it extracts the SIFT features of each sample in the training sample data set, so that each sample is converted into a feature vector; it then trains a support vector machine on the feature vectors to obtain a target detection model;
The target detection module converts the surveillance video into a series of single-frame pictures that serve as a test set; the target detection model is used to identify whether each frame in the test set contains a person: if it does, the corresponding single frame is retained as a key frame, otherwise it is discarded, so as to obtain a key frame set;
The multi-angle face recognition module takes the front of the face as the 0° starting collection point and acquires one face image every k degrees clockwise, except at 90° and 270°; each identity thus collects m = 360/k - 2 face images of different angles in total, forming a face image sequence composed of images with a face and images without a face, and the face image sequences of n different identities form a multi-angle face database; every face image is classified into the category set corresponding to its angle value; the multi-angle face database is expanded by partially occluding the upper-left, upper-right, lower-left, lower-right and middle parts of every face image, thereby forming a multi-angle face training set; the SIFT features of every face image in the multi-angle face training set are extracted, so that every face image is converted into a multi-angle feature vector; a support vector machine is then trained on the multi-angle feature vectors to obtain a multi-angle face detection model;
The multi-angle face recognition module detects the key frame set with the multi-angle face detection model, obtains the face angle value of each key frame, and judges from the face angle value whether each key frame contains a face; if it does, the key frame is retained, otherwise it is discarded, so that the key frame set is converted into a face image sequence with angle value labels;
The identity matching module constructs an identity library from the face image sequences with faces in the multi-angle face database, uses the identity library as the input of a neural network and trains it, so as to obtain a neural network model for identity recognition; the face image sequence with angle value labels is then used as a test sample, the test sample is fed into the neural network model for identity recognition, and the output of any intermediate layer of the neural network is extracted as a feature, so that the test sample is converted into a multi-dimensional identity feature vector to be identified;
The identity matching module feeds the face image sequences of the n different identities in the multi-angle face database into the neural network model for identity recognition, so as to obtain n multi-dimensional identity feature vectors used for matching; the multi-dimensional identity feature vector to be identified is then compared with each of the n matching vectors by cosine-distance similarity, the matching vector corresponding to the largest cosine value is taken as the identity matching result of the vector to be identified, and the identity label corresponding to that matching result is taken as its identity recognition result.
2. A surveillance video person identity recognition method fusing multi-angle features, characterized in that it is carried out as follows:
Step 1: collect single-frame pictures with people and without people, perform size normalization on every picture and assign class labels, so as to obtain a training sample data set composed of positive samples and negative samples;
Step 2: extract the SIFT features of each sample in the training sample data set, so that each sample is converted into a feature vector;
Step 3: train a support vector machine on the feature vectors to obtain a target detection model;
Step 4: convert the surveillance video into a series of single-frame pictures that serve as a test set;
Step 5: use the target detection model to identify whether each frame in the test set contains a person; if it does, retain the corresponding single frame as a key frame, otherwise discard it, so as to obtain a key frame set;
Step 6: take the front of the face as the 0° starting collection point and acquire one face image every k degrees clockwise, except at 90° and 270°; each identity collects m = 360/k - 2 face images of different angles in total, forming a face image sequence composed of images with a face and images without a face, and the face image sequences of n different identities form a multi-angle face database;
Step 7: classify every face image into the corresponding category set according to its angle value, and expand the multi-angle face database by partially occluding the upper-left, upper-right, lower-left, lower-right and middle parts of every face image, thereby forming a multi-angle face training set;
Step 8: extract the SIFT features of every face image in the multi-angle face training set, so that every face image is converted into a multi-angle feature vector;
Step 9: train a support vector machine on the multi-angle feature vectors to obtain a multi-angle face detection model;
Step 10: detect the key frame set with the multi-angle face detection model to obtain the face angle value of each key frame;
Step 11: judge from the face angle value whether each key frame contains a face; if it does, retain the key frame, otherwise discard it, so that the key frame set is converted into a face image sequence with angle value labels;
Step 12: construct an identity library from the face image sequences with faces in the multi-angle face database, use the identity library as the input of a neural network and train it, so as to obtain a neural network model for identity recognition;
Step 13: use the face image sequence with angle value labels as a test sample, feed the test sample into the neural network model for identity recognition, and extract the output of any intermediate layer of the neural network as a feature, so that the test sample is converted into a multi-dimensional identity feature vector to be identified;
Step 14: feed the face image sequences of the n different identities in the identity library into the neural network model for identity recognition, so as to obtain n multi-dimensional identity feature vectors used for matching;
Step 15: compare the multi-dimensional identity feature vector to be identified with each of the n matching vectors by cosine-distance similarity, take the matching vector corresponding to the largest cosine value as the identity matching result of the vector to be identified, and take the identity label corresponding to that matching result as its identity recognition result.
CN201610984667.9A 2016-11-09 2016-11-09 Surveillance video person identification system fusing multi-angle facial features and method thereof Active CN106503687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610984667.9A CN106503687B (en) 2016-11-09 2016-11-09 Surveillance video person identification system fusing multi-angle facial features and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610984667.9A CN106503687B (en) 2016-11-09 2016-11-09 Surveillance video person identification system fusing multi-angle facial features and method thereof

Publications (2)

Publication Number Publication Date
CN106503687A CN106503687A (en) 2017-03-15
CN106503687B true CN106503687B (en) 2019-04-05

Family

ID=58323481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610984667.9A Active CN106503687B (en) 2016-11-09 2016-11-09 Surveillance video person identification system fusing multi-angle facial features and method thereof

Country Status (1)

Country Link
CN (1) CN106503687B (en)

Families Citing this family (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108073859A (en) * 2016-11-16 2018-05-25 天津市远卓自动化设备制造有限公司 The monitoring device and method of a kind of specific region
CN107145908B (en) * 2017-05-08 2019-09-03 江南大学 A kind of small target detecting method based on R-FCN
CN107301406A (en) * 2017-07-13 2017-10-27 珠海多智科技有限公司 Fast face angle recognition method based on deep learning
CN107506702B (en) * 2017-08-08 2020-09-11 江西高创保安服务技术有限公司 Multi-angle-based face recognition model training and testing system and method
CN107563337A (en) * 2017-09-12 2018-01-09 广东欧珀移动通信有限公司 The method and Related product of recognition of face
CN107704812A (en) * 2017-09-18 2018-02-16 维沃移动通信有限公司 A kind of face identification method and mobile terminal
CN107590474B (en) * 2017-09-21 2020-08-14 Oppo广东移动通信有限公司 Unlocking control method and related product
CN109697389B (en) * 2017-10-23 2021-10-01 北京京东尚科信息技术有限公司 Identity recognition method and device
CN107862270B (en) * 2017-10-31 2020-07-21 深圳云天励飞技术有限公司 Face classifier training method, face detection method and device and electronic equipment
CN108108662B (en) * 2017-11-24 2021-05-25 深圳市华尊科技股份有限公司 Deep neural network recognition model and recognition method
CN108038176B (en) * 2017-12-07 2020-09-29 浙江大华技术股份有限公司 Method and device for establishing passerby library, electronic equipment and medium
CN108062542B (en) * 2018-01-12 2020-07-28 杭州智诺科技股份有限公司 Method for detecting shielded human face
CN108304800A (en) * 2018-01-30 2018-07-20 厦门启尚科技有限公司 A kind of method of Face datection and face alignment
CN110084258A (en) * 2018-02-12 2019-08-02 成都视观天下科技有限公司 Face preferred method, equipment and storage medium based on video human face identification
CN108549899B (en) * 2018-03-07 2022-02-15 ***股份有限公司 Image identification method and device
CN108509862B (en) * 2018-03-09 2022-03-25 华南理工大学 Rapid face recognition method capable of resisting angle and shielding interference
US10839238B2 (en) * 2018-03-23 2020-11-17 International Business Machines Corporation Remote user identity validation with threshold-based matching
CN110378170B (en) * 2018-04-12 2022-11-08 腾讯科技(深圳)有限公司 Video processing method and related device, image processing method and related device
CN108596135A (en) * 2018-04-26 2018-09-28 上海诚数信息科技有限公司 Personal identification method and system
CN110472460A (en) * 2018-05-11 2019-11-19 北京京东尚科信息技术有限公司 Face image processing process and device
TWI684918B (en) * 2018-06-08 2020-02-11 和碩聯合科技股份有限公司 Face recognition system and method for enhancing face recognition
CN109002767A (en) * 2018-06-22 2018-12-14 恒安嘉新(北京)科技股份公司 A kind of face verification method and system based on deep learning
CN108960119B (en) * 2018-06-28 2021-06-08 武汉市哈哈便利科技有限公司 Commodity recognition algorithm for multi-angle video fusion of unmanned sales counter
CN109033988A (en) * 2018-06-29 2018-12-18 江苏食品药品职业技术学院 A kind of library's access management system based on recognition of face
CN109190512A (en) * 2018-08-13 2019-01-11 成都盯盯科技有限公司 Method for detecting human face, device, equipment and storage medium
CN109190561B (en) * 2018-09-04 2022-03-22 四川长虹电器股份有限公司 Face recognition method and system in video playing
CN109344289B (en) * 2018-09-21 2020-12-11 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109543521A (en) * 2018-10-18 2019-03-29 天津大学 The In vivo detection and face identification method that main side view combines
CN109446985B (en) * 2018-10-28 2021-06-04 贵州师范学院 Multi-angle plant identification method based on vector neural network
CN111160068B (en) * 2018-11-07 2024-01-26 杭州海康威视数字技术股份有限公司 Target picture generation method and device and electronic equipment
CN109583445A (en) * 2018-11-26 2019-04-05 平安科技(深圳)有限公司 Character image correction processing method, device, equipment and storage medium
CN109561210A (en) * 2018-11-26 2019-04-02 努比亚技术有限公司 A kind of interaction regulation method, equipment and computer readable storage medium
CN109598223A (en) * 2018-11-26 2019-04-09 北京洛必达科技有限公司 Method and apparatus based on video acquisition target person
CN109543633A (en) * 2018-11-29 2019-03-29 上海钛米机器人科技有限公司 A kind of face identification method, device, robot and storage medium
CN109800643B (en) * 2018-12-14 2023-03-31 天津大学 Identity recognition method for living human face in multiple angles
CN109376717A (en) * 2018-12-14 2019-02-22 中科软科技股份有限公司 Personal identification method, device, electronic equipment and the storage medium of face comparison
CN109711357A (en) * 2018-12-28 2019-05-03 北京旷视科技有限公司 A kind of face identification method and device
CN109784243B (en) * 2018-12-29 2021-07-09 网易(杭州)网络有限公司 Identity determination method and device, neural network training method and device, and medium
CN109784240B (en) * 2018-12-30 2023-08-22 深圳市明日实业有限责任公司 Character recognition method, device and storage device
KR20220004628A (en) * 2019-03-12 2022-01-11 엘리먼트, 인크. Detection of facial recognition spoofing using mobile devices
CN110110593A (en) * 2019-03-27 2019-08-09 广州杰赛科技股份有限公司 Face Work attendance method, device, equipment and storage medium based on self study
CN110443137B (en) * 2019-07-03 2023-07-25 平安科技(深圳)有限公司 Multi-dimensional identity information identification method and device, computer equipment and storage medium
CN110309362A (en) * 2019-07-05 2019-10-08 深圳中科云海科技有限公司 A kind of video retrieval method and system
CN110399811A (en) * 2019-07-08 2019-11-01 厦门市美亚柏科信息股份有限公司 A kind of face identification method, device and storage medium
CN111783507A (en) * 2019-07-24 2020-10-16 北京京东尚科信息技术有限公司 Target searching method, device and computer readable storage medium
CN110414437A (en) * 2019-07-30 2019-11-05 上海交通大学 Face tampering detection and analysis method and system based on convolutional neural network model fusion
CN110609920B (en) * 2019-08-05 2022-03-18 华中科技大学 Pedestrian hybrid search method and system in video monitoring scene
CN110852150B (en) * 2019-09-25 2022-12-20 珠海格力电器股份有限公司 Face verification method, system, equipment and computer readable storage medium
CN110852303A (en) * 2019-11-21 2020-02-28 中科智云科技有限公司 Eating behavior identification method based on OpenPose
CN111126346A (en) * 2020-01-06 2020-05-08 腾讯科技(深圳)有限公司 Face recognition method, training method and device of classification model and storage medium
CN111079717B (en) * 2020-01-09 2022-02-22 西安理工大学 Face recognition method based on reinforcement learning
CN111325156B (en) * 2020-02-24 2023-08-11 北京沃东天骏信息技术有限公司 Face recognition method, device, equipment and storage medium
CN111539911B (en) * 2020-03-23 2021-09-28 中国科学院自动化研究所 Mouth breathing face recognition method, device and storage medium
CN111540090A (en) * 2020-04-29 2020-08-14 北京市商汤科技开发有限公司 Method and device for controlling unlocking of vehicle door, vehicle, electronic equipment and storage medium
CN111640125B (en) * 2020-05-29 2022-11-18 广西大学 Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN111968152B (en) * 2020-07-15 2023-10-17 桂林远望智能通信科技有限公司 Dynamic identity recognition method and device
CN112052728B (en) * 2020-07-30 2024-04-02 广州市标准化研究院 Portable portrait identification anti-deception device and control method thereof
CN112132057A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 Multi-dimensional identity recognition method and system
CN112257595A (en) * 2020-10-22 2021-01-22 广州市百果园网络科技有限公司 Video matching method, device, equipment and storage medium
CN114445951A (en) * 2020-10-30 2022-05-06 许沁沁 Campus intelligent management system and method
CN112525352A (en) * 2020-11-24 2021-03-19 深圳市高巨创新科技开发有限公司 Infrared temperature measurement compensation method based on face recognition and terminal
CN112597886A (en) * 2020-12-22 2021-04-02 成都商汤科技有限公司 Ride fare evasion detection method and device, electronic equipment and storage medium
CN112613480A (en) * 2021-01-04 2021-04-06 上海明略人工智能(集团)有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN112712066B (en) * 2021-01-19 2023-02-28 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN112836655B (en) * 2021-02-07 2024-05-28 上海卓繁信息技术股份有限公司 Method and device for identifying identity of illegal actor and electronic equipment
CN113393436A (en) * 2021-06-15 2021-09-14 北京美医医学技术研究院有限公司 Skin detection system based on multi-angle image acquisition
CN114241459B (en) * 2022-02-24 2022-06-17 深圳壹账通科技服务有限公司 Driver identity verification method and device, computer equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467655A (en) * 2010-11-05 2012-05-23 株式会社理光 Multi-angle face detection method and system
CN102609695A (en) * 2012-02-14 2012-07-25 上海博物馆 Method and system for recognizing human face from multiple angles
CN102622589A (en) * 2012-03-13 2012-08-01 辉路科技(北京)有限公司 Multispectral face detection method based on graphics processing unit (GPU)
CN103106393A (en) * 2012-12-12 2013-05-15 袁培江 Embedded type face recognition intelligent identity authentication system based on robot platform
CN104182726A (en) * 2014-02-25 2014-12-03 苏凯 Real name authentication system based on face identification
CN105760836A (en) * 2016-02-17 2016-07-13 厦门美图之家科技有限公司 Multi-angle face alignment method based on deep learning and system thereof and photographing terminal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi-feature face recognition based on PSO-SVM; Sompong Valuvanathorn et al.; 2012 Tenth International Conference on ICT and Knowledge Engineering; 20130111; pp. 140-145
Research on Deep Learning Methods for Multi-Angle Face Recognition; Hao Ligang; China Master's Theses Full-Text Database, Information Science and Technology Series; 20160315; p. I138-6840
Research on Person Identity Recognition in Video Sequences; Dai Yi; China Master's Theses Full-Text Database, Information Science and Technology Series; 20100815; p. I138-782

Also Published As

Publication number Publication date
CN106503687A (en) 2017-03-15

Similar Documents

Publication Publication Date Title
CN106503687B (en) Merge the monitor video system for identifying figures and its method of face multi-angle feature
Kang et al. Deep unsupervised embedding for remotely sensed images based on spatially augmented momentum contrast
CN108764308B (en) Pedestrian re-identification method based on convolution cycle network
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
CN105354548B (en) A kind of monitor video pedestrian re-identification method based on ImageNet retrieval
US11263435B2 (en) Method for recognizing face from monitoring video data
CN109522853B (en) Face detection and searching method towards monitor video
CN105590099B (en) A kind of multi-person activity recognition method based on improved convolutional neural networks
CN108304788A (en) Face identification method based on deep neural network
CN104504362A (en) Face detection method based on convolutional neural network
Rahimpour et al. Person re-identification using visual attention
CN104504395A (en) Method and system for achieving classification of pedestrians and vehicles based on neural network
Li et al. Sign language recognition based on computer vision
Amato et al. Face verification and recognition for digital forensics and information security
CN113205002B (en) Low-definition face recognition method, device, equipment and medium for unlimited video monitoring
Xie et al. Feature consistency-based prototype network for open-set hyperspectral image classification
Wen et al. Multi-view gait recognition based on generative adversarial network
Ngxande et al. Detecting inter-sectional accuracy differences in driver drowsiness detection algorithms
CN106886771A (en) The main information extracting method of image and face identification method based on modularization PCA
Akhter et al. Abnormal action recognition in crowd scenes via deep data mining and random forest
Duffner et al. A neural scheme for robust detection of transparent logos in TV programs
Nimbarte et al. Biased face patching approach for age invariant face recognition using convolutional neural network
Sai et al. Identification of missing person using convolutional neural networks
Sripriya et al. Real time detection and recognition of human faces
Privietha et al. Deep Learning Technic on Gait Analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220602

Address after: 266000 room 504, floor 5, building a, Shinan Software Park, No. 288, Ningxia road, Shinan District, Qingdao, Shandong Province

Patentee after: Shandong Xinfa Technology Co.,Ltd.

Address before: No. 193, Tunxi Road, Baohe District, Hefei, Anhui Province, 230009

Patentee before: Hefei University of Technology

TR01 Transfer of patent right