CN103870811A - Method for quickly recognizing front face through video monitoring - Google Patents

Method for quickly recognizing front face through video monitoring

Info

Publication number
CN103870811A
CN103870811A
Authority
CN
China
Prior art keywords
face
sample image
sample
training
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410080841.8A
Other languages
Chinese (zh)
Other versions
CN103870811B (en)
Inventor
徐玮
谭树人
熊志辉
张政
刘煜
杨建�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201410080841.8A
Publication of CN103870811A
Application granted
Publication of CN103870811B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for quickly identifying the most frontal face frame in a segment of surveillance video. The method first extracts frontal face images from a standard video library and from surveillance video as positive training samples and side-face images as negative training samples, then extracts integral channel features and trains a strong classifier from these features with the Adaboost algorithm. At identification time, faces in the input surveillance video are detected with an existing mature face detection algorithm, each detection window is discriminated and scored, and the frame with the highest score is selected as the most frontal face frame. The method improves the speed and accuracy of face pose discrimination, identifies the frontal face accurately, and reduces the computational cost of face pose discrimination.

Description

A fast frontal face identification method for video surveillance
Technical field
The present invention relates to the fields of image processing and pattern recognition, and in particular to a fast frontal face identification method for video surveillance.
Background art
As an important component of security monitoring, video surveillance systems are widely used in supermarkets, banks, government departments, and elsewhere. Economic and social development places ever higher demands on the security industry, and the role of video surveillance in daily life keeps growing; surveillance systems are becoming larger and monitoring points more numerous, so massive volumes of video data must be stored, and analyzing the video becomes increasingly difficult. Matching a face in such videos, for example, is very hard. If the useful information in a video can be stored separately, video analysis reduces to analyzing that information, which makes analyzing the video simple and effective. In indoor surveillance video, the main objects of interest are the people in the video; to ease face identification and analysis in massive video data, the useful information can be taken to be the most frontal face image, and the problem becomes frontal face discrimination in video. Face detection in video is relatively mature (for example, the face detection methods in OpenCV), but quickly identifying the frontal face in surveillance video remains a challenging problem. This document assumes that frontal face discrimination is performed on top of a face detector, as sketched below.
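For reference, a minimal sketch of the "existing mature face detection" step using OpenCV's stock Haar cascade; the cascade file name and detection parameters below are illustrative, not prescribed by the invention.

```python
import cv2

# OpenCV ships this cascade file with the opencv-python package.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return face bounding boxes (x, y, w, h) for one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```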
Frontal face discrimination is a face pose estimation problem, and existing face pose estimation methods fall roughly into two classes [reference 1]:
Learning methods based on facial appearance assume that a unique correspondence exists between the three-dimensional face pose and certain characteristics of the face image (image intensity, color, image gradient values, etc.), and establish this relation statistically from a large number of training samples with known three-dimensional poses [reference 2]. Such methods do not require accurate feature point extraction; they only need many samples with known three-dimensional face poses. Currently popular statistical learning methods include support vector machines and neural networks. Support vector machines and neural networks do not train on hand-crafted features but learn a classifier directly from the samples, so the accuracy of these two methods is limited by the choice of training samples. Moreover, a trained support vector machine produces binary output and cannot estimate the face pose continuously, while the accuracy of a neural network depends on its depth, and deeper networks are computationally more expensive.
Model-based methods use a geometric model or structure to represent the structure and shape of the face, establish a correspondence between the model and the image, and then estimate the spatial pose of the face geometrically or otherwise. Compared with the methods above, model-based methods are simple to implement and highly accurate, but they demand accurate feature point extraction [reference 3] and run slowly.
Summary of the invention
The technical problem to be solved by the invention is to overcome the deficiencies of the prior art by providing a fast frontal face identification method for video surveillance that improves the speed and accuracy of face pose discrimination, identifies the frontal face accurately, and reduces the computational cost of face pose discrimination.
To solve the above technical problem, the technical solution adopted by the invention is a fast frontal face identification method for video surveillance comprising the following steps:
1) Extract face pictures from a standard video library or from collected surveillance video as the training sample set; take face images in the set whose rotation angle about the Y axis is less than 5 degrees as positive sample images, and face images whose rotation angle about the Y axis is greater than 30 degrees as negative sample images. Choose n sample images \((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\), where \(x_i\) denotes a sample image and \(y_i\) its class label, \(y_i = 0\) marking a negative sample image and \(y_i = 1\) a positive sample image;
2) Initialize the weights: \(\omega_{1,i} = \frac{1}{2m}\) if \(y_i = 0\), and \(\omega_{1,i} = \frac{1}{2l}\) if \(y_i = 1\);
where m and l are the numbers of non-frontal and frontal face samples respectively, \(n = m + l\), and \(i = 1, 2, \ldots, n\);
3) Apply the LUV color channel transform, the gradient magnitude channel transform, and the gradient histogram channel transform to the sample images, where the LUV color transform contributes 3 channels, the gradient magnitude transform 1 channel, and the gradient histogram transform 6 channels, for 10 channels in total (a channel-computation sketch follows this method listing);
4) Train the classifiers: set t = 1;
5) Normalize the weights: \(\omega'_{t,i} = \omega_{t,i} / \sum_{k=1}^{n} \omega_{t,k}\), where \(\omega_{t,i}\) is the weight of the i-th sample image when training the t-th classifier and \(\omega'_{t,i}\) is that weight after normalization;
6) Randomly choose one of the 10 channels above and randomly choose a rectangular area in the sample image after that channel transform; the sum of all pixels in this rectangular area serves as a candidate feature value. Repeat this step until K candidate feature values are obtained;
7) For each candidate feature value \(f_j\), train a weak classifier and use it to compute the weighted error rate \(\epsilon_j\) under \(\omega'_{t,i}\):

\(\epsilon_j = \sum_i \omega'_{t,i} \, |h_j(x_i) - y_i|\);

where \(h_j(x_i)\) is the weak classifier formed from the j-th candidate feature value of sample image \(x_i\), \(j = 1, 2, \ldots, K\), with

\(h_j(x_i) = 1\) if \(p_j f_j(x_i) < p_j \theta_j\), and \(h_j(x_i) = 0\) otherwise;

here \(\theta_j\) is a threshold and \(p_j = \pm 1\) a polarity indicating the direction of the inequality; \(h_j(x_i) = 1\) means the j-th candidate feature value judges sample image \(x_i\) to be a positive sample image, otherwise a negative sample image;
8) Repeat step 7) to obtain the error rates of all candidate feature values, and choose the weak classifier \(h_t(x)\) with the minimum error rate \(\epsilon_t\) as the candidate classifier;
9) Update the weights with:

\(\omega_{t+1,i} = \omega_{t,i} \, \beta_t^{\,1 - e_i}\);

where \(e_i = 0\) when \(x_i\) is classified correctly (for instance, if feature j was chosen and \(x_i\) is a positive sample that feature j also judges positive, the classification is correct) and \(e_i = 1\) otherwise; \(\beta_t = \frac{\epsilon_t}{1 - \epsilon_t}\); let \(\alpha_t = \log \frac{1}{\beta_t}\);
10) Set t = t + 1 and take \(\omega_{t+1,i}\) as the weight of the i-th sample image for training the (t+1)-th classifier; repeat steps 5) through 9) until T candidate classifiers are obtained, then combine the T candidate classifiers into the strong classifier h(x):

\(h(x) = 1\) if \(\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t\), and \(h(x) = 0\) otherwise;
11) Detect faces in the standard video library or the collected surveillance video to obtain multiple frames of face images, score each face image with the strong classifier h(x), and select the frame with the highest score as the frontal face image.
The threshold \(\theta_j\) is set according to the candidate feature values; for convenience of computation, the invention chooses \(\theta_j\) as the intermediate value of the K candidate feature values.
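For concreteness, the sketch below computes the 10 channels of step 3) in the standard integral-channel layout (3 LUV + 1 gradient magnitude + 6 orientation bins); the function name and the Sobel-based gradient are illustrative assumptions, not fixed by the claims.

```python
import cv2
import numpy as np

def compute_channels(img_bgr, n_bins=6):
    """Return an (H, W, 10) stack: 3 LUV + 1 gradient magnitude + 6 orientation bins."""
    luv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LUV).astype(np.float32)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)           # gradient magnitude and angle
    ang = np.mod(ang, np.pi)                     # fold to [0, pi): unsigned orientation
    bin_idx = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(gray.shape + (n_bins,), np.float32)
    for b in range(n_bins):                      # magnitude-weighted orientation channels
        hist[..., b] = mag * (bin_idx == b)
    return np.dstack([luv, mag[..., None], hist])
```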
After the integral channel transforms, the invention computes the various feature values efficiently with integral images. Face pose varies greatly, and the feature extraction of integral channel features overcomes the deficiency of describing it with a single feature type; a simple Haar-like feature, for example, can hardly describe the variety of face pose changes well. Integral channel features integrate feature information from many angles yet, unlike feature fusion, do not simply extract several separate features and pay the resulting computational cost; the integral channel transform of the invention thus overcomes the slow speed of ordinary feature-fusion computation. The classifier trained by Adaboost has very strong classification ability, because the training result is neither a single simple classifier nor a naive combination of a few independent classifiers: each weak classifier is selected by minimum error rate, and the selected classifiers are combined into a strong classifier with extremely strong classification ability. The greatest benefit of combining integral channel features with this trainer is that the classifier used for frontal face discrimination is not only strong but also fast in discrimination, since integral channel features require computing the integral image of an image only once, and since they combine multiple feature types rather than relying on a single one, which further increases discrimination accuracy. In sum, combining integral channel features with Adaboost performs well in both speed and accuracy, so the method of the invention has good real-time behavior and accuracy.
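A minimal sketch, assuming plain NumPy, of the integral-image trick the paragraph relies on: one cumulative-sum pass per channel, after which any rectangular-sum feature costs only four lookups. The helper names are illustrative.

```python
import numpy as np

def integral(channel):
    """Summed-area table padded with a zero top row and left column."""
    ii = np.zeros((channel.shape[0] + 1, channel.shape[1] + 1), np.float64)
    ii[1:, 1:] = channel.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of channel values in the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```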
Compared with the prior art, the invention has the following beneficial effects: the proposed face discrimination method based on integral channel features and Adaboost training avoids the slow training and poor training results of feature fusion and other training methods, improves the speed and accuracy of face pose discrimination, identifies the frontal face accurately, and reduces the computational cost of face pose discrimination. The frontal face picture extracted by the invention captures the frame with the smallest face rotation in a video segment, the most useful face picture information, which greatly facilitates subsequent video analysis.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 shows positive training samples;
Fig. 3 shows negative training samples;
Fig. 4 shows the 3 channel types among the integral channel features of the invention; Fig. 4(a) is the LUV color channel transform; Fig. 4(b) is the gradient magnitude channel transform; Fig. 4(c) is the gradient histogram channel transform;
Fig. 5 is the flow chart of frontal face discrimination in the input video;
Fig. 6 is a schematic of a video segment;
Fig. 7 is a schematic of real-time frontal face discrimination;
Fig. 8 shows the experimental results.
Embodiment
The method of the invention comprises the following two stages:
(1) Training stage
1) First extract face pictures from the standard video library and from collected video as the training sample set. Since the invention aims to quickly identify frontal face images, frontal face pictures are selected as positive training samples and face pictures with large face rotation as negative training samples, so as to better distinguish the faces. A frontal face image is a face image whose rotation angle about the Y axis is at most 5 degrees; the negative samples are face images whose rotation angle is at least 30 degrees. The positive and negative samples are shown in Figs. 2 and 3.
Choose n sample images \((x_1, y_1), (x_2, y_2), \ldots, (x_i, y_i), \ldots, (x_n, y_n)\), where \(x_i\) is an input sample image and \(y_i\) its class label, \(y_i = 0\) denoting a negative sample and \(y_i = 1\) a positive sample;
2) Initialize the weights: \(\omega_{1,i} = \frac{1}{2m}\) if \(y_i = 0\), and \(\omega_{1,i} = \frac{1}{2l}\) if \(y_i = 1\);
where m and l are the numbers of non-frontal and frontal face samples respectively, and n = m + l. To train a strong classifier out of a large number of weak classifiers, Adaboost changes the sample weights from round to round, so that the next round selects samples better and training efficiency increases.
3) Train T classifiers by extracting integral channel features [4]. The basic idea is that, after applying various linear and nonlinear transforms to the input image, features such as local sums, histograms, Haar-like features, and their variants can be computed quickly and efficiently with integral images; many channel types are possible. Face pose varies greatly, and the feature extraction of integral channel features overcomes the deficiency of describing it with a single feature type: a simple Haar-like feature, for example, can hardly describe the variety of face pose changes well. Integral channel features integrate feature information from many angles yet, unlike feature fusion, do not simply extract several separate features at extra computational cost, which avoids the slow speed of ordinary feature-fusion computation. The following 3 channel types are chosen: the LUV color channels describe brightness and chroma variations of the face well, the gradient magnitude channel reflects the facial contour well, and the gradient histogram channels describe changes from different gradient directions; together, these channel types describe face pose variation well. Each input sample image is first put through these 3 channel transforms, yielding 10 channels in total. The 3 channel transforms are shown in Fig. 4.
The concrete extraction process is as follows. For t = 1, 2, …, T:

(a) Normalize the weights: \(\omega'_{t,i} = \omega_{t,i} / \sum_{k=1}^{n} \omega_{t,k}\), where \(\omega_{t,i}\) is the weight of sample i in the t-th round;
(b) For j = 1, 2, …, K, randomly select the j-th integral channel feature \(f_j\):

1. randomly select a channel index \(bin_k\) (k = 1, …, 10);

2. randomly select a rectangular area \(Rect_j\) and compute the pixel-value sum \(Sum_{k,j}\).
For each feature \(f_j\), train a weak classifier \(h_j\) and compute the corresponding error rate under \(\omega_t\):

\(\epsilon_j = \sum_i \omega'_{t,i} \, |h_j(x_i) - y_i|\);

where \(h_j(x_i)\) is the weak classifier formed from the j-th feature of sample \(x_i\), namely

\(h_j(x) = 1\) if \(p_j f_j < p_j \theta_j\), and \(h_j(x) = 0\) otherwise;

consisting of a threshold \(\theta_j\) for the j-th feature, the feature value \(f_j\), and a polarity \(p_j\) (only ±1) indicating the direction of the inequality. \(h_j = 1\) means the j-th feature judges the sample positive; otherwise it judges the sample negative.
(c) Select the weak classifier \(h_t\) with the minimum error rate \(\epsilon_t\);
(d) Update the weights:

\(\omega_{t+1,i} = \omega_{t,i} \, \beta_t^{\,1 - e_i}\);

where \(e_i = 0\) when \(x_i\) is classified correctly and \(e_i = 1\) otherwise, and \(\beta_t = \frac{\epsilon_t}{1 - \epsilon_t}\).
4) The final strong classifier is h(x):

\(h(x) = 1\) if \(\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t\), and \(h(x) = 0\) otherwise;

where \(\alpha_t = \log \frac{1}{\beta_t}\).
With the procedure of step 3), a strong classifier with very strong classification ability can be trained from the large number of extracted integral channel features; the sketch below condenses the loop.
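The loop of steps (a) through (d) compresses to the following sketch. It assumes an (n, K) array `features` of candidate feature values (pixel sums over the random channel/rectangle picks, drawn once and reused across rounds for brevity, whereas the text redraws K features per round) and 0/1 labels `y`; the stump search uses the midpoint threshold described in the text rather than an exhaustive threshold scan.

```python
import numpy as np

def train_adaboost(features, y, T):
    """Return T decision stumps (j, theta, p, alpha) forming the strong classifier."""
    n, K = features.shape
    m, l = np.sum(y == 0), np.sum(y == 1)
    w = np.where(y == 0, 1.0 / (2 * m), 1.0 / (2 * l))     # step 2): initial weights
    stumps = []
    for t in range(T):
        w = w / w.sum()                                    # step (a): normalize
        best = None
        for j in range(K):                                 # step (b): weak learners
            f = features[:, j]
            theta = (f.min() + f.max()) / 2.0              # midpoint threshold
            for p in (+1, -1):                             # polarity p_j
                h = (p * f < p * theta).astype(int)
                eps = np.sum(w * np.abs(h - y))
                if best is None or eps < best[0]:
                    best = (eps, j, theta, p, h)
        eps, j, theta, p, h = best                         # step (c): minimum error
        eps = max(eps, 1e-10)                              # guard against a zero error rate
        beta = eps / (1.0 - eps)
        w = w * beta ** (1 - np.abs(h - y))                # step (d): update weights
        stumps.append((j, theta, p, np.log(1.0 / beta)))   # alpha_t = log(1 / beta_t)
    return stumps
```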
(2) Discrimination stage
The discrimination stage takes the faces detected in surveillance video and determines how frontal each one is. For the input surveillance video, an existing mature face detection algorithm, for example the face detector in OpenCV, scans the whole input image with a sliding window; each detection window is then discriminated by the classifier trained in the first stage, which scores how closely the window approaches a frontal face; finally, the frame with the highest score, i.e., the most frontal face frame, is selected. That frame is stored as the basis for video analysis.
The concrete discrimination flow is shown in Fig. 5; a scoring sketch follows.
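A sketch of the scoring step, under the assumption that `stumps` comes from the training sketch above and that `rects[j]` records the channel index and rectangle behind candidate feature j (bookkeeping the patent leaves implicit); `Score(x)` is taken to be the strong classifier's weighted vote \(\sum_t \alpha_t h_t(x)\), which the text uses to rank frames. The helpers `compute_channels`, `integral`, `rect_sum`, and `detect_faces` are the sketches given earlier.

```python
import cv2
import numpy as np

def score_window(face_img, stumps, rects):
    """Score one detected face window; higher means closer to frontal."""
    chans = compute_channels(cv2.resize(face_img, (64, 64)))   # 64 x 64, as in training
    iis = [integral(chans[..., c]) for c in range(chans.shape[-1])]
    score = 0.0
    for j, theta, p, alpha in stumps:
        c, x, y, w, h = rects[j]
        f = rect_sum(iis[c], x, y, w, h)
        score += alpha * int(p * f < p * theta)                # alpha_t * h_t(x)
    return score

def most_frontal_frame(frames, stumps, rects):
    """Return the frame whose best face window scores highest."""
    best_frame, best_score = None, -np.inf
    for frame in frames:
        for (x, y, w, h) in detect_faces(frame):               # detector sketch above
            s = score_window(frame[y:y + h, x:x + w], stumps, rects)
            if s > best_score:
                best_frame, best_score = frame, s
    return best_frame
```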
The key point of the invention is the estimation of the face pose. To better distinguish frontal from profile faces, the positive and negative training samples are uniformly normalized to face images of 64 × 64 pixels; the positive samples are frontal or near-frontal face pictures and the negative samples are profile face pictures. (In a specific implementation, for example, l = 1000 positive samples and m = 2000 negative samples are chosen as training samples.) Part of the training pictures are shown in Figs. 2 and 3: the rotation angle of the positive sample faces about the Y axis is 0 or close to 0, while the negative sample faces have comparatively large rotation angles.
The experiment of the invention is initialized as follows: T = 5000 weak classifiers are trained, the initial error weight of each positive sample is 1/2000 and of each negative sample 1/4000, and K = 1000 integral channel features are drawn when training each weak classifier, among which the feature giving the minimum classification error rate is found, i.e., the weak classifier.
After the training initialization, training proceeds with the Adaboost method. In the first training round, for example, the K = 1000 feature values span Min = 165 to Max = 1853, giving the initial threshold \((165 + 1853)/2 = 1009\); training then follows the steps above to obtain the strong classifier required by the discrimination process below.
After the classifier is trained, a mature face detection algorithm, namely the face detection module in OpenCV, performs face detection on the input real-time video image sequence. To illustrate the effect of the invention, the chosen video sequence has 70 frames in total at a frame rate of 30 fps. A schematic of the video segment is shown in Fig. 6.
In the real-time frontal face discrimination stage, as shown in Figs. 5 and 6, the strong classifier trained in step 3) of the training stage performs frontal face discrimination on each detection window. To select the frontal face picture well, a scoring function Score(x) is defined and evaluated on every detected window.
Following the flow of Fig. 5, every detected face picture undergoes frontal face discrimination and its score is stored; the frame with the highest score is then selected, which is the most frontal face picture in this video segment, i.e., the useful information, making the video convenient to analyze. The final discrimination result is the 52nd frame of this video segment, shown in Fig. 7; as the figure shows, this face picture reflects the face information very clearly and is convenient to analyze.
To further verify the validity of the method, several methods are compared on part of the pictures in the FEI face database. Jones et al. [5] use rotated Haar-like features and an Adaboost classifier trained with frontal face images as positive samples and non-frontal face images as negative samples, then judge whether the detected face region is frontal. Nikolaidis et al. [6] point out that the triangle formed by connecting the two eyes with the face center point in a frontal face image is isosceles and that face rotation changes this triangle, and propose a method based on facial feature points. Zhu et al. [7] propose a method that unifies face detection, face pose estimation, and facial landmark localization, training a tree-structured model from face region features. The invention compares these 4 methods in discrimination accuracy and discrimination speed; the comparison is shown in Table 1.
Table 1. Comparison of the methods
The experimental results above show that the earliest method, that of Nikolaidis, needs accurately located feature points, so its results are vulnerable to external conditions and its discrimination is not very good. The Jones classification method is based on rotated Haar-like features, which cannot distinguish frontal from profile faces well; although it computes quickly, its accuracy is low, so it is hard to apply to real-time surveillance video. The Zhu method is highly accurate and, as the literature shows, can also determine face pose under complex conditions, but its one notable deficiency is an excessive computation time that keeps it out of real systems. The method proposed by the invention reaches the required accuracy while also discriminating quickly, so it can be applied in real systems.
List of references:
[1] Eric M C, Mohan M T. Head pose estimation in computer vision: A survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(4): 607-626.
[2] Li S Z, Lu Xiaogang, Hou Xinwen, et al. Learning multiview face subspaces and facial pose estimation using independent component analysis[J]. IEEE Transactions on Image Processing, 2005, 14(6): 705-712.
[3] Shafi M, Chung P W H. Face pose estimation from eyes and mouth[J]. Advanced Mechatronics Systems, 2010, 11(2): 132-138.
[4] Dollar P, Tu Z W, Perona P, et al. Integral channel features[A]. In Proc. BMVC[C], 2009: 1-11.
[5] M. Jones, P. Viola. Fast Multi-View Face Detection. Technical Report 096, Mitsubishi Electric Research Laboratories, 2003.
[6] A. Nikolaidis, I. Pitas. Facial Feature Extraction and Pose Determination. Pattern Recognition, 2000, vol. 33, no. 11, pp. 1783-1791.
[7] X. Zhu, D. Ramanan. Face Detection, Pose Estimation, and Landmark Localization in the Wild. Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, June 2012.

Claims (2)

1. A fast frontal face identification method for video surveillance, characterized in that the method comprises:
1) Extract face pictures from a standard video library or from collected surveillance video as the training sample set; take face images in the set whose rotation angle about the Y axis is less than 5 degrees as positive sample images, and face images whose rotation angle about the Y axis is greater than 30 degrees as negative sample images. Choose n sample images \((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\), where \(x_i\) denotes a sample image and \(y_i\) its class label, \(y_i = 0\) marking a negative sample image and \(y_i = 1\) a positive sample image;
2) Initialize the weights: \(\omega_{1,i} = \frac{1}{2m}\) if \(y_i = 0\), and \(\omega_{1,i} = \frac{1}{2l}\) if \(y_i = 1\);
where m and l are the numbers of non-frontal and frontal face samples respectively, \(n = m + l\), and \(i = 1, 2, \ldots, n\);
3) Apply the LUV color channel transform, the gradient magnitude channel transform, and the gradient histogram channel transform to the sample images, where the LUV color transform contributes 3 channels, the gradient magnitude transform 1 channel, and the gradient histogram transform 6 channels, for 10 channels in total;
4) Train the classifiers: set t = 1;
5) Normalize the weights: \(\omega'_{t,i} = \omega_{t,i} / \sum_{k=1}^{n} \omega_{t,k}\), where \(\omega_{t,i}\) is the weight of the i-th sample image when training the t-th classifier and \(\omega'_{t,i}\) is that weight after normalization;
6) Randomly choose one of the 10 channels above and randomly choose a rectangular area in the sample image after that channel transform; the sum of all pixels in this rectangular area serves as a candidate feature value. Repeat this step until K candidate feature values are obtained;
7) For each candidate feature value \(f_j\), train a weak classifier and use it to compute the weighted error rate \(\epsilon_j\) under \(\omega'_{t,i}\):

\(\epsilon_j = \sum_i \omega'_{t,i} \, |h_j(x_i) - y_i|\);

where \(h_j(x_i)\) is the weak classifier formed from the j-th candidate feature value of sample image \(x_i\), \(j = 1, 2, \ldots, K\), with

\(h_j(x_i) = 1\) if \(p_j f_j(x_i) < p_j \theta_j\), and \(h_j(x_i) = 0\) otherwise;

here \(\theta_j\) is a threshold and \(p_j = \pm 1\) a polarity indicating the direction of the inequality; \(h_j(x_i) = 1\) means the j-th candidate feature value judges sample image \(x_i\) to be a positive sample image, otherwise a negative sample image;
8) Repeat step 7) to obtain the error rates of all candidate feature values, and choose the weak classifier \(h_t(x)\) with the minimum error rate \(\epsilon_t\) as the candidate classifier;
9) Update the weights with \(\omega_{t+1,i} = \omega_{t,i} \, \beta_t^{\,1 - e_i}\), where \(e_i = 0\) when \(x_i\) is classified correctly and \(e_i = 1\) otherwise, and \(\beta_t = \frac{\epsilon_t}{1 - \epsilon_t}\); let \(\alpha_t = \log \frac{1}{\beta_t}\);
10) Set t = t + 1 and take \(\omega_{t+1,i}\) as the weight of the i-th sample image for training the (t+1)-th classifier; repeat steps 5) through 9) until T candidate classifiers are obtained, then combine the T candidate classifiers into the strong classifier h(x):

\(h(x) = 1\) if \(\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t\), and \(h(x) = 0\) otherwise;
11) Detect faces in the standard video library or the collected surveillance video to obtain multiple frames of face images, score each face image with the strong classifier h(x), and select the frame with the highest score as the frontal face image.
2. The fast frontal face identification method for video surveillance according to claim 1, characterized in that the threshold \(\theta_j\) is the intermediate value of the K candidate feature values.
CN201410080841.8A (filed 2014-03-06): A fast frontal face identification method for video surveillance; granted as CN103870811B (en); status: Active

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410080841.8A CN103870811B (en) 2014-03-06 2014-03-06 A fast frontal face identification method for video surveillance

Publications (2)

Publication Number Publication Date
CN103870811A 2014-06-18
CN103870811B CN103870811B (en) 2016-03-02

Family

ID=50909327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410080841.8A Active CN103870811B (en) A fast frontal face identification method for video surveillance

Country Status (1)

Country Link
CN (1) CN103870811B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080063264A1 (en) * 2006-09-08 2008-03-13 Porikli Fatih M Method for classifying data using an analytic manifold
CN103093250A (en) * 2013-02-22 2013-05-08 福建师范大学 Adaboost face detection method based on new Haar- like feature
CN103186774A (en) * 2013-03-21 2013-07-03 北京工业大学 Semi-supervised learning-based multi-gesture facial expression recognition method

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751144B (en) * 2015-04-02 2017-11-17 山东大学 A kind of front face fast appraisement method of facing video monitoring
CN104751144A (en) * 2015-04-02 2015-07-01 山东大学 Frontal face quick evaluation method for video surveillance
CN105550648A (en) * 2015-12-08 2016-05-04 惠州学院 Video monitoring-based face recognition method
CN105528584A (en) * 2015-12-23 2016-04-27 浙江宇视科技有限公司 Method and device for detecting frontal face image
CN105528584B (en) * 2015-12-23 2019-04-12 浙江宇视科技有限公司 A kind of detection method and device of face image
CN106504262A (en) * 2016-10-21 2017-03-15 泉州装备制造研究所 A kind of small tiles intelligent locating method of multiple features fusion
WO2018113206A1 (en) * 2016-12-23 2018-06-28 深圳云天励飞技术有限公司 Image processing method and terminal
CN107563376A (en) * 2017-08-29 2018-01-09 济南浪潮高新科技投资发展有限公司 A kind of method and device for obtaining the plane picture anglec of rotation
CN107679506A (en) * 2017-10-12 2018-02-09 Tcl通力电子(惠州)有限公司 Awakening method, intelligent artifact and the computer-readable recording medium of intelligent artifact
CN108038176A (en) * 2017-12-07 2018-05-15 浙江大华技术股份有限公司 A kind of method for building up, device, electronic equipment and the medium in passerby storehouse
CN108197547B (en) * 2017-12-26 2019-12-17 深圳云天励飞技术有限公司 Face pose estimation method, device, terminal and storage medium
CN108197547A (en) * 2017-12-26 2018-06-22 深圳云天励飞技术有限公司 Face pose estimation, device, terminal and storage medium
CN109993035B (en) * 2017-12-29 2021-06-29 深圳市优必选科技有限公司 Human body detection method and device based on embedded system
CN109993035A (en) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 The method and device of human testing based on embedded system
CN107958244B (en) * 2018-01-12 2020-07-10 成都视观天下科技有限公司 Face recognition method and device based on video multi-frame face feature fusion
CN107958244A (en) * 2018-01-12 2018-04-24 成都视观天下科技有限公司 A kind of face identification method and device based on the fusion of video multiframe face characteristic
CN112001203A (en) * 2019-05-27 2020-11-27 北京君正集成电路股份有限公司 Method for extracting front face from face recognition library
CN110338759B (en) * 2019-06-27 2020-06-09 嘉兴深拓科技有限公司 Facial pain expression data acquisition method
CN110338759A (en) * 2019-06-27 2019-10-18 嘉兴深拓科技有限公司 A kind of front pain expression data acquisition method
CN110472567A (en) * 2019-08-14 2019-11-19 旭辉卓越健康信息科技有限公司 A kind of face identification method and system suitable under non-cooperation scene
CN113808066A (en) * 2020-05-29 2021-12-17 Oppo广东移动通信有限公司 Image selection method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN103870811B (en) 2016-03-02

Similar Documents

Publication Publication Date Title
CN103870811B (en) A fast frontal face identification method for video surveillance
CN110263774B (en) A kind of method for detecting human face
William et al. Face recognition using facenet (survey, performance test, and comparison)
Majhi et al. Novel features for off-line signature verification
CN101739555B (en) Method and system for detecting false face, and method and system for training false face model
CN106355138A (en) Face recognition method based on deep learning and key features extraction
CN102436589B (en) Complex object automatic recognition method based on multi-category primitive self-learning
CN108564049A (en) A kind of fast face detection recognition method based on deep learning
CN107103281A (en) Face identification method based on aggregation Damage degree metric learning
CN103279768B (en) A kind of video face identification method based on incremental learning face piecemeal visual characteristic
CN105550658A (en) Face comparison method based on high-dimensional LBP (Local Binary Patterns) and convolutional neural network feature fusion
CN102163281B (en) Real-time human body detection method based on AdaBoost frame and colour of head
CN102521565A (en) Garment identification method and system for low-resolution video
CN104504362A (en) Face detection method based on convolutional neural network
CN102521561B (en) Face identification method on basis of multi-scale weber local features and hierarchical decision fusion
CN102902986A (en) Automatic gender identification system and method
CN102938065A (en) Facial feature extraction method and face recognition method based on large-scale image data
CN102682287A (en) Pedestrian detection method based on saliency information
CN101620673A (en) Robust face detecting and tracking method
CN109033953A (en) Training method, equipment and the storage medium of multi-task learning depth network
CN104834941A (en) Offline handwriting recognition method of sparse autoencoder based on computer input
CN105930792A (en) Human action classification method based on video local feature dictionary
CN107220598A (en) Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
Sakthimohan et al. Detection and Recognition of Face Using Deep Learning
CN103942572A (en) Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant