CN101908152A - Customization classifier-based eye state identification method - Google Patents


Info

Publication number
CN101908152A
Authority
CN
China
Prior art keywords
image
word bank
eye
user
eye image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201010197980.0A
Other languages
Chinese (zh)
Other versions
CN101908152B (en
Inventor
马争 (Ma Zheng)
解梅 (Xie Mei)
孙睿 (Sun Rui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Houpu Clean Energy Group Co ltd
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN2010101979800A priority Critical patent/CN101908152B/en
Publication of CN101908152A publication Critical patent/CN101908152A/en
Application granted granted Critical
Publication of CN101908152B publication Critical patent/CN101908152B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and pattern recognition and is suited to driver-fatigue detection. The method comprises the following steps: establish a general face-image library and a user face-image library, compute the eye image of every image, and mix the two libraries' eye images in several different proportions; compute the haar-like feature vector of every image in each mixed eye-image library and construct a strong classifier with the AdaBoost method; randomly select a number of eye images from the user face-image library, classify them with each constructed strong classifier, and select the strong classifier with the highest recognition accuracy as the eye-state classifier used while the user drives. Following the idea of customization, the method mixes user data with general face-library data so that different users get different classifiers, which improves recognition accuracy and reduces recognition risk. The invention further provides separate classifiers for users with and without glasses, making eye-state recognition more flexible.

Description

An eye-state recognition method based on customized classifiers
Technical field
The invention belongs to the field of image processing and pattern recognition and relates to driver-fatigue detection technology.
Background technology
At present, traffic accidents cause enormous numbers of vehicle collisions and heavy casualties every year. According to incomplete statistics, road traffic accidents kill more than 600,000 people worldwide each year; at least 100,000 of these deaths are caused by fatigued driving, with direct economic losses reaching 12.5 billion dollars. Fatigued driving, like drunk driving, has become a major cause of traffic accidents. With the development of computing, researchers in many countries have investigated fatigue-driving detection from many angles. In 1998 a test by the U.S. Federal Highway Administration confirmed that PERCLOS (the percentage of time within a unit interval that the eyes are closed) is highly correlated with driver fatigue, which opened a new line of attack for fatigue detection. See D. F. Dinges and R. Grace, "PERCLOS: A valid psychophysiological measure of alertness as assessed by psychomotor vigilance," US Department of Transportation, Federal Highway Administration, Publication Number FHWA-MCRT-98-006.
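To make the PERCLOS measure concrete, here is a minimal sketch of computing it over a sliding window of frame-level eye states; the 30-frame window length and the 0/1 state encoding are illustrative assumptions, not values taken from the cited report:

```python
from collections import deque

def perclos(eye_states, window=30):
    """PERCLOS over a sliding window: the fraction of the most recent
    `window` frames in which the eye was closed (0 = closed, 1 = open)."""
    recent = deque(maxlen=window)
    scores = []
    for state in eye_states:
        recent.append(state)
        scores.append(recent.count(0) / len(recent))
    return scores
```

Sustained high PERCLOS values over consecutive windows would then be treated as a sign of fatigue.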
Fatigue-detection methods based on the PERCLOS feature usually capture and process video of the driver's face, especially the eye region. The whole detection pipeline mainly comprises three stages: face localization, eye localization, and eye-state recognition. All three reduce to classification problems in pattern recognition: face vs. non-face, eye vs. non-eye, and open vs. closed. Several classical methods are commonly used for such problems: (1) SVM, the support vector machine. SVM is a learning machine based on the structural-risk-minimization principle of statistical learning theory and is widely used throughout pattern recognition. First proposed by Vapnik et al., it is especially suited to high-dimensional small-sample problems and generalizes well. (2) FLD, the Fisher linear discriminant. FLD seeks a projection direction that best separates two classes of samples. After the optimal direction w* is found, every sample x is projected onto it, giving y = w*^T x, and a threshold y_0 is chosen to split the two classes. (3) The AdaBoost algorithm based on Haar-type rectangular features. AdaBoost is a learning algorithm that has been widely applied in recent years. First proposed by Schapire et al., its main idea is to select a subset of weak classifiers from a large weak-classifier space and combine them into one strong classifier.
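The FLD projection just described can be sketched as follows; the small ridge term and the toy 2-D points are illustrative assumptions, not part of the invention:

```python
import numpy as np

def fld_direction(X1, X2):
    """Fisher linear discriminant direction w* for two classes
    (rows of X1 and X2 are samples of class 1 and class 2)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # within-class scatter S_w = S_1 + S_2 (plus a tiny ridge for stability)
    S1 = (X1 - m1).T @ (X1 - m1)
    S2 = (X2 - m2).T @ (X2 - m2)
    Sw = S1 + S2 + 1e-6 * np.eye(X1.shape[1])
    w = np.linalg.solve(Sw, m1 - m2)     # w* is proportional to S_w^{-1}(m1 - m2)
    return w / np.linalg.norm(w)

# toy 2-D data: project each sample to y = w*^T x and split at threshold y0
X1 = np.array([[2.0, 0.0], [3.0, 1.0], [2.0, 1.0]])
X2 = np.array([[-2.0, 0.0], [-3.0, 1.0], [-2.0, -1.0]])
w = fld_direction(X1, X2)
y0 = ((X1 @ w).mean() + (X2 @ w).mean()) / 2   # midpoint of projected means
```

Samples whose projection exceeds y0 fall on one side of the discriminant, the rest on the other.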
Experiments show that the AdaBoost algorithm based on Haar-type rectangular features is robust, accurate, and fast, and has clear practical value. Concretely, Haar-like feature vectors are extracted from positive and negative samples, and a cascaded AdaBoost procedure is then used to build the classifier model and train its parameters. See Paul Viola and Michael J. Jones, "Rapid object detection using a boosted cascade of simple features," IEEE CVPR, 2001, and R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," DAGM 25th Pattern Recognition Symposium, 2003.
In practice, classifier parameters trained with the Haar-feature AdaBoost algorithm on a general face sample library work well for face localization and eye localization. For eye-state recognition, however, such a classifier reaches acceptable accuracy only for most of the population; for the remainder the error rate is relatively high, and for some users it fails completely. The reason is that open and closed eyes differ greatly from person to person, as do habits such as wearing glasses, so a single general classifier cannot discriminate reliably.
Summary of the invention
The invention provides an eye-state recognition method based on customized classifiers. The method generates different eye-state classifiers for different users, improving both the accuracy and the range of applicability of eye-state recognition.
To describe the content of the invention conveniently, some terms are first defined.
Definition 1: eye state. For fatigue-driving detection, the eye state is divided into two classes: open and closed.
Definition 2: face sample library. In the invention, the face sample library is an image library containing frontal faces of different people. Its images should be collected under different lighting environments, and the library is divided into a with-glasses sub-bank and a without-glasses sub-bank according to whether the subjects wear glasses.
Definition 3: eye center point. For an open-eye image, the eye center point is defined as the center of the pupil; for a closed-eye image, it is the midpoint of the eye slit.
Definition 4: "three sections and five eyes." This is the classical rule of facial proportions relating face length and width. In the invention, the width of the eye region is taken as three-tenths of the face width, and the distance between the two eyes is exactly the width of one eye.
Definition 5: Haar-like feature vector. Haar-like features were first used to characterize faces by Papageorgiou et al., who applied Haar wavelet basis functions to the study of frontal-face and pedestrian detection. Finding the standard orthogonal Haar wavelet basis somewhat restrictive in practice, they used features of 3 forms to obtain better spatial resolution. Viola et al. extended this to 4 forms in 2 types, and Lienhart finally added several tilted rectangular features, bringing the feature set to 14 forms in 3 types (as shown in Fig. 2).
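Every rectangle feature in Fig. 2 is a difference of rectangle sums, each computable in constant time from an integral image. A minimal sketch of one horizontal two-rectangle form follows; the particular layout is only one of the 14 forms and is chosen for illustration:

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[0..r, 0..c]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the inclusive rectangle (r0, c0)-(r1, c1)."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

def two_rect_feature(ii, r0, c0, h, w):
    """Horizontal two-rectangle Haar feature: left half minus right half."""
    left = rect_sum(ii, r0, c0, r0 + h - 1, c0 + w // 2 - 1)
    right = rect_sum(ii, r0, c0 + w // 2, r0 + h - 1, c0 + w - 1)
    return left - right
```

Enumerating all positions and scales of all 14 forms over a 24 x 24 window yields the counts shown in Fig. 3.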
Definition 6: AdaBoost. AdaBoost, short for Adaptive Boosting, is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine them into one strong classifier. The algorithm works by re-weighting the data: after each round, the weight of every training sample is adjusted according to whether it was classified correctly and to the overall accuracy of the previous round. The re-weighted training set is handed to the next weak learner, and the classifiers obtained over all rounds are finally combined into the decision classifier (strong classifier). AdaBoost thereby discards uninformative training features and concentrates the classification on the most discriminative ones. Common variants are Discrete AdaBoost, Real AdaBoost, and Gentle AdaBoost. Discrete AdaBoost restricts each weak classifier's output to {-1, +1} and builds the strong classifier through weight adjustment; Real AdaBoost allows weak-classifier outputs over the reals R; Gentle AdaBoost is a variant created because, in the two preceding algorithms, very large weight adjustments on atypical positive samples degrade the efficiency of the classifier.
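The weight-update loop described above can be sketched as a minimal Discrete AdaBoost over single-feature threshold stumps; the stumps stand in for the Haar-feature weak classifiers of the actual method, and the toy data is illustrative:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """Discrete AdaBoost over 1-feature threshold stumps.
    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # uniform initial sample weights
    stumps = []
    for _ in range(rounds):
        best = None                      # (error, feature, threshold, sign)
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)            # avoid log of zero on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    """Strong classifier: sign of the weighted vote of the weak stumps."""
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)

# toy demo: a single threshold at x >= 2 separates the classes
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
stumps = train_adaboost(X, y, rounds=3)
```

Real AdaBoost and Gentle AdaBoost differ only in how the weak-classifier outputs and weight updates are computed; the overall loop is the same.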
The technical solution of the invention is as follows:
An eye-state recognition method based on customized classifiers, as shown in Fig. 1, comprises the following steps:
Step 1: establish face-image database A. Database A comprises two sub-banks A1 and A2: A1 consists of frontal-face grayscale images of different individuals without glasses, collected both indoors and outdoors, and A2 consists of frontal-face grayscale images of different individuals with glasses, collected both indoors and outdoors. In every image of database A the distance between the two eye centers is at least 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 2: establish user face-image database B. Database B comprises two sub-banks B1 and B2: B1 consists of frontal-face grayscale images of the user without glasses, and B2 consists of frontal-face grayscale images of the user with glasses. In every image of database B the distance between the two eye centers is at least 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 3: compute the eye image of every face image in databases A and B, obtaining eye-image database A' with sub-banks A1' and A2' corresponding to A1 and A2, and eye-image database B' with sub-banks B1' and B2' corresponding to B1 and B2. Concretely, the eye images are computed as follows: first compute the pixel distance d between the two eye centers of a face image; then, following the "three sections and five eyes" proportion rule, crop a square region of side d/2 centered on each eye center; finally scale all regions to 24 × 24 pixels and rotate each by a random angle within ±10°, yielding the eye images.
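Step 3's crop-and-scale can be sketched in pure NumPy; nearest-neighbour scaling replaces a real image library, the ±10° random rotation is omitted, and the eye-centre coordinates are assumed to be given:

```python
import numpy as np

def crop_eye(gray, center, d, out=24):
    """Crop a square of side d/2 centred on an eye centre and scale it
    to out x out pixels with nearest-neighbour sampling."""
    half = int(d / 2) // 2                 # half of the side length d/2
    r, c = center
    patch = gray[r - half:r + half, c - half:c + half]
    rows = (np.arange(out) * patch.shape[0] / out).astype(int)
    cols = (np.arange(out) * patch.shape[1] / out).astype(int)
    return patch[np.ix_(rows, cols)]
```

With an inter-eye distance of exactly 48 pixels the crop is already 24 x 24 and the scaling step is the identity.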
Step 4: establish mixed eye-image database C. Database C comprises 2N sub-banks C1_i and C2_i (1 ≤ i ≤ N, N a natural number). Each sub-bank C1_i is formed by randomly mixing eye images from sub-bank A1' of step 3 with eye images from sub-bank B1', using a different mixing proportion for each i; each sub-bank C2_i is likewise formed by randomly mixing eye images from sub-bank A2' with eye images from sub-bank B2' in differing proportions. Every sub-bank C1_i and C2_i contains no fewer than 2000 eye images.
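Step 4's proportional mixing might look like the following sketch; the total bank size, the user fraction, and drawing with replacement when a pool is smaller than the requested count are all assumptions:

```python
import random

def mix_banks(general, user, user_fraction, total=2000, seed=0):
    """Build one mixed sub-bank: user_fraction of `total` images drawn from
    the user's pool, the rest from the general library (drawing with
    replacement whenever a pool is smaller than the requested count)."""
    rng = random.Random(seed)
    n_user = int(total * user_fraction)

    def take(pool, k):
        return rng.sample(pool, k) if k <= len(pool) else rng.choices(pool, k=k)

    mixed = take(user, n_user) + take(general, total - n_user)
    rng.shuffle(mixed)
    return mixed

# hypothetical image IDs: 0-999 general library, 1000-1099 user
general = list(range(1000))
user = list(range(1000, 1100))
bank = mix_banks(general, user, user_fraction=0.25, total=400, seed=1)
```

Calling this with N different values of `user_fraction` yields the N sub-banks for one glasses condition.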
Step 5: compute the haar-like feature vector x (3 types, 14 forms) of every eye image in each sub-bank C1_i and C2_i, and combine the feature vectors of each sub-bank into 2N training sequences T1_i and T2_i (1 ≤ i ≤ N). Each training sequence can be expressed as {(x_1, y_1), (x_2, y_2), …, (x_j, y_j), …, (x_M, y_M)}, where x_j is the haar-like feature vector of the j-th eye image, y_j ∈ {−1, 1} indicates whether that eye image is open or closed, and M is the number of eye images in the corresponding sub-bank.
Step 6: apply the AdaBoost method to the 2N training sequences T1_i and T2_i obtained in step 5 to construct 2N corresponding strong classifiers H1_i and H2_i.
Step 7: randomly select more than 1000 eye images from the user eye-image sub-bank B1' established in step 3, compute their haar-like feature vectors x, and classify them with each strong classifier H1_i constructed in step 6, recording the judgment (1 = open, 0 = closed). Likewise randomly select more than 1000 eye images from sub-bank B2', compute their haar-like feature vectors x, and classify them with each strong classifier H2_i, recording the judgment (1 = open, 0 = closed).
Step 8: compare the judgments of step 7 with the actual open or closed state of the selected eye images, and compute the recognition accuracy of the two groups of strong classifiers H1_i and H2_i. Then select the most accurate classifier among H1_1, …, H1_N as the eye-state classifier used while the user drives without glasses, and the most accurate classifier among H2_1, …, H2_N as the eye-state classifier used while the user drives with glasses.
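Steps 7 and 8 amount to evaluating every candidate classifier on held-out user images and keeping the most accurate one; a minimal sketch with classifiers modelled as plain callables (the toy data is illustrative):

```python
def recognition_accuracy(clf, samples, labels):
    """Fraction of samples on which the classifier matches the true label."""
    hits = sum(1 for x, y in zip(samples, labels) if clf(x) == y)
    return hits / len(labels)

def pick_best(classifiers, samples, labels):
    """Return the candidate classifier with the highest accuracy."""
    return max(classifiers,
               key=lambda clf: recognition_accuracy(clf, samples, labels))

# toy candidates; labels use 1 = open, 0 = closed as in step 7
samples = [0, 1, 2, 3]
labels = [0, 0, 1, 1]
always_open = lambda x: 1             # 50% accurate on this data
by_threshold = lambda x: int(x >= 2)  # 100% accurate on this data
best = pick_best([always_open, by_threshold], samples, labels)
```

The same selection is run once over the H1 group and once over the H2 group, yielding the two per-user classifiers.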
Step 9: while the user is driving, capture the user's frontal face image in real time, compute the 24 × 24-pixel eye image and its haar-like feature vector x in real time, and finally, according to whether the user is wearing glasses, apply the corresponding strong classifier selected in step 8 to recognize the eye state.
Through these steps, a customized eye-state classifier is selected for each user, improving the accuracy of individual eye-state recognition.
Notes:
1. When building databases A and B in steps 1 and 2, the face images are preferably collected under various lighting environments. A capture environment can first be built, ideally a darkroom fitted with adjustable light sources so that both bright and dark conditions can be produced; several thousand face images of one person can then be collected within a few minutes.
2. The AdaBoost variant used in step 6 is not restricted; any AdaBoost method may be used, with only slightly different final accuracy.
Following the idea of customization and keeping the feature set unchanged, the invention first establishes a face-image database and a user face-image database; then computes the eye image of every image in both databases; mixes the eye images of the face-image database with those of the user database in different proportions to obtain mixed eye-image databases; computes the haar-like feature vector of every image in each mixed database and constructs strong classifiers with the AdaBoost method; then randomly selects a number of eye images from the user database, computes their haar-like feature vectors, classifies them with the constructed strong classifiers, counts each classifier's recognition accuracy, and selects the most accurate strong classifier as the eye-state classifier used while the user drives; finally, that classifier performs eye-state recognition during driving.
The innovations of the invention are:
1. Applying the idea of customization to eye-state recognition, so that different users use different classifiers, improves the accuracy of individual eye-state recognition.
2. Training the classifiers on a mixture of user data and general face-library data lets each classifier improve its accuracy on the individual while remaining general, reducing recognition risk.
3. The recognition accuracy for users wearing glasses is improved, and a user can choose between two different classifiers, with and without glasses, adding flexibility.
Description of drawings
Fig. 1 is a flow diagram of the invention.
Fig. 2 is a schematic diagram of the haar-like features, comprising 3 types in 14 forms.
Fig. 3 shows the number of haar-like features of each kind, taking a 24 × 24 image as an example.
Embodiment
An eye-state recognition method based on customized classifiers, as shown in Fig. 1, comprises the following steps:
Step 1: establish face-image database A. Database A comprises two sub-banks A1 and A2: A1 consists of frontal-face grayscale images of different individuals without glasses, collected both indoors and outdoors, and A2 consists of frontal-face grayscale images of different individuals with glasses, collected both indoors and outdoors. In every image of database A the distance between the two eye centers is at least 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 2: establish user face-image database B. Database B comprises two sub-banks B1 and B2: B1 consists of frontal-face grayscale images of the user without glasses, and B2 consists of frontal-face grayscale images of the user with glasses. In every image of database B the distance between the two eye centers is at least 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 3: compute the eye image of every face image in databases A and B, obtaining eye-image database A' with sub-banks A1' and A2' corresponding to A1 and A2, and eye-image database B' with sub-banks B1' and B2' corresponding to B1 and B2. Concretely, the eye images are computed as follows: first compute the pixel distance d between the two eye centers of a face image; then, following the "three sections and five eyes" proportion rule, crop a square region of side d/2 centered on each eye center; finally scale all regions to 24 × 24 pixels and rotate each by a random angle within ±10°, yielding the eye images.
Step 4: establish mixed eye-image database C. Database C comprises 2N sub-banks C1_i and C2_i (1 ≤ i ≤ N, N a natural number). Each sub-bank C1_i is formed by randomly mixing eye images from sub-bank A1' of step 3 with eye images from sub-bank B1', using a different mixing proportion for each i; each sub-bank C2_i is likewise formed by randomly mixing eye images from sub-bank A2' with eye images from sub-bank B2' in differing proportions. Every sub-bank C1_i and C2_i contains no fewer than 2000 eye images.
Step 5: compute the haar-like feature vector x (3 types, 14 forms) of every eye image in each sub-bank C1_i and C2_i, and combine the feature vectors of each sub-bank into 2N training sequences T1_i and T2_i (1 ≤ i ≤ N). Each training sequence can be expressed as {(x_1, y_1), (x_2, y_2), …, (x_j, y_j), …, (x_M, y_M)}, where x_j is the haar-like feature vector of the j-th eye image, y_j ∈ {−1, 1} indicates whether that eye image is open or closed, and M is the number of eye images in the corresponding sub-bank.
Step 6: apply the AdaBoost method to the 2N training sequences T1_i and T2_i obtained in step 5 to construct 2N corresponding strong classifiers H1_i and H2_i.
Step 7: randomly select more than 1000 eye images from the user eye-image sub-bank B1' established in step 3, compute their haar-like feature vectors x, and classify them with each strong classifier H1_i constructed in step 6, recording the judgment (1 = open, 0 = closed). Likewise randomly select more than 1000 eye images from sub-bank B2', compute their haar-like feature vectors x, and classify them with each strong classifier H2_i, recording the judgment (1 = open, 0 = closed).
Step 8: compare the judgments of step 7 with the actual open or closed state of the selected eye images, and compute the recognition accuracy of the two groups of strong classifiers H1_i and H2_i. Then select the most accurate classifier among H1_1, …, H1_N as the eye-state classifier used while the user drives without glasses, and the most accurate classifier among H2_1, …, H2_N as the eye-state classifier used while the user drives with glasses.
Step 9: while the user is driving, capture the user's frontal face image in real time, compute the 24 × 24-pixel eye image and its haar-like feature vector x in real time, and finally, according to whether the user is wearing glasses, apply the corresponding strong classifier selected in step 8 to recognize the eye state.
Compared with a method trained only on general face-library images, the inventive method improves accuracy by about 2% for typical individuals and by 3%-5% for individuals wearing glasses, with a running time under 0.1 s.
In summary, the method of the invention uses the idea of customization to combine user data with face-library data and trains the eye-state classifier while keeping the feature set unchanged, achieving fast and accurate eye-state recognition.

Claims (1)

1. eye state identification method based on customization classifier may further comprise the steps:
Step 1: set up facial image database A;
Described face database A comprises two word bank A1 and A2, one of them word bank A1 forms by removing with the open air, Different Individual, that do not wear glasses, front face gray level image, and another word bank A2 forms by removing with the open air, Different Individual, that wear glasses, front face gray level image; Two central point distances of the people's face gray level image among the face database A are not less than 48 pixel units, people's face gray level image quantity basically identical of open eyes state and closed-eye state;
Step 2: set up user's facial image database B;
Described user's facial image database B comprises two word bank B1 and B2, and one of them word bank B1 is made up of the user, that do not wear glasses, front face gray level image, and another word bank B2 is made up of the user, that wear glasses, front face gray level image; Two central point distances of the people's face gray level image among the face database B are not less than 48 pixel units, people's face gray level image quantity basically identical of open eyes state and closed-eye state;
Step 3: the eye image that calculates each width of cloth facial image among facial image database A and the user's facial image database B, obtain two word bank A1 ' and the A2 ' of the eye image database A ' corresponding respectively with two word bank A1 and A2 among the facial image database A, and two the word bank B1 ' of the eye image database B ' corresponding with two word bank B1 and B2 among user's facial image database B and B2 '; The computing method of concrete eye image are: at first calculate the pixel distance d between two of people's face gray level images; According to the principle in five in three front yards, be the center then with the human eye central point, the long and wide rectangular area that is the d/2 pixel size of intercepting; All rectangular areas are zoomed to 24 * 24 pixel sizes, and rotation at random in-10 ° to 10 ° scopes in the direction of the clock, eye image obtained at last;
Step 4: set up and mix eye image database C;
Described mixing eye image database C comprises 2N word bank
Figure FSA00000158390500011
With
Figure FSA00000158390500012
Word bank wherein
Figure FSA00000158390500013
By the eye image of the eye image of the A1 ' of word bank described in the step 3 and word bank B1 ' according to different proportion, mix at random; Word bank
Figure FSA00000158390500014
By the eye image of the eye image of the A2 ' of word bank described in the step 3 and word bank B2 ' according to different proportion, mix at random; Described word bank
Figure FSA00000158390500015
With
Figure FSA00000158390500016
In eye image quantity be not less than 2000; 1≤i≤N wherein, N is a natural number;
Step 5: calculate the eye image word bank
Figure FSA00000158390500017
With
Figure FSA00000158390500018
In the haar-like proper vector x of all eye images, described haar-like proper vector x comprises 3 types of 14 kinds of forms, and with each eye image word bank
Figure FSA00000158390500019
With
Figure FSA000001583905000110
All proper vector x combine and constitute 2N training sequence
Figure FSA000001583905000111
With
Figure FSA000001583905000112
And training sequence
Figure FSA000001583905000113
With
Figure FSA000001583905000114
Can be expressed as { (x 1, y 1), (x 2, y 2) ..., (x i, y i) ..., (x M, y M) form, x wherein iExpression
Figure FSA00000158390500021
With
Figure FSA00000158390500022
In i haar-like proper vector; y i∈ 1,1}, expression haar-like proper vector x iThe state that pairing eye image is opened eyes or closed one's eyes; M is the eye image storehouse
Figure FSA00000158390500023
With
Figure FSA00000158390500024
Middle eye image quantity;
Step 6: to 2N training sequence of step 5 gained
Figure FSA00000158390500025
With
Figure FSA00000158390500026
Adopt the AdaBoost method to make up a corresponding 2N strong classifier With
Figure FSA00000158390500028
Step 7: the eye image from user's eye image word bank B1 ' that step 3 is set up more than picked at random 1000 width of cloth, calculate its haar-like proper vector x, adopt the constructed strong classifier of step 6 respectively
Figure FSA00000158390500029
Judge, obtain judged result: 1-opens eyes, and 0-closes one's eyes; The eye image more than picked at random 1000 width of cloth from user's eye image word bank B2 ' that step 3 is set up calculates its haar-like proper vector x equally, adopts the constructed strong classifier of step 6 respectively
Figure FSA000001583905000210
Judge, obtain judged result: 1-opens eyes, and 0-closes one's eyes;
Step 8: the judged result of step 7 gained and selected eye image actual opened eyes or closed-eye state compares, and then count two groups of strong classifiers respectively
Figure FSA000001583905000211
With
Figure FSA000001583905000212
Recognition accuracy, choose strong classifier then
Figure FSA000001583905000213
In the highest strong classifier of recognition accuracy carry out the sorter of the human eye state identification in the driving procedure as the user at wearing spectacles not, choose strong classifier
Figure FSA000001583905000214
The sorter that the strong classifier that middle recognition accuracy is the highest carries out the human eye state identification in the driving procedure as the user at wearing spectacles;
Step 9: while the user is driving, capture the user's frontal face image in real time, compute the 24 × 24-pixel eye image and its haar-like feature vector x in real time, and finally select the corresponding strong classifier from step 8, according to whether the user is wearing glasses, to perform human eye state recognition.
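The haar-like feature vectors used throughout steps 5–9 are conventionally computed via an integral image, which reduces any rectangle sum to four table lookups, so a two-rectangle haar-like feature costs constant time regardless of its size. A sketch under that standard construction (the 4 × 4 toy image and the feature placement are illustrative, not the patent's 24 × 24 feature set):

```python
# Integral-image sketch of a two-rectangle haar-like feature: the summed-area
# table makes each rectangle sum four lookups, so the feature is O(1).

def integral_image(img):
    """img: 2-D list of grey values. Returns an (h+1) x (w+1) summed-area table."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w rectangle with top-left corner (r, c)."""
    return ii[r + h][c + w] - ii[r][c + w] - ii[r + h][c] + ii[r][c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Upper-half sum minus lower-half sum (h must be even)."""
    half = h // 2
    return rect_sum(ii, r, c, half, w) - rect_sum(ii, r + half, c, half, w)

# Toy 4x4 "eye image": bright upper rows, dark lower rows.
img = [[9, 9, 9, 9],
       [9, 9, 9, 9],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
ii = integral_image(img)
feature = haar_two_rect_vertical(ii, 0, 0, 4, 4)  # 72 - 8 = 64
```

A real system would evaluate many such features at varying positions and scales over each 24 × 24 eye image to form the feature vector x that the strong classifier consumes.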
CN2010101979800A 2010-06-11 2010-06-11 Customization classifier-based eye state identification method Active CN101908152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101979800A CN101908152B (en) 2010-06-11 2010-06-11 Customization classifier-based eye state identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101979800A CN101908152B (en) 2010-06-11 2010-06-11 Customization classifier-based eye state identification method

Publications (2)

Publication Number Publication Date
CN101908152A true CN101908152A (en) 2010-12-08
CN101908152B CN101908152B (en) 2012-04-25

Family

ID=43263607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101979800A Active CN101908152B (en) 2010-06-11 2010-06-11 Customization classifier-based eye state identification method

Country Status (1)

Country Link
CN (1) CN101908152B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889093A (en) * 2005-06-30 2007-01-03 上海市延安中学 Recognition method for human eye positioning and eye opening/closing
CN101350063A (en) * 2008-09-03 2009-01-21 北京中星微电子有限公司 Method and apparatus for locating human face characteristic point
US20090097701A1 (en) * 2007-10-11 2009-04-16 Denso Corporation Sleepiness level determination device for driver
CN101593425A (en) * 2009-05-06 2009-12-02 深圳市汉华安道科技有限责任公司 Fatigue driving monitoring method and system based on machine vision


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096810A (en) * 2011-01-26 2011-06-15 北京中星微电子有限公司 Method and device for detecting fatigue state of user before computer
CN102085099A (en) * 2011-02-11 2011-06-08 北京中星微电子有限公司 Method and device for detecting fatigue driving
CN102163289A (en) * 2011-04-06 2011-08-24 北京中星微电子有限公司 Method and device for removing glasses from human face image, and method and device for wearing glasses in human face image
CN102163289B (en) * 2011-04-06 2016-08-24 北京中星微电子有限公司 Method and device for removing glasses from a human face image, and method and device for wearing glasses in a human face image
CN103584852A (en) * 2012-08-15 2014-02-19 深圳中科强华科技有限公司 Personalized electrocardiogram intelligent auxiliary diagnosis device and method
CN103049740B (en) * 2012-12-13 2016-08-03 杜鹢 Method and device for detecting fatigue state based on video image
CN103049740A (en) * 2012-12-13 2013-04-17 杜鹢 Method and device for detecting fatigue state based on video image
CN104102896A (en) * 2013-04-14 2014-10-15 张忠伟 Human eye state recognition method based on graph cut model
CN104102896B (en) * 2013-04-14 2017-10-17 张忠伟 Human eye state recognition method based on graph cut model
CN103902975A (en) * 2014-03-28 2014-07-02 北京科技大学 Human eye state detection method based on balanced Vector Boosting algorithm
CN105512603A (en) * 2015-01-20 2016-04-20 上海伊霍珀信息科技股份有限公司 Dangerous driving detection method based on principle of vector dot product
WO2016115895A1 (en) * 2015-01-23 2016-07-28 北京工业大学 On-line user type identification method and system based on visual behaviour
CN104504404A (en) * 2015-01-23 2015-04-08 北京工业大学 Online user type identification method and system based on visual behavior
CN104504404B (en) * 2015-01-23 2018-01-12 北京工业大学 Online user type identification method and system based on visual behavior
CN106485214A (en) * 2016-09-28 2017-03-08 天津工业大学 Eye and mouth state recognition method based on convolutional neural networks
CN108294759A (en) * 2017-01-13 2018-07-20 天津工业大学 Driver fatigue detection method based on CNN eye state recognition
US11270100B2 (en) 2017-11-14 2022-03-08 Huawei Technologies Co., Ltd. Face image detection method and terminal device
CN108021875A (en) * 2017-11-27 2018-05-11 上海灵至科技有限公司 Personalized fatigue monitoring and early warning method for vehicle drivers
CN108491824A (en) * 2018-04-03 2018-09-04 百度在线网络技术(北京)有限公司 Model generating method and device

Also Published As

Publication number Publication date
CN101908152B (en) 2012-04-25

Similar Documents

Publication Publication Date Title
CN101908152B (en) Customization classifier-based eye state identification method
CN106096538B (en) Face identification method and device based on ranking neural network model
CN106295522B (en) Two-stage anti-fraud detection method based on multi-orientation face and environmental information
CN104091147B (en) Near-infrared eye positioning and eye state recognition method
CN101944174B (en) Identification method of licence plate characters
CN102663413B (en) Multi-gesture and cross-age oriented face image authentication method
CN100452081C (en) Human eye positioning and human eye state recognition method
CN100592322C (en) An automatic computer authentication method for photographic faces and living faces
CN107273845A (en) Facial expression recognition method based on confidence region and multi-feature weighted fusion
CN103336973B (en) Eye state recognition method based on multi-feature decision fusion
CN102270308B (en) Facial feature location method based on five sense organs related AAM (Active Appearance Model)
CN102096810A (en) Method and device for detecting fatigue state of user before computer
CN104463128A (en) Glasses detection method and system for face recognition
CN102915453B (en) Real-time feedback and update vehicle detection method
CN102129574B (en) Face authentication method and system
CN110175501A (en) Multi-person scene focus recognition method based on face recognition
Bhowmick et al. Detection and classification of eye state in IR camera for driver drowsiness identification
CN104331160A (en) Lip state recognition-based intelligent wheelchair human-computer interaction system and method
CN102799872A (en) Image processing method based on face image characteristics
CN102880864A (en) Method for snapshotting human faces from a streaming media file
CN101916369A (en) Face recognition method based on kernel nearest subspace
Liu et al. Urban expressway parallel pattern recognition based on intelligent IOT data processing for smart city
CN110232327A (en) Driving fatigue detection method based on trapezoidal concatenated convolutional neural network
Gao et al. Fatigue state detection from multi-feature of eyes
CN104050451A (en) Robust target tracking method based on multi-channel Haar-like characteristics

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210518

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy Co.,Ltd.

Address before: 611731, No. 2006, West Avenue, Chengdu hi tech Zone (West District, Sichuan)

Patentee before: University of Electronic Science and Technology of China

CP01 Change in the name or title of a patent holder

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy (Group) Co.,Ltd.

Address before: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee before: Houpu clean energy Co.,Ltd.