CN109063545A - Fatigue driving detection method and device - Google Patents
Fatigue driving detection method and device
- Publication number
- CN109063545A (application CN201810607250.XA)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V20/597 — Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06V40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06V40/193 — Eye characteristics, e.g. of the iris: preprocessing; feature extraction
Abstract
The invention discloses a fatigue-driving detection method comprising: acquiring head images with a near-infrared 3D camera; tracking two consecutive head-image frames with the LK optical flow method and extracting mouth feature points, head feature points, and eye feature points from the tracked images; searching for the eye image according to the position coordinates of the eye feature points, so as to extract the eye feature points of the eye image; determining the eyelid, mouth, and head movement features corresponding one-to-one to the eye feature points of the eye images, the mouth feature points, and the head feature points collected over a predetermined number of samples, and feeding them into an SVM classifier so that the classifier judges the fatigue-driving state, thereby realizing fatigue-driving detection. The invention also provides a fatigue-driving detection device. With the fatigue-driving detection method and device of the invention, both the accuracy and the speed of fatigue-driving state judgment can be effectively improved.
Description
Technical field
The present invention relates to the technical field of motor-vehicle fatigue-driving detection, and in particular to a fatigue-driving detection method and device.
Background art
Driver drowsiness and fatigue are universally recognized risk factors in road safety. According to reports of the World Health Organization, every year more than tens of thousands of people are injured, or even lose their lives, because of their own or others' fatigued driving. Alerting drivers to fatigued driving therefore effectively reduces the traffic accidents it causes, and the detection of the fatigue-driving state plays a crucial role in the timeliness and validity of such alerts.
Currently, fatigue-driving detection methods mainly fall into the following four categories:
(1) Detection based on physiological signals. Studies show that a driver's physiological indicators deviate from their normal state under fatigue, so the prior art judges whether the driver has entered a fatigue-driving state through composite measurements of EEG signals, ECG signals, EMG signals, blood pressure, breathing, pulse, and other physiological indicators. Although this approach has high detection accuracy, it requires contact sensors to acquire the physiological signals, which inconveniences the driver and greatly limits its applicability.
(2) Detection based on the driver's operating behaviour. This approach infers the driver's fatigue state from operations such as steering-wheel handling. However, a person's operations depend not only on fatigue but also on personal habits, travel speed, road environment, operating skill, and many other factors, so the method's accuracy is low; moreover, its criteria are hard to unify, making large-scale adoption difficult.
(3) Detection based on vehicle state information. This approach infers the driver's fatigue state from driving states such as changes in the vehicle's trajectory, lane deviation, speed, and acceleration. However, the vehicle's driving state is also related to vehicle characteristics, the road, and many other environmental factors, so accuracy is low.
(4) Detection based on the state of the driver's head and facial features. This approach uses a camera or similar sensor to acquire head images of the driver, analyses head and facial features such as head-movement state, eye-movement features, yawning, and nodding frequency with computer vision and image-processing techniques, and then determines the degree of fatigue. Because this is a non-contact measurement with good intuitiveness and relatively unified criteria, it has become the mainstream direction of research, design, and development by scholars and enterprises at home and abroad.
Currently, fatigue-driving detection methods based on the driver's head and facial features mainly acquire images with a standard camera. Since a standard camera's frame rate is 25 to 30 Hz while the duration of a blink can be very short (about 150 ms at minimum), the video signal is sampled only 4 to 5 times within one blink. This is insufficient to obtain accurate eyelid-movement features, so fatigue-driving state judgment is inaccurate and slow.
Summary of the invention
In view of the above problems, the fatigue-driving detection method and device of the invention can effectively improve both the accuracy and the speed of fatigue-driving state judgment.
To solve the above technical problems, the fatigue-driving detection method of the invention comprises the following steps:
S1. Acquire head images containing the face with a near-infrared 3D camera.
S2. Track two consecutive head-image frames with the LK optical flow method, and extract mouth feature points, head feature points, and eye feature points from the tracked head images; the eye feature points include their position coordinates.
S3. Search for the eye image in the images acquired by a near-infrared high-speed camera according to the position coordinates of the eye feature points, so as to track the eye feature points of the eye image.
S4. Determine the eyelid, mouth, and head movement features corresponding one-to-one to the eye feature points of the eye images, the mouth feature points, and the head feature points collected over a predetermined number of samples, and feed them into an SVM classifier so that the classifier judges the fatigue-driving state, realizing fatigue-driving detection.
As an improvement of the above scheme, step S2 comprises the following steps:
Calculate the optical flow of the tracking feature points between the two consecutive head-image frames with the LK optical flow method, where the tracking feature points are multiple pixels located within a preset hollow rectangle in the first of the two frames.
Predict the positions of the tracking feature points in the second frame from their optical flow.
Calculate the displacement of each tracking feature point from its position in the first frame and its predicted position in the second frame.
Sort the displacements in ascending order to obtain the median displacement.
When the displacements satisfy a preset condition, extract the mouth, head, and eye feature points from the second frame; the preset condition is that a displacement is below the median displacement and that the number of such displacements is at least 50% of the number of tracking feature points.
As an improvement of the above scheme, the eye feature points further include their velocity information, and step S3 comprises the following steps:
Determine the observation feature point from the position coordinates; the observation feature point indicates the eye feature point in the first of two consecutive eye-image frames.
Construct the state-vector equation of the first eye-image frame from the position coordinates and the velocity information.
Determine the observation model from the state vector and a preset observation matrix.
Determine the search range in the second of the two eye-image frames from the state vector and the observation model.
When the observation feature point is found within the search range, set the found observation point as the eye feature point of the second frame, thereby extracting the eye feature points of the eye image.
As an improvement of the above scheme, the eyelid movement features include the closed-eye ratio, the blink frequency, and the average eye-closing speed. Determining the eyelid movement features in step S4 from the eye feature points of the eye images within the predetermined number of samples comprises the following steps:
Each time the eye feature points of an eye image are extracted, obtain the eye-corner feature point, the upper-eyelid feature point, and the lower-eyelid feature point from them.
Calculate the angle at the eye corner from the eye-corner, upper-eyelid, and lower-eyelid feature points to determine the degree of eye closure.
When the sample count reaches the predetermined number, calculate the closed-eye ratio and the blink frequency from all the determined closure degrees, and calculate the average eye-closing speed from the displacements of all the upper-eyelid feature points.
As an improvement of the above scheme, the mouth movement feature includes the yawning frequency. Determining the mouth movement feature in step S4 from the mouth feature points comprises the following steps:
Each time the mouth feature points are extracted, obtain the mouth-corner feature point, the highest upper-lip feature point, and the lowest lower-lip feature point from them.
Calculate the angle at the mouth corner from these three feature points to determine the degree of mouth closure.
When the sample count reaches the predetermined number, determine the yawning frequency from all the mouth closure degrees.
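To make the yawn-frequency step concrete, here is a minimal sketch that counts yawns from a per-frame mouth-openness series; the run-length rule and both threshold values are illustrative assumptions, not figures from the patent.

```python
def count_yawns(mouth_openness, open_threshold=0.6, min_frames=10):
    """Count yawns in a per-frame mouth-openness series (0 = closed, 1 = fully open).

    A yawn is counted as a run of at least `min_frames` consecutive frames whose
    openness exceeds `open_threshold`. Both thresholds are illustrative
    placeholders, not values from the patent.
    """
    yawns = 0
    run = 0
    for openness in mouth_openness:
        if openness > open_threshold:
            run += 1
        else:
            if run >= min_frames:
                yawns += 1
            run = 0
    if run >= min_frames:  # series ended mid-yawn
        yawns += 1
    return yawns
```

A brief open mouth (a few frames of speech, say) is filtered out by the run-length requirement, so only sustained wide-open intervals count as yawns.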
As an improvement of the above scheme, the head movement feature includes the nodding frequency. Determining the head movement feature in step S4 from the head feature points within the predetermined number of samples comprises the following steps:
Each time the head feature points are extracted, obtain the crown feature point from them, so as to draw a curve whose horizontal coordinate is the sample index and whose vertical coordinate is the vertical position of the crown feature point.
When the sample count reaches the predetermined number, count the wave crests appearing in the curve to obtain the nodding frequency.
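The crest-counting rule for the nodding frequency can be sketched as a simple interior-peak count over the crown point's vertical-position curve; plateau handling and noise smoothing, which the patent leaves unspecified, are ignored here.

```python
def count_nods(crown_y):
    """Count wave crests in the crown point's vertical-position curve.

    A crest is a sample strictly greater than both neighbours; each crest is
    taken as one nod. This is a minimal interior-peak count that ignores
    plateaus and smoothing.
    """
    crests = 0
    for i in range(1, len(crown_y) - 1):
        if crown_y[i - 1] < crown_y[i] > crown_y[i + 1]:
            crests += 1
    return crests
```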
As an improvement of the above scheme, in step S4 the fatigue-driving state is detected as follows:
Build a training set and a test set from the closed-eye ratio, the blink frequency, the average eye-closing speed, the yawning frequency, and the nodding frequency.
Construct the SVM classifier from the training set.
Construct a trained classification model.
Perform model prediction on the test set with the trained classification model to obtain the fatigue-driving state.
As an improvement of the above scheme, the training set and the test set are constructed as follows:
Within a scheduled sampling period, collect N closed-eye ratios, N blink frequencies, N average eye-closing speeds, N yawning frequencies, and N nodding frequencies as sample data, and normalize the sample data to obtain a sample set P, where N ≥ 2 and N is an integer.
Reduce the dimensionality of sample set P by principal component analysis to obtain sample set P1.
Choose 70% of the sample data from sample set P1 as the training set; the remaining 30% of the sample data in P1 serves as the test set.
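A minimal sketch of the normalization and 70/30 split described above, using plain min-max scaling. The patent does not give a concrete normalization formula, and the PCA dimensionality-reduction step is omitted here, so both the scaling choice and the random shuffling are assumptions.

```python
import random

def normalize_and_split(samples, train_frac=0.7, seed=0):
    """Min-max normalise each feature column to [0, 1], then split 70/30.

    `samples` is a list of equal-length feature vectors (closed-eye ratio,
    blink frequency, average closing speed, yawning frequency, nodding
    frequency). The PCA step from the patent is not reproduced here.
    """
    cols = list(zip(*samples))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    normed = [
        [(v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(row, lo, hi)]
        for row in samples
    ]
    rng = random.Random(seed)
    idx = list(range(len(normed)))
    rng.shuffle(idx)
    cut = int(len(normed) * train_frac)
    train = [normed[i] for i in idx[:cut]]
    test = [normed[i] for i in idx[cut:]]
    return train, test
```

The resulting sets could then be fed to any SVM implementation for training and prediction.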
To solve the above technical problems, the invention also provides a fatigue-driving detection device that stores a computer program adapted, when executed, to realize any of the fatigue-driving detection methods described above.
To solve the above technical problems, the invention further provides a fatigue-driving detection device comprising a processor and a memory connected to the processor; the memory stores a computer program adapted, when executed, to realize any of the fatigue-driving detection methods described above, and the processor calls the computer program in the memory to execute that fatigue-driving detection method.
Compared with the prior art, the fatigue-driving detection method and device of the invention, on the one hand, acquire head images of the driver with a near-infrared 3D camera and thereby obtain the mouth movement feature, the head movement feature, and the position coordinates of the eye feature points. On the other hand, acquiring the driver's eye images with a near-infrared high-speed camera effectively avoids the standard camera's limited sampling count, and searching those images with the position coordinates of the eye feature points to track the eyelid movement features reduces the detection steps on the eye image, speeds up the acquisition of eyelid movement features, and improves judgment speed. In addition, feeding the eyelid, mouth, and head movement features together into the SVM classifier to judge fatigued driving realizes fatigue-driving detection with more sample types, so detection accuracy is high.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a fatigue-driving detection method in Embodiment 1 of the invention.
Fig. 2 is a schematic flowchart of step S2 in Embodiment 1 of the invention.
Fig. 3 is a schematic flowchart of step S3 in Embodiment 1 of the invention.
Fig. 4 is a schematic structural diagram of the fatigue detection device in Embodiment 3 of the invention.
Detailed description of the embodiments
In the following description, numerous specific details are set forth to facilitate a full understanding of the invention. The invention can, however, be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without violating its connotation, so the invention is not limited by the specific embodiments disclosed below.
In the fatigue-driving detection method and device of the invention, a near-infrared 3D camera is aimed at the driver's face and head to acquire head images of the driver, and a near-infrared high-speed camera is aimed at the driver's eyes to acquire eye images of the driver. Because the acquisition frequency of the near-infrared high-speed camera is higher than that of a standard camera, the number of eye-image samples taken within one blink is much larger than a standard camera's, so the eye movement features obtained are more accurate, improving the precision of fatigue-state judgment. The technical solution of the invention is described clearly and completely below with specific embodiments and the accompanying drawings.
Fig. 1 is a schematic flowchart of a fatigue-driving detection method in Embodiment 1 of the invention. The fatigue-driving detection method comprises the following steps:
S1. Acquire head images containing the face with a near-infrared 3D camera.
S2. Track two consecutive head-image frames with the LK optical flow method, and extract mouth feature points, head feature points, and eye feature points from the tracked head images; the eye feature points include their position coordinates.
As shown in Fig. 2, step S2 comprises the following steps:
S21. Calculate the optical flow of the tracking feature points between the two consecutive head-image frames with the LK optical flow method; the tracking feature points are multiple pixels within a preset hollow rectangle in the first of the two frames.
Before tracking the two frames, a face region of interest must first be determined by face detection, so that the tracking feature points can be extracted from the face region. The steps are:
S201. When the initial head image is obtained, successively apply grayscale conversion, high-pass filtering, a difference operation, and median filtering to it to obtain a preprocessed head image.
S202. Perform face detection on the preprocessed head image with a Haar classifier based on the Adaboost algorithm, so as to mark the face region of interest.
S203. Set the detected face region as the tracking target Orect and set a hollow rectangle Hrect, where Orect ∈ Hrect, the inner frame of Hrect is 20% smaller than Orect, and the outer frame of Hrect is 20% larger than Orect.
S204. Take the pixels in the hollow rectangle Hrect as the initial tracking feature points.
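The hollow rectangle Hrect can be sketched directly from the detected face box. Centering the inner and outer frames on the face box is an assumption; the patent only states the 20% size difference between the frames and the face region.

```python
def hollow_rectangle(face_rect, margin=0.2):
    """Build the hollow rectangle H_rect around a detected face box O_rect.

    `face_rect` is (x, y, w, h). The inner frame shrinks the face box by
    `margin` (20% in the patent) and the outer frame enlarges it by the same
    amount; the tracking points are the pixels between the two frames.
    """
    x, y, w, h = face_rect
    cx, cy = x + w / 2, y + h / 2

    def scaled(scale):
        sw, sh = w * scale, h * scale
        return (cx - sw / 2, cy - sh / 2, sw, sh)

    return scaled(1 + margin), scaled(1 - margin)  # (outer, inner)

def in_hollow(point, outer, inner):
    """True if `point` lies inside the outer frame but outside the inner one."""
    def inside(p, r):
        rx, ry, rw, rh = r
        return rx <= p[0] <= rx + rw and ry <= p[1] <= ry + rh
    return inside(point, outer) and not inside(point, inner)
```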
Next, step S21 is described in detail.
Since the head images are acquired with the near-infrared 3D camera at a high acquisition frequency, the tracking feature points satisfy the three assumptions of the LK optical flow method: constant brightness, small motion, and spatial coherence between the two consecutive head-image frames. For a tracking feature point with brightness I(x, y, t):
By the brightness-constancy condition, I(x, y, t) = I(x + Δx, y + Δy, t + Δt).
By the small-motion condition, a first-order expansion gives I_x·v_x + I_y·v_y + I_t = 0, where v_x and v_y are the optical flow of the tracking feature point and I_x, I_y, I_t are the partial derivatives of the brightness.
By the spatial-coherence condition, assume a small window of size e × e inside the hollow rectangle Hrect within which every tracking feature point i moves identically; this yields the over-determined system I_x(i)·v_x + I_y(i)·v_y = −I_t(i), where I_x(i) is the partial derivative of tracking feature point i in the x direction, I_y(i) its partial derivative in the y direction, and I_t(i) its partial derivative with respect to time, with i = 1, 2, 3, …, e and e ≥ 2 an integer. Solving this system by least squares yields the optical flow v_x, v_y of the tracking feature points between the two consecutive head-image frames.
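The least-squares solution of the over-determined system above can be written in closed form via the 2×2 normal equations. This is a generic single-window LK solver sketch, with the image derivatives supplied as plain lists rather than computed from real frames.

```python
def lk_flow(ix, iy, it):
    """Solve the LK over-determined system for one window by least squares.

    `ix`, `iy`, `it` are the per-pixel partial derivatives inside the window.
    Solves (A^T A) v = -A^T b for v = (vx, vy), where A = [ix iy] and b = it,
    using the closed-form inverse of the 2x2 normal matrix.
    """
    sxx = sum(a * a for a in ix)
    syy = sum(a * a for a in iy)
    sxy = sum(a * b for a, b in zip(ix, iy))
    sxt = sum(a * b for a, b in zip(ix, it))
    syt = sum(a * b for a, b in zip(iy, it))
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        raise ValueError("window is degenerate (aperture problem)")
    vx = (-syy * sxt + sxy * syt) / det
    vy = (sxy * sxt - sxx * syt) / det
    return vx, vy
```

In practice an implementation would use a library routine such as OpenCV's pyramidal LK tracker; the closed form above shows what one window of that computation does.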
S22. Predict the positions of the tracking feature points in the second of the two consecutive head-image frames from their optical flow.
S23. Calculate the displacement of each tracking feature point from its position in the first frame and its predicted position in the second frame.
S24. Sort the displacements in ascending order to obtain the median displacement.
S25. When the displacements satisfy the preset condition, extract the mouth feature points, head feature points, and eye feature points from the second frame; the preset condition is that a displacement is below the median displacement and that the number of such displacements is at least 50% of the number of tracking feature points.
In step S25, when the displacements satisfy the preset condition, tracking of the second head-image frame has succeeded; the tracking feature points whose displacement is below the median displacement become the tracking feature points of the next round, realizing the dynamic update of the tracking feature points.
In step S25, when the displacements satisfy the preset condition, the mouth, head, and eye feature points are extracted from the second frame with a corner-feature detection algorithm.
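The median-displacement test of step S25 can be sketched as follows. The translation is ambiguous about whether "less than the median" is strict, so ties at the median are kept here (a <= comparison), which is an assumption.

```python
def filter_tracked_points(points, displacements):
    """One round of the displacement-median test from step S25.

    Sorts the displacements to find the median, keeps only the points that
    moved no more than the median (ties kept, an assumption), and declares
    tracking successful when the kept points are at least 50% of the original
    set. The kept points seed the next tracking round.
    """
    ranked = sorted(displacements)
    median = ranked[len(ranked) // 2]
    kept = [p for p, d in zip(points, displacements) if d <= median]
    success = len(kept) >= 0.5 * len(points)
    return success, kept
```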
S3. Search for the eye image in the eye images acquired by the near-infrared high-speed camera according to the position coordinates, so as to extract and track the eye feature points of the eye image.
Specifically, step S3 uses a Kalman filtering algorithm to track the eye feature points of the eye image. As shown in Fig. 3, step S3 comprises the following steps:
S31. Determine the observation feature point from the position coordinates of the eye feature point; the observation feature point indicates the eye feature point in the first of two consecutive eye-image frames.
In step S31, since the eye feature points in the head image are obtained by LK optical flow tracking and corner detection, each can be represented by a position and a velocity. Setting the extraction time of the eye feature point to t = k, its position coordinates are (x_k, y_k) and its velocities along the x and y axes are u_k and v_k; the observation point in the first eye-image frame is then searched for at the position coordinates (x_k, y_k), and the state vector of the observation feature point at time k is set to X_k = [x_k, y_k, u_k, v_k].
S32. Construct the state-vector equation of the first eye-image frame from the position coordinates and the velocity information.
In step S32, since there is no input, the state-vector equation constructed for the first eye-image frame is X_{k+1} = A_k·X_k + W_k. Because the acquisition frequency of the near-infrared high-speed camera is high, the interval between two consecutive frames is short, so the observation feature point can be treated as moving at constant velocity along a straight line; with the frame interval taken as the time unit, A_k is [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], and W_k is the process noise of the state vector.
S33. Determine the observation model from the state vector and the preset observation matrix.
In step S33, the determined observation model is Z_k = H·X_k + V_k, where H = [[1, 0, 0, 0], [0, 1, 0, 0]] and V_k is the observation noise.
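The prediction and observation steps above reduce to a few lines once the noise terms are dropped; taking the frame interval as one time unit is an assumption consistent with the constant-velocity model.

```python
def kalman_predict(state, dt=1.0):
    """Constant-velocity prediction X_{k+1} = A_k X_k (noise term omitted).

    `state` is [x, y, u, v]: the position and velocity of the observed eye
    feature point. With the high-speed camera's short frame interval the point
    is modelled as moving at constant velocity, so A_k is
    [[1,0,dt,0],[0,1,0,dt],[0,0,1,0],[0,0,0,1]].
    """
    x, y, u, v = state
    return [x + u * dt, y + v * dt, u, v]

def observe(state):
    """Observation Z_k = H X_k with H = [[1,0,0,0],[0,1,0,0]]: position only."""
    return state[:2]
```

The predicted position then centres the search range in the next frame, as steps S34 and S35 describe.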
S34. Determine the search range in the second of the two consecutive eye-image frames from the state vector and the observation model.
S35. When the observation feature point is found within the search range, set the found observation point as the eye feature point of the second frame, thereby extracting the eye feature points of the eye image.
In step S35, if the observation feature point is not found within the search range, the target has been lost; in that case the eye feature points of the head image must be redetermined from a head image acquired by the near-infrared 3D camera, and the observation feature point redetermined from them.
S4. Determine the eyelid, mouth, and head movement features corresponding one-to-one to the eye feature points of the eye images, the mouth feature points, and the head feature points within the predetermined number of samples, and feed them into the SVM classifier so that the SVM classifier judges the fatigue-driving state, realizing fatigue-driving detection.
In step S4, the eyelid movement features include the closed-eye ratio, the blink frequency, and the average eye-closing speed. Determining the eyelid movement features in step S4 from the eye feature points of the eye images within the predetermined number of samples comprises the following steps:
S411, when extracting the eye feature point of eyes image every time, obtained from the eye feature point of eyes image
Canthus characteristic point, upper eyelid characteristic point and palpebra inferior characteristic point;
S412, according to canthus characteristic point, upper eyelid characteristic point and palpebra inferior characteristic point, calculate the angle at canthus to determine eye
The closure of eyeball;
Specifically, step S412 includes:
calculating the angle at the eye corner from the position coordinates of the eye-corner feature point, the upper-eyelid feature point and the lower-eyelid feature point;
determining the degree of eye closure from the eye-corner angle. For example, when the eye-corner angle is 0, the degree of eye closure is set to 100%; when the eye-corner angle reaches a preset angle threshold, the degree of eye closure is set to 0%.
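A minimal sketch of the angle computation in step S412. The 30° fully-open threshold and the linear mapping from angle to closure percentage are assumed values for illustration; the patent only fixes the two endpoints (0° → 100%, threshold → 0%).

```python
import numpy as np

def corner_angle(corner, upper, lower):
    """Angle (degrees) at the eye corner between the rays from the
    corner to the upper-eyelid and lower-eyelid feature points."""
    v1 = np.asarray(upper, float) - np.asarray(corner, float)
    v2 = np.asarray(lower, float) - np.asarray(corner, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def closure_degree(angle, open_threshold=30.0):
    """Map the corner angle to a closure percentage:
    0 degrees -> 100 %, angle >= open_threshold -> 0 %.
    The threshold and the linear interpolation are assumptions."""
    return max(0.0, 1.0 - angle / open_threshold) * 100.0
```

For example, an eye corner at (0, 0) with eyelid points at (10, 5) and (10, −5) gives an angle of roughly 53°, which this assumed mapping treats as a fully open eye.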
S413, when the number of acquisitions reaches the predetermined number, calculating the eyes-closed ratio and the blink frequency from all the determined degrees of closure, and calculating the average eye-closing speed from the displacements of all the upper-eyelid feature points.
Specifically, in step S413, when the number of acquisitions reaches the predetermined number, the degrees of eye closure for every eye-image frame within the unit time have been collected, and the eyes-closed ratio is then calculated by the following formulas:
eyes-closed ratio = number of closed-eye frames / number of acquisitions;
blink frequency = number of closed-eye frames.
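The counting in step S413 can be sketched as follows. The 80% closed-frame threshold is an assumed value, and "blink frequency = number of closed frames" follows the formula given above.

```python
def eyelid_features(closures, closed_threshold=80.0):
    """closures: per-frame closure percentages over the sampling
    window. A frame counts as 'closed' at or above the (assumed)
    threshold. Returns (eyes-closed ratio, blink frequency)."""
    closed_frames = sum(1 for c in closures if c >= closed_threshold)
    ratio = closed_frames / len(closures)   # closed frames / acquisitions
    return ratio, closed_frames             # blink frequency per the text

# hypothetical closure percentages for ten acquisitions
ratio, blinks = eyelid_features([10, 90, 95, 20, 85, 5, 0, 30, 88, 15])
```

Here four of the ten frames exceed the assumed threshold, giving an eyes-closed ratio of 0.4.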
Further, in step S4, the mouth movement features include the yawn frequency. Determining the mouth movement features according to the mouth feature points in step S4 includes the following steps:
S421, each time the mouth feature points are extracted, obtaining the mouth-corner feature point, the highest upper-lip feature point and the lowest lower-lip feature point from the mouth feature points;
S422, calculating the angle at the mouth corner from the mouth-corner feature point, the highest upper-lip feature point and the lowest lower-lip feature point, to determine the degree of mouth closure;
Specifically, step S422 includes:
calculating the mouth-corner angle from the position coordinates of the mouth-corner feature point, the highest upper-lip feature point and the lowest lower-lip feature point;
determining the degree of mouth closure from the mouth-corner angle. For example, when the mouth-corner angle is smaller than a first angle threshold, the degree of mouth closure is set to 100%; when the mouth-corner angle reaches a second angle threshold, the degree of mouth closure is set to 0%; the first angle threshold is smaller than the second angle threshold.
S423, when the number of acquisitions reaches the predetermined number, determining the yawn frequency from all the degrees of mouth closure.
Specifically, in step S423, when the number of acquisitions reaches the predetermined number, the degrees of mouth closure for every head-image frame within the unit time have been collected; whenever the detected degree of mouth closure exceeds a mouth-closure threshold, a yawn is deemed to have occurred. The yawn frequency is then obtained by the formula: yawn frequency = number of yawns / number of acquisitions.
Further, in step S4, the head movement features include the nod frequency. Determining the head movement features according to the head feature points within the predetermined number of acquisitions in step S4 includes the following steps:
S431, each time the head feature points are extracted, obtaining the crown feature point from the head feature points, so as to plot a curve whose horizontal coordinate is the acquisition count and whose vertical coordinate is the vertical position of the crown feature point;
S432, when the number of acquisitions reaches the predetermined number, determining the number of wave crests appearing in the curve to obtain the nod frequency.
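The crest counting of step S432 can be sketched as below. The minimum rise of 5 units is an assumed noise margin (the patent does not specify one), and the input list stands in for the crown-point vertical positions over the sampling window.

```python
def nod_count(head_top_y, min_rise=5.0):
    """Count wave crests in the crown-point vertical-position curve.
    A crest is a sample exceeding both neighbours by at least
    min_rise, an assumed noise margin."""
    crests = 0
    for i in range(1, len(head_top_y) - 1):
        if (head_top_y[i] - head_top_y[i - 1] >= min_rise and
                head_top_y[i] - head_top_y[i + 1] >= min_rise):
            crests += 1
    return crests
```

Dividing the crest count by the number of acquisitions would give a per-frame nod frequency, mirroring the yawn-frequency formula above.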
Specifically, in step S4, the SVM classifier is constructed as follows:
S441, within a scheduled sampling period, collecting N eyes-closed ratios, N blink frequencies, N average eye-closing speeds, N yawn frequencies and N nod frequencies as sample data, and normalizing the sample data to obtain a sample set P, where N = 60. After normalization, a1,n denotes the N eyes-closed ratios, forming the first subsample of P; a2,n denotes the N blink frequencies, the second subsample; a3,n denotes the N average eye-closing speeds, the third subsample; a4,n denotes the N yawn frequencies, the fourth subsample; and a5,n denotes the N nod frequencies, the fifth subsample; where n = 1, 2, 3, ..., N.
In step S441, normalizing the sample data in each subsample specifically means determining the fatigue driving grade according to the value of the sample data; the fatigue driving grades are divided into five levels: fully alert, slight fatigue, moderate fatigue, severe fatigue and extreme fatigue.
S442, reducing the dimensionality of each subsample by principal component analysis so that the sample data in sample set P is halved, obtaining sample set P1;
S443, choosing 70% of the sample data from the reduced sample set P1 as the SVM training set D, with the remaining 30% of the sample data as the test set T;
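The PCA reduction of step S442 can be sketched with plain NumPy. This reading halves the feature dimensionality by projecting onto the leading principal components; the random matrix stands in for the normalized subsamples, and the exact reduction the patent intends (halving the data) is not fully specified.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the samples onto the leading principal components.
    X: (n_samples, n_features) matrix of normalized sample data."""
    Xc = X - X.mean(axis=0)             # centre each feature
    cov = np.cov(Xc, rowvar=False)      # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, top]            # scores on the top components

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 4))        # N = 60 samples, 4 features
X1 = pca_reduce(X, 2)                   # halve the dimensionality
```

The reduced set `X1` plays the role of sample set P1, from which the 70%/30% training/test split of step S443 is drawn.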
S444, letting the training set D = {(x1, l1), (x2, l2), ..., (xm, lm)}, where D ⊆ P1, li ∈ {−1, 1}, xi is the sample data and li is the sample label;
S445, assuming the training set can be linearly separated by a hyperplane ωᵀx + b = 0, where ω is the normal vector determining the hyperplane and b determines the distance from the origin to the hyperplane; the problem is then converted into the optimal-hyperplane problem:
min (1/2)‖ω‖² + c Σi ξi, subject to li(ωᵀxi + b) ≥ 1 − ξi, ξi ≥ 0,
where ξi is the slack variable relaxing the 0/1 loss and c is the penalty parameter;
S446, for the non-linearly-separable case, selecting the RBF radial basis kernel K(xi, xj) = exp(−‖xi − xj‖² / (2δ²)); the optimization problem of the SVM classifier is then finally converted into the selection of the parameter pair (C, δ), where δ ≥ 0 is the width parameter of the radial basis kernel, controlling its radial range of influence.
Specifically, in step S4, the classification model is trained and constructed as follows:
S451, with 2⁻¹⁰ ≤ C ≤ 2⁷ and 2⁻¹⁰ ≤ δ ≤ 2³ as the value ranges and 0.1 as the step, constructing all parameter pairs (C, δ) within the value ranges;
S452, successively taking each parameter pair (C, δ) as the initial parameters of the RBF-kernel SVM classifier, and obtaining the classification accuracy of each pair (C, δ) on the training set D by K-fold cross-validation (K-CV);
S453, choosing the parameter pair (Co, δo) with the highest classification accuracy as the optimal parameters, obtaining the model parameters of the required SVM classification model.
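Steps S451–S453 amount to a cross-validated grid search over (C, δ). A sketch using scikit-learn follows; the toy data, the coarser grid (the patent steps the exponents by 0.1), and the choice of K = 5 are all assumptions. Note that scikit-learn parameterises the RBF kernel by `gamma`, which corresponds to 1/(2δ²) for the width parameter δ used above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# toy two-class data standing in for the normalized fatigue features
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 5) - 2, rng.randn(20, 5) + 2])
y = np.array([-1] * 20 + [1] * 20)

# grid over C in [2^-10, 2^7] and the kernel width; a coarse
# exponent step of 3 keeps this sketch fast
param_grid = {"C": [2.0 ** e for e in range(-10, 8, 3)],
              "gamma": [2.0 ** e for e in range(-10, 4, 3)]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # K-CV, K=5
search.fit(X, y)
best = search.best_params_   # the analogue of (Co, delta_o)
```

The best-scoring pair then fixes the model parameters used for prediction on the test set T.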
Further, in step S4, model prediction is carried out as follows: using the model parameters (Co, δo) obtained in the above steps, SVM classification prediction is performed on the test set T to obtain the fatigue driving state.
Embodiment 2 of the present invention provides a fatigue driving detection device; the detection device stores a computer program, and the computer program is adapted to be executed to implement any one of the above fatigue driving detection methods.
Embodiment 3 of the present invention provides a fatigue driving detection device; as shown in Figure 4, the detection device includes a processor 1 and a memory 2 connected to the processor 1; the memory 2 stores a computer program adapted to be executed to implement any one of the above fatigue driving detection methods, and the processor 1 is configured to call the computer program in the memory 2 to execute any one of the above fatigue driving detection methods.
Compared with the prior art, the fatigue driving detection method and device of the invention have the following beneficial effects:
(1) On the one hand, the head image of the driver is acquired by a near-infrared 3D camera, from which the mouth movement features, the head movement features and the position coordinates of the eye feature points are obtained. On the other hand, the eye images of the driver are acquired by a near-infrared high-speed camera, which effectively avoids the limitation of the insufficient sampling rate of a standard camera; the eye images are searched using the position coordinates of the eye feature points to track the eyelid movement features, reducing the detection steps performed on the eye images, increasing the acquisition speed of the eyelid movement features and speeding up the judgment;
(2) Fatigue driving is judged by feeding the multiple features of eyelid movement, mouth movement and head movement into the SVM classifier, realizing fatigue driving detection; the variety of sample types makes the detection highly accurate;
(3) Tracking two consecutive frames of head images with a hollow rectangle in the LK optical flow method greatly reduces the amount of computation and further increases the speed of fatigue driving detection; moreover, searching the eye images acquired by the near-infrared high-speed camera in combination with the Kalman filtering algorithm further reduces the amount of computation and increases the detection speed;
(4) Fatigue driving state recognition is performed with a machine learning algorithm on the five types of sample data from the two image sources, namely the eyes-closed ratio, the blink frequency, the average eye-closing speed, the yawn frequency and the nod frequency, making the detection of the fatigue driving state more comprehensive and accurate.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form; any simple modifications, equivalent variations and alterations made to the above embodiment in accordance with the technical essence of the present invention, without departing from the technical solution of the present invention, still fall within the scope of the technical solution of the present invention.
Claims (10)
1. A fatigue driving detection method, comprising the following steps:
S1, acquiring a head image containing a human face by a near-infrared 3D camera;
S2, tracking two consecutive frames of said head images using the LK optical flow method, and extracting mouth feature points, head feature points and eye feature points from the tracked head images, the eye feature points including position coordinates of the eye feature points;
S3, searching for eye images in the images acquired by a near-infrared high-speed camera according to the position coordinates of the eye feature points, so as to track the eye feature points of the eye images;
S4, determining an eyelid movement feature, a mouth movement feature and a head movement feature in one-to-one correspondence with the eye feature points of the eye images, the mouth feature points and the head feature points within a predetermined number of acquisitions, and feeding them into an SVM classifier so that the SVM classifier judges the fatigue driving state, realizing fatigue driving detection.
2. The fatigue driving detection method according to claim 1, wherein step S2 comprises the following steps:
calculating the optical flow of tracking feature points between the two consecutive frames of head images using the LK optical flow method, the tracking feature points being a plurality of pixels located within a preset hollow rectangle in the first frame of the two consecutive frames of head images;
predicting, from the optical flow of the tracking feature points, the positions of the tracking feature points in the second frame of the two consecutive frames of head images;
calculating the displacements of the tracking feature points from their positions in the first frame of head image and their predicted positions in the second frame of head image;
sorting the displacements in ascending order to obtain the median displacement;
when the displacements satisfy a preset condition, extracting the mouth feature points, head feature points and eye feature points from the second frame of head image, the preset condition being that the displacements are smaller than the median displacement and the number of such displacements is greater than or equal to 50% of the number of tracking feature points.
3. The fatigue driving detection method according to claim 1, wherein the eye feature points further include velocity information of the eye feature points, and step S3 comprises the following steps:
determining an observed feature point according to the position coordinates, the observed feature point being used to indicate the eye feature point of the first frame of eye image in the two consecutive frames of eye images;
constructing the state-vector equation of the first frame of eye image from the position coordinates and the velocity information;
determining an observation model from the state vector and a preset observation matrix;
determining a search range for the second frame of eye image in the two consecutive frames of eye images from the state vector and the observation model;
when the observed feature point is found within the search range, setting the found observation point as the eye feature point of the second frame of eye image, thereby extracting the eye feature points of the eye image.
4. The fatigue driving detection method according to claim 1, wherein the eyelid movement features include an eyes-closed ratio, a blink frequency and an average eye-closing speed, and determining the eyelid movement features according to the eye feature points of the eye images within the predetermined number of acquisitions in step S4 comprises the following steps:
each time the eye feature points of an eye image are extracted, obtaining the eye-corner feature point, the upper-eyelid feature point and the lower-eyelid feature point from the eye feature points;
calculating the angle at the eye corner from the eye-corner feature point, the upper-eyelid feature point and the lower-eyelid feature point, to determine the degree of eye closure;
when the number of acquisitions reaches the predetermined number, calculating the eyes-closed ratio and the blink frequency from all the determined degrees of closure, and calculating the average eye-closing speed from the displacements of all the upper-eyelid feature points.
5. The fatigue driving detection method according to claim 4, wherein the mouth movement features include a yawn frequency, and determining the mouth movement features according to the mouth feature points in step S4 comprises the following steps:
each time the mouth feature points are extracted, obtaining the mouth-corner feature point, the highest upper-lip feature point and the lowest lower-lip feature point from the mouth feature points;
calculating the angle at the mouth corner from the mouth-corner feature point, the highest upper-lip feature point and the lowest lower-lip feature point, to determine the degree of mouth closure;
when the number of acquisitions reaches the predetermined number, determining the yawn frequency from all the degrees of mouth closure.
6. The fatigue driving detection method according to claim 4, wherein the head movement features include a nod frequency, and determining the head movement features according to the head feature points within the predetermined number of acquisitions in step S4 comprises the following steps:
each time the head feature points are extracted, obtaining the crown feature point from the head feature points, so as to plot a curve whose horizontal coordinate is the acquisition count and whose vertical coordinate is the vertical position of the crown feature point;
when the number of acquisitions reaches the predetermined number, determining the number of wave crests appearing in the curve to obtain the nod frequency.
7. The fatigue driving detection method according to claim 6, wherein in step S4 the fatigue driving state is detected as follows:
constructing a training set and a test set from the eyes-closed ratio, the blink frequency, the average eye-closing speed, the yawn frequency and the nod frequency;
constructing the SVM classifier from the training set;
constructing and training the classification model;
performing model prediction on the test set with the constructed classification model to obtain the fatigue driving state.
8. The fatigue driving detection method according to claim 7, wherein the training set and the test set are constructed as follows:
within a scheduled sampling period, collecting N eyes-closed ratios, N blink frequencies, N average eye-closing speeds, N yawn frequencies and N nod frequencies as sample data, and normalizing the sample data to obtain a sample set P, where N ≥ 2 and N is an integer;
reducing the dimensionality of the sample set P by principal component analysis to obtain a sample set P1;
choosing 70% of the sample data from the sample set P1 as the training set, with the remaining 30% of the sample data in the sample set P1 as the test set.
9. A fatigue driving detection device, wherein the detection device stores a computer program, and the computer program is adapted to be executed to implement the fatigue driving detection method according to any one of claims 1 to 8.
10. A fatigue driving detection device, comprising a processor and a memory connected to the processor, wherein the memory stores a computer program adapted to be executed to implement the fatigue driving detection method according to any one of claims 1 to 8, and the processor is configured to call the computer program in the memory to execute the fatigue driving detection method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810607250.XA CN109063545B (en) | 2018-06-13 | 2018-06-13 | Fatigue driving detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109063545A true CN109063545A (en) | 2018-12-21 |
CN109063545B CN109063545B (en) | 2021-11-12 |
Family
ID=64820785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810607250.XA Expired - Fee Related CN109063545B (en) | 2018-06-13 | 2018-06-13 | Fatigue driving detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109063545B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109664891A (en) * | 2018-12-27 | 2019-04-23 | 北京七鑫易维信息技术有限公司 | Auxiliary driving method, device, equipment and storage medium |
CN110532976A (en) * | 2019-09-03 | 2019-12-03 | 湘潭大学 | Method for detecting fatigue driving and system based on machine learning and multiple features fusion |
CN111950371A (en) * | 2020-07-10 | 2020-11-17 | 上海淇毓信息科技有限公司 | Fatigue driving early warning method and device, electronic equipment and storage medium |
CN112183220A (en) * | 2020-09-04 | 2021-01-05 | 广州汽车集团股份有限公司 | Driver fatigue detection method and system and computer storage medium |
CN112528815A (en) * | 2020-12-05 | 2021-03-19 | 西安电子科技大学 | Fatigue driving detection method based on multi-mode information fusion |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102436715A (en) * | 2011-11-25 | 2012-05-02 | 大连海创高科信息技术有限公司 | Detection method for fatigue driving |
CN103426184A (en) * | 2013-08-01 | 2013-12-04 | 华为技术有限公司 | Optical flow tracking method and device |
US20160078305A1 (en) * | 2004-12-23 | 2016-03-17 | Magna Electronics Inc. | Driver assistance system for vehicle |
CN106372621A (en) * | 2016-09-30 | 2017-02-01 | 防城港市港口区高创信息技术有限公司 | Face recognition-based fatigue driving detection method |
CN106682603A (en) * | 2016-12-19 | 2017-05-17 | 陕西科技大学 | Real time driver fatigue warning system based on multi-source information fusion |
CN107194346A (en) * | 2017-05-19 | 2017-09-22 | 福建师范大学 | A kind of fatigue drive of car Forecasting Methodology |
CN107704805A (en) * | 2017-09-01 | 2018-02-16 | 深圳市爱培科技术股份有限公司 | method for detecting fatigue driving, drive recorder and storage device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20211112 |