CN106485191B - Method and system for detecting a driver's fatigue state - Google Patents
Method and system for detecting a driver's fatigue state
- Publication number
- CN106485191B CN106485191B CN201510555903.0A CN201510555903A CN106485191B CN 106485191 B CN106485191 B CN 106485191B CN 201510555903 A CN201510555903 A CN 201510555903A CN 106485191 B CN106485191 B CN 106485191B
- Authority
- CN
- China
- Prior art keywords
- state
- eyes
- eye
- driver
- confidence level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a method for detecting a driver's fatigue state. Eye SIFT features are fed into a classification model to compute confidence levels, and the open/closed state of the eyes is judged from those confidence levels. The judgment is unaffected by face angle, is rotation-invariant, measures the eye state accurately, and involves a simple computation, so the driver's fatigue state can be detected in time and the requirements of real-time detection can be met. In addition, the present invention also provides a driver fatigue state detection system.
Description
Technical field
The present invention relates to the field of safe driving, and in particular to a method and system for detecting a driver's fatigue state.
Background technique
With the rapid development of transportation, fatigued driving has become one of the main causes of frequent traffic accidents. Because of working conditions and long driving hours, drivers often fail to recognize in time that they are driving in a fatigued state, which creates a serious safety risk. Automatically detecting the driver's fatigue state is therefore an important means of preventing traffic accidents. A large body of experimental data shows that the percentage of time the eyes are closed within a unit of time correlates well with the degree of fatigue; detecting the state of the driver's eyes is thus of great significance.
In recent years, with the rapid development of image processing and pattern recognition, judging the driver's fatigue state by monitoring the driver's eyes through video has become a feasible approach. The key to eye-state detection is finding features that distinguish open eyes from closed eyes; the features commonly used by researchers include edge features such as the iris and eyelids, geometric features, and color features.
At present there are many methods for detecting the eye state, mainly including template matching, Hough-transform-based methods, and methods based on eye difference images under infrared illumination. Template matching requires many templates to be stored in advance, so the amount of stored information is large and the method is hard to popularize. Hough-transform ellipse detection is computationally expensive and has poor real-time performance. Infrared difference-image systems are complex to build and are easily disturbed by the light-source position, the illumination angle, and reflections from the facial skin.
Summary of the invention
In view of this, embodiments of the present invention provide a method and system for detecting a driver's fatigue state.
An object of the present invention is to provide a method for detecting a driver's fatigue state, in which a classification model is trained in advance on eye SIFT (Scale-Invariant Feature Transform) features, the classification model being used to compute a corresponding confidence level from an eye SIFT feature. The method comprises:
obtaining a face contour image of the driver;
normalizing the face contour image to obtain a normalized face image;
extracting eye SIFT features from the normalized face image, wherein the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature;
inputting the left-eye SIFT feature and the right-eye SIFT feature into the classification model, and computing a first confidence level for the left-eye SIFT feature and a second confidence level for the right-eye SIFT feature;
determining the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with a preset confidence interval;
determining that the driver is fatigued when the driver's eyes are in the closed state.
Optionally, determining the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with the preset confidence interval comprises:
determining a left-eye open state when the first confidence level is above the preset confidence interval, and a left-eye closed state when the first confidence level is below the preset confidence interval;
determining a right-eye open state when the second confidence level is above the preset confidence interval, and a right-eye closed state when the second confidence level is below the preset confidence interval;
determining that the driver's eyes are open when the left eye and/or the right eye is in the open state, and that the driver's eyes are closed when both the left eye and the right eye are in the closed state.
Optionally, determining the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with the preset confidence interval comprises:
determining a left-eye indeterminate state when the first confidence level lies within the preset confidence interval, and a right-eye indeterminate state when the second confidence level lies within the confidence interval; performing a joint-probability calculation on the first confidence level corresponding to the left-eye indeterminate state and the second confidence level corresponding to the right-eye indeterminate state to obtain a probability value; and determining that the driver's eyes are open when the probability value is greater than a preset threshold, and closed when the probability value is not greater than the preset threshold.
Optionally, normalizing the face contour image to obtain a normalized face image comprises:
obtaining the eye positions and the face contour size from the face contour image;
calculating the driver's face size, position, and pose features from the eye positions and the face contour size;
applying an image-mapping normalization to the face contour image according to the face size, position, and pose features to obtain the normalized face image.
Optionally, extracting eye SIFT features from the normalized face image comprises:
determining the image region required to compute the eye SIFT descriptor;
rotating the coordinate axes to the dominant orientation of the keypoint to ensure rotation invariance;
computing the orientation histogram of each seed point to form a feature vector;
normalizing the feature vector of the keypoint;
thresholding the descriptor vector to truncate out-of-range gradient values.
Optionally, after determining that the driver is fatigued when the driver's eyes are in the closed state, the method further comprises:
issuing an alert and/or decelerating the vehicle when the driver is fatigued, the alert including at least one of a sound prompt, a light prompt, and a vibration prompt.
A further object of the present invention is to provide a driver fatigue state detection system, in which a classification model is trained in advance on eye SIFT features and is used to compute the corresponding confidence levels. The system comprises:
a first extraction unit, configured to extract a face contour image of the driver;
a first processing unit, configured to normalize the face contour image to obtain a normalized face image;
a second extraction unit, configured to extract eye SIFT features from the normalized face image, wherein the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature;
a second processing unit, configured to input the left-eye SIFT feature and the right-eye SIFT feature into the classification model and compute a first confidence level for the left-eye SIFT feature and a second confidence level for the right-eye SIFT feature;
a first determination unit, configured to determine the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with a preset confidence interval;
a second determination unit, configured to determine that the driver is fatigued when the driver's eyes are in the closed state.
Optionally, the first determination unit is further configured to:
determine a left-eye open state when the first confidence level is above the preset confidence interval, and a left-eye closed state when the first confidence level is below the preset confidence interval;
determine a right-eye open state when the second confidence level is above the preset confidence interval, and a right-eye closed state when the second confidence level is below the preset confidence interval;
determine that the driver's eyes are open when the left eye and/or the right eye is in the open state, and that the driver's eyes are closed when both the left eye and the right eye are in the closed state.
Optionally, the first determination unit is further configured to:
determine a left-eye indeterminate state when the first confidence level lies within the preset confidence interval, and a right-eye indeterminate state when the second confidence level lies within the confidence interval; perform a joint-probability calculation on the first confidence level corresponding to the left-eye indeterminate state and the second confidence level corresponding to the right-eye indeterminate state to obtain a probability value; and determine that the driver's eyes are open when the probability value is greater than a preset threshold, and closed when the probability value is not greater than the preset threshold.
Optionally, the first processing unit is further configured to:
obtain the eye positions and the face contour size from the face contour image;
calculate the driver's face size, position, and pose features from the eye positions and the face contour size;
apply an image-mapping normalization to the face contour image according to the face size, position, and pose features to obtain the normalized face image.
Optionally, the second extraction unit is further configured to:
determine the image region required to compute the eye SIFT descriptor;
rotate the coordinate axes to the dominant orientation of the keypoint to ensure rotation invariance;
compute the orientation histogram of each seed point to form a feature vector;
normalize the feature vector of the keypoint;
threshold the descriptor vector to truncate out-of-range gradient values.
Optionally, the system further comprises:
a danger early-warning unit, configured to issue an alert and/or decelerate the vehicle when the driver is fatigued, the alert including at least one of a sound prompt, a light prompt, and a vibration prompt.
In the method and system for detecting a driver's fatigue state provided by the present invention, eye SIFT features are input into a classification model to compute confidence levels, and the open/closed state of the eyes is judged from those confidence levels. The judgment is unaffected by face angle, is rotation-invariant, measures the eye state accurately, and involves a simple computation, so the driver's fatigue state can be detected in time and the requirements of real-time detection can be met.
Brief description of the drawings
Fig. 1 is a flowchart of one embodiment of the method for detecting a driver's fatigue state provided by the present invention;
Fig. 2 is a flowchart of another embodiment of the method for detecting a driver's fatigue state provided by the present invention;
Fig. 3 is a structural diagram of an embodiment of the driver fatigue state detection system provided by the present invention.
Specific embodiments
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.
The terms "first", "second", "third", "fourth", and the like in the description, the claims, and the drawings are used to distinguish similar objects and are not used to describe a particular order or sequence. It should be understood that data so labeled are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. Moreover, the terms "comprising" and "having", and any variants thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to that process, method, product, or device.
A SIFT feature (Scale-Invariant Feature Transform) is a local image feature that is invariant to rotation, scaling, and brightness changes, and that remains stable to some degree under viewpoint changes, affine transformations, and noise.
An SVM (Support Vector Machine) is a learning method built on the VC-dimension theory of statistical learning and the principle of structural risk minimization. Given limited sample information, it seeks the best trade-off between model complexity (i.e., the learning accuracy on the specific training samples) and learning ability (i.e., the ability to classify arbitrary samples without error), so as to obtain the best generalization ability.
With reference to Fig. 1, the present invention provides a method for detecting a driver's fatigue state, in which a classification model is trained in advance on eye SIFT features; the classification model computes a corresponding confidence level from an eye SIFT feature. The method comprises the following steps.
The classification model here may be a support vector machine. It is trained in advance on a large number of eye SIFT features (features extracted both from open eyes and from closed eyes). After training, the classification model can determine the range of eye SIFT feature values for a normally open eye and for a closed eye. An image of the driver is then acquired in real time and processed to obtain a face contour image; the eye SIFT features extracted after this processing are fed into the classification model to obtain a confidence level that expresses the degree of similarity. The confidence level indicates whether the eye is open or closed: the higher the confidence level, the closer the eye is to the open state, and the lower the confidence level, the closer it is to the closed state. This is described in detail below.
S101: Extract a face contour image of the driver.
An ASM (Active Shape Model) algorithm may be used to locate the facial features, including the positions of the facial organs and the face contour. Specifically, methods such as coarse face detection, edge detection, and connected-region filtering may be used: for example, first coarsely detect the rough rectangular outer contour of the face, then perform edge detection, binarize the effective information, filter connected regions, correct the contour points, and apply vertical and horizontal projections to the priority region and the whole image to obtain an accurate facial contour. Scale transformation, histogram modification, and similar methods can then be used to obtain the normalized face image. No limitation is imposed here.
S102: Normalize the face contour image to obtain a normalized face image.
The normalized face image may be obtained by processing the face contour image described in step S101, for example with scale transformation and histogram modification. For instance, the eye positions and the face contour size may be obtained from the face contour image; the driver's face size, position, and pose features are calculated from the eye positions and the face contour size; and an image-mapping normalization is applied to the face contour image according to the face size, position, and pose features to obtain the normalized face image. No limitation is imposed here.
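The image-mapping step is left open by the description. A minimal geometric sketch (the canonical eye spacing `target_dist` is a hypothetical template parameter) derives the rotation and scale that would align the detected eye centers to a horizontal pair at a fixed distance:

```python
import math

def eye_alignment_transform(left_eye, right_eye, target_dist=60.0):
    """Compute the rotation angle (radians) and scale factor that map
    the detected eye centers to a horizontal pair target_dist pixels
    apart, as one possible pose-normalization step.

    left_eye/right_eye are (x, y) image coordinates; target_dist is an
    illustrative template value, not a number from the patent.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)              # in-plane head roll
    scale = target_dist / math.hypot(dx, dy)
    return angle, scale

# Eyes detected 100 px apart with a slight tilt.
angle, scale = eye_alignment_transform((100.0, 120.0), (200.0, 130.0))
```

The resulting angle and scale would then parameterize the image-mapping (e.g. a similarity warp) that produces the normalized face image.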
S103: Extract eye SIFT features from the normalized face image, wherein the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature.
The eye SIFT features can be extracted in a variety of ways; one is as follows:
S1: Determine the image region required to compute the eye SIFT descriptor.
The descriptor is tied to the scale at which the keypoint was found, so gradients should be computed on the Gaussian-smoothed image corresponding to that scale. The neighborhood around the keypoint is divided into d × d sub-regions, for example d = 4; each sub-region serves as one seed point, and each seed point has n orientations, for example n = 8.
S2: Rotate the coordinate axes to the dominant orientation of the keypoint to ensure rotation invariance.
S3: Compute the orientation histogram of each seed point to form the feature vector.
The sample points in the neighborhood are assigned to the corresponding sub-regions, and the gradient values within each sub-region are distributed over the n orientations, with each gradient value weighted. The rotated sample-point coordinates within the circle of the given radius are assigned to the d × d sub-regions; the gradient magnitude and orientation of each sample point affecting a sub-region are computed and distributed over the n orientations. Linear interpolation is used to compute the gradient of each of the n orientations of each seed point: the position of each sample point within its sub-region is linearly interpolated to compute its contribution to each seed point.
S4: Normalize the feature vector of the keypoint.
Once the feature vector has been formed, it is normalized to remove the influence of illumination changes. An overall drift in image gray values is already removed at this point, because the gradient at each image point is obtained by subtracting neighboring pixels.
S5: Threshold the descriptor vector to truncate out-of-range gradient values.
Non-linear illumination and changes in camera saturation can make the gradient values in certain directions excessively large while affecting the orientations only weakly. A threshold is therefore set (after the vector is normalized, larger gradient values are generally truncated at 0.2). The feature vector is then normalized again, which improves the distinctiveness of the feature.
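Steps S4 and S5 can be sketched directly; the 0.2 truncation threshold is the value quoted above, and the toy 4-component vector stands in for a full 128-dimensional descriptor:

```python
def finalize_descriptor(vec, clip=0.2):
    """Normalize a raw SIFT descriptor (S4), truncate large components
    at `clip` to damp non-linear illumination effects (S5), then
    renormalize to improve distinctiveness."""
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    unit = [v / norm for v in vec]             # first normalization (S4)
    clipped = [min(v, clip) for v in unit]     # truncation (S5)
    norm2 = sum(v * v for v in clipped) ** 0.5 or 1.0
    return [v / norm2 for v in clipped]        # renormalization

# One dominant gradient component gets flattened by the truncation.
desc = finalize_descriptor([10.0, 1.0, 1.0, 1.0])
```

Without the truncation, the first component would dominate the descriptor by a factor of ten; after clipping and renormalizing, its dominance is greatly reduced while the vector stays unit-length.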
It should be noted that judging with eye SIFT features is unaffected by face angle and is rotation-invariant; measuring the eye state according to the technical solution of the present invention is accurate, and the computation is simple and fast, so the requirements of real-time detection can be met.
Concretely, a 16 × 16 neighborhood centered on the keypoint may be taken as the sampling window; the relative orientation of each sample point with respect to the keypoint is Gaussian-weighted and accumulated into an orientation histogram with 8 bins, finally yielding a 4 × 4 × 8 = 128-dimensional descriptor. The way in which the SIFT feature is extracted is not limited to the above; other forms can also be used, and no limitation is imposed here.
One way of obtaining a confidence level from eye SIFT features is as follows. Once the SIFT feature vectors of two images have been generated, the Euclidean distance between keypoint feature vectors is used as the similarity measure for keypoints in the two images. For example, take some keypoint in the first image and, by traversal, find the two keypoints in the second image closest to it. If, among these two keypoints, the ratio of the smallest distance to the second-smallest distance is below a preset threshold, the pair is accepted as a matching pair and judged similar. The other keypoints are processed in the same way, the similarity of the two images is determined, and the confidence level of the eye SIFT feature is thereby obtained. From the confidence level it can be seen whether the eye in the current frame is close to open or close to closed, fully open or fully closed, and the confidence level also reveals the opening/closing trend of the eyes. This will be appreciated by those of ordinary skill in the art and is not repeated here.
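The distance-ratio matching described here is a Lowe-style ratio test. A minimal sketch (the 0.8 ratio is a conventional choice, not a value fixed by the patent):

```python
def ratio_match(desc_a, descs_b, ratio=0.8):
    """Accept the nearest neighbour of desc_a among descs_b only if it
    is clearly closer than the runner-up (Lowe-style ratio test).

    Returns the index of the matched descriptor, or None when the two
    nearest distances are too similar to call a reliable match.
    """
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    ranked = sorted((dist(desc_a, b), i) for i, b in enumerate(descs_b))
    (d1, i1), (d2, _) = ranked[0], ranked[1]
    return i1 if d1 < ratio * d2 else None

candidates = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
match = ratio_match([1.0, 0.0], candidates)
```

Counting the fraction of keypoints that survive this test is one plausible way to turn pairwise matches into the similarity (and hence confidence) described above.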
S104: Input the left-eye SIFT feature and the right-eye SIFT feature into the classification model, and compute the first confidence level of the left-eye SIFT feature and the second confidence level of the right-eye SIFT feature.
Step S103 described how the descriptors and keypoints are used to compute the similarity of eye SIFT features and hence a confidence level. In step S104 this method is applied to the left-eye and right-eye SIFT features: the confidence level corresponding to the left-eye SIFT feature is the first confidence level, and the confidence level corresponding to the right-eye SIFT feature is the second confidence level. The first confidence level indicates whether the left eye is open or closed, and the second confidence level indicates whether the right eye is open or closed. It should be noted that a confidence interval can be set in advance: a confidence level above the interval can be taken to mean the eye is open, one below the interval to mean the eye is closed, and one within the interval to mean the state is indeterminate, i.e. it cannot be reliably determined whether the eye is open or closed. In that case the confidence levels over three consecutive frames can be examined: when the confidence level is falling over three consecutive frames, the eye can be considered to be closing. The three consecutive frames may be the current frame together with the two frames before it, or with the two frames after it. A falling confidence level can be detected by subtracting the confidence level of the eye SIFT feature in the earlier frame from that in the later frame; a negative difference indicates a decline. This will be appreciated by those of ordinary skill in the art and is not repeated here.
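The per-eye decision rule, including the indeterminate interval and the three-frame decline check, can be sketched as follows (the interval bounds 0.4 and 0.6 are illustrative placeholders, not values from the patent):

```python
def eye_state(conf, low=0.4, high=0.6, history=()):
    """Classify one eye from its confidence level.

    Above `high` means open, below `low` means closed. Inside the
    [low, high] interval the state is indeterminate, and a confidence
    falling monotonically across three consecutive frames is read as
    the eye closing. Bounds are assumptions for illustration.
    """
    if conf > high:
        return "open"
    if conf < low:
        return "closed"
    frames = list(history) + [conf]
    if len(frames) >= 3 and frames[-1] < frames[-2] < frames[-3]:
        return "closed"          # declining over three frames
    return "indeterminate"

s1 = eye_state(0.9)                            # clearly open
s2 = eye_state(0.5, history=(0.55, 0.52))      # in interval, falling
s3 = eye_state(0.5, history=(0.45, 0.48))      # in interval, rising
```

The falling-confidence case resolves to "closed" while the rising case stays indeterminate, matching the trend-based reading described above.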
S105: Determine the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with the preset confidence interval.
Step S104 described how the relationship between a confidence level and the confidence interval determines whether an eye is open or closed. The comparison result for the first confidence level reveals the open/closed state of the left eye, and the comparison result for the second confidence level reveals that of the right eye. If at least one eye is judged to be open, the driver is considered to be in the open-eye state; only when both eyes are in the closed state is the driver considered to be in the closed-eye state.
The specific determination may proceed as follows: a left-eye open state is determined when the first confidence level is above the preset confidence interval, and a left-eye closed state when the first confidence level is below the interval; a right-eye open state is determined when the second confidence level is above the preset confidence interval, and a right-eye closed state when the second confidence level is below the interval; the driver is determined to be in the open-eye state when the left eye and/or the right eye is open, and in the closed-eye state when both the left eye and the right eye are closed.
It should be noted that when the eye state is indeterminate, i.e. it cannot be judged whether the eyes are open or closed, a joint probability can also be used to decide which state the driver is in. That is, a left-eye indeterminate state is determined when the first confidence level lies within the preset confidence interval, and a right-eye indeterminate state when the second confidence level lies within the interval; a joint-probability calculation is performed on the first confidence level corresponding to the left-eye indeterminate state and the second confidence level corresponding to the right-eye indeterminate state to obtain a probability value; the driver is determined to be in the open-eye state when the probability value is greater than a preset threshold, and in the closed-eye state when it is not. In the joint-probability calculation, the first confidence level and the second confidence level are each multiplied by a coefficient and summed to obtain the probability value. How to use such a joint-probability calculation will be appreciated by those of ordinary skill in the art and is not described in detail here. The preset threshold mentioned here can be obtained by collecting statistics on the open/closed states of drivers' eyes.
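The joint-probability step, as described, is a weighted sum of the two confidences compared against a threshold. A minimal sketch (the equal coefficients and the 0.5 threshold are placeholders; in practice they would be fitted from statistics on drivers' eye states):

```python
def joint_eye_decision(conf_left, conf_right,
                       w_left=0.5, w_right=0.5, threshold=0.5):
    """Combine two indeterminate per-eye confidences into one decision.

    Each confidence is multiplied by a coefficient and the products are
    summed; coefficients and threshold are illustrative assumptions.
    """
    p = w_left * conf_left + w_right * conf_right
    return "open" if p > threshold else "closed"

decision = joint_eye_decision(0.55, 0.58)   # both mildly open-leaning
```

With both confidences slightly above the midpoint, the combined probability exceeds the threshold and the driver is judged to be in the open-eye state.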
S106: Determine that the driver is fatigued when the driver's eyes are in the closed state.
Step S105 determines whether the driver's eyes are open or closed. When the driver's eyes are determined to be in the closed state, the driver can be judged to be driving in a fatigued state, because when a person becomes drowsy the eyes close automatically and reactions slow down, which is very dangerous while driving; detecting the driver's fatigue in time is therefore of great significance for safe driving. It should be noted that once the driver is judged to be fatigued, follow-up safety measures can also be taken as an early warning, for example automatically slowing the vehicle down or giving the driver a voice reminder; no specific limitation is imposed.
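The warning step can be sketched as a small dispatcher. The set of measures (sound, light, vibration, deceleration) comes from the description above, while the dispatch structure and the callable names are hypothetical:

```python
def on_fatigue_detected(actions):
    """Fire the configured warning measures once fatigue is detected.

    `actions` maps measure names to callables (e.g. a buzzer driver or
    a speed controller); the measure names are from the text, but this
    dispatch scheme is only an illustrative sketch.
    """
    fired = []
    for name in ("sound", "light", "vibration", "decelerate"):
        if name in actions:
            actions[name]()
            fired.append(name)
    return fired

log = []
fired = on_fatigue_detected({
    "sound": lambda: log.append("beep"),
    "decelerate": lambda: log.append("slow down"),
})
```

Only the measures actually configured are triggered, so a deployment can enable any combination of the prompts and the automatic deceleration.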
In the method for detecting a driver's fatigue state provided by the present invention, eye SIFT features are fed into a classification model to obtain confidence levels, and the open/closed state of the eyes is judged from those confidence levels. The judgment is unaffected by face angle, is rotation-invariant, measures the eye state accurately, and involves a simple computation, so the driver's fatigue state can be detected in time and the requirements of real-time detection can be met.
With reference to Fig. 2, the present invention also provides another embodiment of the method for detecting a driver's fatigue state, comprising:
S201: Extract a face contour image of the driver.
This is similar to step S101 in the previous embodiment and is not repeated here.
S202: Obtain the eye positions and the face contour size from the face contour image.
This is similar to step S102 in the previous embodiment and is not repeated here.
S203: Calculate the driver's face size, position, and pose features from the eye positions and the face contour size.
This is similar to step S102 in the previous embodiment and is not repeated here.
S204: Apply an image-mapping normalization to the face contour image according to the face size, position, and pose features to obtain a normalized face image.
This is similar to step S102 in the previous embodiment and is not repeated here. It should be noted that the face image can also be normalized in other ways; this will be appreciated by those of ordinary skill in the art and is not described here.
S205: Extract eye SIFT features from the normalized face image, wherein the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature.
It should be noted that the eye SIFT feature extraction in step S205 can be carried out as described in the previous embodiment; those skilled in the art will appreciate this, and it is not described here.
S206: Input the left-eye SIFT feature and the right-eye SIFT feature into the classification model, and compute the first confidence level of the left-eye SIFT feature and the second confidence level of the right-eye SIFT feature.
In this embodiment, eye SIFT features are extracted from the driver's eyes, the feature of each eye is fed into the classification model to obtain a confidence level, and the confidence level is used to judge whether the eye is open or closed. This improves the fault tolerance of the judgment, and the determination process is simple to operate.
S207: Determine the open/closed state of the driver's eyes from the comparison of the first confidence level and the second confidence level with the preset confidence interval; return to S201 when the driver's eyes are in the open state, and proceed to S208 when the driver's eyes are in the closed state.
A left-eye open state is determined when the first confidence level is above the preset confidence interval, and a left-eye closed state when the first confidence level is below the interval.
A right-eye open state is determined when the second confidence level is above the preset confidence interval, and a right-eye closed state when the second confidence level is below the interval.
The driver is determined to be in the open-eye state when the left eye and/or the right eye is open, and in the closed-eye state when both the left eye and the right eye are closed. Furthermore:
A left-eye indeterminate state is determined when the first confidence level lies within the preset confidence interval, and a right-eye indeterminate state when the second confidence level lies within the interval; a joint-probability calculation is performed on the first confidence level corresponding to the left-eye indeterminate state and the second confidence level corresponding to the right-eye indeterminate state to obtain a probability value; the driver is determined to be in the open-eye state when the probability value is greater than a preset threshold, and in the closed-eye state when it is not.
By carrying out corresponding operation for different situations, so that it is more accurate that eyes are opened with the detection closed, especially exist
When confidence level is in confidence interval, the first confidence level and the second confidence level are calculated using the method for joint probability and obtains probability
Value recycles preset threshold to judge that the corresponding driver of probability value is in eyes-open state or closed-eye state, adapts to various scenes,
Improve the flexibility of the method for the present invention.
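The case analysis of S207 can be sketched as follows. The interval bounds (0.4, 0.6), the joint-probability threshold 0.3, and the use of a simple product as the joint probability are all illustrative assumptions; the patent leaves the preset confidence interval, the preset threshold, and the exact joint calculation unspecified.

```python
def eye_state(conf, low, high):
    """Classify one eye from its confidence against the preset interval."""
    if conf > high:
        return "open"
    if conf < low:
        return "closed"
    return "unstable"  # confidence lies inside the interval

def driver_eye_state(c_left, c_right, low=0.4, high=0.6, joint_threshold=0.3):
    """Combine the per-eye decisions as described in step S207."""
    left = eye_state(c_left, low, high)
    right = eye_state(c_right, low, high)
    if left == "open" or right == "open":
        return "open"          # either eye open -> driver eyes-open
    if left == "closed" and right == "closed":
        return "closed"        # both eyes closed -> driver eyes-closed
    # At least one eye is unstable: combine the two confidences into a
    # joint probability and compare against the preset threshold.
    joint = c_left * c_right
    return "open" if joint > joint_threshold else "closed"
```

With these placeholder parameters, a clearly open eye on either side dominates the decision, and only the ambiguous interval triggers the joint-probability branch.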
S208: Determine that the driver is in a fatigue state when the driver's eyes are in the closed state.
Step S208 is similar to step S106 in the previous embodiment and is not repeated here.
S209: Issue an alarm or decelerate the vehicle, the alarm including at least one of a sound prompt, a light prompt, or a vibration prompt.
Safety measures need to be taken promptly once the driver is determined to be fatigued. The voice reminder may be, for example, "For your safety and the safety of others, please take a rest!"; the vibration prompt may be delivered through the seat; and the light prompt may use a flashing red light, among other options. Whether to decelerate the vehicle can be chosen flexibly according to the scenario: for example, when driving at high speed, a sudden deceleration is itself prone to cause an accident, so a voice prompt may be chosen instead. No limitation is imposed here.
A method for detecting the fatigue state of a driver has been described above. Correspondingly, the present invention also provides a driver fatigue state detection system, which is described below.
With reference to Fig. 3, the present invention provides an embodiment of a driver fatigue state detection system in which a classification model is trained in advance on eye SIFT features, the classification model being used to compare an eye SIFT feature and output a corresponding confidence level. The system comprises:
a first extraction unit 301 for extracting a face contour image of the driver;
a first processing unit 302 for normalizing the face contour image to obtain a normalized face image;
a second extraction unit 303 for extracting eye SIFT features from the normalized face image, where the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature;
a second processing unit 304 for inputting the left-eye SIFT feature and the right-eye SIFT feature into the classification model and calculating a first confidence level for the left-eye SIFT feature and a second confidence level for the right-eye SIFT feature, respectively;
a first determination unit 305 for determining the open/closed state of the driver's eyes according to the comparison of the first confidence level and the second confidence level against the preset confidence interval; and
a second determination unit 306 for determining that the driver is in a fatigue state when the driver's eyes are in the closed state.
Optionally, the first determination unit 305 is further configured to:
determine the left eye to be open when the first confidence level is above the preset confidence interval, and closed when the first confidence level is below the preset confidence interval;
determine the right eye to be open when the second confidence level is above the preset confidence interval, and closed when the second confidence level is below the preset confidence interval;
determine the driver to be in the eyes-open state when the left eye and/or the right eye is open, and in the eyes-closed state when both the left eye and the right eye are closed.
Optionally, the first determination unit 305 is further configured to:
determine the left eye to be in an unstable state when the first confidence level lies within the preset confidence interval, and the right eye to be in an unstable state when the second confidence level lies within the interval; perform a joint probability calculation on the first confidence level corresponding to the left-eye unstable state and the second confidence level corresponding to the right-eye unstable state to obtain a probability value; and determine the driver to be in the eyes-open state when the probability value is greater than a preset threshold, and in the eyes-closed state when it is not.
Optionally, the first processing unit 302 is further configured to:
obtain the eye positions and the face contour size from the face contour image;
calculate the face size, position, and posture features of the driver according to the eye positions and the face contour size;
normalize the face contour image by an image mapping method according to the face size, position, and posture features, to obtain a normalized face image.
Optionally, the second extraction unit 303 is further configured to:
determine the image region required for computing the eye SIFT descriptors;
rotate the coordinate axes to the orientation of each keypoint to ensure rotational invariance;
compute the orientation histogram of each seed point to form a feature vector;
normalize the feature vector of the keypoint;
threshold the descriptor sub-vectors so that out-of-range gradient values are truncated.
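The last two steps listed above (normalizing the feature vector and truncating out-of-range gradient values) follow standard SIFT descriptor post-processing. A minimal sketch, assuming Lowe's usual truncation value of 0.2:

```python
import numpy as np

def normalize_and_truncate(descriptor, cap=0.2):
    """SIFT descriptor post-processing: normalize to unit length,
    truncate components above `cap` (0.2 in standard SIFT) so that
    single large gradients do not dominate, then renormalize."""
    v = np.asarray(descriptor, dtype=float)
    n = np.linalg.norm(v)
    if n > 0:
        v = v / n
    v = np.minimum(v, cap)      # truncate out-of-range gradient values
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

This truncation is what gives the descriptor robustness to illumination changes such as glare on the driver's face.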
Optionally, the system further comprises:
a danger early-warning unit 307 for issuing an alarm or decelerating the vehicle when the driver is in a fatigue state, the alarm including at least one of a sound prompt, a light prompt, or a vibration prompt.
In the eye open/closed state monitoring system provided by the invention, eye SIFT features are extracted and input into a classification model to obtain confidence levels, and the open/closed state of the eyes is judged from the confidence levels. The judgment is therefore unaffected by face angle, possesses rotational invariance, measures the human eyes with high accuracy, and is simple to compute, so the driving fatigue state of the driver can be detected promptly, meeting the requirement of real-time detection.
It will be clearly appreciated by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, apparatus, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, may each exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, which may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The method and system for detecting the fatigue state of a driver provided by the present invention have been described in detail above. Those of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the invention.
Claims (10)
1. A method for detecting the fatigue state of a driver, characterized in that a classification model is trained in advance on eye scale-invariant feature transform (SIFT) features, the classification model being used to perform a calculation on an eye SIFT feature and output a corresponding confidence level, the method comprising:
obtaining a face contour image of the driver;
normalizing the face contour image to obtain a normalized face image;
extracting eye SIFT features from the normalized face image, wherein the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature;
inputting the left-eye SIFT feature and the right-eye SIFT feature into the classification model, and calculating a first confidence level for the left-eye SIFT feature and a second confidence level for the right-eye SIFT feature, respectively;
determining the open/closed state of the driver's eyes according to the comparison of the first confidence level and the second confidence level against a preset confidence interval;
determining that the driver is in a fatigue state when the driver's eyes are in the closed state;
wherein the extracting of the eye SIFT features from the normalized face image comprises:
determining the image region required for computing the eye SIFT descriptors;
rotating the coordinate axes to the orientation of each keypoint to ensure rotational invariance;
computing the orientation histogram of each seed point to form a feature vector;
normalizing the feature vector of the keypoint;
thresholding the descriptor sub-vectors so that out-of-range gradient values are truncated.
2. The method according to claim 1, wherein determining the open/closed state of the driver's eyes according to the comparison of the first confidence level and the second confidence level against the preset confidence interval comprises:
determining the left eye to be open when the first confidence level is above the preset confidence interval, and closed when the first confidence level is below the preset confidence interval;
determining the right eye to be open when the second confidence level is above the preset confidence interval, and closed when the second confidence level is below the preset confidence interval;
determining the driver to be in the eyes-open state when the left eye and/or the right eye is open, and in the eyes-closed state when both the left eye and the right eye are closed.
3. The method according to claim 1 or 2, wherein determining the open/closed state of the driver's eyes according to the comparison of the first confidence level and the second confidence level against the preset confidence interval comprises:
determining the left eye to be in an unstable state when the first confidence level lies within the preset confidence interval, and the right eye to be in an unstable state when the second confidence level lies within the interval; performing a joint probability calculation on the first confidence level corresponding to the left-eye unstable state and the second confidence level corresponding to the right-eye unstable state to obtain a probability value; and determining the driver to be in the eyes-open state when the probability value is greater than a preset threshold, and in the eyes-closed state when it is not.
4. The method according to claim 1, wherein normalizing the face contour image to obtain the normalized face image comprises:
obtaining the eye positions and the face contour size from the face contour image;
calculating the face size, position, and posture features of the driver according to the eye positions and the face contour size;
normalizing the face contour image by an image mapping method according to the face size, position, and posture features, to obtain the normalized face image.
5. The method according to claim 1, further comprising, after determining that the driver is in a fatigue state when the driver's eyes are in the closed state:
issuing an alarm or decelerating the vehicle when the driver is in the fatigue state, the alarm including at least one of a sound prompt, a light prompt, or a vibration prompt.
6. A driver fatigue state detection system, characterized in that a classification model is trained in advance on eye SIFT features, the classification model being used to perform a corresponding confidence calculation on an eye SIFT feature, the system comprising:
a first extraction unit for extracting a face contour image of the driver;
a first processing unit for normalizing the face contour image to obtain a normalized face image;
a second extraction unit for extracting eye SIFT features from the normalized face image, wherein the eye SIFT features include a left-eye SIFT feature and a right-eye SIFT feature;
a second processing unit for inputting the left-eye SIFT feature and the right-eye SIFT feature into the classification model and calculating a first confidence level for the left-eye SIFT feature and a second confidence level for the right-eye SIFT feature, respectively;
a first determination unit for determining the open/closed state of the driver's eyes according to the comparison of the first confidence level and the second confidence level against a preset confidence interval;
a second determination unit for determining that the driver is in a fatigue state when the driver's eyes are in the closed state;
wherein the second extraction unit is further configured to:
determine the image region required for computing the eye SIFT descriptors;
rotate the coordinate axes to the orientation of each keypoint to ensure rotational invariance;
compute the orientation histogram of each seed point to form a feature vector;
normalize the feature vector of the keypoint;
threshold the descriptor sub-vectors so that out-of-range gradient values are truncated.
7. The system according to claim 6, wherein the first determination unit is further configured to:
determine the left eye to be open when the first confidence level is above the preset confidence interval, and closed when the first confidence level is below the preset confidence interval;
determine the right eye to be open when the second confidence level is above the preset confidence interval, and closed when the second confidence level is below the preset confidence interval;
determine the driver to be in the eyes-open state when the left eye and/or the right eye is open, and in the eyes-closed state when both the left eye and the right eye are closed.
8. The system according to claim 6 or 7, wherein the first determination unit is further configured to:
determine the left eye to be in an unstable state when the first confidence level lies within the preset confidence interval, and the right eye to be in an unstable state when the second confidence level lies within the interval; perform a joint probability calculation on the first confidence level corresponding to the left-eye unstable state and the second confidence level corresponding to the right-eye unstable state to obtain a probability value; and determine the driver to be in the eyes-open state when the probability value is greater than a preset threshold, and in the eyes-closed state when it is not.
9. The system according to claim 6, wherein the first processing unit is further configured to:
obtain the eye positions and the face contour size from the face contour image;
calculate the face size, position, and posture features of the driver according to the eye positions and the face contour size;
normalize the face contour image by an image mapping method according to the face size, position, and posture features, to obtain a normalized face image.
10. The system according to claim 6, further comprising:
a danger early-warning unit for issuing an alarm or decelerating the vehicle when the driver is in a fatigue state, the alarm including at least one of a sound prompt, a light prompt, or a vibration prompt.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510555903.0A CN106485191B (en) | 2015-09-02 | 2015-09-02 | A kind of method for detecting fatigue state of driver and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106485191A CN106485191A (en) | 2017-03-08 |
CN106485191B true CN106485191B (en) | 2018-12-11 |
Family
ID=58237920
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510555903.0A Active CN106485191B (en) | 2015-09-02 | 2015-09-02 | A kind of method for detecting fatigue state of driver and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106485191B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704805B (en) * | 2017-09-01 | 2018-09-07 | 深圳市爱培科技术股份有限公司 | Method for detecting fatigue driving, automobile data recorder and storage device |
CN107578008B (en) * | 2017-09-02 | 2020-07-17 | 吉林大学 | Fatigue state detection method based on block feature matrix algorithm and SVM |
CN108372785B (en) * | 2018-04-25 | 2023-06-23 | 吉林大学 | Image recognition-based automobile unsafe driving detection device and detection method |
CN108615014B (en) | 2018-04-27 | 2022-06-21 | 京东方科技集团股份有限公司 | Eye state detection method, device, equipment and medium |
CN109241842B (en) * | 2018-08-02 | 2024-03-05 | 平安科技(深圳)有限公司 | Fatigue driving detection method, device, computer equipment and storage medium |
CN109192275A (en) * | 2018-08-06 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | The determination method, apparatus and server of personage's state of mind |
CN109190515A (en) * | 2018-08-14 | 2019-01-11 | 深圳壹账通智能科技有限公司 | A kind of method for detecting fatigue driving, computer readable storage medium and terminal device |
CN110059650A (en) * | 2019-04-24 | 2019-07-26 | 京东方科技集团股份有限公司 | Information processing method, device, computer storage medium and electronic equipment |
CN110582437A (en) * | 2019-05-31 | 2019-12-17 | 驭势(上海)汽车科技有限公司 | driving reminding method, driving state detection method and computing device |
CN111242065B (en) * | 2020-01-17 | 2020-10-13 | 江苏润杨汽车零部件制造有限公司 | Portable vehicle-mounted intelligent driving system |
JP7127661B2 (en) * | 2020-03-24 | 2022-08-30 | トヨタ自動車株式会社 | Eye opening degree calculator |
CN113454645B (en) * | 2021-05-27 | 2022-08-09 | 华为技术有限公司 | Driving state detection method and device, equipment, storage medium, system and vehicle |
CN113255558A (en) * | 2021-06-09 | 2021-08-13 | 北京惠朗时代科技有限公司 | Driver fatigue driving low-consumption identification method and device based on single image |
CN114220158A (en) * | 2022-02-18 | 2022-03-22 | 电子科技大学长三角研究院(湖州) | Fatigue driving detection method based on deep learning |
CN117079255B (en) * | 2023-10-17 | 2024-01-05 | 江西开放大学 | Fatigue driving detection method based on face recognition and voice interaction |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102096810A (en) * | 2011-01-26 | 2011-06-15 | 北京中星微电子有限公司 | Method and device for detecting fatigue state of user before computer |
CN102156871A (en) * | 2010-02-12 | 2011-08-17 | 中国科学院自动化研究所 | Image classification method based on category correlated codebook and classifier voting strategy |
CN103049740A (en) * | 2012-12-13 | 2013-04-17 | 杜鹢 | Method and device for detecting fatigue state based on video image |
CN103839379A (en) * | 2014-02-27 | 2014-06-04 | 长城汽车股份有限公司 | Automobile and driver fatigue early warning detecting method and system for automobile |
CN103971093A (en) * | 2014-04-22 | 2014-08-06 | 大连理工大学 | Fatigue detection method based on multi-scale LBP algorithm |
CN104688251A (en) * | 2015-03-02 | 2015-06-10 | 西安邦威电子科技有限公司 | Method for detecting fatigue driving and driving in abnormal posture under multiple postures |
- 2015-09-02: Application CN201510555903.0A filed (CN); patent CN106485191B, status Active.
Also Published As
Publication number | Publication date |
---|---|
CN106485191A (en) | 2017-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106485191B (en) | A kind of method for detecting fatigue state of driver and system | |
CN108830199B (en) | Method and device for identifying traffic light signal, readable medium and electronic equipment | |
CN104200192B (en) | Driver's gaze detection system | |
US10025998B1 (en) | Object detection using candidate object alignment | |
CN109506664B (en) | Guide information providing device and method using pedestrian crossing recognition result | |
CN106965675B (en) | A kind of lorry swarm intelligence safety work system | |
US10445602B2 (en) | Apparatus and method for recognizing traffic signs | |
CN105740779B (en) | Method and device for detecting living human face | |
KR101937323B1 (en) | System for generating signcription of wireless mobie communication | |
EP1868138A2 (en) | Method of tracking a human eye in a video image | |
Zhang et al. | A pedestrian detection method based on SVM classifier and optimized Histograms of Oriented Gradients feature | |
JP6351243B2 (en) | Image processing apparatus and image processing method | |
CN104281839A (en) | Body posture identification method and device | |
CN104915642B (en) | Front vehicles distance measuring method and device | |
CN108256454B (en) | Training method based on CNN model, and face posture estimation method and device | |
CN109977771A (en) | Verification method, device, equipment and the computer readable storage medium of driver identification | |
Kim et al. | Autonomous vehicle detection system using visible and infrared camera | |
CN103839056B (en) | A kind of method for recognizing human eye state and device | |
CN103544478A (en) | All-dimensional face detection method and system | |
Yazdi et al. | Driver drowsiness detection by Yawn identification based on depth information and active contour model | |
JP2009279186A (en) | Face detecting device and method | |
Escalera et al. | Fast greyscale road sign model matching and recognition | |
Panicker et al. | Open-eye detection using iris–sclera pattern analysis for driver drowsiness detection | |
Ribarić et al. | A neural-network-based system for monitoring driver fatigue | |
CN106407904A (en) | Bang zone determining method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |