CN109409347A - A fatigue-driving detection method based on facial feature localization - Google Patents
A fatigue-driving detection method based on facial feature localization
- Publication number: CN109409347A
- Application number: CN201811609791.2A
- Authority
- CN
- China
- Prior art keywords
- mouth
- eyes
- image
- face
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
Abstract
A fatigue-driving detection method based on facial feature localization, relating to artificial-intelligence detection of driver fatigue. The invention judges the driver's fatigue state by jointly detecting eye and mouth features, avoiding the detection failures caused by eyeglasses. The detection steps are: one, image acquisition; two, image processing; three, face localization with an improved AdaBoost classifier; four, if a face is detected, proceed to the next step, otherwise return to step one; five, facial feature recognition; six, fatigue-state determination. Compared with conventional monitoring methods, the invention combines eye and mouth states for fatigue feature extraction, improving judgment accuracy and reducing the false-alarm rate of fatigue-driving detection.
Description
Technical field
The present invention relates to a method of detecting fatigue driving with artificial intelligence.
Background art
Traffic safety is a major issue facing the world today. Studies show that traffic accidents account for the largest share of non-natural human deaths; many people lose their lives, and national economies suffer large losses. To address the problems caused by fatigue driving, experts and scholars at home and abroad have developed many novel and effective methods. Fatigue-detection algorithms can now be roughly divided into two classes, subjective and objective. Subjective methods rely largely on the driver's own perception, or on observation by others, to judge the driver's real-time state; because of differences in individual physique and in the care of observation, there is no unified standard and accuracy cannot be guaranteed, so they usually serve only as an auxiliary means of detection. Objective methods are the focus of fatigue-detection research. They measure internal and external indicators of the driver and fall mainly into three categories: (1) using sensors to monitor the driver's physiological indicators, such as the electrocardiogram, breathing rate, and heart rate; (2) using machine vision to monitor external changes of the driver, such as yawning and eye closure; (3) using on-board sensors to monitor the vehicle's driving characteristics and the driver's behavior, such as speed, steering rate, and braking rate. Although these methods achieve a certain detection effect, most assume the driver is not wearing glasses, and generally detect and recognize only a single region, the eyes or the mouth. Machine-vision monitoring based on a single surface feature of the driver in particular tends to give unsatisfactory results, with false alarms and low accuracy.
Summary of the invention
The present invention judges the driver's fatigue state by jointly detecting eye and mouth features, avoiding the detection failures caused by eyeglasses. Compared with conventional monitoring methods, the present invention combines eye and mouth states for fatigue feature extraction, improving judgment accuracy and reducing the false-alarm rate of fatigue-driving detection.
The method of the present invention based on facial feature localization of fatigue driving comprises the following steps in order:
One, Image Acquisition;
Two, image processing: denoise the acquired image with an adaptive median filter and balance its illumination with an adaptive thresholding method;
Three, face localization with an improved AdaBoost classifier: a sample's weight is increased only when the weight is smaller than the current update threshold and the sample is misclassified; in all other cases the weight is reduced;
Four, if a face is detected, proceed to the next step; if no face is detected, return to step one;
Five, facial feature recognition
5.1 Eye localization: a Gabor eye model combined with a radial symmetry algorithm locates the eye positions;
5.2 Eye-state discrimination: the Otsu method converts the selected image into a gray-level histogram, and the gray value maximizing the between-class variance of background and target is taken as the threshold; after binarization the number of white pixels in the image is taken as the eye area. When the eyes are open, the number of white pixels in the eye region is far larger than when the eyes are closed. The eye area at a given moment is normalized by the largest eye area, the normalization formula being A = current_area / max_area, where current_area is the area of the white-pixel region and max_area is the largest eye area; if A > 0.6, the eyes are judged open; if A ≤ 0.6, the eyes are judged closed;
5.3 Mouth rough detection: using the geometric rule of the mouth's distribution within the face region, a rectangular region is selected by the formula, where (x_face, y_face) is the coordinate of the upper-left corner of the extracted face rectangle, (x_m0, y_m0) is the coordinate of the upper-left corner of the extracted mouth region, W_mouth is the width of the mouth rough-detection region, and H_mouth is its height;
5.4 Mouth-state discrimination: the image is binarized and the mouth area is computed; the gray value of the mouth region is denoted f_mouth(x, y), the image obtained after binarization is denoted B(x, y), and the threshold is set to 0.2;
The binarized image is then eroded: a structuring element is applied over all elements of the image, and a logical AND is performed between the structuring element and the binary image beneath it; if every result is 1, the processed pixel takes the new value 255, otherwise it takes the new value 0. The connected components of the processed image are extracted and compared with one another; the one with the largest area is taken as the connected component of the mouth region;
The edge of the mouth region in the image is extracted with the Sobel operator;
The mouth state is determined by a similarity-to-circle measure: the number of white pixels in the largest connected component of the image is counted and taken as the mouth area, and from the extracted edge the number of white pixels on the edge is taken as the mouth perimeter. The measure is denoted e, e ∈ [0, 1], the mouth area S, and the mouth perimeter P; the similarity to a circle of the mouth region is computed by the formula e = 4πS/P². If e < 0.4 the mouth is closed; if e ≥ 0.4 the mouth is open;
Six, fatigue-state determination
Consecutive pictures obtained from the video are examined: if the driver's eyes are detected closed and the driver is yawning, the state is judged fatigued; if the eyes are closed and the driver is not yawning, the state is judged fatigued; if the driver is yawning but the eyes are not closed, the state is judged not fatigued; if the eyes are not closed and the driver is not yawning, the state is judged not fatigued.
The method of the present invention has the advantage of high accuracy in detecting driver fatigue: it makes accurate judgments under different backgrounds and illumination intensities, with or without glasses, improving judgment accuracy, reducing the false-alarm rate of fatigue-driving detection, and potentially saving the lives of more drivers and pedestrians.
Brief description of the drawings
Fig. 1 is the original image used for denoising in step two of embodiment 1;
Fig. 2 is the image after adding salt-and-pepper noise in step two of embodiment 1;
Fig. 3 is the salt-and-pepper image after median filtering in step two of embodiment 1;
Fig. 4 is the salt-and-pepper image after Gaussian filtering in step two of embodiment 1;
Fig. 5 is the salt-and-pepper image after mean filtering in step two of embodiment 1;
Fig. 6 is the salt-and-pepper image after the adaptive median filtering of the present invention in step two of embodiment 1;
Fig. 7 is the original image under strong illumination used in the illumination-balancing experiment in step two of embodiment 1;
Fig. 8 is the illumination-component image under strong illumination in that experiment;
Fig. 9 is the adaptive-threshold illumination-balanced image under strong illumination in that experiment;
Fig. 10 is the original image under weak illumination used in the illumination-balancing experiment in step two of embodiment 1;
Fig. 11 is the illumination-component image under weak illumination in that experiment;
Fig. 12 is the adaptive-threshold illumination-balanced image under weak illumination in that experiment;
Fig. 13 is the face-localization result of the improved method of the present invention under weak illumination in step three of embodiment 1;
Fig. 14 is the face-localization result under strong illumination;
Fig. 15 is the face-localization result under strong illumination with a cluttered background;
Fig. 16 is the face-localization result under good illumination with a cluttered background;
Fig. 17 is the face-localization result with a simple background and strong illumination;
Fig. 18 is the face-localization result with a simple background and weak illumination;
Fig. 19 is the face-localization result with a complex background and strong illumination;
Fig. 20 is the face-localization result with a complex background and weak illumination;
Figs. 21–28 are the eye-localization results obtained by applying the eye-localization method of the present invention to Figs. 13–20, respectively;
Fig. 29 is an eyes-closed original image and its binary image;
Fig. 30 is an eyes-open original image and its binary image;
Fig. 31 is an eyes-half-open original image and its binary image;
Fig. 32 is an original image with glasses and its binary image;
Fig. 33 is the mouth rough-detection image in embodiment 1;
Fig. 34 is the result of eroding the binarized open-mouth image in embodiment 1;
Fig. 35 is the result of eroding the binarized half-open-mouth image in embodiment 1;
Fig. 36 is the result of eroding the binarized closed-mouth image in embodiment 1.
Specific embodiment
The technical solutions of the present invention are not limited to the specific embodiments listed below, but also include any combination of the specific embodiments.
Specific embodiment 1: in this embodiment, the method based on facial feature localization of fatigue driving is carried out according to the following steps:
One, Image Acquisition;
Two, image processing: denoise the acquired image with an adaptive median filter and balance its illumination with an adaptive thresholding method;
Three, face localization with an improved AdaBoost classifier: a sample's weight is increased only when the weight is smaller than the current update threshold and the sample is misclassified; in all other cases the weight is reduced;
Four, if a face is detected, proceed to the next step; if no face is detected, return to step one;
Five, facial feature recognition
5.1 Eye localization: a Gabor eye model combined with a radial symmetry algorithm locates the eye positions;
5.2 Eye-state discrimination: the Otsu method converts the selected image into a gray-level histogram, and the gray value maximizing the between-class variance of background and target is taken as the threshold; after binarization the number of white pixels in the image is taken as the eye area. When the eyes are open, the number of white pixels in the eye region is far larger than when the eyes are closed. The eye area at a given moment is normalized by the largest eye area, the normalization formula being A = current_area / max_area, where current_area is the area of the white-pixel region and max_area is the largest eye area; if A > 0.6, the eyes are judged open; if A ≤ 0.6, the eyes are judged closed;
5.3 Mouth rough detection: using the geometric rule of the mouth's distribution within the face region, a rectangular region is selected by the formula, where (x_face, y_face) is the coordinate of the upper-left corner of the extracted face rectangle, (x_m0, y_m0) is the coordinate of the upper-left corner of the extracted mouth region, W_mouth is the width of the mouth rough-detection region, and H_mouth is its height;
5.4 Mouth-state discrimination: the image is binarized and the mouth area is computed; the gray value of the mouth region is denoted f_mouth(x, y), the image obtained after binarization is denoted B(x, y), and the threshold is set to 0.2;
The binarized image is then eroded: a structuring element is applied over all elements of the image, and a logical AND is performed between the structuring element and the binary image beneath it; if every result is 1, the processed pixel takes the new value 255, otherwise it takes the new value 0. The connected components of the processed image are extracted and compared with one another; the one with the largest area is taken as the connected component of the mouth region;
The edge of the mouth region in the image is extracted with the Sobel operator;
The mouth state is determined by a similarity-to-circle measure: the number of white pixels in the largest connected component of the image is counted and taken as the mouth area, and from the extracted edge the number of white pixels on the edge is taken as the mouth perimeter. The measure is denoted e, e ∈ [0, 1], the mouth area S, and the mouth perimeter P; the similarity to a circle of the mouth region is computed by the formula e = 4πS/P². If e < 0.4 the mouth is closed; if e ≥ 0.4 the mouth is open;
Six, fatigue-state determination
Consecutive pictures obtained from the video are examined: if the driver's eyes are detected closed and the driver is yawning, the state is judged fatigued; if the eyes are closed and the driver is not yawning, the state is judged fatigued; if the driver is yawning but the eyes are not closed, the state is judged not fatigued; if the eyes are not closed and the driver is not yawning, the state is judged not fatigued.
Obviously, the weight-updating strategy of step three ensures that if a hard sample is repeatedly misclassified, its weight does not grow without bound, which improves the accuracy of the classifier of the present method.
To exclude the influence of talking on the detection result, if the driver's eyes are detected to remain closed, the system of the present method immediately concludes that the driver is in a fatigued state.
Specific embodiment 2: this embodiment differs from specific embodiment 1 in that the adaptive median filter in step two uses a 3 × 3 median-filter template, δ = 0.8, a Gaussian template, and a mean-filter template. The other steps and parameters are identical to specific embodiment 1.
Specific embodiment 3: this embodiment differs from specific embodiment 1 in that the minimum window of the adaptive median filter in step two is 3 and the maximum window is 19. The other steps and parameters are identical to specific embodiment 1.
Embodiment 1
The method based on facial feature localization of fatigue driving comprises the following steps in order:
One, image acquisition;
Two, image processing: denoise the acquired image with an adaptive median filter and balance its illumination with an adaptive thresholding method. The adaptive median filter uses a 3 × 3 median-filter template, δ = 0.8, a Gaussian template, and a mean-filter template; its minimum window is 3 and its maximum window is 19;
The original image (Fig. 1) was polluted with salt-and-pepper noise of density 0.4 (the noisy image is shown in Fig. 2), then filtered with a median filter (Fig. 3), a Gaussian filter (Fig. 4), a mean filter (Fig. 5), and the adaptive median filter of the present invention (Fig. 6). The denoising results show that median filtering and mean filtering achieve a certain denoising effect but cannot preserve the useful information in the image well; Gaussian filtering does not easily cause blurring distortion, but because its result depends on the parameter δ, its general applicability is poor. The adaptive median filter of the present invention not only filters out the noise in the image well but also effectively preserves the useful information; its denoising effect is good, so that the processed image is better suited to subsequent operations.
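The adaptive median filtering described above can be sketched as follows. This is a minimal illustration of the standard adaptive median algorithm using the embodiment's window sizes (minimum 3, maximum 19); the patent's exact combination with the Gaussian and mean templates and the parameter δ is not reproduced, so the code below is an assumption about the core filter only.

```python
import numpy as np

def adaptive_median_filter(img, w_min=3, w_max=19):
    """Standard adaptive median filter: start with a w_min x w_min
    window and grow it until the window median is not an impulse,
    up to w_max (window sizes taken from the embodiment)."""
    img = img.astype(np.uint8)
    pad = w_max // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = w_min
            while True:
                r = win // 2
                patch = padded[y + pad - r:y + pad + r + 1,
                               x + pad - r:x + pad + r + 1]
                z_min, z_max = patch.min(), patch.max()
                z_med = np.median(patch)
                z = img[y, x]
                if z_min < z_med < z_max:
                    # Median is not an impulse: keep z unless z itself
                    # is an impulse, in which case use the median.
                    out[y, x] = z if z_min < z < z_max else z_med
                    break
                win += 2
                if win > w_max:
                    out[y, x] = z_med
                    break
    return out
```

On a flat patch corrupted by a single salt pixel, the filter restores the background value while leaving clean pixels untouched, which is the "preserves useful information" behavior the embodiment reports.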
This embodiment balances the illumination of the acquired image with an adaptive-threshold method: the acquired image is first converted to the YCbCr space, the mean values Meb and Mer of Cb and Cr are obtained, and then the mean-square deviations Db and Dr of Cb and Cr are computed. The image is segmented; if Db and Dr of some part of the image are too small, i.e. the color difference of that part does not change significantly, that part is left unprocessed. The ratio of the maximum brightness in each channel to the mean value of the reference points is used as the channel gain, and the image is then adjusted so that its illumination is balanced after processing. Whether the image is acquired under strong light or in dark conditions, the adaptive illumination-balancing algorithm of the present method adjusts the illumination component of the image so that its brightness and color are equalized, eliminating the influence of uneven illumination on subsequent image processing and removing the effect of lighting brightness on the image.
Three, face localization with an improved AdaBoost classifier: a sample's weight is increased only when the weight is smaller than the current update threshold and the sample is misclassified; in all other cases the weight is reduced.
The AdaBoost algorithm is an iterative algorithm: the collected samples are trained to obtain different weak classifiers, these classifiers are cascaded into strong classifiers, and the strong classifiers are then combined to obtain the final classifier. On this basis, the present invention makes an improvement: a weight is adjusted upward only when the sample's weight is smaller than the current update threshold and the sample is misclassified; in all other cases the weight is reduced. This ensures that if a hard sample is repeatedly misclassified, its weight does not grow without bound, which improves the accuracy of the classifier. In the construction of a strong classifier, misclassified samples receive larger weights, strengthening training on those samples so that the accuracy improves continuously.
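The thresholded weight update can be sketched as follows. The patent does not give the exact update formulas, so the threshold parameter `w_thresh` and the use of the usual exponential AdaBoost factors are assumptions; only the capping rule itself comes from the text.

```python
import numpy as np

def update_weights(w, misclassified, alpha, w_thresh):
    """Thresholded AdaBoost-style update: a weight grows only if the
    sample is misclassified AND its weight is below w_thresh; every
    other weight shrinks. Then renormalize to a distribution."""
    w = np.asarray(w, dtype=np.float64)
    grow = misclassified & (w < w_thresh)
    new_w = np.where(grow, w * np.exp(alpha), w * np.exp(-alpha))
    return new_w / new_w.sum()
```

Note the contrast with standard AdaBoost, which increases the weight of every misclassified sample: here a hard sample whose weight already exceeds the threshold is no longer boosted, so its relative share stops growing.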
Fig. 13 shows the face-localization result of the improved method of the present invention under weak illumination; Fig. 14, under strong illumination; Fig. 15, under strong illumination with a cluttered background; Fig. 16, under good illumination with a cluttered background; Fig. 17, with a simple background and strong illumination; Fig. 18, with a simple background and weak illumination; Fig. 19, with a complex background and strong illumination; Fig. 20, with a complex background and weak illumination. In Figs. 13–20, this embodiment excludes the irrelevant regions and localizes the face accurately.
Four, if a face is detected, proceed to the next step; if no face is detected, return to step one;
Five, facial feature recognition
5.1 Eye localization: a Gabor eye model combined with a radial symmetry algorithm locates the eye positions;
The eyebrow positions are roughly estimated, the eye positions are roughly determined from the eyebrow positions, and the radial symmetry transform then rapidly finds the positions of the feature points; a template regularizes these feature points, giving the approximate eye region, and the step is repeated to reach accurate eye localization. For a face image, this embodiment chooses a suitable Gabor kernel according to the size of the face image, convolves, and then dilates the processed image, thereby obtaining the Gabor eye model. Figs. 21–28 are the eye-localization results obtained by applying the present eye-localization method to Figs. 13–20, respectively. It can be seen that the method localizes the eyes under different backgrounds and different illumination, the bounding boxes select the eye positions accurately, and the accuracy is high; the eyes are located even when the subject wears glasses.
5.2 Eye-state discrimination: the Otsu method converts the selected image into a gray-level histogram, and the gray value maximizing the between-class variance of background and target is taken as the threshold; after binarization the number of white pixels in the image is taken as the eye area. When the eyes are open, the number of white pixels in the eye region is far larger than when the eyes are closed. The eye area at a given moment is normalized by the largest eye area, the normalization formula being A = current_area / max_area, where current_area is the area of the white-pixel region and max_area is the largest eye area; if A > 0.6, the eyes are judged open; if A ≤ 0.6, the eyes are judged closed.
The white pixels of the eye region obtained by binarization give the eye area, from which the eye state is discriminated. As Figs. 29–32 show, the eye area is obtained after binarization even when glasses are worn; the analysis shows that the present method processes these cases well.
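The eye-state rule of section 5.2 can be sketched as follows. A plain Otsu threshold (maximizing between-class variance) stands in for the patent's histogram processing; the 0.6 cut on the normalized area comes from the text, while treating above-threshold pixels as the white eye region is an assumption.

```python
import numpy as np

def eye_openness(eye_gray, max_area):
    """Binarize the eye region with an Otsu threshold, count white
    pixels as the eye area, and normalize by the largest observed
    eye area; A > 0.6 means the eyes are open."""
    hist = np.bincount(eye_gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    mean_all = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum = cum_mean = 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total
        m0 = cum_mean / cum
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    binary = eye_gray > best_t          # assumed: white pixels = eye
    A = int(binary.sum()) / max_area    # normalized area
    return A, ("open" if A > 0.6 else "closed")
```

On a synthetic bimodal patch the threshold falls between the two modes, so the white-pixel count tracks the bright region's size, as the text's open/closed contrast requires.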
5.3 Mouth rough detection: using the geometric rule of the mouth's distribution within the face region, a rectangular region is selected by the formula, where (x_face, y_face) is the coordinate of the upper-left corner of the extracted face rectangle, (x_m0, y_m0) is the coordinate of the upper-left corner of the extracted mouth region, W_mouth is the width of the mouth rough-detection region, and H_mouth is its height, as shown in Fig. 33.
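A geometric rule of this kind can be sketched as follows. The patent's exact formula is not reproduced in this text, so the fractions below (mouth box in the lower third of the face, spanning the middle half horizontally) are illustrative assumptions, not the claimed coefficients; only the variable names follow section 5.3.

```python
def mouth_roi(x_face, y_face, w_face, h_face):
    """Return (x_m0, y_m0, W_mouth, H_mouth): a rough mouth rectangle
    derived from the face rectangle. The fractions are hypothetical
    stand-ins for the patent's geometric rule."""
    x_m0 = x_face + w_face // 4           # upper-left corner of mouth box
    y_m0 = y_face + (2 * h_face) // 3
    w_mouth = w_face // 2                 # rough-detection width
    h_mouth = h_face // 3                 # rough-detection height
    return x_m0, y_m0, w_mouth, h_mouth
```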
5.4 Mouth-state discrimination: the image is binarized and the mouth area is computed; the gray value of the mouth region is denoted f_mouth(x, y), the image obtained after binarization is denoted B(x, y), and the threshold is set to 0.2;
The binarized image is then eroded to reduce the influence of the background on mouth detection. The steps are: a structuring element is applied over all elements of the image, and a logical AND is performed between the structuring element and the binary image beneath it; if every result is 1, the processed pixel takes the new value 255, otherwise it takes the new value 0. The connected components of the processed image are extracted and compared with one another; the one with the largest area is taken as the connected component of the mouth region. The edge of the mouth region in the image is then extracted with the Sobel operator (the processed results are shown in Figs. 34–36);
The mouth state is determined by a similarity-to-circle measure: the number of white pixels in the largest connected component of the image is counted and taken as the mouth area, and from the extracted edge the number of white pixels on the edge is taken as the mouth perimeter. The measure is denoted e, e ∈ [0, 1], the mouth area S, and the mouth perimeter P; the similarity to a circle of the mouth region is computed by the formula e = 4πS/P². If e < 0.4 the mouth is closed; if e ≥ 0.4 the mouth is open.
Figs. 34–36 show the results obtained by processing different mouth states with the method of section 5.4. The invention acquires mouth images under different states and applies binarization and edge extraction to detect the mouth state. As Figs. 34–36 show, the number of white pixels of the mouth region, i.e. the mouth area, is easily obtained after binarization; the mouth edge is detected by Sobel edge detection, and the number of white pixels on the edge gives the mouth perimeter.
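The similarity-to-circle decision of section 5.4 can be written out as follows. The formula e = 4πS/P² is the standard circularity measure (1.0 for a perfect circle); the patent's own formula image is not reproduced in this text, so this reconstruction is an assumption, chosen to be consistent with e ∈ [0, 1] and the 0.4 decision threshold.

```python
import math

def mouth_state(area, perimeter):
    """Classify the mouth from its white-pixel area S and edge-pixel
    perimeter P using the circularity e = 4*pi*S / P**2: a wide-open
    (roughly circular) mouth gives e near 1, a closed slit gives a
    small e."""
    e = 4 * math.pi * area / (perimeter ** 2)
    return e, ("open" if e >= 0.4 else "closed")
```

For a circle of radius r, S = πr² and P = 2πr give e = 1 exactly; a long thin slit has a large perimeter relative to its area, so e falls below 0.4 and the mouth is judged closed.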
Six, fatigue-state determination
Consecutive pictures obtained from the video are examined: if the driver's eyes are detected closed and the driver is yawning, the state is judged fatigued; if the eyes are closed and the driver is not yawning, the state is judged fatigued; if the driver is yawning but the eyes are not closed, the state is judged not fatigued; if the eyes are not closed and the driver is not yawning, the state is judged not fatigued.
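The four-case rule above, and its application to consecutive frames, can be sketched as follows. Note that in the table yawning never changes the outcome, so the per-frame rule reduces to the eye state; the run length required over consecutive frames (`min_closed_run`) is an assumed parameter, motivated by the embodiment's remark about excluding the influence of talking via sustained eye closure.

```python
def frame_state(eyes_closed, yawning):
    """Step six's four cases for one frame: closed eyes imply fatigue
    whether or not the driver yawns; open eyes imply no fatigue."""
    return "fatigued" if eyes_closed else "not fatigued"

def driver_state(frames, min_closed_run=3):
    """Combine consecutive (eyes_closed, yawning) frame observations:
    declare fatigue once the eyes stay closed for min_closed_run
    consecutive frames (the run length is an assumption)."""
    run = 0
    for eyes_closed, yawning in frames:
        run = run + 1 if eyes_closed else 0
        if run >= min_closed_run:
            return "fatigued"
    return "not fatigued"
```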
Obviously, the weight-updating strategy of step three ensures that if a hard sample is repeatedly misclassified, its weight does not grow without bound, which improves the accuracy of the classifier of the present method.
In this embodiment a support vector machine with a mixed kernel is used. The mixed kernel is a combination of the K-type kernel and the logistic kernel, with formula K = n·K_logistic + (1 − n)·K_K-type, 0 ≤ n ≤ 1. For every value of n (0 ≤ n ≤ 1), the mixed kernel peaks at 0, so test points in the neighborhood of 0 have a large influence on the result; peaks also appear after 0, giving further points with a large influence. The mixed kernel therefore has both strong learning ability and wide applicability.
The support vector machine performs the fatigue-driving judgment: the identified eye and mouth states are fed in as data, and fatigue is discriminated from them.
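The convex combination of kernels can be sketched as follows. The patent does not define K_logistic or K_K-type in this text, so the two component kernels below (a sigmoid-shaped function of the distance and an RBF) are stand-ins chosen only to illustrate the mixing formula K = n·K_logistic + (1 − n)·K_K-type; they are not the claimed kernels.

```python
import numpy as np

def mixed_kernel(x, z, n=0.5, gamma=1.0):
    """Convex combination of two stand-in kernels, each equal to 1 at
    zero distance so the mixture also peaks at 0, matching the text's
    observation that the mixed kernel peaks at 0 for every n."""
    d2 = float(np.sum((np.asarray(x) - np.asarray(z)) ** 2))
    k_logistic = 2.0 / (1.0 + np.exp(gamma * np.sqrt(d2)))  # 1 at d = 0
    k_ktype = np.exp(-gamma * d2)                           # RBF stand-in
    return n * k_logistic + (1 - n) * k_ktype
```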
To exclude the influence of talking on the detection result, if the driver's eyes are detected to remain closed, the system of the present method immediately concludes that the driver is in a fatigued state.
Images of four subjects were acquired with the method of this embodiment to verify the algorithm described here; the statistical results are shown in Table 1. Subject 1: dark illumination, simple background, no glasses; subject 2: bright light, no glasses, simple background; subject 3: bright light, glasses, complex background; subject 4: moderate light, glasses, complex background.
Table 1. Fatigue-decision statistics
Subject | Eye state | Mouth state | Fatigue decision | True state | Result
Subject 1 | Closed | Yawning | Fatigued | Fatigued | Correct
Subject 2 | Closed | Closed | Fatigued | Fatigued | Correct
Subject 3 | Open | Yawning | Not fatigued | Not fatigued | Correct
Subject 4 | Open | Closed | Not fatigued | Not fatigued | Correct
The experiments prove that the subjects' states are judged accurately in all four cases above.
Claims (3)
1. A method based on facial feature localization of fatigue driving, characterized in that the method comprises the following steps in order:
One, Image Acquisition;
Step 2: image processing: the acquired image is denoised with an adaptive median filter, and adaptive thresholding is applied to the acquired image for illumination equalization;
Step 3: face localization based on an improved Adaboost classifier, wherein a sample's weight is adjusted upward only when the weight is smaller than the current update threshold and the sample is classified incorrectly; in all other cases the weight is decreased;
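The selective weight update in Step 3 can be sketched as follows (a minimal illustration, assuming a simple multiplicative step `beta`; the patent's improved Adaboost derives its update factors from each round's classification error):

```python
def update_weights(weights, correct, threshold, beta=0.5):
    """Selective weight update: a sample's weight is increased only if it
    is below the current update threshold AND the sample was classified
    incorrectly; every other sample's weight is decreased."""
    updated = []
    for w, is_correct in zip(weights, correct):
        if w < threshold and not is_correct:
            updated.append(w * (1.0 + beta))   # boost misclassified low-weight samples
        else:
            updated.append(w * (1.0 - beta))   # damp all other samples
    total = sum(updated)
    return [w / total for w in updated]        # renormalize to a distribution
```

Capping which samples may gain weight is what limits the influence of noisy outliers, the usual failure mode of plain Adaboost.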
Step 4: if a face is detected, proceed to the next step; if no face is detected, return to Step 1;
Step 5: facial feature recognition
5.1 Eye localization: a Gabor eye model combined with a radial symmetry algorithm locates the positions of the eyes;
5.2 Eye state discrimination: the Otsu method is used to process the selected image into a grayscale histogram, taking the value that maximizes the between-class variance of background and target as the basis for threshold selection; after binarization, the number of white pixels in the image is taken as the eye area, and when the eyes are open the number of white pixels in the eye region is far greater than when the eyes are closed. The driver's eye area at a given moment is normalized by the maximum eye area with the formula A = current_area / max_area, where current_area is the white-pixel region area and max_area is the maximum eye area; if A > 0.6, the eyes are judged open; if A ≤ 0.6, the eyes are judged closed;
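The normalization and 0.6 threshold of step 5.2 can be sketched as follows; the binarized eye region is assumed to be a 2-D array of 0/1 values produced by Otsu thresholding (1 = white = eye pixel):

```python
def eye_state(binary_eye_region, max_area):
    """Classify eye state from the normalized white-pixel area
    A = current_area / max_area, using the 0.6 threshold."""
    current_area = sum(sum(row) for row in binary_eye_region)  # count white pixels
    A = current_area / max_area                                # normalized eye area
    return ("open" if A > 0.6 else "closed"), A
```

Normalizing by the driver's own maximum eye area makes the threshold comparable across drivers with differently sized eyes.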
5.3 Mouth rough detection: using the geometric rule of the mouth's distribution within the face region, a rectangular part is selected by formula, where (x_face, y_face) is the coordinate of the upper-left corner of the extracted face rectangle, (x_m0, y_m0) is the coordinate of the upper-left corner of the extracted mouth part, W_mouth is the width of the mouth rough-detection part, and H_mouth is its height;
5.4 Mouth state discrimination: the image is binarized and the mouth area is computed, where f_mouth(x, y) denotes the gray value of the mouth part, B(x, y) denotes the image obtained after binarization, and the threshold is set to 0.2;
The binarized image is then eroded: a structuring element is applied across the image, and a logical AND is performed between the structuring element and the binary pixels it covers; if the result is all ones, the processed pixel is assigned the new value 255, otherwise it is assigned 0; the connected components of the processed image are then extracted and compared with one another to find the one of maximum area, and the region of maximum area is taken as the connected component of the mouth part;
The edge of the mouth part in the image is extracted with the Sobel operator;
The mouth state is determined with the circularity-likeness method: the number of white pixels inside the maximum connected component in the image is counted and regarded as the area of the mouth, and from the extracted edge the number of white pixels on the edge is obtained and regarded as the perimeter of the mouth; circularity-likeness is denoted by e, with e ∈ [0, 1], the mouth area by S, and the mouth perimeter by P, and the circularity-likeness of the mouth part is computed with the formula e = 4πS/P²; if e < 0.4 the mouth is closed, and if e ≥ 0.4 the mouth is open;
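The circularity-likeness decision can be sketched as follows; e = 4πS/P² is the standard compactness measure (1 for a perfect circle, approaching 0 for a thin slit) and is assumed here, since the original formula is not reproduced in the text:

```python
import math

def mouth_state(S, P):
    """Mouth open/closed decision from circularity-likeness
    e = 4*pi*S / P**2, with the 0.4 threshold from the claim."""
    e = 4.0 * math.pi * S / (P ** 2)       # 1.0 for a circle, ~0 for a slit
    return ("open" if e >= 0.4 else "closed"), e
```

A closed mouth is a long thin region (small area relative to perimeter, so low e), while a yawning mouth is nearly round (e close to 1), which is what makes a single threshold work.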
Step 6: fatigue state determination
From the continuous frames obtained from the video: if the driver's eyes are detected closed and yawning is detected, the state is judged fatigued; if the eyes are closed without yawning, the state is judged fatigued; if yawning is detected but the eyes are not closed, the state is judged not fatigued; if the eyes are neither closed nor yawning, the state is judged not fatigued.
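The four cases in Step 6 collapse to a single condition: fatigue is declared exactly when the eyes are detected closed, with yawning alone not sufficient (it may merely be the driver speaking). A minimal sketch:

```python
def fatigue_decision(eyes_closed, yawning):
    """Step 6 rule: the four listed eye/mouth combinations reduce to
    'fatigued iff the eyes are detected closed'."""
    return "fatigued" if eyes_closed else "not fatigued"
```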
2. The method for detecting fatigue driving based on facial feature recognition according to claim 1, characterized in that the adaptive median filtering in Step 2 uses a 3 × 3 median filter template, a Gaussian template with δ = 0.8, and a mean filter template.
3. The method for detecting fatigue driving based on facial feature recognition according to claim 1 or 2, characterized in that the minimum window of the adaptive median filter in Step 2 is 3 and the maximum window is 19.
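The adaptive median filtering of claims 2 and 3 can be sketched per pixel: the window grows from 3 × 3 toward the 19 × 19 maximum until the window median is no longer an impulse, and the original pixel is kept unless it is an impulse itself (standard adaptive median logic; the patent's exact variant may differ):

```python
def adaptive_median(img, y, x, smax=19):
    """Adaptive median filter at pixel (y, x): grow the window from 3 up
    to smax until the median lies strictly between the window min and
    max, then keep the pixel itself unless it is an extreme value."""
    h, w = len(img), len(img[0])
    s = 3
    while s <= smax:
        r = s // 2
        vals = sorted(
            img[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))
        )
        zmin, zmax, zmed = vals[0], vals[-1], vals[len(vals) // 2]
        if zmin < zmed < zmax:                 # median is not impulse noise
            return img[y][x] if zmin < img[y][x] < zmax else zmed
        s += 2                                 # enlarge the window and retry
    return zmed                                # window exhausted: use last median
```

Unlike a fixed median filter, this removes salt-and-pepper impulses while leaving uncorrupted pixels untouched, which is why it suits in-vehicle images with varying noise.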
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811609791.2A CN109409347A (en) | 2018-12-27 | 2018-12-27 | A method of based on facial features localization fatigue driving |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109409347A true CN109409347A (en) | 2019-03-01 |
Family
ID=65462221
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811609791.2A Pending CN109409347A (en) | 2018-12-27 | 2018-12-27 | A method of based on facial features localization fatigue driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109409347A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110188655A (en) * | 2019-05-27 | 2019-08-30 | 上海蔚来汽车有限公司 | Driving condition evaluation method, system and computer storage medium |
CN110319544A (en) * | 2019-07-04 | 2019-10-11 | 珠海格力电器股份有限公司 | Environmental management technique, device and air-conditioning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6927694B1 (en) * | 2001-08-20 | 2005-08-09 | Research Foundation Of The University Of Central Florida | Algorithm for monitoring head/eye motion for driver alertness with one camera |
CN102436714A (en) * | 2011-10-13 | 2012-05-02 | 无锡大麦创意设计有限公司 | Fatigue monitoring system |
CN104809445A (en) * | 2015-05-07 | 2015-07-29 | 吉林大学 | Fatigue driving detection method based on eye and mouth states |
Non-Patent Citations (2)
Title |
---|
Xu Ke (徐科) et al.: "Online Detection Technology for Metal Surface Quality" (《金属表面质量在线检测技术》), 31 October 2016 * |
Zou Xintong (邹昕彤): "Research on a Fatigue Driving Detection Algorithm Based on Expression and Head-State Recognition", China Master's Theses Full-Text Database, Engineering Science and Technology II (monthly) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106169081B (en) | A kind of image classification and processing method based on different illumination | |
CN107330465B (en) | A kind of images steganalysis method and device | |
CN102289660B (en) | Method for detecting illegal driving behavior based on hand gesture tracking | |
CN108053615B (en) | Method for detecting fatigue driving state of driver based on micro-expression | |
CN107292251B (en) | Driver fatigue detection method and system based on human eye state | |
EP1229493B1 (en) | Multi-mode digital image processing method for detecting eyes | |
CN111582086A (en) | Fatigue driving identification method and system based on multiple characteristics | |
CN105447503B (en) | Pedestrian detection method based on rarefaction representation LBP and HOG fusion | |
CN108986106A (en) | Retinal vessel automatic division method towards glaucoma clinical diagnosis | |
CN107729820B (en) | Finger vein identification method based on multi-scale HOG | |
CN106250801A (en) | Based on Face datection and the fatigue detection method of human eye state identification | |
CN101216887A (en) | An automatic computer authentication method for photographic faces and living faces | |
CN103268479A (en) | Method for detecting fatigue driving around clock | |
Rekhi et al. | Automated classification of exudates from digital fundus images | |
Cornforth et al. | Development of retinal blood vessel segmentation methodology using wavelet transforms for assessment of diabetic retinopathy | |
CN110728185B (en) | Detection method for judging existence of handheld mobile phone conversation behavior of driver | |
CN109242032B (en) | Target detection method based on deep learning | |
CN107895157B (en) | Method for accurately positioning iris center of low-resolution image | |
CN109543518A (en) | A kind of human face precise recognition method based on integral projection | |
Zhang et al. | A SVM approach for detection of hemorrhages in background diabetic retinopathy | |
CN110348461A (en) | A kind of Surface Flaw feature extracting method | |
CN106203338B (en) | Human eye state method for quickly identifying based on net region segmentation and threshold adaptive | |
CN110046565A (en) | A kind of method for detecting human face based on Adaboost algorithm | |
CN106557745A (en) | Human eyeball's detection method and system based on maximum between-cluster variance and gamma transformation | |
CN109409347A (en) | A method of based on facial features localization fatigue driving |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190301 |