CN104123549A - Eye positioning method for real-time monitoring of fatigue driving - Google Patents

Eye positioning method for real-time monitoring of fatigue driving

Info

Publication number
CN104123549A
Authority
CN
China
Prior art keywords: image, eyes, frame, eye, face
Legal status: Granted
Application number: CN201410369776.0A
Other languages: Chinese (zh)
Other versions: CN104123549B (en)
Inventors: 赵安 (Zhao An), 梁万元 (Liang Wanyuan), 种银保 (Chong Yinbao)
Current Assignee: Second Affiliated Hospital of TMMU
Original Assignee: Second Affiliated Hospital of TMMU
Application filed by Second Affiliated Hospital of TMMU
Priority to CN201410369776.0A (CN104123549B)
Publication of CN104123549A
Application granted; publication of CN104123549B
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an eye positioning method for the real-time monitoring of fatigue driving, implemented with Matlab2012 software. The method comprises the following steps: step 1, initially locate the face and eyes to obtain a precise eye image; step 2, compute the absolute value of the adjacent-frame difference using a frame-difference method based on a skin color model in the YCbCr color space; step 3, judge from the binary difference image whether the head images of the two frames overlap; step 4, head displacement detection: detect the lateral head displacement dx and the longitudinal head displacement dy; step 5, predict the eye candidate region; step 6, refine the eye candidate region; step 7, repeat steps 2 to 6 to locate the eyes in the next frame. The method reduces the computation spent on face localization and raises the eye positioning speed and image processing frame rate while guaranteeing positioning accuracy, so that driving fatigue can be monitored from the state of the eyes in a timely and reliable way.

Description

An eye positioning method for real-time monitoring of fatigue driving
Technical field
The present invention relates to fatigue driving monitoring methods, and in particular to an eye positioning method for real-time monitoring of fatigue driving.
Background art
With social and economic development, the automobile has become an indispensable means of transportation in daily life. The steady growth in the number of automobiles has made travel and transport more convenient, but the frequent traffic accidents that accompany it cause enormous losses of life and property. Statistics from home and abroad show that accidents caused by fatigue driving account for 10% to 20% of all traffic accidents, so fatigue driving is one of the principal causes of traffic accidents, and research on fatigue driving monitoring technology has attracted increasing attention in recent years.
At present, fatigue driving monitoring technology mainly covers approaches based on physiological signals, on the driver's operating behavior, and on vehicle state. The physiological-signal approaches have not achieved good monitoring results, because the signals are non-stationary, the sensors are complex and contact-based, and the models are imperfect. For example, CN 102406507 A discloses "a driver fatigue monitoring method based on physiological signals", comprising a fatigue calibration method and a detection method. The calibration method collects, via sensors, the pulse peak value and frequency, heart rate and respiratory rate over N unit intervals to form a fatigue-feature calibration matrix, establishes a weight vector for each fatigue feature by principal component analysis, and applies the weights to the calibration matrix to build a calibrated fatigue vector. The detection method applies the calibration weights to the fatigue feature vector within a unit interval, computes the Mahalanobis distance between the feature vector and the calibrated vector, grades the driver's fatigue by the dispersion of that distance, and issues an early warning. This monitoring method is grounded in traditional Chinese medicine theory combined with modern signal processing to characterize the driver's fatigue.
CN103279752A discloses "an eye positioning method based on an improved Adaboost method and facial geometric features". Its steps are: step 1, train a face classifier and an eye classifier separately; step 2, use the trained face classifier to locate the face; step 3, use the trained eye classifier to find candidate eye regions within the upper 2/3 of the detected face region; step 4, use the geometric features inherent in the statistics of human faces to determine a geometric feature coefficient for each candidate eye pair; step 5, compute a decision metric d for each candidate eye pair; step 6, compare the decision metrics of the candidate pairs, where a smaller metric means higher confidence in that pair; the best eye pair, and hence the optimal eye positions, can then be determined. This method uses the inherent geometry of the face to further screen the detected eye regions and can determine the optimal eye positions accurately and effectively, but it cannot monitor the degree of eye fatigue.
A comparatively mature technique monitors eye fatigue through video by implementing PERCLOS (Percentage of Eyelid Closure Over the Pupil Over Time, the percentage of a given time for which the eyes are closed) monitoring. Current PERCLOS implementations repeat the same positioning steps for every frame. Although frame-by-frame localization can position the eyes accurately, it does not exploit the correlation between consecutive frames to skip some of the face and eye positioning steps, so the computational load is large, the eye positioning speed is hard to raise, and real-time performance is poor. Previous studies show that an eye closure generally lasts 0.2 to 0.3 s, and the PERCLOS index usually samples and analyzes the face over a one-minute time window. If the eye positioning method cannot meet the requirement of the sampling theorem in real time, it is difficult to monitor the open/closed state of the eyes accurately, momentary fatigue events are easily judged late or missed, and accidents are hard to avoid. System delay therefore strongly limits the usability of existing PERCLOS fatigue-driving monitoring methods. To obtain good fatigue monitoring and early-warning performance, the eye positioning method needs further study so that it meets the real-time requirement of the PERCLOS fatigue decision model. A sketch of the PERCLOS computation follows.
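As an illustration of how the PERCLOS index is evaluated, a minimal Matlab sketch is given below; the frame rate, the simulated eye-state vector and the 0.4 warning threshold are our assumptions for illustration, not values taken from the patent:

    fps      = 10;                        % assumed camera frame rate
    winLen   = 60 * fps;                  % one-minute analysis window, in frames
    isClosed = rand(1, winLen) < 0.15;    % stand-in for per-frame closed/open flags
    perclos  = sum(isClosed) / winLen;    % fraction of frames with eyes closed
    if perclos > 0.4                      % assumed warning threshold
        disp('fatigue warning');
    end

The faster each frame can be positioned, the higher the usable frame rate fps, which is exactly the real-time requirement discussed above.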
Summary of the invention
Aimed at the large computational load and low speed of existing frame-by-frame eye positioning methods, the present invention proposes an eye positioning method for real-time monitoring of fatigue driving. While guaranteeing accurate eye positioning, the method reduces the computation spent on face localization, thereby raising the image processing speed, realizing real-time monitoring of the open/closed state of the eyes and guaranteeing the reliability of the driving-fatigue monitoring system.
The method detects the movement of the eye position within the sampling interval from the difference between two adjacent frames in the YCbCr color space. With reference to the precise eye region located in the previous frame, the eye candidate region of the current frame can be determined, and refining that candidate region yields the precise eye region.
The eye positioning method for real-time monitoring of fatigue driving according to the present invention is implemented with Matlab2012 software and comprises the following steps:
Step 1, initially locate the face and eyes to obtain a precise eye image: first capture a clear color face image with the camera, then segment the face from the color image to obtain the face width Fw and the face height Fh; next, apply an existing eye positioning method to the first frame to obtain the rectangular region and exact position of the eyes in the first frame, i.e. the precise eye image, and record the eye position parameters {(x, y), w, h};
Step 2, compute the absolute value of the adjacent-frame difference using a frame-difference method based on a skin color model in the YCbCr color space: first convert the two adjacent color frames to the YCbCr color space, denoting the previous frame's YCbCr image by img1 and the current frame's by img2; then binarize img1 and img2 with the "skin color model" (Yuan Ying, Research on vision-based detection algorithms for driver fatigue driving, master's thesis, Shenyang University of Technology, 2010, p. 11), obtaining two binary images, the previous frame's denoted BW1 and the current frame's denoted BW2; finally subtract BW1 and BW2 and take the absolute value, obtaining the binary adjacent-frame difference image BW.
Step 3, judge from the binary difference image whether the head images in the two frames overlap: if they overlap, proceed to the next step; if they do not, return to step 1;
The overlap judgment works as follows (see the sketch after this paragraph): let A1 be the area of the region where the previous frame's binary image BW1 equals 1, A2 the corresponding area of the current frame's binary image BW2, and A3 the area where the binary difference image BW equals 1. If 0 ≤ A3 < A1 + A2, the two frames overlap; otherwise they do not.
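A minimal Matlab sketch of this overlap test, assuming the binary images BW1, BW2 and the difference image BW from step 2 are already in the workspace:

    A1 = sum(BW1(:));                 % white area of the previous frame's mask
    A2 = sum(BW2(:));                 % white area of the current frame's mask
    A3 = sum(BW(:));                  % white area of the difference image
    overlapped = (A3 >= 0) && (A3 < A1 + A2);   % the step-3 criterion
    if ~overlapped
        % no overlap: fall back to the full localization of step 1
    end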
Step 4, head displacement detection: detect the lateral head displacement dx and the longitudinal head displacement dy;
Step 5, eye candidate region prediction:
From the head displacement, use the "eye displacement prediction model" to predict the lateral eye displacement Dx and the longitudinal eye displacement Dy. Let the rectangle where the eyes actually lie in the previous frame be {(x, y), w, h}, where (x, y) is the coordinate of the rectangle's upper-left corner, w its width and h its height. From the displacements Dx, Dy and the previous frame's eye rectangle, the eye candidate region of the current frame can be determined.
Step 6, refine the eye candidate region (a sketch of sub-steps 1) to 6) follows this list):
1) Convert the candidate-region image to grayscale with the rgb2gray function in Matlab2012 software (rgb2gray is a function of the well-known Matlab Image Processing Toolbox; it converts a color image RGB to a grayscale image I and is called as I = rgb2gray(RGB));
2) Find the threshold T needed for gray-level threshold segmentation using the "maximum between-cluster variance method" (Nobuyuki Otsu, A threshold selection method from gray-level histograms, IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-9, no. 1, January 1979, pp. 62-66);
3) Threshold the image with T, obtaining a binary image;
4) Label the connected regions of the thresholded binary image with the bwlabel function in Matlab2012 software (bwlabel is a function of the Matlab Image Processing Toolbox that labels the connected regions in a binary image; called as L = bwlabel(BW, n), it returns a matrix L of the same size as BW containing the labels of the connected objects in BW, where n is normally 4 or 8, meaning 4-connectivity or 8-connectivity, with a default of 8);
The "connected component labeling method" comes from Digital Image Processing by R. C. Gonzalez et al. (U.S.), Publishing House of Electronics Industry, 1st edition, May 2004, 609 pages, ISBN 9787505398764.
Connected component labeling extracts, from a dot-matrix image consisting only of "0" pixels (normally background points) and "1" pixels (normally foreground points), the sets of "1" pixels that adjoin each other (4-neighborhood or 8-neighborhood).
5) Find the two connected regions of largest area in the labeling result and take them as the eye regions;
6) Crop from the original candidate-region image the parts corresponding to the regions found in 5); the resulting image is the precise eye region image;
7) With reference to Fig. 2, record the eye position parameters {(x, y), w, h}, replacing the {(x, y), w, h} of step 1.
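A minimal Matlab sketch of sub-steps 1) to 6), assuming eyeRegion holds the RGB crop of the candidate region; graythresh implements the maximum between-cluster variance (Otsu) method, and the inversion in sub-step 3), which makes the dark eye pixels 1, is our assumption rather than something the patent states:

    gray = rgb2gray(eyeRegion);              % 1) candidate region to grayscale
    T    = graythresh(gray);                 % 2) Otsu threshold
    bw   = ~im2bw(gray, T);                  % 3) threshold; invert so dark pixels = 1 (assumption)
    [L, n]  = bwlabel(bw, 8);                % 4) label 8-connected regions
    stats   = regionprops(L, 'Area', 'BoundingBox');
    [~, ix] = sort([stats.Area], 'descend'); % 5) two largest regions = the eyes
    eyeImgs = cell(1, min(2, n));
    for k = 1 : numel(eyeImgs)               % 6) crop the matching sub-images
        eyeImgs{k} = imcrop(eyeRegion, stats(ix(k)).BoundingBox);
    end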
Step 7, repeat steps 2 to 6 to locate the eyes in the next frame.
Further, the existing eye positioning method comprises four steps: skin color detection, face segmentation, gray integral projection, and morphological processing.
Further, the skin color model is:
98 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 170
where Cb and Cr are the two chrominance components of the YCbCr color space. Skin color detection is applied to img1 and img2 with this model: pixels satisfying the model are set to 1 and pixels not satisfying it are set to 0, yielding the binary images BW1 and BW2 respectively; finally BW1 and BW2 are subtracted and the absolute value taken, yielding the binary image BW, i.e. the adjacent-frame difference image in YCbCr space (see the sketch below).
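A minimal Matlab sketch of this binarization and frame difference, assuming frame1 and frame2 are the two adjacent RGB frames from the camera (rgb2ycbcr, like rgb2gray, is an Image Processing Toolbox function):

    img1 = rgb2ycbcr(frame1);              % previous frame in YCbCr
    img2 = rgb2ycbcr(frame2);              % current frame in YCbCr
    skin = @(im) im(:,:,2) >= 98 & im(:,:,2) <= 127 & ...   % Cb range
                 im(:,:,3) >= 133 & im(:,:,3) <= 170;       % Cr range
    BW1  = skin(img1);                     % binary skin mask, previous frame
    BW2  = skin(img2);                     % binary skin mask, current frame
    BW   = abs(double(BW1) - double(BW2)); % adjacent-frame difference image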
Further, the head displacement detection proceeds as follows (a sketch follows this list):
(1) Scan the image from top to bottom to find the first horizontal line containing white pixels, and take the image region of height Fh below this line, denoted p1;
(2) Take the upper 2/3 of p1, denoted p2;
(3) According to the left and right boundaries of the white region in p2, take the region within those boundaries, denoted p3; this is the approximate lateral range of head motion. Let the width of p3 be W and its height H;
(4) Taking the horizontal mid-axis y = H/2 of p3 as the boundary, take the strips above and below it that each occupy 30% of the image height, denoted p4; compute the maximum width of consecutive white pixels in each row of p4, denoted dx_i, where i ∈ [1, 0.6H]; take the mean of the dx_i as the lateral head displacement, denoted dx:
dx = (Σ dx_i) / (0.6H);
(5) Taking the vertical mid-axis x = W/2 of p3 as the boundary, take the strips to its left and right that each occupy 30% of the image width, denoted p5; compute the maximum height of consecutive white pixels in each column of p5, denoted dy_j, where j ∈ [1, 0.6W]; take the mean of the dy_j as the longitudinal head displacement, denoted dy:
dy = (Σ dy_j) / (0.6W).
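A Matlab function sketch of this procedure; the maxrun helper and the boundary rounding are our reading of the text, not code from the patent:

    function [dx, dy] = headDisplacement(BW, Fh)
    % BW: binary adjacent-frame difference image from step 2
    % Fh: face height from step 1
    top  = find(any(BW, 2), 1, 'first');                 % first row with white pixels
    p1   = BW(top : min(top + Fh - 1, size(BW, 1)), :);  % region of height Fh below it
    p2   = p1(1 : floor(2 * size(p1, 1) / 3), :);        % upper 2/3 of p1
    cols = find(any(p2, 1));                             % left/right white bounds
    p3   = p2(:, cols(1) : cols(end));                   % lateral extent of motion
    [H, W] = size(p3);
    p4  = p3(round(0.2 * H) : round(0.8 * H), :);        % 30% above/below the mid row
    dxi = zeros(size(p4, 1), 1);
    for i = 1 : size(p4, 1)
        dxi(i) = maxrun(p4(i, :));                       % widest white run per row
    end
    dx = sum(dxi) / (0.6 * H);                           % lateral head displacement
    p5  = p3(:, round(0.2 * W) : round(0.8 * W));        % 30% left/right of mid column
    dyj = zeros(size(p5, 2), 1);
    for j = 1 : size(p5, 2)
        dyj(j) = maxrun(p5(:, j)');                      % tallest white run per column
    end
    dy = sum(dyj) / (0.6 * W);                           % longitudinal head displacement
    end

    function m = maxrun(v)
    % length of the longest run of 1s in the 0/1 vector v
    d = diff([0, v, 0]);
    m = max([0, find(d == -1) - find(d == 1)]);
    end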
Beneficial effects of the present invention
The present invention is an eye positioning method built on adjacent-frame difference modeling in the YCbCr color space; it exploits the fact that this frame difference can detect the change of eye position over the sampling interval. If the eye displacement lies within the detectable range, the eye candidate region of the current frame can be chosen with reference to the precise eye position detected in the previous frame, which simplifies the positioning steps of conventional eye positioning methods; if the displacement lies outside the detectable range, a conventional eye positioning method is used instead. Under normal driving, the amplitude of the driver's head movement is usually small, so the probability of the eye displacement falling outside the detectable range is very low, and the overall real-time performance of the method is greatly improved.
Brief description of the drawings
Fig. 1 is the flowchart of the eye positioning method of the present invention;
Fig. 2 is the flowchart of the existing eye positioning method;
Fig. 3 is a schematic diagram of the eye position parameters;
Fig. 4 is a schematic diagram of the adjacent-frame difference method;
Fig. 5 is a schematic diagram of the refinement of the eye candidate region;
Fig. 6 is the scatter plot and fitted line of the manually detected eye lateral displacement Dx versus the automatically detected head lateral displacement dx over 200 frames;
Fig. 7 is the scatter plot and fitted line of the manually detected eye longitudinal displacement Dy versus the automatically detected head longitudinal displacement dy over 200 frames.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the eye positioning method for real-time monitoring of fatigue driving, implemented with Matlab2012 software, comprises the following steps:
Step 1, initially locate the face and eyes to obtain a precise eye image: first capture a clear color face image with the camera, then segment the face from the color image to obtain the face width Fw and the face height Fh; next, apply the "existing eye positioning method" to the first frame to obtain the rectangular region and exact position of the eyes in the first frame, i.e. the precise eye image, and record the eye position parameters {(x, y), w, h} (see Fig. 3);
The "existing eye positioning method" (see Fig. 2) comprises four steps: skin color detection, face segmentation, gray integral projection, and morphological processing.
Step 2, compute the absolute value of the adjacent-frame difference using the adjacent-frame difference model based on the YCbCr skin color model: first convert the two adjacent color frames to the YCbCr color space, denoting the previous frame's YCbCr image by img1 and the current frame's by img2; then binarize img1 and img2 with the "skin color model", obtaining the previous frame's binary image BW1 and the current frame's binary image BW2; finally subtract BW1 and BW2 and take the absolute value, obtaining the binary adjacent-frame difference image BW;
The skin color model is:
98 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 170
where Cb and Cr are the two chrominance components of the YCbCr color space. Skin color detection is applied to img1 and img2 with this model: pixels satisfying the model are set to 1 and pixels not satisfying it are set to 0, yielding the binary images BW1 (see Fig. 4(1)) and BW2 (see Fig. 4(2)); finally BW1 and BW2 are subtracted and the absolute value taken, yielding the binary image BW (see Fig. 4(3)).
Step 3, judge from the binary difference image whether the head images in the two frames overlap: if they overlap, proceed to the next step; if they do not, return to step 1;
The overlap judgment works as follows: let A1 be the area of the region where the previous frame's binary image BW1 equals 1, A2 the corresponding area of the current frame's binary image BW2, and A3 the area where the binary difference image BW equals 1. If 0 ≤ A3 < A1 + A2, the two frames overlap; otherwise they do not.
Step 4, head displacement detection: detect the lateral head displacement dx and the longitudinal head displacement dy.
The head displacement detection proceeds as follows:
(1) Scan the image from top to bottom to find the first horizontal line containing white pixels, and take the image region of height Fh (the face height) below this line, denoted p1;
(2) Take the upper 2/3 of p1, denoted p2;
(3) According to the left and right boundaries of the white region in p2, take the region within those boundaries, denoted p3; this is the approximate lateral range of head motion. Let the width of p3 be W and its height H;
(4) Taking the horizontal mid-axis y = H/2 of p3 as the boundary, take the strips above and below it that each occupy 30% of the image height, denoted p4; compute the maximum width of consecutive white pixels in each row of p4, denoted dx_i, where i ∈ [1, 0.6H]; take the mean of the dx_i as the lateral head displacement, denoted dx:
dx = (Σ dx_i) / (0.6H);
(5) Taking the vertical mid-axis x = W/2 of p3 as the boundary, take the strips to its left and right that each occupy 30% of the image width, denoted p5; compute the maximum height of consecutive white pixels in each column of p5, denoted dy_j, where j ∈ [1, 0.6W]; take the mean of the dy_j as the longitudinal head displacement, denoted dy:
dy = (Σ dy_j) / (0.6W).
Step 5, eye candidate region prediction: from the head displacement, use the "eye displacement prediction model" to predict the lateral eye displacement Dx and the longitudinal eye displacement Dy. The eye displacement prediction model is:
Dx = 1.2dx - 1.4, Dy = 0.9dy + 0.4
Let the rectangle where the eyes actually lie in the previous frame be {(x, y), w, h}, where (x, y) is the coordinate of the rectangle's upper-left corner, w its width and h its height. From the displacements Dx, Dy and the previous frame's eye rectangle, the eye candidate region of the current frame can be determined.
For each specific monitored subject, the position of the eyes is determined by the position of the head, so detecting the change in eye position from the head displacement is feasible. To establish a model that detects the change in eye position from the head displacement, the present invention captured 200 consecutive frames of the same person's head against the same background with the camera; during acquisition the head moved randomly left-right and back-forth within the camera's field of view while the eyes opened and closed naturally.
First, treating each pair of adjacent frames among the 200 as one data group and processing them with the YCbCr-space adjacent-frame difference method yields 199 groups of head position changes (dx_i, dy_i), i = 2, ..., 200.
Then, by manually selecting regions in the 200 frames, the minimal rectangle enclosing the eyes is marked in every frame, and the center point of each rectangle is taken as the eye center, denoted (x_i, y_i). Subtracting consecutive centers yields 199 groups of eye position changes:
Dx_i = |x_i - x_(i-1)|, Dy_i = |y_i - y_(i-1)|, i = 2, ..., 200
Referring to Figs. 6 and 7, the two data sets exhibit a strong linear relationship, with correlation coefficients:
RR(dx_i, Dx_i) = 0.9725
RR(dy_i, Dy_i) = 0.9219
This shows that the frame-to-frame eye position changes detected by the algorithm herein are linear in the manually detected frame-to-frame eye position changes. From these experimental results, the eye position change detection model is established herein as:
Dx = 1.2dx - 1.4, Dy = 0.9dy + 0.4
where dx and dy are the head displacement values detected by the adjacent-frame difference method, and Dx and Dy are the predicted eye displacements.
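A minimal Matlab sketch of step 5 built on this model; rect = [x y w h] is the previous frame's eye rectangle, and expanding it symmetrically by the predicted displacement is our assumption about how the candidate region is formed, not a formula the patent spells out (with the measured data, the coefficients themselves could be reproduced by p = polyfit(dxAuto, DxManual, 1) over the 199 pairs, where dxAuto and DxManual are assumed variable names):

    Dx = 1.2 * dx - 1.4;                 % predicted lateral eye displacement
    Dy = 0.9 * dy + 0.4;                 % predicted longitudinal eye displacement
    x = rect(1); y = rect(2); w = rect(3); h = rect(4);
    cand = [x - Dx, y - Dy, w + 2 * Dx, h + 2 * Dy];   % expanded candidate box (assumption)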
Step 6, refine the eye candidate region:
1) Convert the candidate-region image to grayscale with the rgb2gray function in Matlab2012 software (see Fig. 5(1));
2) Find the threshold T needed for gray-level threshold segmentation using the "maximum between-cluster variance method";
3) Threshold the image with T, obtaining a binary image (see Fig. 5(2));
4) Label the connected regions of the thresholded binary image with the bwlabel function in Matlab2012 software;
5) Find the two connected regions of largest area in the labeling result and take them as the eye regions (see Fig. 5(3));
6) Crop from the original candidate-region image the parts corresponding to the regions found in 5); the resulting image is the precise eye region image (see Fig. 5(4));
7) With reference to Fig. 2, record the eye position parameters {(x, y), w, h}, replacing the {(x, y), w, h} of step 1.
Because the eye candidate region is an expansion of the previous frame's eye rectangle, it may contain non-eye areas such as the eyebrows, which is why this further processing is needed.
Step 7, repeat steps 2 to 6 to locate the eyes in the next frame.
A simulation experiment programmed in Matlab2012 processed 200 consecutively captured face images. The average speed of the existing eye positioning method is 0.214 s per frame; with the method of the present invention it is 0.103 s per frame, i.e. 1/3 to 1/2 of the normal human eye closure time. Delayed and missed judgments of the eye fatigue state caused by slow image processing can therefore be effectively avoided.

Claims (4)

1. An eye positioning method for real-time monitoring of fatigue driving, implemented with Matlab2012 software, comprising the following steps:
Step 1, initially locate the face and eyes to obtain a precise eye image: first capture a clear color face image with the camera, then segment the face from the color image to obtain the face width Fw and the face height Fh; next, apply the "existing eye positioning method" to the first frame to obtain the rectangular region and exact position of the eyes in the first frame, i.e. the precise eye image, and record the eye position parameters {(x, y), w, h};
Step 2, compute the absolute value of the adjacent-frame difference using the frame-difference method based on the YCbCr skin color model: first convert the two adjacent color frames to the YCbCr color space, denoting the previous frame's YCbCr image by img1 and the current frame's by img2; then binarize img1 and img2 with the "skin color model", obtaining the previous frame's binary image BW1 and the current frame's binary image BW2; finally subtract BW1 and BW2 and take the absolute value, obtaining the binary adjacent-frame difference image BW;
Step 3, judge from the binary difference image whether the head images in the two frames overlap: if they overlap, proceed to the next step; if they do not, return to step 1;
the overlap judgment works as follows: let A1 be the area of the region where the previous frame's binary image BW1 equals 1, A2 the corresponding area of the current frame's binary image BW2, and A3 the area where the binary difference image BW equals 1; if 0 ≤ A3 < A1 + A2, the two frames overlap, otherwise they do not;
Step 4, head displacement detection: detect the lateral head displacement dx and the longitudinal head displacement dy;
Step 5, eye candidate region prediction: from the head displacement, use the "eye displacement prediction model" to predict the lateral eye displacement Dx and the longitudinal eye displacement Dy; the eye displacement prediction model is:
Dx = 1.2dx - 1.4, Dy = 0.9dy + 0.4
let the rectangle where the eyes actually lie in the previous frame be {(x, y), w, h}, where (x, y) is the coordinate of the rectangle's upper-left corner, w its width and h its height; from the displacements Dx, Dy and the previous frame's eye rectangle, the eye candidate region of the current frame can be determined;
Step 6, refine the eye candidate region:
1) convert the candidate-region image to grayscale with the rgb2gray function in Matlab2012 software;
2) find the threshold T needed for gray-level threshold segmentation using the "maximum between-cluster variance method";
3) threshold the image with T, obtaining a binary image;
4) label the connected regions of the thresholded binary image with the bwlabel function in Matlab2012 software;
5) find the two connected regions of largest area in the labeling result and take them as the eye regions;
6) crop from the original candidate-region image the parts corresponding to the regions found in 5); the resulting image is the precise eye region image;
7) with reference to Fig. 2, record the eye position parameters {(x, y), w, h}, replacing the {(x, y), w, h} of step 1;
Step 7, repeat steps 2 to 6 to locate the eyes in the next frame.
2. The eye positioning method for real-time monitoring of fatigue driving according to claim 1, wherein the "existing eye positioning method" of step 1 comprises four steps: skin color detection, face segmentation, gray integral projection, and morphological processing.
3. The eye positioning method for real-time monitoring of fatigue driving according to claim 1, wherein the "skin color model" of step 2 is:
98 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 170
where Cb and Cr are the two chrominance components of the YCbCr color space; skin color detection is applied to img1 and img2 with this model, pixels satisfying the model being set to 1 and pixels not satisfying it being set to 0, yielding the previous frame's binary image BW1 and the current frame's binary image BW2; finally BW1 and BW2 are subtracted and the absolute value taken, yielding the binary adjacent-frame difference image BW, i.e. the adjacent-frame difference image in YCbCr space.
4. The eye positioning method for real-time monitoring of fatigue driving according to claim 1, wherein the head displacement detection of step 4 proceeds as follows:
(1) scan the image from top to bottom to find the first horizontal line containing white pixels, and take the image region of height Fh (the face height) below this line, denoted p1;
(2) take the upper 2/3 of p1, denoted p2;
(3) according to the left and right boundaries of the white region in p2, take the region within those boundaries, denoted p3, which is the approximate lateral range of head motion; let the width of p3 be W and its height H;
(4) taking the horizontal mid-axis y = H/2 of p3 as the boundary, take the strips above and below it that each occupy 30% of the image height, denoted p4; compute the maximum width of consecutive white pixels in each row of p4, denoted dx_i, where i ∈ [1, 0.6H]; take the mean of the dx_i as the lateral head displacement dx:
dx = (Σ dx_i) / (0.6H);
(5) taking the vertical mid-axis x = W/2 of p3 as the boundary, take the strips to its left and right that each occupy 30% of the image width, denoted p5; compute the maximum height of consecutive white pixels in each column of p5, denoted dy_j, where j ∈ [1, 0.6W]; take the mean of the dy_j as the longitudinal head displacement dy:
dy = (Σ dy_j) / (0.6W).
CN201410369776.0A 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving Expired - Fee Related CN104123549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410369776.0A CN104123549B (en) 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410369776.0A CN104123549B (en) 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving

Publications (2)

Publication Number Publication Date
CN104123549A (en) 2014-10-29
CN104123549B CN104123549B (en) 2017-05-03

Family

ID=51768954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410369776.0A Expired - Fee Related CN104123549B (en) 2014-07-30 2014-07-30 Eye positioning method for real-time monitoring of fatigue driving

Country Status (1)

Country Link
CN (1) CN104123549B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4760349B2 (en) * 2005-12-07 2011-08-31 ソニー株式会社 Image processing apparatus, image processing method, and program
CN102122357B (en) * 2011-03-17 2012-09-12 电子科技大学 Fatigue detection method based on human eye opening and closure state
CN103700217A (en) * 2014-01-07 2014-04-02 广州市鸿慧电子科技有限公司 Fatigue driving detecting system and method based on human eye and wheel path characteristics
CN103839379B (en) * 2014-02-27 2017-05-10 长城汽车股份有限公司 Automobile and driver fatigue early warning detecting method and system for automobile

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Yan (徐艳): "Research on skin-color-based face detection methods and eye localization algorithms", China Excellent Doctoral and Master's Theses Full-text Database (Master), Information Science and Technology Series (monthly) *
Li Shangguo (李尚国): "Research on face detection and eye localization methods based on skin color and facial features", China Excellent Master's Theses Full-text Database, Information Science and Technology Series (monthly) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574820A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Fatigue drive detecting method based on eye features
CN105354985A (en) * 2015-11-04 2016-02-24 中国科学院上海高等研究院 Fatigue driving monitoring device and method
CN105354985B (en) * 2015-11-04 2018-01-12 中国科学院上海高等研究院 Fatigue driving monitoring apparatus and method
CN106447651A (en) * 2016-09-07 2017-02-22 遵义师范学院 Traffic sign detection method based on orthogonal Gauss-Hermite moment
CN106682603B (en) * 2016-12-19 2020-01-21 陕西科技大学 Real-time driver fatigue early warning system based on multi-source information fusion
CN106682603A (en) * 2016-12-19 2017-05-17 陕西科技大学 Real time driver fatigue warning system based on multi-source information fusion
CN106971194A (en) * 2017-02-16 2017-07-21 江苏大学 A kind of driving intention recognition methods based on the double-deck algorithms of improvement HMM and SVM
CN106971194B (en) * 2017-02-16 2021-02-12 江苏大学 Driving intention recognition method based on improved HMM and SVM double-layer algorithm
CN107222660A (en) * 2017-05-12 2017-09-29 河南工业大学 A kind of distributed network visual monitor system
CN107240292A (en) * 2017-06-21 2017-10-10 深圳市盛路物联通讯技术有限公司 A kind of parking induction method and system of technical ability of being stopped based on driver itself
CN107248313A (en) * 2017-06-21 2017-10-13 深圳市盛路物联通讯技术有限公司 A kind of vehicle parking inducible system and method
CN108162893A (en) * 2017-12-25 2018-06-15 芜湖皖江知识产权运营中心有限公司 A kind of running control system applied in intelligent vehicle
CN110738602A (en) * 2019-09-12 2020-01-31 北京三快在线科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN112749604A (en) * 2019-10-31 2021-05-04 Oppo广东移动通信有限公司 Pupil positioning method and related device and product

Also Published As

Publication number Publication date
CN104123549B (en) 2017-05-03

Similar Documents

Publication Publication Date Title
CN104123549A (en) Eye positioning method for real-time monitoring of fatigue driving
CN104013414B (en) A kind of Study in Driver Fatigue State Surveillance System based on intelligent movable mobile phone
CN103824420B (en) Fatigue driving identification system based on heart rate variability non-contact measurement
CN110119676A (en) A kind of Driver Fatigue Detection neural network based
CN102013011B (en) Front-face-compensation-operator-based multi-pose human face recognition method
CN110728241A (en) Driver fatigue detection method based on deep learning multi-feature fusion
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN102122357B (en) Fatigue detection method based on human eye opening and closure state
CN101814137B (en) Driver fatigue monitor system based on infrared eye state identification
CN202257856U (en) Driver fatigue-driving monitoring device
CN105389554A (en) Face-identification-based living body determination method and equipment
CN110334600A (en) A kind of multiple features fusion driver exception expression recognition method
CN105354985A (en) Fatigue driving monitoring device and method
CN108446678A (en) A kind of dangerous driving behavior recognition methods based on skeleton character
CN102289660A (en) Method for detecting illegal driving behavior based on hand gesture tracking
CN103902976A (en) Pedestrian detection method based on infrared image
CN102902986A (en) Automatic gender identification system and method
CN104200199B (en) Bad steering behavioral value method based on TOF camera
CN111505632A (en) Ultra-wideband radar action attitude identification method based on power spectrum and Doppler characteristics
CN112016429A (en) Fatigue driving detection method based on train cab scene
CN107563346A (en) One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing
CN109948433A (en) A kind of embedded human face tracing method and device
CN114005167A (en) Remote sight estimation method and device based on human skeleton key points
CN110458093A A kind of Safe belt detection method and corresponding equipment based on driver's monitoring system
CN111144174A (en) System for identifying falling behavior of old people in video by using neural network and traditional algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170503

Termination date: 20210730