CN104809482A - Fatigue detecting method based on individual learning - Google Patents


Info

Publication number
CN104809482A
Authority
CN
China
Prior art keywords: face, threshold value, measured, samples, fatigue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510154342.3A
Other languages
Chinese (zh)
Inventor
袁杰
孙方轩
邱睿
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201510154342.3A priority Critical patent/CN104809482A/en
Publication of CN104809482A publication Critical patent/CN104809482A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a fatigue detection method based on individual learning. The method comprises the following steps: (1) a video of the subject's face is shot and frames are grabbed from it to obtain a sufficient number of samples; (2) the face region of every sample is segmented with a skin-color-based face segmentation method, and the approximate range of the eyes is determined from the facial proportions; (3) features of the eye region are extracted from all samples to learn a threshold accurately adapted to this subject's eye opening and closing, and detection for this subject is carried out against that threshold; (4) the learned threshold and the percentage of eyelid closure over time (PERCLOS) determine whether the subject is in a fatigue state. By adaptively learning the eye region of the individual subject, the method improves the specificity and accuracy of fatigue detection and offers a degree of novelty.

Description

A fatigue detection method based on individual learning
Technical field
The present invention relates to the field of image-based fatigue detection, and in particular to fatigue detection of drivers using a skin color segmentation algorithm.
Background technology
Driver fatigue is one of the major causes of traffic accidents and ranks first among the causes of traffic fatalities. As the number of vehicles keeps growing, fatigue driving is gradually becoming an important social problem, so developing fatigue detection systems is of great significance for improving traffic safety and protecting the lives and property of the public.
Among current fatigue detection algorithms, those based on PERCLOS are comparatively simple and effective: the eyes are located through facial skin color characteristics and facial geometric features, and whether the subject is fatigued is concluded from observation of the eyes.
Summary of the invention
Object of the invention: the problem to be solved by the invention is that current fatigue detection algorithms have poor specificity for subjects with different facial features, and often produce large deviations when a subject's facial features change significantly.
To solve the above technical problem, the invention discloses a fatigue detection method based on individual learning that improves the specificity and applicability of fatigue detection, comprising the following steps:
Step 1: shoot a video of the subject's face and grab frames from it to obtain a sufficient number of samples;
Step 2: segment the face region of every sample with a skin-color-based face segmentation method, and determine the approximate range of the eyes from the facial proportions;
Step 3: extract features of the eye region from all samples, learn a threshold accurately adapted to this subject's eye opening and closing, and use that threshold as the benchmark for detecting this subject;
Step 4: judge the subject's fatigue state with the learned threshold and the PERCLOS method.
In a preferred embodiment, in step 1 the frames are grabbed from the captured video at a high rate by frame-grabbing software, so that each picture approximates an instantaneous state of the face.
In a preferred embodiment, step 2 comprises the following steps:
Step (21): first apply noise reduction to the obtained sample pictures; then introduce the concept of a 'reference white': extract the pixels whose brightness lies in the top 5%, take the ratio of 255 (the brightness maximum) to their average brightness as the compensation coefficient, and scale all other pixels in the picture by this coefficient to achieve light compensation.
Step (22): convert the samples from their original RGB color space to the more convenient YCbCr color space;
Step (23): compute a skin color probability for every pixel of each sample picture from a Gaussian model of skin color in CbCr space, obtain an adaptive threshold with Otsu's method, and binarize the sample images by that threshold;
Step (24), face region segmentation: compute the face boundaries of the sample pictures with the integral projection method (integral projection plots of a test face are shown in Fig. 2 and Fig. 3) and segment the face region;
Step (25), approximate eye localization: determine the approximate range of the subject's eyes from the 'three courts, five eyes' rule of facial proportions.
In a preferred embodiment, step 3 comprises the following steps:
Step (31): compute the sum of pixel gray values in the eye region of every sample picture, and find the maximum and minimum of these sums;
Step (32): determine a threshold between the two values for open versus closed eyes according to the PERCLOS standard, and use that threshold as the benchmark for detecting this subject.
In a preferred embodiment, step 4 comprises the following steps:
Step (41): compare the eye-region gray value sum of each sample with the threshold to determine the eye state of that sample;
Step (42): compute the ratio of closed-eye samples to the total number of samples to obtain the proportion of total time spent with eyes closed; if it exceeds 15%, the subject can be considered to be in a fatigue state.
The principle of the invention is as follows. First, images of the subject's face are obtained by frame grabbing. Second, the approximate eye range of the subject is determined with the skin-color-based face segmentation algorithm and the integral projection method; features within the eye range are then extracted to obtain an open/closed-eye threshold specific to the subject. Finally, the learned threshold and the PERCLOS method are used to judge the subject's fatigue state.
Beneficial effects: the invention performs image processing in software, so after video acquisition it can carry out fatigue detection tailored to the individual subject; detection accuracy increases, and the method has broad application prospects in fields such as driver fatigue detection.
Brief description of the drawings
The present invention is further described below with reference to the drawings and specific embodiments, from which the above and other advantages of the invention will become more apparent.
Fig. 1 is the Gaussian skin color distribution model of the present invention.
Fig. 2 is the X-axis integral projection distribution of the subject's face.
Fig. 3 is the Y-axis integral projection distribution of the subject's face.
Fig. 4 is a simplified flow chart of the method of the invention.
Embodiments:
The core idea of the present invention is to learn the subject's eye features so that a threshold tailored to each different subject can be adopted; fatigue is finally judged by the PERCLOS method.
As shown in the simplified flow chart of Fig. 4, the invention discloses a fatigue detection method based on individual learning that improves the specificity and applicability of fatigue detection, comprising the following steps:
Step 1: shoot a video of the subject with a capture device as the basis for learning and judgment, and grab frames from the video at a high rate so that each picture approximates an instantaneous state of the face, obtaining a sufficient number of sample pictures.
Step 2: segment the face region of every sample with the skin-color-based face segmentation method and determine the approximate range of the eyes from the facial proportions. This comprises preprocessing the sample pictures, converting the color space, binarizing the images, segmenting the face region and locating the eyes approximately, as follows:
Step 21: the main idea of the light compensation algorithm is to introduce a 'reference white' against which the colors of the image are corrected. The 'reference white' is the brightness average of the pixels whose brightness lies in the top 5% of the image; light compensation is performed only when the number of such pixels reaches a certain level (for example, more than 100). The compensation algorithm sets the R, G and B gray values of the 'reference white' pixels to 255 and scales the R, G and B values of all other pixels in the image proportionally. The concrete procedure is as follows:
1) Convert the RGB color image to a gray image and compute the gray histogram of the gray image.
2) From the histogram, find the critical gray level GRAY that satisfies the threshold requirement (if the pixel count is below the reference threshold, no light compensation is performed).
3) Compute the gray mean Average of the pixels whose gray value lies in the range [GRAY, 255].
4) Compute the compensation coefficient:
Compensate = 255.0 / Average  (1)
5) Amplify the R, G and B components of the image by the compensation coefficient.
Step 22: in YCbCr, Y is the luminance component, Cb the blue-difference chroma component and Cr the red-difference chroma component. Compared with the RGB color space, YCbCr separates out the luminance information, which gives it a considerable advantage over RGB in color image processing.
The conversion between YCbCr and RGB is as follows:
Y = 0.298·R + 0.587·G + 0.114·B  (2)
Cb = -0.1687·R - 0.3313·G + 0.5·B + 128  (3)
Cr = 0.5·R - 0.4187·G - 0.0813·B + 128  (4)
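A minimal sketch of the conversion in Eqs. (2)-(4), using the coefficients exactly as the text gives them (note that standard references use 0.299 for the R term of Y; the function name is illustrative):

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image to YCbCr with the coefficients of Eqs. (2)-(4)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.298  * r + 0.587  * g + 0.114  * b           # Eq. (2)
    cb = -0.1687 * r - 0.3313 * g + 0.5    * b + 128.0   # Eq. (3)
    cr =  0.5    * r - 0.4187 * g - 0.0813 * b + 128.0   # Eq. (4)
    return np.stack([y, cb, cr], axis=-1)
```

For a neutral gray input (R = G = B) the chroma channels come out at 128, since the Cb and Cr coefficient rows each sum to zero.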
Step 23: the Gaussian model assumes that skin color follows a Gaussian distribution, i.e. that the distribution of skin color samples in the feature space satisfies a normal distribution. Instead of a general binary skin pixel localization, the Gaussian model computes a probability value for each pixel, forming continuous data: a skin color probability map, from which skin is confirmed according to the magnitude of the values. The Gaussian distribution of skin color is shown in Fig. 1; the higher the probability, the closer the pixel is to facial skin color. The distribution in the figure satisfies the following formulas:
P(Cb, Cr) = exp{ -0.5 (x - m)^T C^(-1) (x - m) }  (5)
m = [148.5632, 116.9231]^T  (6)
C = [[231.1231, 9.7823], [9.7823, 115.2362]]  (7)
These formulas give the skin color probability of every pixel. We normalize the probability of each pixel, multiply by 255 and build a gray image; the gray values of this image then follow the Gaussian distribution, with pixels closer to skin color having higher gray values.
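The probability map of Eqs. (5)-(7) can be sketched as below. The component order of m and C is taken from the text as written, and the function name is an assumption:

```python
import numpy as np

# Mean vector and covariance matrix of the CbCr skin model, Eqs. (6)-(7)
M = np.array([148.5632, 116.9231])
C = np.array([[231.1231,   9.7823],
              [  9.7823, 115.2362]])
C_INV = np.linalg.inv(C)

def skin_probability_map(cb, cr):
    """Eq. (5): P = exp{-0.5 (x-m)^T C^-1 (x-m)} per pixel, then the
    probabilities are normalised and scaled to a 0-255 gray image.
    cb and cr are 2-D float arrays of chroma values."""
    x = np.stack([cb, cr], axis=-1) - M          # (H, W, 2) deviations from m
    # quadratic form (x-m)^T C^-1 (x-m) evaluated per pixel
    q = np.einsum('...i,ij,...j->...', x, C_INV, x)
    p = np.exp(-0.5 * q)                         # in (0, 1], peaks at the mean
    return (p / p.max() * 255.0).astype(np.uint8)
```

Pixels whose chroma sits at the model mean map to gray value 255; pixels far from the mean decay toward 0.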
The binarization threshold is determined by Otsu's algorithm; the detailed process is as follows:
Let T be the segmentation threshold between foreground and background, let the foreground occupy a fraction W0 of the image pixels with average gray U0, and the background a fraction W1 with average gray U1.
The overall average gray of the image is then:
U = W0·U0 + W1·U1  (8)
The between-class variance of foreground and background is:
G = W0·(U0 - U)^2 + W1·(U1 - U)^2  (9)
When the variance G is maximal, the difference between foreground and background is considered maximal, and the gray level T at that point is the optimal threshold. The gray image can then be converted into a binary image by this threshold.
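A plain-NumPy sketch of Otsu's exhaustive search over Eqs. (8)-(9); production code would typically call a library routine (e.g. OpenCV's THRESH_OTSU), and the function name here is an assumption:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold T maximising the between-class variance
    G = W0*(U0-U)^2 + W1*(U1-U)^2 of Eq. (9) by exhaustive search."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:          # one class empty: skip this split
            continue
        u0 = (np.arange(t) * prob[:t]).sum() / w0
        u1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        u = w0 * u0 + w1 * u1           # overall mean gray, Eq. (8)
        g = w0 * (u0 - u) ** 2 + w1 * (u1 - u) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t

# usage sketch: binary = (gray_map >= otsu_threshold(gray_map)).astype(np.uint8)
```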
Step 24: after binarization, the face region is segmented with the integral projection method. Image integral projection sums the gray values of the image in the vertical or horizontal direction. Let the face image f(x, y) be of size M × N, where x and y index rows and columns respectively.
The horizontal integral projection of the image is defined as:
Py(x) = Σ_{y=1}^{N} f(x, y)  (10)
and the vertical integral projection as:
Px(y) = Σ_{x=1}^{M} f(x, y)  (11)
In addition, a face satisfies the following geometric properties:
1) The vertical integral projection value at the left and right boundaries of the face region is between 0.2 and 0.4 times the maximum vertical integral projection value.
2) The horizontal integral projection value at the upper boundary of the face region is approximately 1/2 of the face region width.
3) The vertical height of the face region is about 1.5 times its horizontal width.
The face region can be segmented by the above steps.
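The projections of Eqs. (10)-(11), and the left/right boundary rule 1), can be sketched as follows. The helper `face_column_bounds` and its 0.3 ratio are illustrative assumptions taken from within the 0.2-0.4 range the text gives:

```python
import numpy as np

def integral_projections(binary):
    """Eqs. (10)-(11): Py(x) sums row x over all columns y, and
    Px(y) sums column y over all rows x, of a binary face image."""
    py = binary.sum(axis=1)   # horizontal projection, one value per row
    px = binary.sum(axis=0)   # vertical projection, one value per column
    return py, px

def face_column_bounds(px, ratio=0.3):
    """Sketch of boundary rule 1): the face's left/right borders are where
    the vertical projection falls to 0.2-0.4x its maximum (0.3 assumed)."""
    thresh = ratio * px.max()
    cols = np.where(px >= thresh)[0]
    return cols[0], cols[-1]
```

On a binary image whose foreground occupies a vertical band, the bounds returned are the band's first and last columns.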
Step 25: since the face region has now been segmented fairly accurately, the eyes can be located from the general distribution of the face given by the 'three courts, five eyes' rule of facial proportions. The region containing the eyes is delimited within this general range and slightly enlarged, and is chosen as the band from 17/30 to 2/3 of the face region.
Step 3: extract the features of the eye region from all samples and learn the threshold accurately adapted to this subject's eye opening and closing.
Step 31: first compute the sum Sum of the gray values of all pixels in the eye box, do this for every picture to be examined, and find the maximum Max and minimum Min of Sum.
Step 32: whatever other features or color patches the face carries, the binary images of open and closed eyes necessarily differ greatly, so Max essentially represents the closed-eye state and Min the open-eye state. A threshold between the minimum and the maximum is therefore chosen as the dividing line for judging the eye state. We adopt the P70 standard of PERCLOS, under which the eye counts as closed when it is more than 70% shut, and place the threshold at roughly the 2/3 position between Min and Max.
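The per-individual threshold learning of steps 31-32 can be sketched as below; the 2/3 position follows the text, and the function names are assumptions:

```python
def learn_eye_threshold(eye_sums, position=2.0 / 3.0):
    """Per-individual threshold from the eye-region gray sums of the learning
    samples: roughly 2/3 of the way from Min toward Max, as the text chooses
    under the PERCLOS P70 convention (exact position assumed)."""
    lo, hi = min(eye_sums), max(eye_sums)
    return lo + position * (hi - lo)

def is_eye_closed(eye_sum, threshold):
    """Per the text, Max corresponds to closed eyes, so a sample whose
    eye-region gray sum exceeds the threshold is judged closed."""
    return eye_sum > threshold
```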
Step 4: judge the subject's fatigue state with the learned threshold and the PERCLOS method.
Step 41: for each sample picture, if the detected value is greater than the threshold, it is judged as closed-eye; otherwise it is judged as open-eye.
Step 42: according to the PERCLOS method, the subject can be considered fatigued if the proportion of closed-eye time exceeds 15%. Because frames were grabbed from the video at a high rate in the earlier steps, each picture can be taken to represent an instantaneous state of the subject, so the criterion reduces to the following: if the closed-eye sample pictures exceed 15% of all pictures, the subject is considered to be in a fatigue state. Once a single subject has been learned, the learning process is not needed again; if the subject changes, one learning process must be run to achieve accurate, subject-specific measurement.
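Because each high-rate frame approximates an instant, the PERCLOS decision of step 42 reduces to a fraction of closed-eye samples, as this sketch shows (names assumed):

```python
def is_fatigued(closed_flags, perclos_limit=0.15):
    """PERCLOS decision: closed_flags holds one open/closed boolean per
    sample frame; above a 15% closed fraction the subject is judged fatigued."""
    closed = sum(1 for c in closed_flags if c)
    return closed / len(closed_flags) > perclos_limit
```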
The invention provides a fatigue detection method based on individual learning. There are many concrete ways to implement this technical scheme, and the above is only a preferred embodiment of the invention. It should be pointed out that those skilled in the art can make several improvements and refinements without departing from the principle of the invention, and these improvements and refinements should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be implemented with existing technology.

Claims (5)

1. A fatigue detection method based on individual learning, characterized in that it comprises the following steps:
Step 1: shoot a video of the subject's face and grab frames from it to obtain a sufficient number of samples;
Step 2: segment the face region of every sample with a skin-color-based face segmentation method, and determine the approximate range of the eyes from the facial proportions;
Step 3: extract features of the eye region from all samples, learn a threshold accurately adapted to this subject's eye opening and closing, and use that threshold as the benchmark for detecting this subject;
Step 4: judge the subject's fatigue state with the learned threshold and the PERCLOS method.
2. The method according to claim 1, wherein step 1 comprises the following steps:
Step (11): shoot a video of moderate length of the subject's face;
Step (12): grab frames from the video at sufficiently small intervals to obtain a sufficient number of samples.
3. The method according to claim 1, wherein step 2 comprises the following steps:
Step (21): preprocess the obtained sample pictures, including noise reduction, light compensation and similar processing;
Step (22), color space conversion: convert the samples from their original RGB color space to the more convenient YCbCr color space;
Step (23), image binarization: compute a skin color probability for every pixel of each sample picture from a Gaussian model of skin color in CbCr space, obtain an adaptive threshold with Otsu's method, and binarize the sample images by that threshold;
Step (24), face region segmentation: compute the face boundaries of the sample pictures with the integral projection method (integral projection plots of a test face are shown in Fig. 2 and Fig. 3) and segment the face region;
Step (25), approximate eye localization: determine the approximate range of the subject's eyes from the 'three courts, five eyes' rule of facial proportions.
4. The method according to claim 1, wherein step 3 comprises the following steps:
Step (31): compute the sum of pixel gray values in the eye region of every sample picture, and find the maximum and minimum of these sums;
Step (32): determine a threshold between the two values for open versus closed eyes according to the PERCLOS standard, and use that threshold as the benchmark for detecting this subject.
5. The method according to claim 1, wherein step 4 comprises the following steps:
Step (41): compare the eye-region gray value sum of each sample with the threshold to determine the eye state of that sample;
Step (42): compute the ratio of closed-eye samples to the total number of samples to obtain the proportion of total time spent with eyes closed; if it exceeds 15%, the subject can be considered to be in a fatigue state.
CN201510154342.3A 2015-03-31 2015-03-31 Fatigue detecting method based on individual learning Pending CN104809482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510154342.3A CN104809482A (en) 2015-03-31 2015-03-31 Fatigue detecting method based on individual learning


Publications (1)

Publication Number Publication Date
CN104809482A true CN104809482A (en) 2015-07-29

Family

ID=53694293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510154342.3A Pending CN104809482A (en) 2015-03-31 2015-03-31 Fatigue detecting method based on individual learning

Country Status (1)

Country Link
CN (1) CN104809482A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600903A (en) * 2015-10-20 2017-04-26 阿里巴巴集团控股有限公司 Image-identification-based early-warning method and apparatus
WO2017067399A1 (en) * 2015-10-20 2017-04-27 阿里巴巴集团控股有限公司 Method and device for early warning based on image identification
CN106055894A (en) * 2016-05-30 2016-10-26 上海芯来电子科技有限公司 Behavior analysis method and system based on artificial intelligence
CN108742656A (en) * 2018-03-09 2018-11-06 华南理工大学 Fatigue state detection method based on face feature point location
WO2019169896A1 (en) * 2018-03-09 2019-09-12 华南理工大学 Fatigue state detection method based on facial feature point positioning
CN110765807A (en) * 2018-07-25 2020-02-07 阿里巴巴集团控股有限公司 Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium
CN110765807B (en) * 2018-07-25 2024-04-05 斑马智行网络(香港)有限公司 Driving behavior analysis and processing method, device, equipment and storage medium
CN109480808A (en) * 2018-09-27 2019-03-19 深圳市君利信达科技有限公司 A kind of heart rate detection method based on PPG, system, equipment and storage medium
CN109344802A (en) * 2018-10-29 2019-02-15 重庆邮电大学 A kind of human-body fatigue detection method based on improved concatenated convolutional nerve net
CN109344802B (en) * 2018-10-29 2021-09-10 重庆邮电大学 Human body fatigue detection method based on improved cascade convolution neural network


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150729