CN112241658A - Fatigue driving early warning system and method based on depth camera - Google Patents

Fatigue driving early warning system and method based on depth camera

Info

Publication number
CN112241658A
CN112241658A (application CN201910646090.4A); granted as CN112241658B
Authority
CN
China
Prior art keywords
face
state
image
eyes
depth camera
Prior art date
Legal status
Granted
Application number
CN201910646090.4A
Other languages
Chinese (zh)
Other versions
CN112241658B (en
Inventor
张维忠
李金宝
李广文
Current Assignee
Qingdao Dianzhiyun Intelligent Technology Co ltd
Qingdao University
Original Assignee
Qingdao Dianzhiyun Intelligent Technology Co ltd
Qingdao University
Priority date
Filing date
Publication date
Application filed by Qingdao Dianzhiyun Intelligent Technology Co ltd, Qingdao University filed Critical Qingdao Dianzhiyun Intelligent Technology Co ltd
Priority to CN201910646090.4A priority Critical patent/CN112241658B/en
Publication of CN112241658A publication Critical patent/CN112241658A/en
Application granted granted Critical
Publication of CN112241658B publication Critical patent/CN112241658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners, strokes or intersections; connectivity analysis
    • G06V10/50: Extraction of features within image blocks or by using histograms, e.g. histogram of oriented gradients [HoG]; projection analysis
    • G06V40/168: Feature extraction; face representation
    • G06V40/171: Local features and components; facial parts, occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The invention relates to a fatigue driving early warning system and method based on a depth camera, comprising the depth camera, a feature point positioning module, a state recognition module and a driving state early warning module. Infrared image and depth image samples are acquired to obtain the face region and the set of facial feature points, from which the corresponding classification state and a fatigue state evaluation are calculated (step S2). To address feature recognition and extraction at night, the invention uses a depth camera to obtain infrared and depth images, judges the eye state from the eye aspect ratio of the frontal-face feature points or from the distance between the upper and lower eyelids of the side-face feature points, and evaluates the fatigue state jointly from the eye state and the mouth state. The method meets real-time detection requirements and has practical value for fatigue detection and early warning.

Description

Fatigue driving early warning system and method based on depth camera
Technical Field
The invention relates to the technical field of driving monitoring and early warning, in particular to a fatigue driving early warning system and method based on a depth camera.
Background
At present, China is a major automobile-manufacturing country whose vehicle fleet ranks second in the world. The ever-growing number of vehicles brings convenience of travel but also a series of problems, of which frequent traffic accidents are the most serious: the number of casualties has risen year by year, and a survey report of the U.S. National Highway Traffic Safety Administration (NHTSA) shows that accidents caused by fatigue driving account for a large proportion of the total.
Existing drowsiness/fatigue driving detection techniques fall into three main groups: (1) measuring physiological signals such as EEG, ECG and skin potential, whose main drawback is the need for body contact; (2) measuring physical responses such as blink frequency, blink duration, eye movement and head movement, which require no physical contact and are well accepted; (3) measuring vehicle- and road-related parameters such as speed, acceleration, lateral position and lane-marking position, whose drawback is that the measured information is unreliable.
For physical-response measurement, fatigue-detection algorithms have been updated continuously in recent years. Liu Ruian et al. performed head detection with a difference-image method and located the inner eye corner with a corner-extraction operator to track the eye state for blink detection; Zhu Zhenhua et al. used an infrared-sensitive camera to capture the driver's face image, located the eyes by deformable template matching, and then tracked the eye state by Kalman filtering; an Adaboost-based face and eye detection and localisation algorithm has also been proposed, with the eye state segmented by the Otsu algorithm; and another method extracts the driver's facial features with a convolutional neural network and judges fatigue in the last network layer. However, these methods mainly analyse images from ordinary cameras, are strongly affected by in-vehicle lighting, and are unsuitable in tunnels or in dark conditions at night; moreover, glasses worn by the driver interfere with the analysis of eye key points.
Disclosure of Invention
In order to overcome the defects, the invention aims to provide a fatigue driving early warning system and method based on a depth camera.
The technical scheme adopted by the invention for solving the technical problems is as follows: a fatigue driving early warning system based on a depth camera comprises the depth camera, a face feature point positioning module, a state recognition module and a driving state early warning module;
the depth camera is arranged in front of the driver and used for acquiring a set of frontal images of the driver both with and without glasses;
the face feature point positioning module extracts visual feature parameters of each frame of image of the depth camera, identifies a face after infrared image preprocessing, and further obtains face feature points by using an LBF feature model trained by a method combining random forest and global linear regression;
the state recognition module analyses the face extracted by the face feature point positioning module, extracts the eye state information and the mouth state information, and returns the recognition results for the eye state and the mouth state;
and the driving state early warning module comprehensively analyses the eye state and the mouth state to obtain a risk evaluation for the driver; if the evaluation exceeds a set value, a danger-state warning is issued to the driver.
The depth camera adopts an Intel Realsense depth camera.
A fatigue driving early warning method based on a depth camera comprises the following steps:
s1 obtaining infrared image and depth image sample
The image samples comprise frontal face samples without glasses, frontal face samples with glasses, and side face samples with glasses;
s2, acquiring a face region and a face characteristic point set, identifying the face after infrared image preprocessing, and further acquiring the face characteristic points by using an LBF characteristic model trained by a method combining random forest and global linear regression, wherein the method specifically comprises the following steps:
s2.1 face region detection
Carrying out face detection by adopting a local binary pattern;
S2.2 obtaining the facial feature points when no glasses are worn
After the face region (frontal or side) is obtained, face alignment is performed with the method combining random forest and global linear regression proposed by Ren Shaoqing to obtain the feature points, with high accuracy and real-time performance.
S2.3 obtaining the characteristic points of the human face when wearing the glasses
Wearing glasses affects the eye width-to-height ratio threshold and the fitting of the eye feature points in the image, so a corresponding threshold is added on the basis of S2.1 and S2.2. The algorithm for detecting facial feature points with glasses is as follows:
s2.3.1 preprocessing the obtained infrared image and smoothing the image by mean filtering;
s2.3.2, using a sobel operator to detect the edge of the image in the Y direction;
s2.3.3 performing binarization processing on the face image after edge detection;
S2.3.4 using the facial feature points obtained in S2.2, calculate the midpoint between the two eyes and its distances to the nose feature point and the inner eye corners, and use them to segment the ROI over the bridge of the glasses; the segmented ROI covers this area whether glasses are worn or not;
S2.3.5 in the segmented ROI, calculate the proportion of glasses-edge pixels (i.e. the white-pixel ratio); extensive experimental analysis shows that a ratio above 10% indicates that glasses are worn;
S3, based on the face feature point set acquired in step S2, calculate the corresponding classification state:
s3.1 obtaining the front eye state under the condition of wearing and not wearing the glasses, and calculating the aspect ratio of the eyes
After the real-time facial feature points are obtained, the eye aspect ratio is calculated with a feature-point-based blink detection method. Among the 68 facial feature points, each eye has 6, which take different configurations when the eye is open and when it is closed. The eye state is calculated from the width-to-height ratio of the eye; this ratio fluctuates markedly when the eye closes, so a threshold is easy to find, blinks can be judged reliably, real-time detection requirements are met, and accuracy is high;
s3.2 calculating the aspect ratio of the mouth and calculating the state of the mouth
After the real-time facial feature points are obtained, the mouth state is obtained according to the feature points, the width and the height of the mouth are calculated, and the mouth state is calculated according to the aspect ratio of the mouth;
s4 fatigue driving assessment
S4.1 evaluation of eye State
The PERCLOS index is a physical quantity for measuring driver fatigue, proposed after extensive experiments by researchers at Carnegie Mellon in the United States, and the only fatigue-detection method endorsed by the U.S. National Highway Traffic Safety Administration (NHTSA). PERCLOS is the proportion of time within a period during which the eyes are closed, with three measurement criteria: P70, P80 and EM, which judge the eye closed when the eyelid covers 70%, 80% and 50% of the pupil respectively. In practice the P80 criterion is used, i.e. the eye is considered closed when the eyelid covers more than 80% of the pupil;
s4.2 mouth State judgment
When the height-to-width ratio of the mouth exceeds 24%, the mouth is considered open; if it stays open continuously for 1.5 s, this is recorded as one yawn;
s4.3 comprehensive fatigue State judgment
The eye PERCLOS index calculation method comprises the following steps:
PERCLOS = (frame_wink / frame_sum) × 100%
wherein frame_wink is the number of eye-closure frames and frame_sum is the total number of frames. Taking 30 minutes as a time window, the eye PERCLOS value of the valid frames in the window is calculated and the yawns in the window are counted; when the yawn count exceeds 60, 10% is added. The comprehensive PERCLOS value after fusing yawns is:
PERCLOS_all = EYE_perclos + 10% if yawn > 60, else PERCLOS_all = EYE_perclos
wherein EYE_perclos is the mean of the PERCLOS values of the left and right eyes and yawn is the number of yawns in the window. If the comprehensive PERCLOS value in the window exceeds 40%, the fatigue stage is considered reached and a fatigue warning is issued; in addition, to guard against the driver dozing off, a corresponding fatigue warning is also issued whenever the driver's eyes remain closed for 3 s continuously.
The invention has the following beneficial effects: for the problem of recognition and feature extraction at night, an Intel RealSense depth camera is adopted to obtain infrared and depth images. Face recognition and feature-point extraction are performed on the infrared image with LBP features, the face depth is obtained from the depth image, and facial feature points are also extracted when glasses are worn. The eye state is judged from the eye aspect ratio of the frontal-face feature points or from the distance between the upper and lower eyelids of the side-face feature points, and the fatigue state is evaluated jointly from the eye state and the mouth state. Long-term experimental tests show that the algorithm is robust, meets real-time detection requirements, provides a new approach to fatigue detection and early warning, and has high practical value.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a schematic diagram of the LBP operator of the present invention.
Fig. 3 is a schematic diagram of face detection and feature point acquisition after preprocessing according to the present invention.
Fig. 4 is a 68 characteristic point diagram of the present invention.
FIG. 5 is a graph of the change in the eye aspect ratio EAR value according to the present invention.
FIG. 6 is a diagram of a ROI without glasses and with glasses according to the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 1-6, a fatigue driving early warning system based on a depth camera includes a depth camera, a face feature point positioning module, a state recognition module, and a driving state early warning module;
the main depth camera is arranged in front of a driver and used for acquiring a front image set of the driver without wearing eyes and glasses;
the face feature point positioning module extracts visual feature parameters of each frame of image of the depth camera, identifies a face after infrared image preprocessing, and further obtains face feature points by using an LBF feature model trained by a method combining random forest and global linear regression;
the state recognition module analyses the face extracted by the face feature point positioning module, extracts the eye state information and the mouth state information, and returns the recognition results for the eye state and the mouth state;
and the driving state early warning module comprehensively analyses the eye state and the mouth state to obtain a risk evaluation for the driver; if the evaluation exceeds a set value, a danger-state warning is issued to the driver.
The depth camera adopts an Intel Realsense depth camera.
A fatigue driving early warning method based on a depth camera comprises the following steps:
s1 obtaining infrared image and depth image sample
The image samples comprise frontal face samples without glasses, frontal face samples with glasses, and side face samples with glasses;
s2, acquiring a face region and a face characteristic point set, identifying the face after infrared image preprocessing, and further acquiring the face characteristic points by using an LBF characteristic model trained by a method combining random forest and global linear regression, wherein the method specifically comprises the following steps:
s2.1 face region detection
Face detection is performed with the local binary pattern. The Local Binary Pattern (LBP) operator takes, within a 3 × 3 window, the centre pixel value as a threshold and compares the 8 surrounding pixel values with it, coding 1 when a neighbour is not smaller than the threshold and 0 otherwise. The formula is:
LBP(x_c, y_c) = Σ_{p=0}^{7} s(i_p - i_c) · 2^p
wherein
s(x) = 1 when x ≥ 0, and s(x) = 0 otherwise
First, compute the LBP image LBPi of size (w - 2 · radius, h - 2 · radius); then, for each pixel, compute the offset coordinate (d_x, d_y)_n of the pixel corresponding to its n-th neighbourhood:
(d_x, d_y)_n = (radius · cos(2πn/N), -radius · sin(2πn/N))
Then bilinear interpolation gives the grey value gray(x, y)_n of the n-th neighbourhood of pixel (x, y) and its code lbp(x, y)_n:
gray(x, y)_n = (1 - t_x)(1 - t_y) g(x_0, y_0) + t_x (1 - t_y) g(x_1, y_0) + (1 - t_x) t_y g(x_0, y_1) + t_x t_y g(x_1, y_1), and lbp(x, y)_n = s(gray(x, y)_n - g(x, y))
The LBP coding values for all pixels are derived:
LBP(x, y) = Σ_{n=0}^{N-1} lbp(x, y)_n · 2^n
calculate the width and height of each LBPi image:
Figure BDA0002133470910000084
Count the histogram of LBP values in each cell and store the result in HIST; divide by the cell area (cell height × cell width) to normalise the histogram; convert HIST into a 1-dimensional vector in row-major order; finally, calculate the distance between histograms to judge whether the region is a face:
D(H_1, H_2) = Σ_i (H_1(i) - H_2(i))² / (H_1(i) + H_2(i))
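As an illustration of S2.1, the basic 3 × 3 LBP operator and a histogram distance can be sketched in Python/NumPy. This is an illustrative sketch, not the patent's implementation; the neighbour ordering and the chi-square form of the histogram distance are assumptions.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 LBP: threshold the 8 neighbours of each pixel against the
    centre value and pack the comparison bits into one byte.
    (Neighbour ordering, clockwise from top-left, is an assumption.)"""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neigh >= centre).astype(np.uint8) << bit)
    return out

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two (normalised) LBP histograms,
    a common choice for LBP face comparison."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In use, the LBP image would be divided into cells, one normalised histogram computed per cell, and the concatenated vectors compared with `chi2_distance` against a face template.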
S2.2 obtaining the facial feature points when no glasses are worn
After the face region (frontal or side) is obtained, face alignment is performed with the method combining random forest and global linear regression proposed by Ren Shaoqing to obtain the feature points, with high accuracy and real-time performance. The method comprises the following steps:
S2.2.1, first obtain the local binary features of the image with a random-forest algorithm and shape-indexed features; then solve the regression model by linear regression; next, use the trained feature mapping and the linear equation to obtain the shape update Δs, which is added to the previous stage's shape to give the current face shape; iterate until convergence;
S2.2.2, load the model with OpenCV 3.4 and its contrib module: first create a face key-point detection (Facemark) object, then load the trained face-alignment model, create a container for the face key points, and detect the key points only within the detected face region, which shrinks the detection area and improves calibration accuracy and real-time performance; then draw the feature points and convert them to cv::Point objects to obtain the feature-point coordinates;
Before face detection, because the grey image obtained from the infrared stream is dark, histogram equalisation is applied to it first; the face recognition rate on the processed image improves markedly, and face detection and feature-point detection are performed after this preprocessing.
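The histogram-equalisation preprocessing described above can be sketched in plain NumPy; it mirrors in spirit what `cv2.equalizeHist` does for an 8-bit grey image (an illustrative sketch, not the patent's code):

```python
import numpy as np

def equalize_hist(gray):
    """Histogram-equalise an 8-bit grey image: build the cumulative
    distribution of pixel values and remap it to span [0, 255],
    brightening a dark infrared image before face detection."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    total = gray.size
    # standard equalisation lookup table, clipped into the uint8 range
    lut = np.clip(np.round((cdf - cdf_min) / max(total - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]
```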
S2.3 obtaining the characteristic points of the human face when wearing the glasses
Wearing glasses affects the eye width-to-height ratio threshold and the fitting of the eye feature points in the image, so a corresponding threshold is added on the basis of S2.1 and S2.2. The algorithm for detecting facial feature points with glasses is as follows:
s2.3.1 preprocessing the obtained infrared image and smoothing the image by mean filtering;
s2.3.2, using a sobel operator to detect the edge of the image in the Y direction;
s2.3.3 performing binarization processing on the face image after edge detection;
S2.3.4 using the facial feature points obtained in S2.2, calculate the midpoint between the two eyes and its distances to the nose feature point and the inner eye corners, and use them to segment the ROI over the bridge of the glasses; the segmented ROI covers this area whether glasses are worn or not;
S2.3.5 in the segmented ROI, calculate the proportion of glasses-edge pixels (i.e. the white-pixel ratio); extensive experimental analysis shows that a ratio above 10% indicates that glasses are worn;
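The glasses-detection steps S2.3.1 to S2.3.5 can be sketched as follows. The Sobel kernel is the standard Y-direction kernel; the edge binarisation threshold of 50 is an assumed value that the patent does not specify, while the 10% white-pixel ratio comes from S2.3.5.

```python
import numpy as np

def sobel_y(gray):
    """Y-direction Sobel edge response (absolute value)."""
    g = np.asarray(gray, dtype=np.float64)
    k = np.array([[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]], dtype=np.float64)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):          # explicit 3x3 correlation
        for dx in range(3):
            out += k[dy, dx] * g[dy:h - 2 + dy, dx:w - 2 + dx]
    return np.abs(out)

def wears_glasses(roi_gray, edge_thresh=50, ratio_thresh=0.10):
    """Decide glasses / no glasses from the nose-bridge ROI: binarise the
    Sobel-Y edge map and compare the white-pixel ratio with the ~10%
    threshold reported in S2.3.5. edge_thresh is an assumption."""
    binary = sobel_y(roi_gray) > edge_thresh
    return bool(binary.mean() > ratio_thresh)
```

A real pipeline would first mean-filter the infrared image (S2.3.1) and crop the ROI from the facial feature points (S2.3.4) before calling `wears_glasses`.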
S3, based on the face feature point set acquired in step S2, calculate the corresponding classification state:
s3.1 obtaining the front eye state under the condition of wearing and not wearing the glasses, and calculating the aspect ratio of the eyes
After the real-time facial feature points are obtained, the eye aspect ratio is calculated with a feature-point-based blink detection method. Among the 68 facial feature points, each eye has 6, which take different configurations when the eye is open and when it is closed. Of the 6 points, P1 is the outer eye corner, P4 the inner eye corner, P2 and P3 the arch points on the upper eyelid margin, and P5 and P6 the points on the lower eyelid margin; P2 and P6 lie on the outer side of the eye's vertical midline, P3 and P5 on the inner side. The eye aspect ratio EAR is then calculated as:
EAR = (||P2 - P6|| + ||P3 - P5||) / (2 · ||P1 - P4||)
Actual experiments show that the EAR value fluctuates markedly when the eye closes, so a threshold is easy to find, blinks can be judged reliably, real-time detection requirements are met, and accuracy is high;
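The EAR computation of S3.1 can be sketched directly from the six eye landmarks. The closed-eye threshold of 0.2 is a commonly used value and an assumption here, since the patent only says the threshold is found experimentally.

```python
import math

def ear(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio from the six eye landmarks (P1/P4 corners,
    P2/P3 upper lid, P5/P6 lower lid):
        EAR = (|P2-P6| + |P3-P5|) / (2 |P1-P4|)
    A small EAR indicates a closed eye."""
    d = math.dist
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

def is_closed(ear_value, thresh=0.2):
    # 0.2 is an assumed threshold; the patent finds it experimentally.
    return ear_value < thresh
```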
s3.2 calculating the aspect ratio of the mouth and calculating the state of the mouth
After the real-time facial feature points are obtained, the mouth state is obtained according to the feature points, the width and the height of the mouth are calculated, and the mouth state is calculated according to the aspect ratio of the mouth;
s4 fatigue driving assessment
S4.1 evaluation of eye State
The PERCLOS (Percentage of Eyelid Closure over the Pupil over Time) index is a physical quantity for measuring driver fatigue, proposed after extensive experiments by researchers at Carnegie Mellon in the United States, and the only fatigue-detection method endorsed by the U.S. National Highway Traffic Safety Administration (NHTSA). PERCLOS is the proportion of time within a period during which the eyes are closed, with three measurement criteria: P70, P80 and EM, which judge the eye closed when the eyelid covers 70%, 80% and 50% of the pupil respectively; in practice the P80 criterion is used, i.e. the eye is considered closed when the eyelid covers more than 80% of the pupil;
s4.2 mouth State judgment
When the height-to-width ratio of the mouth exceeds 24%, the mouth is considered open; if it stays open continuously for 1.5 s, this is recorded as one yawn;
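The mouth-state rule of S4.2 (aspect ratio above 24% means open; 1.5 s of continuous opening counts as one yawn) can be sketched as a small frame-based counter; the frame-rate-based timing is an implementation assumption:

```python
class YawnCounter:
    """Counts yawns: a mouth aspect ratio above 0.24 means 'open' (S4.2),
    and an opening sustained for 1.5 s is recorded as one yawn."""

    def __init__(self, fps=30, mar_thresh=0.24, min_open_s=1.5):
        self.min_open_frames = int(min_open_s * fps)
        self.mar_thresh = mar_thresh
        self.open_frames = 0
        self.yawns = 0
        self._counted = False  # avoid counting one long yawn twice

    def update(self, mar):
        """Feed one frame's mouth aspect ratio; returns total yawns so far."""
        if mar > self.mar_thresh:
            self.open_frames += 1
            if self.open_frames >= self.min_open_frames and not self._counted:
                self.yawns += 1
                self._counted = True
        else:
            self.open_frames = 0
            self._counted = False
        return self.yawns
```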
s4.3 comprehensive fatigue State judgment
The eye PERCLOS index calculation method comprises the following steps:
PERCLOS = (frame_wink / frame_sum) × 100%
wherein frame_wink is the number of eye-closure frames and frame_sum is the total number of frames. Taking 30 minutes as a time window, the eye PERCLOS value of the valid frames in the window is calculated and the yawns in the window are counted; when the yawn count exceeds 60, 10% is added. The comprehensive PERCLOS value after fusing yawns is:
PERCLOS_all = EYE_perclos + 10% if yawn > 60, else PERCLOS_all = EYE_perclos
wherein EYE_perclos is the mean of the PERCLOS values of the left and right eyes and yawn is the number of yawns in the window. If the comprehensive PERCLOS value in the window exceeds 40%, the fatigue stage is considered reached and a fatigue warning is issued; to guard against the driver dozing off, a corresponding fatigue warning is also issued whenever the driver's eyes remain closed for 3 s continuously.
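The fatigue evaluation of S4.3 can be sketched as follows. The additive fusion of the yawn penalty is an assumption, since the source describes the rule only loosely; the 40% alarm level, 3 s closure rule, and yawn threshold of 60 follow the text above.

```python
def perclos(closed_frames, total_frames):
    """PERCLOS over a time window: fraction of frames with eyes closed."""
    return closed_frames / total_frames

def combined_perclos(left, right, yawns, yawn_limit=60, yawn_penalty=0.10):
    """Comprehensive value of S4.3: mean of the left/right eye PERCLOS
    values, plus a 10% penalty when the yawn count exceeds the limit.
    The additive form is an assumption about the fusion rule."""
    base = (left + right) / 2.0
    return base + (yawn_penalty if yawns > yawn_limit else 0.0)

def fatigue_alarm(combined, closed_streak_s):
    """Alarm at combined PERCLOS > 40% or 3 s of continuous eye closure."""
    return combined > 0.40 or closed_streak_s >= 3.0
```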
The present invention is not limited to the above embodiment; any structural change made under the teaching of the present invention that is identical or similar to the technical solution of the present invention falls within its protection scope.
The techniques, shapes, and configurations not described in detail in the present invention are all known techniques.

Claims (7)

1. A fatigue driving early warning system based on a depth camera, characterized in that: it comprises a depth camera, a face feature point positioning module, a state recognition module and a driving state early warning module;
the depth camera is arranged in front of the driver and used for acquiring a set of frontal images of the driver both with and without glasses;
the face feature point positioning module extracts visual feature parameters of each frame of image of the depth camera, identifies a face after infrared image preprocessing, and further obtains face feature points by using an LBF feature model trained by a method combining random forest and global linear regression;
the state recognition module analyses the face extracted by the face feature point positioning module, extracts the eye state information and the mouth state information, and returns the recognition results for the eye state and the mouth state;
and the driving state early warning module comprehensively analyses the eye state and the mouth state to obtain a risk evaluation for the driver; if the evaluation exceeds a set value, a danger-state warning is issued to the driver.
2. The depth camera-based fatigue driving warning system of claim 1, wherein: the depth camera adopts an Intel Realsense depth camera.
3. A fatigue driving early warning method based on a depth camera is characterized by comprising the following steps: the method comprises the following steps:
s1 obtaining infrared image and depth image sample
The image samples comprise frontal face samples without glasses, frontal face samples with glasses, and side face samples with glasses;
s2 obtaining human face area and human face characteristic point set
The face is identified after infrared image preprocessing, and the face feature points are then obtained with an LBF feature model trained by a method combining random forest and global linear regression, comprising the following steps:
s2.1, detecting a face region, namely detecting the face by adopting a local binary pattern;
S2.2, obtaining the facial feature points when no glasses are worn;
s2.3, acquiring face characteristic points when the glasses are worn;
s3 is calculated to obtain its corresponding classification status based on the face feature point set acquired in step S2:
S3.1, obtaining the frontal eye state with and without glasses and calculating the eye width-to-height ratio: after the real-time facial feature points are obtained, the eye aspect ratio is calculated with a feature-point-based blink detection method; among the 68 facial feature points each eye has 6, which take different configurations when the eye is open and closed; the eye state is calculated from the width-to-height ratio of the eye, which fluctuates markedly when the eye closes, so a threshold can be found and blinks can be judged;
S3.2 calculating the aspect ratio of the mouth to obtain the mouth state
After the real-time facial feature points are obtained, the width and height of the mouth are computed from the feature points, and the mouth state is determined from the mouth's aspect ratio;
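A minimal sketch of the mouth height-to-width ratio used in S3.2; the choice of the four landmarks and the coordinates below are hypothetical.

```python
import math

def mouth_aspect_ratio(top, bottom, left, right):
    """Mouth height-to-width ratio from four mouth landmarks
    (hypothetical selection): inner-lip top/bottom and the two corners."""
    height = math.hypot(top[0] - bottom[0], top[1] - bottom[1])
    width = math.hypot(left[0] - right[0], left[1] - right[1])
    return height / width

# A wide-open mouth: height is a large fraction of width.
ratio = mouth_aspect_ratio(top=(5, 0), bottom=(5, 4), left=(0, 2), right=(10, 2))
print(ratio)  # 0.4 -> above the 24% "open" threshold from step S4.2
```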
S4 fatigue driving assessment
S4.1 evaluation of the eye state
PERCLOS refers to the proportion of time, within a given period, during which the eyes are closed; the eye is considered closed when the eyelid covers more than 80% of the pupil;
S4.2 mouth state judgment
When the height-to-width ratio of the mouth exceeds 24%, the mouth is considered open; if the mouth stays open continuously for 1.5 s, this is recorded as one yawn;
S4.3 comprehensive fatigue state judgment
Taking 30 minutes as a time window, the eye PERCLOS value over the valid frames in the window is calculated and the number of yawns in the window is counted, each yawn contributing an additional 10%; the comprehensive PERCLOS value after fusing the yawns is:
PERCLOS = EYEperclos + yawn × 10%
wherein EYEperclos is the mean of the PERCLOS values of the left and right eyes, and yawn is the number of yawns in the window; if the comprehensive PERCLOS value in the window exceeds 40%, the fatigue stage is judged to have been reached and a fatigue warning is given; a corresponding fatigue warning is also issued after the driver's eyes remain closed continuously for 3 s.
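Read this way, the fusion in S4.3 can be sketched as follows. The interpretation that each yawn adds a flat 10% to the mean eye PERCLOS is an assumption, as are the frame counts in the example:

```python
def perclos(closed_frames, valid_frames):
    """Fraction of valid frames in which one eye is closed."""
    return closed_frames / valid_frames

def combined_perclos(left_closed, right_closed, valid_frames, yawns):
    """Mean of the two eyes' PERCLOS plus 10% per yawn in the window
    (assumed reading of the fused formula in S4.3)."""
    eye_perclos = (perclos(left_closed, valid_frames) +
                   perclos(right_closed, valid_frames)) / 2.0
    return eye_perclos + 0.10 * yawns

FATIGUE_THRESHOLD = 0.40  # 40% threshold from S4.3

# Hypothetical 30-minute window: 3000 valid frames, 2 yawns.
value = combined_perclos(left_closed=900, right_closed=840,
                         valid_frames=3000, yawns=2)
print(value, value > FATIGUE_THRESHOLD)
```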
4. The fatigue driving early warning method based on the depth camera as claimed in claim 3, wherein the specific method for acquiring the face region in S2.1 of step S2 is as follows: the local binary pattern (LBP) operator takes the central pixel value of a 3 × 3 window as a threshold and compares the 8 surrounding pixel values with it, marking a pixel as 1 if it is larger than the threshold and 0 otherwise; first an LBP image LBPi the size of the input image is computed: for each pixel, the offset coordinates of its n-th neighbourhood pixel are calculated, the gray value of the n-th neighbourhood of pixel (x, y) is obtained by bilinear interpolation and encoded, yielding the pixel's LBP code; the width and height of each LBPi image are then computed, the counts of each value in the LBPi histogram are accumulated row by row and stored in HIST, the histogram is normalized by dividing by the size of the LBPi image, HIST is flattened row-wise into a one-dimensional vector matrix, and finally the distance between histograms is computed to judge whether the region is a face.
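The thresholding step of the operator in claim 4 can be sketched as a basic 3 × 3 LBP over a raster grid (the bilinear-interpolation variant for circular neighbourhoods is omitted); the bit ordering and the toy image are assumptions for illustration:

```python
def lbp_image(img):
    """Basic 3x3 LBP: each interior pixel is re-coded as an 8-bit number
    whose bits record whether each neighbour is larger than the centre.
    `img` is a list of rows of gray values; border pixels are skipped."""
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] > center:  # "> threshold -> 1"
                    code |= 1 << (7 - bit)
            row.append(code)
        out.append(row)
    return out

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_image(img))  # one LBP code for the single interior pixel
```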
5. The fatigue driving early warning method based on the depth camera as claimed in claim 3, wherein the specific method for acquiring the face feature points without glasses in S2.2 of step S2 is as follows:
S2.2.1, first the local binary features of the image are obtained using a random forest algorithm with shape-indexed features, then the regression model is solved by global linear regression; the shape update ΔS is obtained from the trained feature mapping and the linear model and added to the shape of the previous stage to give the shape of the current step, iterating continuously until the end;
S2.2.2, the model is loaded using OpenCV 3.4 and its contrib module: a Facemark object for facial key point detection is created, the trained face alignment model is loaded, a container for storing the face key points is created, the key points are detected within the region where the face was found, the feature points are drawn, and the feature points are converted to obtain their coordinates.
6. The fatigue driving early warning method based on the depth camera as claimed in claim 3, wherein the method for acquiring the face feature points when glasses are worn in S2.3 of step S2 is as follows: glasses-wearing detection is added on the basis of S2.1 and S2.2 so that the corresponding thresholds can be applied; the specific algorithm for detecting the face feature points of a person wearing glasses is:
S2.3.1, the obtained infrared image is preprocessed and smoothed by mean filtering;
S2.3.2, edge detection in the Y direction is performed on the image using the Sobel operator;
S2.3.3, binarization is performed on the face image after edge detection;
S2.3.4, using the facial feature points obtained in S2.2, the distances between the midpoint of the two eyes, the nose feature point and the inner eye-corner points are calculated so as to segment the ROI at the bridge of the glasses; this segmented ROI is then used to decide whether glasses are worn;
S2.3.5, within the segmented ROI, the percentage of glasses-edge pixels, i.e. the ratio of white pixels, is calculated; extensive experimental analysis shows that glasses can be judged to be worn when this ratio exceeds 10%.
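The white-pixel test of S2.3.5 is a one-line ratio over the binarized ROI; the toy 4 × 5 ROI below is an assumption for illustration:

```python
def glasses_present(roi, white=255, threshold=0.10):
    """Decide glasses presence from the edge ratio in the nose-bridge ROI
    of the binarized edge image: the fraction of white (edge) pixels is
    compared with the 10% threshold from S2.3.5."""
    total = sum(len(row) for row in roi)
    whites = sum(1 for row in roi for px in row if px == white)
    return whites / total > threshold

# Toy 4x5 binary ROI with 4 edge pixels -> ratio 0.2 -> glasses detected.
roi = [[0, 255, 0, 0, 0],
       [0, 255, 0, 0, 0],
       [0, 255, 0, 0, 255],
       [0, 0, 0, 0, 0]]
print(glasses_present(roi))  # True
```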
7. The fatigue driving early warning method based on the depth camera as claimed in claim 3, wherein before face detection, because the gray image obtained through the infrared sensor is dark, histogram equalization must first be applied to it; the face recognition rate on the processed image is markedly improved, and face detection and feature point detection are performed after this preprocessing.
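The equalization step in claim 7 is the classic cumulative-histogram remapping; a dependency-free sketch (the tiny 6-pixel "dark" image is an assumption):

```python
def equalize_gray(img, levels=256):
    """Histogram equalization for an 8-bit gray image given as a list of
    rows: map each level through the normalized cumulative histogram so
    a dark image's values spread over the full range."""
    hist = [0] * levels
    for row in img:
        for px in row:
            hist[px] += 1
    total = sum(hist)
    # Cumulative distribution, then scale to the full level range.
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if total == cdf_min:  # flat image: nothing to stretch
        return [row[:] for row in img]
    lut = [round((c - cdf_min) * (levels - 1) / (total - cdf_min)) for c in cdf]
    return [[lut[px] for px in row] for row in img]

dark = [[10, 10, 12], [12, 14, 14]]
print(equalize_gray(dark))  # values stretched to 0..255
```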
CN201910646090.4A 2019-07-17 2019-07-17 Fatigue driving early warning method based on depth camera Active CN112241658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910646090.4A CN112241658B (en) 2019-07-17 2019-07-17 Fatigue driving early warning method based on depth camera


Publications (2)

Publication Number Publication Date
CN112241658A true CN112241658A (en) 2021-01-19
CN112241658B CN112241658B (en) 2023-09-01

Family

ID=74167643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910646090.4A Active CN112241658B (en) 2019-07-17 2019-07-17 Fatigue driving early warning method based on depth camera

Country Status (1)

Country Link
CN (1) CN112241658B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140055569A1 (en) * 2012-08-22 2014-02-27 Samsung Electro-Mechanics Co., Ltd. Apparatus and method for sensing drowsy driving
CN106778628A (en) * 2016-12-21 2017-05-31 张维忠 A kind of facial expression method for catching based on TOF depth cameras
CN108309311A (en) * 2018-03-27 2018-07-24 北京华纵科技有限公司 A kind of real-time doze of train driver sleeps detection device and detection algorithm
CN109614892A (en) * 2018-11-26 2019-04-12 青岛小鸟看看科技有限公司 A kind of method for detecting fatigue driving, device and electronic equipment


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907464A (en) * 2021-02-01 2021-06-04 涂可致 Underwater thermal disturbance image restoration method
CN113243919A (en) * 2021-04-01 2021-08-13 上海工程技术大学 Train driver fatigue state identification and monitoring system
CN113240885A (en) * 2021-04-27 2021-08-10 宁波职业技术学院 Method for detecting fatigue of vehicle-mounted driver
CN113420656A (en) * 2021-06-23 2021-09-21 展讯通信(天津)有限公司 Fatigue driving detection method and device, electronic equipment and storage medium
CN113743232A (en) * 2021-08-09 2021-12-03 广州铁路职业技术学院(广州铁路机械学校) Fatigue detection method for urban rail driver
CN113780125A (en) * 2021-08-30 2021-12-10 武汉理工大学 Fatigue state detection method and device for multi-feature fusion of driver
CN113537176A (en) * 2021-09-16 2021-10-22 武汉未来幻影科技有限公司 Method, device and equipment for determining fatigue state of driver
CN113838265A (en) * 2021-09-27 2021-12-24 科大讯飞股份有限公司 Fatigue driving early warning method and device and electronic equipment
CN114022871A (en) * 2021-11-10 2022-02-08 中国民用航空飞行学院 Unmanned aerial vehicle driver fatigue detection method and system based on depth perception technology
CN114663964A (en) * 2022-05-24 2022-06-24 武汉理工大学 Ship remote driving behavior state monitoring and early warning method and system and storage medium
CN115798247A (en) * 2022-10-10 2023-03-14 深圳市昊岳科技有限公司 Smart bus cloud platform based on big data
CN115798247B (en) * 2022-10-10 2023-09-22 深圳市昊岳科技有限公司 Intelligent public transportation cloud platform based on big data
CN115798019A (en) * 2023-01-06 2023-03-14 山东星科智能科技股份有限公司 Intelligent early warning method for practical training driving platform based on computer vision
CN115798019B (en) * 2023-01-06 2023-04-28 山东星科智能科技股份有限公司 Computer vision-based intelligent early warning method for practical training driving platform
CN117935231A (en) * 2024-03-20 2024-04-26 杭州臻稀生物科技有限公司 Non-inductive fatigue driving monitoring and intervention method
CN117935231B (en) * 2024-03-20 2024-06-07 杭州臻稀生物科技有限公司 Non-inductive fatigue driving monitoring and intervention method


Similar Documents

Publication Publication Date Title
CN112241658B (en) Fatigue driving early warning method based on depth camera
CN107292251B (en) Driver fatigue detection method and system based on human eye state
CN103714660B (en) System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN108960065B (en) Driving behavior detection method based on vision
CN101593425B (en) Machine vision based fatigue driving monitoring method and system
CN106846734B (en) A kind of fatigue driving detection device and method
CN102054163B (en) Method for testing driver fatigue based on monocular vision
Alshaqaqi et al. Driver drowsiness detection system
CN1225375C (en) Method for detecting fatigue driving based on multiple characteristic fusion
Junaedi et al. Driver drowsiness detection based on face feature and PERCLOS
Batista A drowsiness and point of attention monitoring system for driver vigilance
CN103714659B (en) Fatigue driving identification system based on double-spectrum fusion
CN106250801A (en) Based on Face datection and the fatigue detection method of human eye state identification
CN110811649A (en) Fatigue driving detection method based on bioelectricity and behavior characteristic fusion
CN111753674A (en) Fatigue driving detection and identification method based on deep learning
Liu et al. Driver fatigue detection through pupil detection and yawing analysis
CN104616438A (en) Yawning action detection method for detecting fatigue driving
CN101593352A (en) Driving safety monitoring system based on face orientation and visual focus
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN107595307A (en) Fatigue driving detection device and detection method based on machine vision eye recognition
CN110751051A (en) Abnormal driving behavior detection method based on machine vision
CN109977930A (en) Method for detecting fatigue driving and device
CN106203338B (en) Human eye state method for quickly identifying based on net region segmentation and threshold adaptive
CN107563346A (en) One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing
CN103729646B (en) Eye image validity detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant