CN113642426A - Fatigue detection method and system based on target and key points - Google Patents

Fatigue detection method and system based on target and key points

Info

Publication number
CN113642426A
Authority
CN
China
Prior art keywords
driver
state
detection
mouth
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110862783.4A
Other languages
Chinese (zh)
Inventor
王思懋
杜卫红
谢立欧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Beyebe Network Technology Co ltd
Original Assignee
Shenzhen Beyebe Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Beyebe Network Technology Co ltd filed Critical Shenzhen Beyebe Network Technology Co ltd
Priority to CN202110862783.4A
Publication of CN113642426A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/06Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms


Abstract

The invention relates to the field of safety detection technology and provides a fatigue detection method and system based on targets and key points. The method comprises the following steps: S1, performing face detection on the driver in the input video by target detection and extracting the face frame separately; S2, detecting the states of the driver's eyes and mouth by key point detection and acquiring the detection results; and S3, analyzing consecutive video frames according to the results of the target detection and the key point detection to obtain the current driving state of the driver. By analyzing this multi-dimensional information, false detections and missed detections caused by changes in the driver's state and in the scene are effectively avoided. For different driver states, accurate detection results can be given by adjusting the training data in the target detection stage; for diversified scenes, the influence of background changes can be eliminated in the target detection stage, and for some particularly complex application scenarios, effective optimization can be performed by adjusting the target detection training data.

Description

Fatigue detection method and system based on target and key points
Technical Field
The invention belongs to the field of safety detection technology, and particularly relates to a fatigue detection method and system based on targets and key points.
Background
In recent years, the number of private cars has risen year by year, and countless tragedies are caused by fatigue driving every year, most of which result from drivers losing concentration during long periods of driving. A fatigue driving early warning system (DSM) can effectively prevent such tragedies. A DSM system monitors the driver's fatigue state, driving behavior and so on throughout the driving process. When erroneous driving states such as fatigue, yawning or squinting are found, the early warning system analyzes the behavior in time and issues a voice warning prompt, thereby alerting the driver and correcting the erroneous driving behavior.
Existing DSM systems usually adopt a key point detection method directly to monitor the face state in real time. This approach has two major drawbacks. First, it is difficult to handle different driver states: common situations, such as wearing a mask or wearing glasses, can greatly reduce the accuracy of key point detection. Second, it is difficult to handle diversified scenes, and the method is hard to optimize for different scenarios.
Disclosure of Invention
The invention aims to provide a fatigue detection method and system based on targets and key points, so as to solve the above technical problems.
The invention is realized in such a way that a fatigue detection method based on targets and key points comprises the following steps:
S1, performing face detection on the driver in the input video by target detection and extracting the face frame separately;
S2, detecting the states of the driver's eyes and mouth by key point detection and acquiring the detection results;
and S3, analyzing consecutive video frames according to the results of the target detection and the key point detection to obtain the current driving state of the driver.
The further technical scheme of the invention is as follows: in step S1, when a face is detected, the eyes and the mouth are detected together, and the open/close states of the eyes and the mouth are detected.
The further technical scheme of the invention is as follows: the step S1 further includes the steps of:
S1, extracting single-frame image data from the driver monitoring video;
S2, labeling the human body, human face, eyes, mouth, mask, glasses and hat in the extracted image data;
and S3, dividing the data into a training set, a verification set and a test set according to a set proportion, and performing training and testing.
The further technical scheme of the invention is as follows: in step S2, the head yaw angle is calculated from the face key points to determine whether the driver is in a head-off or head-down state.
The further technical scheme of the invention is as follows: in the step S2, 98 key points are marked on the face during the face detection, and the states of the eyes, the mouth, and the head are analyzed through the 98 key points.
The further technical scheme of the invention is as follows: the open or closed state of the human eyes in the key point detection is judged by a calculation function, whose formula is:
L_e = (1/(6H)) * sum_{i=1..6} (L_ed,i - L_eu,i)
A threshold K_e is set; when L_e > K_e the eyes are judged to be open, and when L_e <= K_e the eyes are judged to be closed, where L_eu,i is the ordinate of a key point on the upper eyelid, L_ed,i is the ordinate of the corresponding key point on the lower eyelid, and H is the height of the face frame.
The further technical scheme of the invention is as follows: the open or closed state of the mouth is judged in the key point detection by the calculation formula:
L_m = (1/(5H)) * sum_{i=1..5} (L_md,i - L_mu,i)
A threshold K_m is set; when L_m > K_m the driver's mouth is judged to be open, and when L_m <= K_m the driver's mouth is judged to be closed, where L_mu,i is the ordinate of a key point on the upper lip, L_md,i is the ordinate of the corresponding key point on the lower lip, and H is the height of the face frame.
The further technical scheme of the invention is as follows: the head deflection analysis in the key point detection comprises the following steps:
S21, solving the rotation vector r = [r_x, r_y, r_z]^T from the key points by using the open-source function solvePnP in OpenCV, the rotation angle being theta;
S22, calculating the rotation matrix R by the formula (Rodrigues' formula):
R = cos(theta) * I + (1 - cos(theta)) * n n^T + sin(theta) * [n]_x, where n = r/theta and [n]_x is the skew-symmetric matrix of n,
and acquiring the deflection angles in the three directions, namely the pitch angle, the yaw angle and the roll angle, from the rotation matrix.
The further technical scheme of the invention is as follows: the driving state comprises eight states, namely a non-driving state, an abnormal in-frame state, a normal driving state, a prolonged eye-closure state, a frequent blinking state, a yawning state, a prolonged head-lowering state and a prolonged head-deviation state.
The further technical scheme of the invention is as follows: during target detection, when the vehicle is in a started state but the driver is not in the monitoring picture, a non-driving state is determined according to the face detection result, and if the state lasts longer than a preset time, a non-driving-state danger warning is issued;
during target detection, when the human face is not detected but one or more of the human body, human eyes or a mask is detected, the driver is judged to be abnormally in frame, and if this state lasts longer than a preset time, an abnormal in-frame alarm is issued;
during eye state detection, for the prolonged eye-closure state and the frequent blinking state, the percentage of eye closure within a preset time is counted; when the closure percentage exceeds 70%, the driver is judged to be driving fatigued and a secondary fatigue driving warning is issued, and if the driver is in a fatigue driving state for two consecutive unit times, a primary fatigue driving warning is issued;
during head deflection detection, when the head pitch angle is larger than 45 degrees the driver is judged to be in a head-down state, and when the absolute value of the head yaw angle is larger than 45 degrees or the absolute value of the roll angle is larger than 30 degrees the driver is judged to be in a head-off state; when the driver remains in such a state for a preset continuous time, the driver is judged to be distracted, and an inattention warning is issued;
during mouth state detection, when the driver keeps the mouth open continuously for a specified time, it is judged that the driver is yawning, and a secondary fatigue driving warning is issued.
Another objective of the present invention is to provide a fatigue detection system based on target and key points, which comprises
The face detection module is used for carrying out face detection on a driver in an input video by adopting target detection and independently taking out a face frame;
the key point detection module is used for detecting the states of eyes and a mouth of a driver by using key point detection and acquiring a detection result;
the state judgment module element is used for analyzing the continuous video frames according to the results of the target detection and the key point detection to obtain the current driving state of the driver;
the human face detection module detects eyes and mouth together and detects the opening and closing states of the eyes and the mouth when detecting a human face;
the face detection module further comprises
The extraction unit is used for extracting single-frame image data from the monitored driver video;
the labeling unit is used for performing data labeling on a human body, a human face, eyes, a mouth, a mask, glasses and a hat in the extracted image data;
the setting unit is used for setting a data training set, a verification set and a test set according to a set proportion and carrying out training and testing;
In the key point detection module, 98 key points are marked on the face during face detection, and the states of the eyes, mouth and head are analyzed through the 98 key points respectively;
the open or closed state of the human eyes in the key point detection is judged by a calculation function, whose formula is:
L_e = (1/(6H)) * sum_{i=1..6} (L_ed,i - L_eu,i)
A threshold K_e is set; when L_e > K_e the eyes are judged to be open, and when L_e <= K_e the eyes are judged to be closed, where L_eu,i is the ordinate of a key point on the upper eyelid, L_ed,i is the ordinate of the corresponding key point on the lower eyelid, and H is the height of the face frame;
the open or closed state of the mouth is judged in the key point detection by the calculation formula:
L_m = (1/(5H)) * sum_{i=1..5} (L_md,i - L_mu,i)
A threshold K_m is set; when L_m > K_m the driver's mouth is judged to be open, and when L_m <= K_m the driver's mouth is judged to be closed, where L_mu,i is the ordinate of a key point on the upper lip, L_md,i is the ordinate of the corresponding key point on the lower lip, and H is the height of the face frame;
the analysis of the head deflection in the key point detection comprises:
a rotation vector solving unit, for solving the rotation vector r = [r_x, r_y, r_z]^T from the key points by using the open-source function solvePnP in OpenCV, the rotation angle being theta;
and a deflection angle solving unit, for calculating the rotation matrix R by the formula (Rodrigues' formula)
R = cos(theta) * I + (1 - cos(theta)) * n n^T + sin(theta) * [n]_x, where n = r/theta and [n]_x is the skew-symmetric matrix of n,
and acquiring the deflection angles in the three directions, namely the pitch angle, the yaw angle and the roll angle, from the rotation matrix;
During target detection, when the vehicle is in a started state but the driver is not in the monitoring picture, a non-driving state is determined according to the face detection result, and if the state lasts longer than a preset time, a non-driving-state danger warning is issued;
during target detection, when the human face is not detected but one or more of the human body, human eyes or a mask is detected, the driver is judged to be abnormally in frame, and if this state lasts longer than a preset time, an abnormal in-frame alarm is issued;
during eye state detection, for the prolonged eye-closure state and the frequent blinking state, the percentage of eye closure within a preset time is counted; when the closure percentage exceeds 70%, the driver is judged to be driving fatigued and a secondary fatigue driving warning is issued, and if the driver is in a fatigue driving state for two consecutive unit times, a primary fatigue driving warning is issued;
during head deflection detection, when the head pitch angle is larger than 45 degrees the driver is judged to be in a head-down state, and when the absolute value of the head yaw angle is larger than 45 degrees or the absolute value of the roll angle is larger than 30 degrees the driver is judged to be in a head-off state; when the driver remains in such a state for a preset continuous time, the driver is judged to be distracted, and an inattention warning is issued;
during mouth state detection, when the driver keeps the mouth open continuously for a specified time, it is judged that the driver is yawning, and a secondary fatigue driving warning is issued.
The invention has the beneficial effects that: by analyzing multi-dimensional information, false detections and missed detections caused by changes in the driver's state and in the scene are effectively avoided. The invention can give accurate detection results for different driver states, such as wearing a mask or wearing glasses, by adjusting the training data in the target detection stage; for diversified scenes, the influence of background changes can be eliminated in the target detection stage, and for some particularly complex application scenarios, effective optimization can be performed by adjusting the target detection training data. Meanwhile, the invention classifies and defines the driver states more finely, and can accurately monitor the fatigue state of the driver in real time and issue early warnings.
Drawings
Fig. 1 is a flowchart of a fatigue detection method based on targets and key points according to an embodiment of the present invention.
Fig. 2 is a schematic position diagram of 98 key points of a face according to an embodiment of the present invention.
Detailed Description
Aiming at the defects of the traditional DSM system, the invention designs a fatigue detection method based on the combination of target detection and key point detection, which can accurately monitor the fatigue state of a driver in real time and send out early warning. The invention can provide accurate detection results for different states of the driver, and can effectively optimize aiming at diversified scenes.
As shown in fig. 1, the fatigue detection method based on the target and the key point provided by the present invention is detailed as follows:
step S1, detecting the driver' S face in the input video by target detection and taking out the face frame; in order to eliminate the influence of the background on subsequent detection, in the first stage, a target detection method is adopted for face detection, and a face frame is taken out separately for subsequent analysis. And detecting the eyes and the mouth at the same time of detecting the face, detecting the opening and closing states of the eyes and the mouth, and mutually verifying the opening and closing states with the key point detection result of the second stage. Meanwhile, targets such as a mask, glasses and a hat which possibly influence the detection of the key points can be detected, and interference is eliminated.
The process of creating the face detection model is as follows. First, data preparation: the data set extracts single-frame image data from driver monitoring videos. Second, manual data labeling: the labeled targets comprise 7 classes, namely human body, human face, eyes, mouth, mask, glasses and hat. Third, the data are divided into a training set, a verification set and a test set at a ratio of 8:1:1, and the face detection model is trained and tested.
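The data preparation steps above (frame extraction, labeling, and the 8:1:1 split) can be sketched as follows. This is a minimal illustration: the frame file names, the `split_dataset` helper and the fixed shuffle seed are hypothetical, not part of the patent.

```python
import random

def split_dataset(frame_paths, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle the extracted single-frame images and split them into
    training, verification and test sets at the given ratios (8:1:1)."""
    paths = list(frame_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * ratios[0])
    n_val = int(len(paths) * ratios[1])
    return {
        "train": paths[:n_train],
        "val": paths[n_train:n_train + n_val],
        "test": paths[n_train + n_val:],
    }

# Hypothetical frame names extracted from a driver monitoring video
frames = [f"frame_{i:05d}.jpg" for i in range(1000)]
splits = split_dataset(frames)
print(len(splits["train"]), len(splits["val"]), len(splits["test"]))  # 800 100 100
```

Shuffling before splitting avoids train/test leakage from temporally adjacent, near-identical frames ending up in the same order they were extracted.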
Step S2, detecting the states of the driver's eyes and mouth by key point detection and acquiring the detection results. Face detection already gives a preliminary analysis of the state of the driver's eyes and mouth. To improve detection accuracy, the states of the eyes and mouth are further detected with key point detection, and the head deflection angle is calculated from the face key points to judge whether the driver is in a head-off or head-down state. The positions of the 98 face key points are shown in Fig. 2; the input to key point detection is the face frame produced by face detection. The 98 key points are marked on the face by manual labeling, and after labeling is finished the key point detection model is trained and tested.
After the model is established, the output face frame of the target detection stage is input into the key point detection model, 98 key points of the face are output, and then the states of eyes, mouth and head are analyzed by the 98 key points respectively.
To analyze the open or closed state of the human eyes, a parameter L_e is introduced, calculated by the formula below. A larger L_e indicates the eyes are open wider. A threshold K_e is defined: when L_e > K_e the driver's eyes are judged to be open, and when L_e <= K_e they are judged to be closed.
L_e = (1/(6H)) * sum_{i=1..6} (L_ed,i - L_eu,i)
Here L_eu,i denotes the ordinates of the key points on the upper eyelid, i.e. the 6 points numbered 61, 62, 63, 69, 70 and 71 in the figure, and L_ed,i denotes the ordinates of the key points on the lower eyelid, i.e. the 6 points numbered 65, 66, 67, 73, 74 and 75. H denotes the height of the face frame; it is used as the reference in the calculation to avoid errors caused by the driver moving forward or backward.
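A minimal sketch of the eye-openness ratio described above, assuming the averaged form sum(L_ed - L_eu) / (6H). The sample coordinates and the threshold value K_e = 0.02 are illustrative assumptions; the patent does not disclose concrete threshold values.

```python
def openness_ratio(upper_ys, lower_ys, face_height):
    """Mean vertical gap between paired upper/lower keypoint ordinates,
    normalized by the face-frame height H. In image coordinates y grows
    downward, so lower-eyelid ordinates exceed upper-eyelid ones."""
    assert len(upper_ys) == len(lower_ys) and face_height > 0
    gaps = [ld - lu for lu, ld in zip(upper_ys, lower_ys)]
    return sum(gaps) / (len(gaps) * face_height)

# Hypothetical ordinates for points 61-63, 69-71 (upper) and 65-67, 73-75 (lower)
upper = [100, 98, 99, 101, 99, 100]
lower = [106, 107, 106, 107, 108, 106]
L_e = openness_ratio(upper, lower, face_height=200)
K_e = 0.02  # illustrative threshold, not taken from the patent
print("open" if L_e > K_e else "closed")  # prints "open"
```

The same function applies unchanged to the five lip keypoint pairs (points 77 to 81 and 83 to 87) to obtain L_m.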
To analyze the open or closed state of the mouth, a parameter L_m is introduced, calculated by the formula below. A larger L_m indicates the mouth is open wider. A threshold K_m is defined: when L_m > K_m the driver's mouth is judged to be open, and when L_m <= K_m it is judged to be closed.
L_m = (1/(5H)) * sum_{i=1..5} (L_md,i - L_mu,i)
Here L_mu,i denotes the ordinates of the key points on the upper lip, i.e. the 5 points numbered 77, 78, 79, 80 and 81 in the figure, and L_md,i denotes the ordinates of the key points on the lower lip, i.e. the 5 points numbered 83, 84, 85, 86 and 87.
To analyze the head deflection angle, the rotation vector r = [r_x, r_y, r_z]^T is first solved from the key points by using the open-source function solvePnP in OpenCV; the rotation angle is theta. The rotation matrix R is then calculated by Rodrigues' formula:
R = cos(theta) * I + (1 - cos(theta)) * n n^T + sin(theta) * [n]_x, where n = r/theta and [n]_x is the skew-symmetric matrix of n.
The rotation matrix can be decomposed into rotations about the x, y and z axes:
R_x(a) = [[1, 0, 0], [0, cos a, -sin a], [0, sin a, cos a]]
R_y(b) = [[cos b, 0, sin b], [0, 1, 0], [-sin b, 0, cos b]]
R_z(c) = [[cos c, -sin c, 0], [sin c, cos c, 0], [0, 0, 1]]
From the rotation matrix, the deflection angles in the three directions can be obtained, namely the pitch angle, the yaw angle and the roll angle.
And S3, analyzing the continuous video frames according to the results of the target detection and the key point detection to obtain the current driving state of the driver. According to the results of the target detection and the key point detection, the single-frame images in the video can be analyzed to obtain the opening and closing state of the eyes, the opening and closing state of the mouth and the deflection state of the head of the driver. And finally analyzing the continuous video frames to obtain the driving state of the driver.
The driver state comprises 8 states: a non-driving state, an abnormal in-frame state, a normal driving state, a prolonged eye-closure state, a frequent blinking state, a yawning state, a prolonged head-lowering state and a prolonged head-deviation state.
For the non-driving state: when the vehicle is in a started state but the driver is not in the monitoring picture, the state is determined according to the face detection result, and if it lasts for more than 3 seconds a danger warning is issued: No driver!
For the abnormal in-frame state: when target detection does not detect a human face but detects one or more of the human body, human eyes or a mask, the driver may be blocking the face with a hand or another object so that the face cannot be detected; the driver is judged to be abnormally in frame, and if the state lasts for more than 3 seconds a warning is issued: Abnormally in frame!
For the prolonged eye-closure state and the frequent blinking state: the percentage of eye-closure time within 3 seconds is counted; when the closure percentage exceeds 70%, the driver is judged to be driving fatigued and a secondary fatigue driving warning is issued, and if the driver is in a fatigue driving state for two consecutive unit times, a primary fatigue driving warning is issued.
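The eye-closure statistics above can be sketched as follows. The 3-second window, the 70% threshold and the two-consecutive-window rule come from the description; the 30 fps frame rate and the function names are illustrative assumptions.

```python
def closure_percentage(closed_flags):
    """Fraction of frames judged eye-closed within one analysis window."""
    return sum(closed_flags) / len(closed_flags)

def fatigue_warnings(closed_flags, fps=30, window_s=3, threshold=0.7):
    """Emit a secondary warning for each 3-second window whose closure
    percentage exceeds 70%, and a primary warning when two consecutive
    windows are both judged fatigued."""
    win = fps * window_s
    warnings = []
    prev_fatigued = False
    for start in range(0, len(closed_flags) - win + 1, win):
        fatigued = closure_percentage(closed_flags[start:start + win]) > threshold
        if fatigued:
            warnings.append("secondary")
            if prev_fatigued:
                warnings.append("primary")
        prev_fatigued = fatigued
    return warnings

# 6 seconds of frames: both 3-second windows are 80% eye-closed
flags = ([1] * 72 + [0] * 18) * 2
print(fatigue_warnings(flags))  # ['secondary', 'secondary', 'primary']
```

This closure percentage over a fixed window is essentially the PERCLOS measure widely used in drowsiness monitoring.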
For the yawning state: if the driver keeps the mouth open continuously for 2.5 seconds, it is judged that the driver is yawning, and a secondary fatigue driving warning is issued.
For the prolonged head-lowering and head-deviation states: when the head pitch angle (Euler angle range -60 to 70 degrees) is larger than 45 degrees, the driver is judged to be in a head-down state; when the absolute value of the yaw angle (range -75 to 75 degrees) is larger than 45 degrees, or the absolute value of the roll angle (range -40 to 40 degrees) is larger than 30 degrees, the driver is judged to be in a head-off state. When the driver stays in a head-down or head-off state for 3 seconds continuously, the driver is judged to be inattentive, and an inattention warning is issued.
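The head-pose judgment above can be sketched as a small classifier. The 45/45/30-degree thresholds and the 3-second persistence rule come from the description; the function names, the degree-based interface and the 30 fps rate are assumptions.

```python
def head_state(pitch_deg, yaw_deg, roll_deg):
    """Classify head pose per the thresholds in the description:
    pitch > 45 deg -> head-down; |yaw| > 45 deg or |roll| > 30 deg -> head-off."""
    if pitch_deg > 45:
        return "head-down"
    if abs(yaw_deg) > 45 or abs(roll_deg) > 30:
        return "head-off"
    return "normal"

def inattention_alarm(states, fps=30, hold_s=3):
    """Raise the inattention warning once a non-normal head state has
    persisted for 3 continuous seconds of frames."""
    run = 0
    for s in states:
        run = run + 1 if s != "normal" else 0
        if run >= fps * hold_s:
            return True
    return False

print(head_state(50, 0, 0), head_state(0, -60, 0), head_state(10, 10, 5))
# head-down head-off normal
print(inattention_alarm(["head-down"] * 90))  # True
```

Requiring a continuous run of frames, rather than a count of isolated frames, prevents a single noisy pose estimate from triggering the alarm.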
Another objective of the present invention is to provide a fatigue detection system based on target and key points, which comprises
The face detection module is used for carrying out face detection on a driver in an input video by adopting target detection and independently taking out a face frame;
the key point detection module is used for detecting the states of eyes and a mouth of a driver by using key point detection and acquiring a detection result;
the state judgment module element is used for analyzing the continuous video frames according to the results of the target detection and the key point detection to obtain the current driving state of the driver;
the human face detection module detects eyes and mouth together and detects the opening and closing states of the eyes and the mouth when detecting a human face;
the face detection module further comprises
The extraction unit is used for extracting single-frame image data from the monitored driver video;
the labeling unit is used for performing data labeling on a human body, a human face, eyes, a mouth, a mask, glasses and a hat in the extracted image data;
the setting unit is used for setting a data training set, a verification set and a test set according to a set proportion and carrying out training and testing;
In the key point detection module, 98 key points are marked on the face during face detection, and the states of the eyes, mouth and head are analyzed through the 98 key points respectively;
the open or closed state of the human eyes in the key point detection is judged by a calculation function, whose formula is:
L_e = (1/(6H)) * sum_{i=1..6} (L_ed,i - L_eu,i)
A threshold K_e is set; when L_e > K_e the eyes are judged to be open, and when L_e <= K_e the eyes are judged to be closed, where L_eu,i is the ordinate of a key point on the upper eyelid, L_ed,i is the ordinate of the corresponding key point on the lower eyelid, and H is the height of the face frame;
the open or closed state of the mouth is judged in the key point detection by the calculation formula:
L_m = (1/(5H)) * sum_{i=1..5} (L_md,i - L_mu,i)
A threshold K_m is set; when L_m > K_m the driver's mouth is judged to be open, and when L_m <= K_m the driver's mouth is judged to be closed, where L_mu,i is the ordinate of a key point on the upper lip, L_md,i is the ordinate of the corresponding key point on the lower lip, and H is the height of the face frame;
the analysis of the head deflection in the key point detection comprises:
a rotation vector solving unit, for solving the rotation vector r = [r_x, r_y, r_z]^T from the key points by using the open-source function solvePnP in OpenCV, the rotation angle being theta;
and a deflection angle solving unit, for calculating the rotation matrix R by the formula (Rodrigues' formula)
R = cos(theta) * I + (1 - cos(theta)) * n n^T + sin(theta) * [n]_x, where n = r/theta and [n]_x is the skew-symmetric matrix of n,
and acquiring the deflection angles in the three directions, namely the pitch angle, the yaw angle and the roll angle, from the rotation matrix;
During target detection, when the vehicle is in a started state but the driver is not in the monitoring picture, a non-driving state is determined according to the face detection result, and if the state lasts longer than a preset time, a non-driving-state danger warning is issued;
during target detection, when the human face is not detected but one or more of the human body, human eyes or a mask is detected, the driver is judged to be abnormally in frame, and if this state lasts longer than a preset time, an abnormal in-frame alarm is issued;
during eye state detection, for the prolonged eye-closure state and the frequent blinking state, the percentage of eye closure within a preset time is counted; when the closure percentage exceeds 70%, the driver is judged to be driving fatigued and a secondary fatigue driving warning is issued, and if the driver is in a fatigue driving state for two consecutive unit times, a primary fatigue driving warning is issued;
during head deflection detection, when the head pitch angle is larger than 45 degrees the driver is judged to be in a head-down state, and when the absolute value of the head yaw angle is larger than 45 degrees or the absolute value of the roll angle is larger than 30 degrees the driver is judged to be in a head-off state; when the driver remains in such a state for a preset continuous time, the driver is judged to be distracted, and an inattention warning is issued;
during mouth state detection, when the driver keeps the mouth open continuously for a specified time, it is judged that the driver is yawning, and a secondary fatigue driving warning is issued.
By analyzing multi-dimensional information, false detections and missed detections caused by changes in the driver's state and in the scene are effectively avoided. The invention can give accurate detection results for different driver states, such as wearing a mask or wearing glasses, by adjusting the training data in the target detection stage; for diversified scenes, the influence of background changes can be eliminated in the target detection stage, and for some particularly complex application scenarios, effective optimization can be performed by adjusting the target detection training data. Meanwhile, the invention classifies and defines the driver states more finely, and can accurately monitor the fatigue state of the driver in real time and issue early warnings.
The fatigue detection method based on the combination of the target detection and the key point detection is described in detail herein, and the above description is only for helping understanding the method and the core idea of the present invention, and should not limit the protection scope of the present invention, and various omissions, substitutions or alterations without departing from the scope of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A fatigue detection method based on targets and key points is characterized by comprising the following steps:
S1, performing face detection on the driver in the input video by target detection and extracting the face frame separately;
S2, detecting the states of the driver's eyes and mouth by key point detection and acquiring the detection results;
and S3, analyzing consecutive video frames according to the results of the target detection and the key point detection to obtain the current driving state of the driver.
2. The method of claim 1, wherein the eyes and mouth are detected together and the open/closed states of the eyes and mouth are detected when the face is detected in step S1.
3. The method for detecting fatigue based on targets and key points as claimed in claim 2, wherein said step S1 further comprises the steps of:
S1, extracting single-frame image data from the driver monitoring video;
S2, labeling the human body, human face, eyes, mouth, mask, glasses and hat in the extracted image data;
and S3, dividing the data into a training set, a verification set and a test set according to a set proportion, and performing training and testing.
4. The method according to claim 3, wherein 98 key points are labeled on the face in the step S2, and the states of the eyes, mouth and head are analyzed by the 98 key points respectively.
5. The method of claim 4, wherein the open/closed state of the eyes in the key point detection is determined by a calculation function, the function being:
L_e = (L_eu - L_ed) / H
A threshold K_e is set: when L_e > K_e, the eyes are determined to be open; when L_e <= K_e, the eyes are determined to be closed, where L_eu is the ordinate of the key point on the upper eyelid, L_ed is the ordinate of the key point on the lower eyelid, and H is the height of the face frame.
6. The method of claim 5, wherein the open/closed state of the mouth in the key point detection is determined by a calculation formula, the formula being:
L_m = (L_mu - L_md) / H
A threshold K_m is set: when L_m > K_m, the driver's mouth is determined to be open; when L_m <= K_m, the driver's mouth is determined to be closed, where L_mu is the ordinate of the key point on the upper lip and L_md is the ordinate of the key point on the lower lip.
7. The method of claim 6, wherein the analysis of head deflection in the key point detection comprises the following steps:
S21, solving, from the key points, the rotation vector r = [r_x r_y r_z]^T, with rotation angle theta, using the open-source function solvePnP in OpenCV;
S22, calculating the rotation matrix R by the formula:
R = cos(theta) * I + (1 - cos(theta)) * r r^T + sin(theta) * [r]_x
where [r]_x is the skew-symmetric cross-product matrix of r, and obtaining the deflection angles in three directions, namely the pitch angle, the yaw angle and the roll angle, from the rotation matrix.
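The formula in step S22 is the standard Rodrigues rotation formula, which OpenCV exposes as `cv2.Rodrigues`. A self-contained NumPy sketch of it, plus one common (ZYX) convention for extracting pitch/yaw/roll, is shown below; the Euler-angle convention is an assumption, since the patent does not specify one:

```python
import numpy as np

def rotation_matrix(r):
    """Rodrigues formula: rotation vector r (angle theta = |r|, unit axis
    n = r/theta) -> R = cos(t) I + (1 - cos(t)) n n^T + sin(t) [n]_x.
    Equivalent to cv2.Rodrigues(r)[0]."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    n = r / theta
    K = np.array([[0, -n[2], n[1]],          # skew-symmetric [n]_x
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * K)

def euler_angles(R):
    """Pitch, yaw, roll in degrees, assuming a ZYX decomposition
    (conventions differ between implementations)."""
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))  # about x
    yaw = np.degrees(np.arcsin(-R[2, 0]))             # about y
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # about z
    return pitch, yaw, roll
```

In practice `cv2.solvePnP` produces the rotation vector from 3D face-model points and their 2D key-point projections, and the angles above feed the 45/30-degree thresholds of claim 9.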
8. The method of claim 7, wherein the driving state comprises a plurality of states selected from a non-driving state, an abnormal entry state, a normal driving state, a prolonged eye-closing state, a frequent blinking state, a yawning state, a prolonged head-lowering state, and a prolonged head-tilting state.
9. The method of claim 8, wherein, during target detection, when the vehicle has already started but the driver is not in the monitoring picture, a non-driving state is determined from the face detection result, and a non-driving-state danger warning is issued when this state lasts longer than a preset time;
during target detection, when no face is detected but one or more of a human body, human eyes or a mask is detected, it is determined that the driver has not entered the vehicle normally, and an abnormal entry alarm is issued if this state lasts longer than a preset time;
during eye state detection, for the prolonged eye-closing state and the frequent blinking state, the percentage of time the eyes are closed within a preset time is counted; when the closure percentage exceeds 70%, the driver is determined to be driving while fatigued and a secondary fatigue-driving warning is issued; if the driver is in the fatigue-driving state for two consecutive unit times, a primary fatigue-driving warning is issued;
during head deflection detection, when the head pitch angle is greater than 45 degrees, the driver is determined to be in a head-lowered state; when the absolute value of the head yaw angle is greater than 45 degrees or the absolute value of the roll angle is greater than 30 degrees, the driver is determined to be in a head-tilted state; when the driver remains in either state continuously for a preset time, the driver is determined to be distracted and a distraction warning is issued;
during mouth state detection, when the driver's mouth remains open continuously for a specified time, the driver is determined to be yawning and a secondary fatigue-driving warning is issued.
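The eye-closure rule of claim 9 is a PERCLOS-style check: the fraction of closed-eye frames within a unit time window, with escalation across consecutive windows. A minimal sketch, with warning-level names that are illustrative rather than taken from the patent:

```python
def fatigue_warning(closed_flags, perclos_threshold=0.7):
    """One unit-time window: closed_flags is a list of booleans
    (True = eyes closed in that frame).  Closure ratio > 70% ->
    secondary fatigue-driving warning, as in claim 9."""
    ratio = sum(closed_flags) / len(closed_flags)
    return "secondary" if ratio > perclos_threshold else "none"

def escalate(window_results):
    """Two consecutive fatigue windows -> primary warning (claim 9)."""
    if any(a == "secondary" and b == "secondary"
           for a, b in zip(window_results, window_results[1:])):
        return "primary"
    return "secondary" if "secondary" in window_results else "none"
```

The head-angle and mouth-open rules would be implemented analogously: a per-frame predicate plus a duration counter against the preset time.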
10. A fatigue detection system based on targets and key points, characterized by comprising
the face detection module, used for performing face detection on the driver in an input video using target detection and extracting the face frame separately;
the key point detection module, used for detecting the states of the driver's eyes and mouth using key point detection and obtaining a detection result;
the state judgment module element is used for analyzing the continuous video frames according to the results of the target detection and the key point detection to obtain the current driving state of the driver;
the human face detection module detects eyes and mouth together and detects the opening and closing states of the eyes and the mouth when detecting a human face;
the face detection module further comprises
The extraction unit is used for extracting single-frame image data from the monitored driver video;
the labeling unit is used for performing data labeling on a human body, a human face, eyes, a mouth, a mask, glasses and a hat in the extracted image data;
the setting unit is used for setting a data training set, a verification set and a test set according to a set proportion and carrying out training and testing;
98 key points are marked on the face in the face detection in the key point detection module, and the states of eyes, mouth and head are respectively analyzed through the 98 key points;
the open-close state of human eyes in the key point detection is judged through a calculation function, and the function formula is as follows:
Figure FDA0003186337530000041
setting a threshold KeWhen L is presente>KeWhen the eyes are open, when Le≤KeIs determined that the eyes are closed, wherein LeuIs the ordinate, L, of a key point on the upper eyelidedH is the height of the face frame, which is the ordinate of the key point on the lower eyelid;
the open and close state of the mouth is judged by using a calculation formula in the key point detection, and the calculation formula comprises the following steps:
Figure FDA0003186337530000051
setting a threshold KmWhen L is presentm>KmWhen the driver's mouth is open, when Lm≤KmIt is determined that the driver's mouth is closed, wherein LmuIs the ordinate, L, of a key point on the upper lipmdIs the ordinate of the key point on the lower lip;
the analysis of the head deflection in the key point detection comprises
A rotation vector solving unit for solving the rotation vector r ═ r [ r ] from the key points by using the open source function solvePnP in OpenCVx ry rz]TThe rotation angle is theta;
and the deflection angle solving unit is used for calculating a rotation matrix R through a calculation formula:
Figure FDA0003186337530000052
acquiring deflection angles in three directions, namely a pitch angle pitch, a yaw angle yaw and a roll angle roll through a rotation matrix;
during target detection, when the vehicle has already started but the driver is not in the monitoring picture, a non-driving state is determined from the face detection result, and a non-driving-state danger warning is issued when this state lasts longer than a preset time;
during target detection, when no face is detected but one or more of a human body, human eyes or a mask is detected, it is determined that the driver has not entered the vehicle normally, and an abnormal entry alarm is issued if this state lasts longer than a preset time;
during eye state detection, for the prolonged eye-closing state and the frequent blinking state, the percentage of time the eyes are closed within a preset time is counted; when the closure percentage exceeds 70%, the driver is determined to be driving while fatigued and a secondary fatigue-driving warning is issued; if the driver is in the fatigue-driving state for two consecutive unit times, a primary fatigue-driving warning is issued;
during head deflection detection, when the head pitch angle is greater than 45 degrees, the driver is determined to be in a head-lowered state; when the absolute value of the head yaw angle is greater than 45 degrees or the absolute value of the roll angle is greater than 30 degrees, the driver is determined to be in a head-tilted state; when the driver remains in either state continuously for a preset time, the driver is determined to be distracted and a distraction warning is issued;
during mouth state detection, when the driver's mouth remains open continuously for a specified time, the driver is determined to be yawning and a secondary fatigue-driving warning is issued.
CN202110862783.4A 2021-07-29 2021-07-29 Fatigue detection method and system based on target and key points Pending CN113642426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110862783.4A CN113642426A (en) 2021-07-29 2021-07-29 Fatigue detection method and system based on target and key points


Publications (1)

Publication Number Publication Date
CN113642426A true CN113642426A (en) 2021-11-12

Family

ID=78418869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110862783.4A Pending CN113642426A (en) 2021-07-29 2021-07-29 Fatigue detection method and system based on target and key points

Country Status (1)

Country Link
CN (1) CN113642426A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017193272A1 (en) * 2016-05-10 2017-11-16 深圳市赛亿科技开发有限公司 Vehicle-mounted fatigue pre-warning system based on human face recognition and pre-warning method
CN108363968A (en) * 2018-01-31 2018-08-03 上海瀚所信息技术有限公司 A kind of tired driver driving monitoring system and method based on key point extraction
CN108875642A (en) * 2018-06-21 2018-11-23 长安大学 A kind of method of the driver fatigue detection of multi-index amalgamation
CN112016429A (en) * 2020-08-21 2020-12-01 高新兴科技集团股份有限公司 Fatigue driving detection method based on train cab scene


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
董超俊; 林庚华; 吴承鑫; 黄尚安: "Fatigue driving detection based on a convolutional experts neural network", Computer Engineering and Design, no. 10, 16 October 2020 (2020-10-16), pages 2812-2817 *
郑伟成; 李学伟; 刘宏哲; 代松银: "Fatigue driving detection algorithm based on deep learning", Computer Engineering, vol. 46, no. 07, 15 July 2020 (2020-07-15), pages 21-29 *

Similar Documents

Publication Publication Date Title
CN104616438B (en) A kind of motion detection method of yawning for fatigue driving detection
CN108960065B (en) Driving behavior detection method based on vision
CN104637246B (en) Driver multi-behavior early warning system and danger evaluation method
CN106295551B (en) A kind of personnel safety cap wear condition real-time detection method based on video analysis
CN104063722B (en) A kind of detection of fusion HOG human body targets and the safety cap recognition methods of SVM classifier
CN109002801B (en) Face shielding detection method and system based on video monitoring
CN110728223A (en) Helmet wearing identification method based on deep learning
CN112016457A (en) Driver distraction and dangerous driving behavior recognition method, device and storage medium
CN106056079A (en) Image acquisition device and facial feature occlusion detection method
CN110837784A (en) Examination room peeping cheating detection system based on human head characteristics
CN110633612B (en) Monitoring method and system for inspection robot
CN112364778A (en) Power plant safety behavior information automatic detection method based on deep learning
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN116259002A (en) Human body dangerous behavior analysis method based on video
CN112613449A (en) Safety helmet wearing detection and identification method and system based on video face image
CN113887386B (en) Fatigue detection method based on multi-feature fusion of deep learning and machine learning
CN113688759A (en) Safety helmet identification method based on deep learning
CN113642426A (en) Fatigue detection method and system based on target and key points
CN112926364B (en) Head gesture recognition method and system, automobile data recorder and intelligent cabin
CN112528767A (en) Machine vision-based construction machinery operator fatigue operation detection system and method
CN115937829A (en) Method for detecting abnormal behaviors of operators in crane cab
CN109145684B (en) Head state monitoring method based on region best matching feature points
CN116152945A (en) Under-mine inspection system and method based on AR technology
CN111274888B (en) Helmet and work clothes intelligent identification method based on wearable mobile glasses
Liu et al. Design and implementation of multimodal fatigue detection system combining eye and yawn information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination