CN112016429A - Fatigue driving detection method based on train cab scene - Google Patents

Fatigue driving detection method based on train cab scene

Info

Publication number
CN112016429A
CN112016429A (application CN202010850574.3A)
Authority
CN
China
Prior art keywords
face
mouth
eye
closing
opening
Prior art date
Legal status
Pending
Application number
CN202010850574.3A
Other languages
Chinese (zh)
Inventor
朱婷婷
林焕凯
董振江
王祥雪
黄仝宇
刘双广
Current Assignee
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Gosuncn Technology Group Co Ltd
Priority to CN202010850574.3A
Publication of CN112016429A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention belongs to the technical field of face detection and relates to a fatigue driving detection method for the train cab scene. A deep learning method first detects the face and then locates 68 facial key points. From these 68 points the method computes the eyelid distances in the eye region and the lip distances in the mouth region, and converts the points into Euler angles to estimate head turning and related states, giving a first-stage fatigue judgment over three states. Visual features of the eye region are then extracted to build a classification model for open and closed eyes, improving discriminative power; yawning is judged by combining this with the open/closed features of the lip region. Whether the driver is fatigued is finally decided from the frequency of eye closure and yawning, which reduces misjudgments caused by any single feature and the false alarms that result from them.

Description

Fatigue driving detection method based on train cab scene
Technical Field
The invention belongs to the technical field of face detection, and particularly relates to a fatigue driving detection method based on a train cab scene.
Background
With rapid economic development and the large-scale construction of urban roads, the mileage of railways, high-speed rail lines and highways grows year by year, and the incidence of traffic accidents rises with it. At the same time, the fast pace of modern life, frequent late nights and lack of exercise leave drivers prone to fatigue, making fatigue driving a leading cause of traffic accidents.
To prevent fatigue driving effectively, researchers at home and abroad have studied detection methods using different technologies. By the fatigue features used, existing methods fall into two categories: methods based on physiological features and methods based on behavioural features. Physiological features include physiological signals and physiological reactions. Among signal-based methods, a sleep detection model built on electroencephalogram (EEG) signals can correctly distinguish the awake state from sleep, but EEG signals are easily corrupted by noise and difficult to collect; electrooculogram (EOG) signals are easier to collect and more tolerant of slight noise, yet still require head-mounted equipment for acquisition. Behaviour-based methods use a remote video camera to capture and monitor three physiological characteristics in real time: eyelid movement, face orientation and gaze direction. External-sensor methods embed distributed pressure sensors in the steering wheel to measure the pressure and positions of the hands on the wheel, issuing fatigue warnings from statistical analysis of the sensor readings. Lane-line detection methods use a camera to capture the road ahead and judge, with reference to the white lane markings, whether the vehicle is travelling normally and hence whether it is out of control. Methods of this kind need no direct contact between the driver and the detection device, add little equipment beyond what the vehicle already has, are highly practical and easy to popularise, but are limited by road conditions and vehicle models. The prior art therefore has problems in safety, real-time performance and interference: for example, real-time performance is poor, the driver is easily disturbed, and the detection error rate is high.
With the recent success of deep learning in fields such as object detection, deep learning techniques have also advanced research on fatigue driving detection. For example, CN201810368035.9 proposes a fatigue detection regression model that uses a convolutional neural network to learn feature representations without supervision, replacing hand-crafted feature extraction, and applies an LDS (Linear Dynamical System) method in post-processing to suppress adverse interference. CN2017215451667 extracts discriminative information about hand position with a convolutional neural network to learn and predict four safe and unsafe driving postures. CN108309331A proposes an eye-state recognition method based on a convolutional neural network to compute blink frequency, but its fatigue parameter is too limited. Research shows that visual feature representations learned directly from images with deep networks are more robust than hand-designed features to changes in illumination, pose and other conditions, and clearly improve prediction accuracy.
Detection methods based on physiological signals are accurate, but the driver must be in direct physical contact with the acquisition equipment, which interferes with driving; the equipment is also expensive, so these methods suit the laboratory rather than practical application. Detection methods based on behavioural features need no direct contact, add little equipment to the vehicle, are practical and easy to popularise, but they are limited by the driver's personal habits, road conditions and vehicle model, and their accuracy drops in rain, snow or unsatisfactory road conditions. Computer-vision methods that process physiological reactions can be popularised more easily while keeping reasonable accuracy and good real-time performance, but hand-designed features discriminate poorly and can fail for particular individuals: when a driver wears glasses, reflections may prevent the camera from capturing eye movement; the degree of eye opening varies from person to person; and irregular head movement can produce false alarms.
Disclosure of Invention
To address the above defects, the invention provides a fatigue driving detection method based on a train cab scene.
The invention is realized by the following technical scheme:
a fatigue driving detection method based on a train cab scene comprises the following steps:
s0: acquiring a face image, and acquiring the face image under a train cab scene through image acquisition equipment;
s1: detecting a human face;
s2: detecting key points of the human face, namely acquiring the key points of the human face image by using a convolutional neural network;
s3: judging the face state of the current frame;
s4: performing multi-frame face state statistics based on the judgment result obtained in the step S3;
s5: and judging whether the set frame number is reached, if so, outputting the states of the mouth, the eyes and the face, and if not, returning to the step S1.
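As a rough illustration (not taken from the patent text), steps S0 to S5 can be sketched as a per-frame loop with pluggable models; every function name below is an illustrative assumption:

```python
from typing import Callable, List, Optional

def analyze_frame(frame,
                  detect_face: Callable,
                  get_landmarks: Callable,
                  judge_state: Callable) -> Optional[dict]:
    """One pass of S1-S3: detect the face, locate the key points,
    and judge the current frame's eye/mouth/head state."""
    face = detect_face(frame)            # S1: face detection
    if face is None:
        return None                      # no face: nothing to judge
    landmarks = get_landmarks(face)      # S2: 68 key points
    return judge_state(face, landmarks)  # S3: per-frame state judgment

def collect_states(frames, analyze: Callable, target_count: int = 8) -> List[dict]:
    """S4/S5: accumulate per-frame judgments until the set frame number is
    reached; the caller then outputs the mouth/eye/face statistics."""
    states: List[dict] = []
    for frame in frames:
        state = analyze(frame)
        if state is not None:
            states.append(state)
        if len(states) >= target_count:
            break
    return states
```

The callables are injected so the skeleton stays independent of any particular detector or landmark model.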
Further, step S2.1 is included after step S2: aligning the face, namely calculating a similarity transformation matrix between the two point sets by using the positions of five key points in the facial-feature region, and aligning the face by using the similarity transformation matrix to obtain the aligned face.
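The patent does not name the algorithm for the similarity transformation; a common choice for five-point alignment is the Umeyama least-squares similarity transform, sketched here in plain NumPy:

```python
import numpy as np

def similarity_transform(src, dst):
    """Umeyama least-squares similarity transform (scale, rotation,
    translation) mapping src (N,2) onto dst (N,2); returns a 2x3 matrix.
    A sketch of S2.1's alignment step, not the patented implementation."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)           # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt                             # optimal rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = mu_d - scale * R @ mu_s                # optimal translation
    return np.hstack([scale * R, t[:, None]])
```

In practice `src` would be the five detected landmarks and `dst` a fixed template of canonical eye, nose and mouth-corner positions; the warp itself would be applied with an image library.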
Further, the step S3 further includes:
s3.1: detecting eye opening and closing, namely cutting out the left eye region and the right eye region from the face region aligned in step S2.1; a first classification model performs open/closed-eye classification on the eye regions of the current face picture and outputs the open or closed state of the left and right eyes;
s3.2: detecting mouth opening and closing, namely cutting out the face mouth region from the face region aligned in step S2.1; a second classification model performs open/closed-mouth judgment on the mouth region and outputs the open or closed mouth state;
s3.3: analyzing the head deflection angle, namely extracting the 68 key points of the human face in the current picture through step S2, selecting 14 of the points to calculate the rotation Euler angles of the current face, comparing them with the set head deflection angle thresholds, and judging whether the current face shows inattention;
s3.4: analyzing eye opening and closing, namely, within the head deflection angle range set in step S3.3, selecting the centre points of the left and right eye regions from the 68 key points of the current face picture extracted in step S2, and respectively calculating the ratio based on the Euclidean distances from the upper and lower eyelids of the left eye to the left-eye centre point, and likewise for the right eye; each ratio is compared with the set eye opening/closing threshold to judge the open or closed state of the left and right eyes;
s3.5: analyzing mouth opening and closing, namely, within the head deflection angle range set in step S3.3, selecting the centre point of the mouth region from the 68 key points of the current face picture extracted in step S2, calculating the ratio based on the Euclidean distances from the upper and lower lips to the mouth-region centre point, comparing it with the set mouth opening/closing threshold, and judging the open or closed state of the mouth.
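The exact centre-point ratio of S3.4/S3.5 is ambiguous in this translation. A widely used geometric stand-in, shown here only to illustrate the idea of thresholding an eyelid/lip distance ratio, is the eye aspect ratio (EAR) over the iBUG 68-point layout (eye landmarks 36-41 and 42-47, inner mouth 60-67); note it uses different thresholds than the patent's own ratio:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR of one eye given its six landmarks in iBUG order p1..p6:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Small values indicate a closed
    eye. This is a common substitute, not the patented formula."""
    eye = np.asarray(eye, float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical eyelid distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical eyelid distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye-corner distance
    return (v1 + v2) / (2.0 * h)

def mouth_aspect_ratio(mouth):
    """Analogous vertical/horizontal ratio over the eight inner-mouth
    landmarks; large values indicate a wide-open (yawning) mouth."""
    mouth = np.asarray(mouth, float)
    v = np.linalg.norm(mouth[2] - mouth[6])  # upper to lower inner lip
    h = np.linalg.norm(mouth[0] - mouth[4])  # mouth corner to corner
    return v / h
```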
Further, in step S1, libfacedetection is adopted as the face detection model.
Further, in step S2, face key points are extracted using mobilenet_v2.
Further, in step S3.4, each ratio is compared with the set eye opening/closing threshold to determine the open or closed state of the left and right eyes, specifically: if the ratio is smaller than the set threshold the state is eye-closed, and if it is larger the state is eye-open;
the result is then combined with that of the eye opening and closing detection in step S3.1 by an AND operation: the final state is eye-closed only when both indicate eye closure.
Further, in step S3.5, the ratio is compared with the set mouth opening/closing threshold to determine the open or closed state of the mouth, specifically: if the ratio is greater than the threshold the mouth is open, and if it is less than the threshold the mouth is closed;
the result is then combined with that of the mouth opening and closing detection in step S3.2 by an AND operation: the mouth is judged open only when both indicate the open-mouth state.
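The two AND fusions amount to a couple of lines. The default thresholds below are the values the embodiment quotes later (2.08 for the eyes, 2.22 for the mouth); the function names are illustrative:

```python
def fuse_eye_closed(ratio: float, cnn_closed: bool, threshold: float = 2.08) -> bool:
    """Final eye state (S3.4 AND S3.1): closed only if the geometric
    ratio falls below the threshold AND the classifier says closed."""
    return (ratio < threshold) and cnn_closed

def fuse_mouth_open(ratio: float, cnn_open: bool, threshold: float = 2.22) -> bool:
    """Final mouth state (S3.5 AND S3.2): open only if the ratio
    exceeds the threshold AND the classifier says open."""
    return (ratio > threshold) and cnn_open
```

Requiring both cues to agree is what reduces the misjudgment of a single feature, e.g. a naturally small eye that the ratio test alone would call closed.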
Further, after step S2.1, the method further comprises:
s2.2: extracting face features, namely extracting the face features from the images aligned in step S2.1 based on a face feature extraction model to obtain a face feature extractor;
s2.3: comparing faces, namely judging whether the person driving the train is the registered driver, and recording the first login time as the driver's attendance statistics.
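Step S2.3's first-login bookkeeping might be kept per driver per day; this sketch (structure and names are assumptions, not from the patent) retains only the earliest successful match of each day:

```python
import datetime

class AttendanceLog:
    """Record the first successful face match per driver per day and
    use it as the attendance timestamp, as described in S2.3."""
    def __init__(self):
        self._first = {}  # (driver_id, date) -> first login datetime

    def record(self, driver_id: str, when: datetime.datetime) -> datetime.datetime:
        key = (driver_id, when.date())
        # setdefault keeps the earliest login; later matches are ignored
        self._first.setdefault(key, when)
        return self._first[key]
```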
A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, performs the steps of the fatigue driving detection method based on a train cab scene.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the fatigue driving detection method based on a train cab scene.
The invention provides a deep-learning-based method for detecting fatigue driving from the face state of a driver over single or multiple frames. To improve overall detection accuracy and real-time performance, different optimisation strategies are adopted in model training, compression and method implementation, so that the performance of fatigue-state judgment is markedly improved. Compared with the prior art, the invention has at least the following beneficial effects or advantages:
1. The invention judges fatigue comprehensively from the head deflection angle of the face, the closing of the eyes and the degree of mouth opening; in addition, fatigue driving is judged from statistics over multiple frames of data, improving the accuracy of the judgment.
2. The method adopts MobileNet_v2 as the backbone network for key point detection, increasing the weights of the facial-feature regions and improving the accuracy of key point detection in the eye region. Meanwhile, to guarantee real-time detection, the key point detection model is pruned and quantization-compressed to different degrees, further increasing detection speed.
3. To improve the accuracy of open/closed-eye detection in the fatigue state, the eye region is cropped according to the face key point detection model, an open/closed-eye classification model for the eye region is added, and eye closure is judged by combining the eye key-point distance calculation with the classification result of the classification model.
4. To improve the accuracy of yawning detection in the mouth region, the mouth region is cropped according to the face key point detection model, an open/closed-mouth classification model for the mouth region is added, and the yawning judgment combines the mouth-region distance calculation with the classification result.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings.
fig. 1 is a flow chart of face fatigue state detection based on video multi-frame.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment discloses a fatigue driving detection method based on deep learning. The method mainly comprises the following steps:
S1, acquiring a face image: the face image in the train cab scene is acquired through image acquisition equipment.
In this embodiment, pictures may be captured by camera equipment installed in the train cab, and the faces in the captured pictures detected with the libfacedetection face detection model to obtain the face image. In addition, this embodiment uses the ONet network of MTCNN as a confidence filtering model: the libfacedetection results are passed through ONet to reduce the false detection rate of the face detection.
S2, detecting face key points: key point detection is performed on the face image obtained in step S1 through a face key point detection model to output the face key points. The number of face key points in this step is 68. The detection model may use MobileNet_v2, with different weights for the facial-feature region and the contour region. Those skilled in the art will appreciate that although this embodiment uses the MobileNet_v2 model, other face key point detection models are also applicable.
S3, aligning the face: after the key point detection of S2 has been performed on the face image, a similarity transformation matrix between the two point sets is computed from the five key point positions of the facial features, and the face is aligned with this matrix to obtain the aligned face.
S4, open/closed-eye detection: the left and right eye regions are cropped from the face region aligned in S3, and open-eye and closed-eye pictures are organised as two classes without distinguishing left from right eyes. Here a closed-eye picture is one in which the whole eye region is closed. A face open/closed-eye classification model is trained with the pruned MobileNet_v2 as the backbone network. The classification model judges the eye region of the current face picture and outputs the open/closed state of the left and right eyes, where 0 denotes the open-eye state and 1 the closed-eye state.
S5, open/closed-mouth detection: the face mouth region is cropped from the face region aligned in S3, and open-mouth and closed-mouth pictures are organised as two classes, where a closed-mouth picture shows the mouth region in the closed state. A face open/closed-mouth classification model is trained with the pruned MobileNet_v2 as the backbone network. The classification model judges the face mouth region and outputs the open/closed state, where 0 denotes open mouth and 1 denotes the closed-mouth state.
S6, head deflection angle analysis: the 68 face key points of the current picture are extracted with S2, and 14 of them are selected to calculate the rotation Euler angles of the current face. Thresholds for a large-angle face are set: when the yaw angle of the face exceeds ±60 degrees, the pitch angle exceeds ±30 degrees, or the roll angle exceeds ±30 degrees, the head deflection is judged too large and the face currently inattentive.
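The large-angle gate of S6 reduces to a range check on the three Euler angles. The ±60°/±30°/±30° limits are the embodiment's; obtaining the angles from the 14 selected points would typically use a PnP solve against a 3D face template, which is omitted here as the patent does not detail it:

```python
def head_attentive(yaw: float, pitch: float, roll: float,
                   yaw_lim: float = 60.0,
                   pitch_lim: float = 30.0,
                   roll_lim: float = 30.0) -> bool:
    """Return False (inattentive) when any head Euler angle, in degrees,
    leaves the allowed range set in S6."""
    return (abs(yaw) <= yaw_lim
            and abs(pitch) <= pitch_lim
            and abs(roll) <= roll_lim)
```

Per S7/S8, the eye and mouth ratio analyses are only run on frames that pass this gate.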
S7, open/closed-eye analysis: within the head deflection angle range set in S6, the 68 face key points extracted in S2 are used to compute, for the left eye, the centre point of the left eye region and the ratio based on the Euclidean distances from the upper and lower eyelids to that centre point, and likewise for the right eye. The smaller of the left-eye and right-eye ratios is compared with the set open/closed-eye threshold (2.08): below the threshold is the closed-eye state 1, above it the open-eye state 0.
Here, to improve the accuracy of the open/closed-eye judgment (threshold comparison alone easily misjudges when the eye opening is small, for example for faces with naturally smaller eyes), the decision is made jointly with the eye state judged by the S4 open/closed-eye model, as follows: the threshold-based open/closed-eye analysis above is combined with the classification result of the S4 model by an AND operation, and the final human-eye state is closed only when both indicate eye closure.
S8, open/closed-mouth analysis: within the head deflection angle range set in S6, the 68 face key points extracted in S2 are used to select the centre point of the mouth region and calculate the ratio based on the Euclidean distances from the upper and lower lips to that centre point. The ratio is compared with the set open/closed-mouth threshold (2.22): above the threshold is the open-mouth state 1, below it the closed-mouth state 0. This mouth-region threshold is set larger because the mouth opens wide when a person yawns. To improve the accuracy of the mouth-region judgment, the result is combined with that of the S5 open/closed-mouth detection model by an AND operation, and the mouth is judged open only when both values are 1.
Combining the above eight steps, fatigue driving detection analysis can be performed on a single-frame face picture, the three analysis states can be obtained correctly in real time, and the three states can be reported to background management personnel for supervision.
S9, multi-frame face state analysis: S1 to S8 analyse the single-frame face state; when the train cab collects a long video sequence, the multi-frame states must be counted. The statistical process is shown in fig. 1: each frame is read in video-sequence order and analysed as a single frame, and the final result for each picture is stored in a buffer queue whose size is an 8-frame face-state storage area. Eight frames of fatigue-state results are cached until the queue is full, after which the state value of each frame is read and counted from the head of the queue. When more than five frames of faces show closed eyes, yawning or inattention, an alarm is raised; the states of subsequent frames are appended and analysed at the tail of the queue, and the queue outputs the face-state statistics over consecutive frames.
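The 8-frame buffer of S9 can be sketched with a bounded deque: an alarm fires once the window is full and more than five of the buffered frames carry a fatigue indicator (closed eyes, yawning, or inattention). The class name and interface are illustrative:

```python
from collections import deque

class FatigueWindow:
    """Sliding buffer of per-frame fatigue flags, per S9: size 8,
    alarm when more than five buffered frames are flagged."""
    def __init__(self, size: int = 8, alarm_over: int = 5):
        self.states = deque(maxlen=size)  # oldest frame drops automatically
        self.alarm_over = alarm_over

    def push(self, fatigued: bool) -> bool:
        """Append one frame's flag; return True when the full window
        holds more than `alarm_over` fatigued frames."""
        self.states.append(bool(fatigued))
        full = len(self.states) == self.states.maxlen
        return full and sum(self.states) > self.alarm_over
```

Because `deque(maxlen=8)` discards the oldest entry on each new push, the same object also serves the "subsequent frames appended at the tail" behaviour without extra bookkeeping.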
On a Rockchip RK3399 board, the method of this embodiment judges the overall fatigue state with an accuracy above 96% and runs at 25 to 30 frames per second, which meets the real-time requirement of the train cab.
This embodiment comprehensively judges, from the head deflection angle of the face, the closing of the eyes and the degree of mouth opening, whether the driver is the registered person and whether the driver is in a fatigue state; in addition, fatigue driving is judged from statistics over multiple frames of data, improving the accuracy of the judgment.
This embodiment detects the driver's fatigue state with a deep learning method. The basic steps are: first detect the face and its 68 key points; from the 68 points, calculate the eyelid distances in the eye region and the lip distances; and convert the 68 points into Euler angles to calculate the head turning, giving the first-stage fatigue judgment. Visual features of the eye region are then extracted to build a classification model for open and closed eyes, improving discriminative power; yawning is judged by combining the open/closed features of the lip region; and whether the driver is fatigued is finally decided from the frequency of eye closure and yawning.
In addition, in terms of real-time performance, the face detection, key point detection, eye-region and mouth-region models are quantization-compressed to different degrees, so the overall detection speed is greatly increased compared with traditional methods, reaching 25 to 30 fps, which basically meets the system's real-time requirement.
Preferably, after step S3, the method further includes:
S31: extracting face features: a face feature extraction model is trained with ArcFace on the images aligned in S3, compressing the intra-class distances of the face features and enlarging the inter-class distances to extract more discriminative features. The loss function may adopt the currently mainstream softmax and its variants for face feature extraction (A-Softmax, ArcFace, CosFace and the like), yielding the face feature extractor.
S32: comparing the faces, and establishing a standard test library of the driving faces, wherein the standard test library of the driving faces comprises face sample images of all postures under all scenes and at all angles of a cab as much as possible; and testing the currently input face image by using the face feature extractor obtained in the step S31 by using the test library, taking the cosine similarity as an evaluation index, analyzing the test result, judging whether the driver driving the train is the driver himself or not, recording the first login time, and taking the time as the attendance statistics of the driver.
The invention also provides a computer-readable storage medium having a computer program stored thereon, wherein the program realizes the steps of the fatigue driving detection method when executed by a processor.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the fatigue driving detection method when executing the program.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention are also within the protection scope of the invention.

Claims (10)

1. A fatigue driving detection method based on a train cab scene is characterized by comprising the following steps:
S0: acquiring a face image, wherein the face image in the train cab scene is acquired by an image acquisition device;
S1: detecting a human face;
S2: detecting key points of the human face, wherein key points of the face image are acquired by using a convolutional neural network;
S3: judging the face state of the current frame;
S4: performing multi-frame face state statistics based on the judgment result obtained in step S3;
S5: judging whether the set frame number is reached; if so, outputting the states of the mouth, the eyes and the face; if not, returning to step S1.
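The S0-S5 loop of claim 1 amounts to accumulating per-frame states over a fixed window. A sketch under the assumption that steps S1-S3 have already produced one state dict per frame; the dict keys, window size and returned ratio names are illustrative assumptions:

```python
def run_detection(frame_states, window):
    # frame_states: iterable of per-frame judgement results (step S3),
    # e.g. {"eyes": "closed", "mouth": "open"}.
    counts = {"eyes_closed": 0, "mouth_open": 0, "total": 0}
    for state in frame_states:
        counts["total"] += 1
        if state["eyes"] == "closed":
            counts["eyes_closed"] += 1
        if state["mouth"] == "open":
            counts["mouth_open"] += 1
        if counts["total"] >= window:  # S5: set frame number reached
            return {
                "eye_closure_ratio": counts["eyes_closed"] / window,
                "yawn_ratio": counts["mouth_open"] / window,
            }
    return None  # window not yet filled; caller loops back to S1
```

High eye-closure or yawn ratios over the window would then be the multi-frame evidence of fatigue that step S4 aggregates.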
2. The method for detecting fatigue driving in a train cab-based scenario according to claim 1, comprising step S2.1 after step S2: and (3) aligning the face, calculating a similarity transformation matrix between the two points by using the positions of five key points in the five sense organ region, and aligning the face by using the similarity transformation matrix to obtain the aligned face.
3. The fatigue driving detection method based on a train cab scene according to claim 2, wherein said step S3 further comprises:
S3.1: eye open/close detection, wherein a left-eye region and a right-eye region are cut out from the face region aligned in step S2.1; open/close classification is performed on the eye regions of the current face picture with a first classification model, and the open or closed state of the left eye and the right eye is output;
S3.2: mouth open/close detection, wherein the mouth region is cut out from the face region aligned in step S2.1; open/close judgment is performed on the mouth region with a second classification model, and the open or closed state of the mouth is output;
S3.3: head deflection angle analysis, wherein the 68 facial key points of the current picture are extracted by means of step S2, 14 of them are selected to calculate the rotation Euler angles of the current face, these angles are compared with the set head deflection angle, and it is judged whether the driver's attention has deviated;
S3.4: eye open/close analysis, wherein, within the head deflection angle range set in step S3.3, the centre points of the left-eye and right-eye regions are selected from the 68 key points extracted in step S2, and the ratio of the Euclidean distances between the upper and lower eyelids and the eye centre point is calculated for the left eye and for the right eye respectively; each ratio is compared with the set eye open/close threshold to judge the open or closed state of the left and right eyes;
S3.5: mouth open/close analysis, wherein, within the head deflection angle range set in step S3.3, the centre point of the mouth region is selected from the 68 key points extracted in step S2, the ratio of the Euclidean distances between the upper and lower lips and the mouth centre point is calculated, this ratio is compared with the set mouth open/close threshold, and the open or closed state of the mouth is judged.
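The geometric tests of steps S3.4 and S3.5 are both scale-normalised openness ratios. The sketch below normalises the lid/lip gap by the corner-to-corner width, a common variant, since the claim's centre-point phrasing is ambiguous in translation; the 0.2 and 0.6 thresholds are illustrative assumptions:

```python
import math

def dist(p, q):
    # Euclidean distance between two 2-D landmark points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def openness_ratio(upper, lower, corner_left, corner_right):
    # Vertical gap divided by horizontal extent, so the ratio is
    # invariant to face scale and camera distance.
    return dist(upper, lower) / dist(corner_left, corner_right)

def eye_state(upper, lower, left, right, threshold=0.2):
    # Small ratio -> eyelids nearly touching -> closed.
    return "closed" if openness_ratio(upper, lower, left, right) < threshold else "open"

def mouth_state(upper, lower, left, right, threshold=0.6):
    # Large ratio -> lips far apart -> open (possible yawn).
    return "open" if openness_ratio(upper, lower, left, right) > threshold else "closed"
```

In practice the landmark points would come from the 68-point detector of step S2, with the test skipped whenever the head deflection angle of step S3.3 is exceeded.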
4. The fatigue driving detection method based on a train cab scene according to claim 1, wherein libfacedetection is adopted as the face detection model in step S1.
5. The fatigue driving detection method based on a train cab scene according to claim 1, wherein in step S2, face key points are extracted using MobileNet_v2.
6. The fatigue driving detection method based on a train cab scene according to claim 3, wherein in step S3.4, the ratios are compared with the set eye open/close threshold to determine the open or closed state of the left and right eyes, specifically: if the ratio is smaller than the set threshold, the eye is judged to be closed, and if the ratio is larger than the threshold, the eye is judged to be open;
this result is then combined with the result of the eye open/close detection of step S3.1 by an AND operation, and the final closed-eye state is output only when both results indicate the closed state.
7. The fatigue driving detection method based on a train cab scene according to claim 3, wherein in step S3.5, the calculated ratio is compared with the set mouth open/close threshold to determine the open or closed state of the mouth, specifically: if the ratio is larger than the threshold, the mouth is judged to be open, and if the ratio is smaller than the threshold, the mouth is judged to be closed;
this result is then combined with the result of the mouth open/close detection of step S3.2 by an AND operation, and the mouth is judged to be in the open state only when both results indicate the open state.
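Claims 6 and 7 fuse the CNN classifier branch with the key-point ratio branch by a logical AND, so a single false positive from either branch cannot trigger a state change. A trivial sketch; the function names are illustrative:

```python
def fuse_eye_state(cnn_says_closed, geometry_says_closed):
    # Claim 6: the eye is finally judged closed only when both the
    # classifier (S3.1) and the key-point ratio test (S3.4) agree.
    return cnn_says_closed and geometry_says_closed

def fuse_mouth_state(cnn_says_open, geometry_says_open):
    # Claim 7: the mouth counts as open only when both the classifier
    # (S3.2) and the key-point ratio test (S3.5) agree.
    return cnn_says_open and geometry_says_open
```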
8. The fatigue driving detection method based on a train cab scene according to claim 2, further comprising, after step S2.1:
S2.2: face feature extraction, wherein face features are extracted from the image aligned in step S2.1 by means of a face feature extraction model to obtain a face feature extractor;
S2.3: face comparison, wherein it is judged whether the person driving the train is the registered driver, and the first login time is recorded and used for the driver's attendance statistics.
9. A computer-readable storage medium, having stored thereon a computer program, wherein the program, when executed by a processor, performs the steps of the method for fatigue driving detection in a train cab based scenario according to any of claims 1-8.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the steps of the method for detecting fatigue driving in a train cab based scenario according to any of claims 1-8.
CN202010850574.3A 2020-08-21 2020-08-21 Fatigue driving detection method based on train cab scene Pending CN112016429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010850574.3A CN112016429A (en) 2020-08-21 2020-08-21 Fatigue driving detection method based on train cab scene


Publications (1)

Publication Number Publication Date
CN112016429A true CN112016429A (en) 2020-12-01

Family

ID=73505489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010850574.3A Pending CN112016429A (en) 2020-08-21 2020-08-21 Fatigue driving detection method based on train cab scene

Country Status (1)

Country Link
CN (1) CN112016429A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613441A (en) * 2020-12-29 2021-04-06 新疆爱华盈通信息技术有限公司 Abnormal driving behavior recognition and early warning method and electronic equipment
CN112733628A (en) * 2020-12-28 2021-04-30 杭州电子科技大学 Fatigue driving state detection method based on MobileNet-V3
CN113537115A (en) * 2021-07-26 2021-10-22 东软睿驰汽车技术(沈阳)有限公司 Method and device for acquiring driving state of driver and electronic equipment
CN113642426A (en) * 2021-07-29 2021-11-12 深圳市比一比网络科技有限公司 Fatigue detection method and system based on target and key points
CN114898444A (en) * 2022-06-07 2022-08-12 嘉兴锐明智能交通科技有限公司 Fatigue driving monitoring method, system and equipment based on face key point detection
CN115861984A (en) * 2023-02-27 2023-03-28 联友智连科技有限公司 Driver fatigue detection method and system
CN116077798A (en) * 2023-04-10 2023-05-09 南京信息工程大学 Hypnotizing method based on combination of voice induction and computer vision

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN103336973A (en) * 2013-06-19 2013-10-02 华南理工大学 Multi-feature decision fusion eye state recognition method
CN104809445A (en) * 2015-05-07 2015-07-29 吉林大学 Fatigue driving detection method based on eye and mouth states
CN107491769A (en) * 2017-09-11 2017-12-19 中国地质大学(武汉) Method for detecting fatigue driving and system based on AdaBoost algorithms
CN108446600A (en) * 2018-02-27 2018-08-24 上海汽车集团股份有限公司 A kind of vehicle driver's fatigue monitoring early warning system and method
CN108460345A (en) * 2018-02-08 2018-08-28 电子科技大学 A kind of facial fatigue detection method based on face key point location
CN108875602A (en) * 2018-05-31 2018-11-23 珠海亿智电子科技有限公司 Monitor the face identification method based on deep learning under environment
CN108960071A (en) * 2018-06-06 2018-12-07 武汉幻视智能科技有限公司 A kind of eye opening closed-eye state detection method
CN109165613A (en) * 2018-08-31 2019-01-08 镇江赛唯思智能科技有限公司 A kind of fatigue driving recognition methods and system
CN109919049A (en) * 2019-02-21 2019-06-21 北京以萨技术股份有限公司 Fatigue detection method based on deep learning human face modeling
CN109934199A (en) * 2019-03-22 2019-06-25 扬州大学 A kind of Driver Fatigue Detection based on computer vision and system
CN110532976A (en) * 2019-09-03 2019-12-03 湘潭大学 Method for detecting fatigue driving and system based on machine learning and multiple features fusion
CN110705453A (en) * 2019-09-29 2020-01-17 中国科学技术大学 Real-time fatigue driving detection method
CN110705500A (en) * 2019-10-12 2020-01-17 深圳创新奇智科技有限公司 Attention detection method and system for personnel working image based on deep learning
CN111191573A (en) * 2019-12-27 2020-05-22 中国电子科技集团公司第十五研究所 Driver fatigue detection method based on blink rule recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
昌山小屋 (Changshan Xiaowu): "Computing head pose from 6-, 14- and 68-point face key points", HTTPS://BLOG.CSDN.NET/CHUIGEDAQIQIU/ARTICLE/DETAILS/88623267, 29 March 2019 (2019-03-29), pages 1-2 *


Similar Documents

Publication Publication Date Title
CN112016429A (en) Fatigue driving detection method based on train cab scene
CN101639894B (en) Method for detecting train driver behavior and fatigue state on line and detection system thereof
US11783601B2 (en) Driver fatigue detection method and system based on combining a pseudo-3D convolutional neural network and an attention mechanism
CN108216252B (en) Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system
CN108791299B (en) Driving fatigue detection and early warning system and method based on vision
CN102436715B (en) Detection method for fatigue driving
CN102696041B (en) The system and method that the cost benefit confirmed for eye tracking and driver drowsiness is high and sane
CN110119676A (en) A kind of Driver Fatigue Detection neural network based
CN202130312U (en) Driver fatigue driving monitoring device
CN108446678A (en) A kind of dangerous driving behavior recognition methods based on skeleton character
CN202257856U (en) Driver fatigue-driving monitoring device
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN110717389B (en) Driver fatigue detection method based on generation countermeasure and long-short term memory network
CN110334600A (en) A kind of multiple features fusion driver exception expression recognition method
CN105243386A (en) Face living judgment method and system
CN102622600A (en) High-speed train driver alertness detecting method based on face image and eye movement analysis
CN103824420A (en) Fatigue driving identification system based on heart rate variability non-contact measuring
CN109740477A (en) Study in Driver Fatigue State Surveillance System and its fatigue detection method
CN104123549A (en) Eye positioning method for real-time monitoring of fatigue driving
CN115841651B (en) Constructor intelligent monitoring system based on computer vision and deep learning
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
CN115393830A (en) Fatigue driving detection method based on deep learning and facial features
CN115937830A (en) Special vehicle-oriented driver fatigue detection method
CN113744499B (en) Fatigue early warning method, glasses, system and computer readable storage medium
CN111104817A (en) Fatigue detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination