CN113486743A - Fatigue driving identification method and device - Google Patents

Fatigue driving identification method and device

Info

Publication number
CN113486743A
CN113486743A (application CN202110701200.XA)
Authority
CN
China
Prior art keywords
face
face image
key points
fatigue driving
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110701200.XA
Other languages
Chinese (zh)
Inventor
童振
李瑞峰
罗冠泰
张陈涛
汤思榕
梁培栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Quanzhou HIT Research Institute of Engineering and Technology
Original Assignee
Fujian Quanzhou HIT Research Institute of Engineering and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Quanzhou HIT Research Institute of Engineering and Technology filed Critical Fujian Quanzhou HIT Research Institute of Engineering and Technology
Priority to CN202110701200.XA priority Critical patent/CN113486743A/en
Publication of CN113486743A publication Critical patent/CN113486743A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a fatigue driving identification method and device. The method comprises the following steps: A. acquiring the coordinates of n standard key points on a standard face; B. collecting a plurality of face image sets as model training data; C. acquiring the coordinates of n key points of the face; D. determining face orientation information; E. determining eye opening and closing degree information and mouth opening and closing degree information; F. aggregating the face orientation information, the eye opening and closing degree information, the mouth opening and closing degree information and the n key points into a vector; G. taking the vector formed from each face image set as training data, and feeding a plurality of such training data to an SVM classifier for training; H. taking the vector formed from the face images collected within a time t as identification data, and identifying fatigue driving with the trained classifier model. The method and device improve the stability, accuracy, practicability and universality of fatigue driving recognition, effectively avoid frequent misjudgments and false alarms, and improve the user experience.

Description

Fatigue driving identification method and device
Technical Field
The invention relates to a fatigue driving identification method and device.
Background
Traffic safety is a pressing problem directly affecting people's livelihood, and fatigue driving, one of the main causes of traffic accidents, has attracted wide attention. Fatigue driving leaves the driver unable to concentrate, slows limb reactions and degrades the ability to respond to emergencies, making accidents far more likely; at the same time, fatigue is difficult to perceive and monitor. The current mainstream approach to fatigue-driving research uses a camera to capture the driver's face and a facial key-point extraction algorithm to analyze the percentage of time the eyes are closed and the frequency of yawning, from which fatigue driving behavior is judged. This approach has the following defects: the signal source is single, so recognition accuracy is low and practicability is poor; and stability is poor, first because the eyes and mouths of different people differ in size, and second because the illumination of the captured face image also strongly affects the stability of the algorithm.
Disclosure of Invention
The fatigue driving identification method and device provided by the invention improve the stability, accuracy, practicability and universality of fatigue driving identification, effectively avoid frequent misjudgments and false alarms, and improve the user experience.
The invention is realized by the following technical scheme:
a fatigue driving recognition method comprises the following steps:
A. acquiring a standard face image in which the face directly faces the camera with the eyes looking straight ahead, and processing the standard face image to obtain the coordinates of n standard key points on the standard face;
B. collecting a plurality of face image sets as model training data, wherein each face image set consists of a plurality of face images collected within a continuous time t;
C. processing each face image of step B to obtain the coordinates of n key points of the face, the n key points corresponding one-to-one with the n standard key points of step A;
D. calculating the deviation between each key point of step C and the corresponding standard key point, and calculating Euler angles from the deviations to determine face orientation information;
E. extracting six key points around the eyes and six key points around the mouth from the n key points, and calculating EAR values for the eyes and the mouth to determine eye opening and closing degree information and mouth opening and closing degree information;
F. aggregating the face orientation information, the eye opening and closing degree information, the mouth opening and closing degree information and the n key points into a vector;
G. taking the vector formed from each face image set as one piece of training data, and feeding a plurality of such training data to an SVM classifier for training to obtain a classifier model;
H. collecting a plurality of face images within a continuous time t during actual driving, applying the processing of steps C to F to each face image, taking the vector formed from the face images within the time t as identification data, and inputting the identification data into the classifier model to identify fatigue driving.
Further, the time t is 2-4 seconds.
Further, in step A and step C, the face image is processed using the dlib library to extract the key points, and n is 68.
Further, in step E, the EAR value is calculated as:

EAR = (‖P37 − P41‖ + ‖P38 − P40‖) / (2‖P36 − P39‖)

where P36 to P41 are the six key points of an eye (or, correspondingly, the six key points of the mouth).
Further, a corresponding warning signal is issued according to the fatigue driving grade identified by the classifier model.
The invention is also realized by the following technical scheme:
a fatigue driving recognition apparatus comprising:
a database establishment module: the system comprises a camera, a face acquisition module, a face recognition module and a face recognition module, wherein the face acquisition module is used for acquiring a standard face image of which the face is over against the camera and the front is observed visually, and processing the standard face image to obtain n standard key point coordinates on the standard face;
a training data acquisition module: the system comprises a plurality of face image sets, a plurality of image processing units and a plurality of image processing units, wherein the face image sets are used for acquiring a plurality of face image sets as model training data and respectively processing each face image to obtain n key point coordinates of a face, and the n key points and n standard key points on a standard face have one-to-one correspondence relationship; each face image set is a plurality of face images collected within continuous time t; a face orientation determination module: the system comprises a training data acquisition module, a face orientation module, a standard key point and a Euler angle calculation module, wherein the training data acquisition module is used for acquiring key points of a face;
eye and mouth information determination module: the method comprises the steps of extracting six key points around the eyes and six key points around the mouth from n key points respectively, and calculating EAR values of the eyes and the mouth respectively to determine eye opening and closing degree information and mouth opening and closing degree information;
a training module: the system comprises a face orientation information acquisition module, a face opening degree information acquisition module, a face image acquisition module and a Support Vector Machine (SVM) classifier, wherein the face orientation information, the eye opening degree information, the mouth opening degree information and n key points are aggregated into a vector, the vector correspondingly formed by a face image set is used as training data, and the training data are sent to the SVM classifier for training to obtain a classifier model;
an identification module: the system is used for collecting a plurality of face images within continuous time t in the actual driving process, processing each face image respectively to obtain an identification vector aggregating face orientation information, glasses opening and closing degree information and mouth opening and closing degree information, taking a vector formed by corresponding each face image within the time t as identification data, and inputting the identification data into a classifier model to identify fatigue driving.
Further, the time t is 2-4 seconds.
Further, the dlib library is used to process the face image to extract the n standard key points and the n key points, and n is 68.
Further, the EAR value is calculated as:

EAR = (‖P37 − P41‖ + ‖P38 − P40‖) / (2‖P36 − P39‖)

where P36 to P41 are the six key points of an eye (or, correspondingly, the six key points of the mouth).
The invention has the following beneficial effects:
1. The invention collects a plurality of face images within a continuous time t as a face image set and processes each face image to obtain a vector aggregating face orientation information, eye opening and closing degree information, mouth opening and closing degree information and n key points. When training the SVM, the vector corresponding to each face image set serves as one piece of training data; at identification time, an identification vector is likewise obtained from the face images collected within a continuous time t, yielding the final identification result. Compared with prior-art methods that consider only the eye and mouth opening and closing degrees, the invention integrates face orientation, eye opening and closing degree, mouth opening and closing degree and the face key points, judging from multiple dimensions, which effectively improves the stability, accuracy, practicability and universality of fatigue driving identification. Considering the face orientation both reflects the driver's concentration indirectly and makes the eye and mouth opening and closing judgments more accurate, while considering the time dimension avoids frequent misjudgments and false alarms, effectively improving the user experience. Moreover, identification with an SVM is simple and efficient, with high real-time performance and strong robustness.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
As shown in fig. 1, a fatigue driving recognition method includes the following steps:
A. acquiring a standard face image in which the face directly faces the camera with the eyes looking straight ahead, and processing the standard face image using the dlib library to obtain the coordinates of n standard key points on the standard face, where n is 68;
B. collecting a plurality of face image sets as model training data, wherein each face image set is a plurality of face images collected within continuous t time;
the time t ranges from 2 to 4 seconds: if the eyes stay closed, the face stays turned aside or a yawn lasts longer than t, the driver's state is abnormal, whereas an action lasting less than t may simply be a normal action of the driver, so considering the time dimension effectively avoids frequent misjudgments and false alarms; in this embodiment, t is 3 seconds;
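The time-window reasoning above can be sketched in Python. This is illustrative only and not part of the patent: the patent feeds the whole windowed feature vector to an SVM rather than thresholding a single flag, and the frame rate `fps` is an assumption.

```python
def persistent_abnormal(frame_flags, fps=10, t_seconds=3.0):
    """Return True only if an abnormal state (e.g. eyes closed, face
    turned aside) persists for at least t_seconds of consecutive frames.

    frame_flags: iterable of booleans, one per captured frame.
    Short-lived actions (blinks, glances) never reach the threshold,
    which is why the time dimension suppresses false alarms.
    """
    needed = int(fps * t_seconds)  # consecutive frames required
    run = 0
    for flag in frame_flags:
        run = run + 1 if flag else 0
        if run >= needed:
            return True
    return False
```

With fps=10 and t=3 s, 30 consecutive abnormal frames are required; a pattern of brief closures never triggers.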
C. processing each face image of step B to obtain the coordinates of n key points of the face, the n key points corresponding one-to-one with the n standard key points of step A, where n is 68;
D. calculating the deviation between each key point of step C and the corresponding standard key point, and calculating Euler angles from the deviations to determine face orientation information; determining the face orientation from Euler angles is prior art;
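The patent does not specify how the Euler angles are derived from the key-point deviations; a common pipeline first estimates a head-pose rotation (e.g. with OpenCV's solvePnP against a 3D face model) and then converts the rotation matrix to Euler angles. The conversion step can be sketched with numpy (an illustrative helper, not named in the patent):

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (pitch, yaw, roll) in degrees.

    A common post-processing step after head-pose estimation; the
    patent itself only states that Euler angles are computed from
    key-point deviations, so this is an illustrative sketch.
    """
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:
        pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
        yaw = np.degrees(np.arctan2(-R[2, 0], sy))
        roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    else:  # gimbal lock: pitch and roll axes coincide
        pitch = np.degrees(np.arctan2(-R[1, 2], R[1, 1]))
        yaw = np.degrees(np.arctan2(-R[2, 0], sy))
        roll = 0.0
    return pitch, yaw, roll
```

Large yaw or pitch values then indicate the driver's face turned away from the road, one of the abnormal states the time window monitors.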
E. extracting six key points around the eyes and six key points around the mouth from the n key points, and calculating EAR values for the eyes and the mouth to determine eye opening and closing degree information and mouth opening and closing degree information, where the EAR value is calculated as:

EAR = (‖P37 − P41‖ + ‖P38 − P40‖) / (2‖P36 − P39‖)

where P36 to P41 are the six key points of an eye (or, correspondingly, the six key points of the mouth);
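The EAR computation of step E can be sketched with numpy. The six points are assumed ordered as in dlib's 68-point model for the left eye (P36 corner, P37 and P38 upper lid, P39 corner, P40 and P41 lower lid); applying the same function to six mouth points is an assumption, since the patent does not list the mouth indices:

```python
import numpy as np

def ear(points):
    """Eye Aspect Ratio for six (x, y) key points ordered
    corner, upper-1, upper-2, corner, lower-2, lower-1 (dlib P36..P41).
    Roughly vertical openness over horizontal width: near 0 when closed.
    """
    p = np.asarray(points, dtype=float)
    vert = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horiz = np.linalg.norm(p[0] - p[3])
    return vert / (2.0 * horiz)
```

For an open eye the value sits well above zero; as the lids close, the two vertical distances shrink and the ratio drops toward zero, which is the signal used for the opening and closing degree.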
F. aggregating the face orientation information, the eye opening and closing degree information, the mouth opening and closing degree information and the n key points into a vector;
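Step F can be sketched as a simple concatenation into one flat feature vector; the ordering and the (68, 2) key-point layout are assumptions, as the patent only states which quantities the vector aggregates:

```python
import numpy as np

def aggregate(pitch_yaw_roll, eye_ear, mouth_ear, keypoints):
    """Aggregate the step-F features into a single flat vector.

    pitch_yaw_roll: three Euler angles (face orientation information)
    eye_ear, mouth_ear: scalar opening/closing degrees
    keypoints: array of shape (68, 2), as extracted by dlib
    """
    head = np.asarray(pitch_yaw_roll, dtype=float)
    kp = np.asarray(keypoints, dtype=float).ravel()
    return np.concatenate([head, [eye_ear, mouth_ear], kp])
```

With n = 68 this yields a 3 + 2 + 136 = 141-dimensional vector per face image; the vectors of all images in one image set together form one piece of SVM training data.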
G. taking the vector formed from each face image set as one piece of training data, and feeding a plurality of such training data to an SVM classifier for training to obtain a classifier model;
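Steps G and H can be sketched with scikit-learn's SVC. The two-dimensional feature vectors below are synthetic stand-ins for the patent's aggregated vectors (orientation angles, EAR values and key points), and the cluster locations, labels and kernel choice are invented for illustration; the patent names an SVM but no kernel:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: alert samples cluster near high EAR / small
# head angle, fatigued samples near low EAR / large head angle.
alert = rng.normal(loc=[0.30, 5.0], scale=0.02, size=(50, 2))
fatigued = rng.normal(loc=[0.10, 30.0], scale=0.02, size=(50, 2))
X = np.vstack([alert, fatigued])
y = np.array([0] * 50 + [1] * 50)  # 0 = alert, 1 = fatigued

clf = SVC(kernel="rbf", gamma="scale")  # kernel choice is an assumption
clf.fit(X, y)

# Step H: a new windowed feature vector is classified by the model.
pred = clf.predict([[0.09, 29.0]])[0]
```

In the real system, each training sample would be the full aggregated vector of one face image set, and `pred` would drive the warning logic.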
H. collecting a plurality of face images within a continuous time t during actual driving, applying the processing of steps C to F to each face image, taking the vector formed from the face images within the time t as identification data, and inputting the identification data into the classifier model to identify fatigue driving;
I. issuing a corresponding warning signal according to the fatigue driving grade identified by the classifier model, where the fatigue driving grades can be determined according to the actual application.
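Step I leaves the grading scheme to the implementer ("determined according to the actual condition"); a minimal sketch of a grade-to-alarm mapping follows, with hypothetical grade numbers and alarm signals:

```python
# Hypothetical fatigue grades and alarm signals -- the patent leaves
# the grading scheme open, so these names are illustrative only.
ALARMS = {
    0: None,                       # alert: no warning
    1: "chime",                    # mild fatigue: soft audio cue
    2: "voice_warning",            # moderate fatigue: spoken warning
    3: "voice_warning+vibration",  # severe fatigue: warning plus seat vibration
}

def alarm_for(grade):
    """Map a classifier-reported fatigue grade to a warning signal."""
    if grade not in ALARMS:
        raise ValueError(f"unknown fatigue grade: {grade}")
    return ALARMS[grade]
```

The alarm module of the device would call such a mapping with the grade produced by the classifier model.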
A fatigue driving recognition apparatus comprising:
a database establishment module: the system comprises a camera, a dlib library, n standard key point coordinates and a plurality of key point coordinates, wherein the standard face image is used for acquiring a standard face image of which the face is opposite to the camera and in front of the camera, and is processed by the dlib library to obtain n standard key point coordinates on the standard face; wherein n is 68;
a training data acquisition module: the system comprises a plurality of face image sets, a dlib library, a standard face and a plurality of key points, wherein the face image sets are used for acquiring a plurality of face image sets as model training data, and processing each face image by using the dlib library respectively to obtain n key point coordinates of a face, and the n key points and n standard key points on the standard face have one-to-one correspondence relationship; each face image set is a plurality of face images collected within continuous time t; t takes 3 seconds;
a face orientation determination module: the system comprises a training data acquisition module, a face orientation module, a standard key point and a Euler angle calculation module, wherein the training data acquisition module is used for acquiring key points of a face;
eye and mouth information determination module: the method comprises the steps of extracting six key points around the eyes and six key points around the mouth from n key points respectively, and calculating EAR values of the eyes and the mouth respectively to determine eye opening and closing degree information and mouth opening and closing degree information;
wherein, the EAR value calculation formula is as follows:
Figure BDA0003126851510000061
wherein, P36To P41Six key points of the eyes or six key points of the mouth;
a training module: the system comprises a face orientation information acquisition module, a face opening degree information acquisition module, a face image acquisition module and a Support Vector Machine (SVM) classifier, wherein the face orientation information, the eye opening degree information, the mouth opening degree information and n key points are aggregated into a vector, the vector correspondingly formed by a face image set is used as training data, and the training data are sent to the SVM classifier for training to obtain a classifier model;
an identification module: the system comprises a classifier model, a face image acquisition module, a face image processing module and a driver interface module, wherein the classifier model is used for acquiring a plurality of face images within continuous time t in the actual driving process, respectively processing each face image to obtain a vector aggregating face orientation information, glasses opening and closing degree information and mouth opening and closing degree information, taking a vector formed by each face image within the time t as identification data, and inputting the identification data into the classifier model for identifying fatigue driving;
an alarm module: and sending out a corresponding warning signal according to the fatigue driving grade identified by the classifier model.
The above is only a preferred embodiment of the present invention and should not be taken as limiting its scope; equivalent changes and modifications made within the scope of the claims and the description still fall within the scope of the invention.

Claims (9)

1. A fatigue driving recognition method, characterized by comprising the following steps:
A. acquiring a standard face image in which the face directly faces the camera with the eyes looking straight ahead, and processing the standard face image to obtain the coordinates of n standard key points on the standard face;
B. collecting a plurality of face image sets as model training data, wherein each face image set consists of a plurality of face images collected within a continuous time t;
C. processing each face image of step B to obtain the coordinates of n key points of the face, the n key points corresponding one-to-one with the n standard key points of step A;
D. calculating the deviation between each key point of step C and the corresponding standard key point, and calculating Euler angles from the deviations to determine face orientation information;
E. extracting six key points around the eyes and six key points around the mouth from the n key points, and calculating EAR values for the eyes and the mouth to determine eye opening and closing degree information and mouth opening and closing degree information;
F. aggregating the face orientation information, the eye opening and closing degree information, the mouth opening and closing degree information and the n key points into a vector;
G. taking the vector formed from each face image set as one piece of training data, and feeding a plurality of such training data to an SVM classifier for training to obtain a classifier model;
H. collecting a plurality of face images within a continuous time t during actual driving, applying the processing of steps C to F to each face image, taking the vector formed from the face images within the time t as identification data, and inputting the identification data into the classifier model to identify fatigue driving.
2. The fatigue driving recognition method according to claim 1, wherein: the time t is 2-4 seconds.
3. The fatigue driving recognition method according to claim 1, wherein: in step A and step C, the face image is processed using a dlib library to extract the key points, and n is 68.
4. A fatigue driving recognition method according to claim 1, 2 or 3, wherein in step E the EAR value is calculated as:

EAR = (‖P37 − P41‖ + ‖P38 − P40‖) / (2‖P36 − P39‖)

wherein P36 to P41 are the six key points of an eye or the six key points of the mouth.
5. A fatigue driving recognition method according to claim 1, 2 or 3, further comprising step I: issuing a corresponding warning signal according to the fatigue driving grade identified by the classifier model.
6. A fatigue driving recognition device, characterized by comprising:
a database establishment module: for acquiring a standard face image in which the face directly faces the camera with the eyes looking straight ahead, and processing the standard face image to obtain the coordinates of n standard key points on the standard face;
a training data acquisition module: for collecting a plurality of face image sets as model training data and processing each face image to obtain the coordinates of n key points of the face, the n key points corresponding one-to-one with the n standard key points on the standard face; each face image set consists of a plurality of face images collected within a continuous time t;
a face orientation determination module: for calculating the deviation between each key point obtained by the training data acquisition module and the corresponding standard key point, and calculating Euler angles from the deviations to determine face orientation information;
an eye and mouth information determination module: for extracting six key points around the eyes and six key points around the mouth from the n key points, and calculating EAR values for the eyes and the mouth to determine eye opening and closing degree information and mouth opening and closing degree information;
a training module: for aggregating the face orientation information, the eye opening and closing degree information, the mouth opening and closing degree information and the n key points into a vector, taking the vector formed from each face image set as training data, and feeding the training data to an SVM classifier for training to obtain a classifier model;
an identification module: for collecting a plurality of face images within a continuous time t during actual driving, processing each face image to obtain an identification vector aggregating the face orientation information, eye opening and closing degree information and mouth opening and closing degree information, taking the vector formed from the face images within the time t as identification data, and inputting the identification data into the classifier model to identify fatigue driving.
7. A fatigue driving recognition device according to claim 6, wherein: the time t is 2-4 seconds.
8. A fatigue driving recognition device according to claim 6, wherein: the face image is processed using a dlib library to extract the n standard key points and the n key points, and n is 68.
9. A fatigue driving recognition device according to claim 6, 7 or 8, wherein the EAR value is calculated as:

EAR = (‖P37 − P41‖ + ‖P38 − P40‖) / (2‖P36 − P39‖)

wherein P36 to P41 are the six key points of an eye or the six key points of the mouth.
CN202110701200.XA 2021-06-22 2021-06-22 Fatigue driving identification method and device Pending CN113486743A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110701200.XA CN113486743A (en) 2021-06-22 2021-06-22 Fatigue driving identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110701200.XA CN113486743A (en) 2021-06-22 2021-06-22 Fatigue driving identification method and device

Publications (1)

Publication Number Publication Date
CN113486743A true CN113486743A (en) 2021-10-08

Family

ID=77935815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110701200.XA Pending CN113486743A (en) 2021-06-22 2021-06-22 Fatigue driving identification method and device

Country Status (1)

Country Link
CN (1) CN113486743A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690892A (en) * 2023-01-03 2023-02-03 京东方艺云(杭州)科技有限公司 Squinting recognition method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104637246B (en) Driver multi-behavior early warning system and danger evaluation method
CN108960065B (en) Driving behavior detection method based on vision
CN109460699B (en) Driver safety belt wearing identification method based on deep learning
CN105488453B (en) A kind of driver based on image procossing does not fasten the safety belt detection recognition method
CN103049740B (en) Fatigue state detection method based on video image and device
CN103714631B (en) ATM cash dispenser intelligent monitor system based on recognition of face
CN104616438A (en) Yawning action detection method for detecting fatigue driving
CN106056079A (en) Image acquisition device and facial feature occlusion detection method
CN111680613B (en) Method for detecting falling behavior of escalator passengers in real time
CN111126366B (en) Method, device, equipment and storage medium for distinguishing living human face
CN101556717A (en) ATM intelligent security system and monitoring method
CN209543514U (en) Monitoring and alarm system based on recognition of face
CN208498370U (en) Fatigue driving based on steering wheel detects prior-warning device
CN101655907A (en) Trainman driving state monitoring intelligent alarm system
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
CN108197575A (en) A kind of abnormal behaviour recognition methods detected based on target detection and bone point and device
CN104068868A (en) Method and device for monitoring driver fatigue on basis of machine vision
CN103065121A (en) Engine driver state monitoring method and device based on video face analysis
CN103700220A (en) Fatigue driving monitoring device
CN107832721B (en) Method and apparatus for outputting information
CN109543577A (en) A kind of fatigue driving detection method for early warning based on facial expression feature
CN107563346A (en) One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing
CN110427830A (en) Driver's abnormal driving real-time detection system for state and method
CN113486743A (en) Fatigue driving identification method and device
CN112528767A (en) Machine vision-based construction machinery operator fatigue operation detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination