CN111626272A - Driver fatigue monitoring system based on deep learning - Google Patents

Driver fatigue monitoring system based on deep learning Download PDF

Info

Publication number
CN111626272A
CN111626272A (application CN202010735477.XA)
Authority
CN
China
Prior art keywords
face
deep learning
monitoring system
image
fatigue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010735477.XA
Other languages
Chinese (zh)
Inventor
李吉成
李建东
曲原
蒋海军
刘云剑
陈远益
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Chaochuang Electronic Technology Co ltd
Original Assignee
Changsha Chaochuang Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Chaochuang Electronic Technology Co ltd filed Critical Changsha Chaochuang Electronic Technology Co ltd
Priority to CN202010735477.XA priority Critical patent/CN111626272A/en
Publication of CN111626272A publication Critical patent/CN111626272A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/06Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a driver fatigue monitoring system based on deep learning and a method of using it. The system comprises an image acquisition module, a face detection module, an image processing module and a server module, and can monitor driver behaviors such as fatigue driving, dangerous driving and sight-line deviation on current hardware platforms.

Description

Driver fatigue monitoring system based on deep learning
Technical Field
The invention relates to the technical field of video image processing and visual monitoring, in particular to a driver fatigue monitoring system based on deep learning.
Background
Automotive safety is an issue that increasingly must be considered. Beyond the vehicle itself, if the driver lacks good driving habits, the occupants are unsafe and even the safety equipment cannot perform its intended function. With behaviors such as fatigue driving, drunk driving, smoking while driving, speeding and driving without a seat belt, the consequences of an accident cannot be ignored, so the first priority in driving an automobile is safety awareness. Human violations are the main cause of traffic accidents: road traffic violations account for 95.24 percent of accidents and 95.42 percent of fatalities.
The basic elements of road traffic are people, vehicles and roads, among which the driver plays a particularly important role. Traffic accident statistics show that 80-90% of the direct or indirect causes of accidents are related to the driver, reflecting poor safety awareness and an inability to cope with emergencies. Analysis shows that the most direct cause of accidents is degraded driving performance, including speeding, inattention and misoperation, and fatigue is often the chief culprit behind these states.
Driving fatigue is a decline in physical function caused by continuous driving; it impairs the driver's perception, judgment and limb coordination, and thereby causes traffic accidents. Driving fatigue shows directly in the driver's physiological state and in the vehicle's running state. Physiological changes mainly include slowed hand and foot reactions, dull eyes and irritability; under severe fatigue, continuous yawning, eye closure and head drooping may even occur. In the vehicle's running state, it is mainly reflected in a reduced steering frequency and drifting out of the current lane. Research shows that under mild driving fatigue, part of the driver's driving ability can be restored by appropriate voice reminders or music; under moderate or severe fatigue, however, active intervention control of the vehicle must be applied according to the actual situation to avoid accidents. Developing a corresponding advanced driver-assistance system (ADAS) for driving fatigue that identifies the driver's state in real time therefore has important practical application value.
Currently, in the field of driver monitoring, the main feature sources are: driver physiological parameters, vehicle steering information, driver facial images, and the like. Among them, fatigue detection based on image appearance features has become a research hotspot in this field, because it does not interfere with the driver and a vision system is easy to realize. However, methods based on image appearance features are strongly affected by the external illumination environment and the image background, which easily degrades the performance of the detection algorithm. In addition, detection methods based on static image features do not consider the dynamic variation of fatigue features, which also limits, to a certain extent, the detection performance of the developed system.
Disclosure of Invention
In order to solve the technical problem, the invention provides a driver fatigue monitoring system based on deep learning.
The technical scheme of the invention is as follows: a driver fatigue monitoring system based on deep learning comprises an image acquisition module, a face detection module, an image processing module and a server module;
an image acquisition module: used for obtaining an original image;
the face detection module: carrying out face detection by using a YOLO v3 deep learning algorithm to obtain the position of a face;
an image processing module: mainly comprising face key point detection, fatigue state judgment, visual field deviation estimation and dangerous driving behavior identification, wherein fatigue state judgment and visual field deviation estimation both presuppose that the face key points have been extracted; the face key point detection processes the acquired face position with a regression-tree-based face key point detection algorithm: after a frame is obtained, the algorithm places default feature point positions inside the face region, located by the detected face coordinates, as the initial detection positions, then performs iterative regression with mean square error as the loss function to obtain the final face feature point positions; the 68 computed feature points locate the face contour, eyebrows, eyes, nose and jaw, completing the face key point detection;
the formula for a certain frame image iteration is shown as follows:
Figure DEST_PATH_IMAGE001
wherein, the position of the t + 1 th iteration estimation is shown, the position of the t th iteration estimation is shown, and the regressor of the current iteration is shown;
after 68 key points of eyes, a nose, a mouth and a face are obtained, judging whether a driver is in a fatigue state or not according to the ratio of the horizontal distance to the vertical distance of each eye as a judgment basis;
after obtaining 68 key points of eyes, a nose, a mouth and a face, extracting 7 key point coordinates of the left corner of a left eye, the right corner of the left eye, the left corner of a right eye, the right corner of the right eye, the middle part of the nose, the left corner of the mouth and the right corner of the mouth, and estimating the angle of the face posture through matrix transformation so as to judge whether the face deviates;
the dangerous driving behavior recognition adopts a Resnet image classification network based on deep learning to judge normal and dangerous driving;
a server module: comprising information fusion and a back-end server, wherein the information fusion superimposes the information obtained from visual field deviation estimation and dangerous driving behavior identification onto the acquired original image as text, and the back-end server transmits the text-overlaid image to the server and displays it.
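The four-module flow described above can be sketched as a pipeline skeleton. All function names and the stubbed detector output below are hypothetical illustrations, not the patent's implementation:

```python
# Hypothetical skeleton of the four-module pipeline; each stage is a stub.

def acquire_image(frame_source):
    """Image acquisition module: fetch one raw frame."""
    return frame_source()

def detect_face(image):
    """Face detection module: return a face box (x, y, w, h), or None.
    A YOLO v3 detector would run here; a fixed box stands in for it."""
    return (100, 80, 120, 120) if image is not None else None

def process_image(image, face_box):
    """Image processing module: keypoints -> fatigue / deviation / behavior flags."""
    if face_box is None:
        return {}
    return {"fatigue": False, "view_deviation": False, "dangerous_driving": False}

def fuse_and_send(image, results):
    """Server module: overlay the judgment as text and forward to the back end."""
    overlay = ", ".join(k for k, v in results.items() if v) or "normal"
    return {"image": image, "overlay": overlay}

def run_pipeline(frame_source):
    image = acquire_image(frame_source)
    results = process_image(image, detect_face(image))
    return fuse_and_send(image, results)
```

Each stage maps one-to-one onto a module of the claimed system; a real implementation would replace the stubs with the detector, landmark, and classifier components described below.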
Preferably, if the ratio of the horizontal to vertical spacing is less than 5, the eyes are open; otherwise the eyes are judged closed, and finally whether the driver is in a fatigue state is judged from the eye-closure time and the PERCLOS criterion.
Preferably, the angles of the face pose include an azimuth angle, a pitch angle, and a roll angle.
Preferably, the raw image is acquired mainly by an external sensor and processed by the ISP.
Preferably, the ISP processing comprises auto-exposure, sharpening and enhancement.
Preferably, the Resnet image classification network is a Resnet50 image classification network.
Preferably, the Resnet50 image classification network is composed of 15 residual blocks, a 7 × 7 convolution and an avgpool layer, and 3400 images of different classes are collected for training.
Preferably, the dangerous driving comprises smoking and making a call.
Preferably, the information fusion performs graded alarms for dangerous driving, visual field deviation and fatigue according to time; when several abnormal driver behaviors occur simultaneously, visual field deviation has the highest alarm level, fatigue the second, and dangerous driving the lowest.
Compared with the prior art, the invention has the following beneficial effects:
The system can monitor driver behaviors such as fatigue driving, dangerous driving and sight-line deviation on existing hardware platforms. Compared with existing fatigue detection systems, it monitors fatigue, sight-line deviation and dangerous driving separately, so the monitoring results are accurate; the results are finally displayed in grades according to danger level and duration, enabling multi-level, repeated reminders so that the driver receives the information multiple times, which improves driving safety. The algorithms used by the system also make the monitoring more stable and accurate, specifically:
(1) the face detection method adopts a YOLO v3 face detection algorithm, and the face detection rate is greatly improved for images with low illumination, blurring, small targets and the like;
(2) the invention adopts the regression tree-based human face key point detection algorithm which is stable and reliable, thereby effectively improving the accuracy of judging the eyes to be opened, closed and the visual field to be deviated;
(3) the invention adopts Resnet50 network to realize dangerous driving behavior classification, and the network identification accuracy reaches 95%.
Drawings
FIG. 1 is a schematic flow diagram of the system of the present invention;
FIG. 2 is a schematic diagram of the network structure of YOLO v3 according to the present invention;
FIG. 3 is a diagram of the effect of the face detection by YOLO v3 according to the present invention;
FIG. 4 is a schematic diagram of an iterative regression process of key points of a human face according to the present invention;
FIG. 5 is a schematic diagram of a dangerous driving-fatigue screenshot in an experimental process of the present invention;
FIG. 6 is a schematic screenshot of dangerous driving versus view deviation during an experiment of the present invention;
FIG. 7 is a schematic diagram of a dangerous driving-call screen shot during an experiment of the present invention;
fig. 8 is a schematic diagram of a dangerous driving-smoking screenshot in the experimental process of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "front", "back", "left", "right", "up", "down", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the devices or elements indicated by the terms must have specific orientations, be constructed and operated in specific orientations, and therefore, should not be construed as limiting the present invention.
Referring to fig. 1 to 8, the present invention provides the following technical solutions: a driver fatigue monitoring system based on deep learning comprises an image acquisition module, a face detection module, an image processing module and a server module; FIG. 1 is a system flow diagram of the overall system;
an image acquisition module: used for obtaining an original image;
the face detection module: carrying out face detection by using a YOLO v3 deep learning algorithm to obtain the position of a face;
an image processing module: mainly comprising face key point detection, fatigue state judgment, visual field deviation estimation and dangerous driving behavior identification, wherein fatigue state judgment and visual field deviation estimation both presuppose that the face key points have been extracted; the face key point detection processes the acquired face position with a regression-tree-based face key point detection algorithm: after a frame is obtained, the algorithm places default feature point positions inside the face region, located by the detected face coordinates, as the initial detection positions, then performs iterative regression with mean square error as the loss function to obtain the final face feature point positions; the 68 computed feature points locate the face contour, eyebrows, eyes, nose and jaw, completing the face key point detection;
the formula for a certain frame image iteration is shown as follows:
Figure 809887DEST_PATH_IMAGE001
wherein
Figure DEST_PATH_IMAGE002
Indicating the position of the t + 1 th iteration estimate,
Figure DEST_PATH_IMAGE003
indicating the position estimated at the t-th iteration,
Figure DEST_PATH_IMAGE004
a regressor representing the iteration;
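The cascaded update S^(t+1) = S^(t) + r_t(I, S^(t)) can be illustrated with toy stand-in regressors; real regressors are trained ensembles of regression trees, whereas here each r_t is a fabricated function that nudges every point a fixed fraction toward a target shape:

```python
# Toy cascade: shape <- shape + r_t(image, shape) at each iteration.

def make_toy_regressor(target, step=0.5):
    """Stand-in r_t: move each point a fixed fraction toward `target`."""
    def r_t(image, shape):
        return [(step * (tx - x), step * (ty - y))
                for (x, y), (tx, ty) in zip(shape, target)]
    return r_t

def cascade(image, initial_shape, regressors):
    """Apply the iterative update S <- S + r_t(I, S) for each regressor."""
    shape = list(initial_shape)
    for r_t in regressors:
        deltas = r_t(image, shape)
        shape = [(x + dx, y + dy) for (x, y), (dx, dy) in zip(shape, deltas)]
    return shape

target = [(10.0, 10.0), (20.0, 5.0)]   # "true" landmark positions
init = [(0.0, 0.0), (0.0, 0.0)]        # default initial feature points
regs = [make_toy_regressor(target) for _ in range(5)]
final = cascade(None, init, regs)      # moves 1 - 0.5**5 of the way to target
```

The shape converges toward the target geometrically, mirroring how each trained regressor in the real cascade refines the previous estimate.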
after 68 key points of eyes, nose, mouth and face are obtained by the fatigue state judgment, the ratio of the horizontal distance to the vertical distance of each eye is used as a judgment basis. When the ratio of the horizontal distance to the longitudinal distance is less than 5, the eyes are opened; otherwise, the eyes are judged to be closed. Finally, judging whether the driver is in a fatigue state according to the eye-closing time and the PERCLOS criterion;
the visual field deviation estimation obtains 7 position key point coordinates of a left eye left angle, a left eye right angle, a nose middle part, a mouth left angle and a mouth right angle, and the azimuth angle, the pitch angle and the roll angle of the human face posture are estimated through matrix transformation so as to judge whether the deviation exists;
the dangerous driving behavior recognition adopts a Resnet image classification network based on deep learning to judge normal and dangerous driving;
the information fusion is to carry out graded alarm on dangerous driving, visual field deviation and fatigue according to time, and when several kinds of abnormal behaviors of drivers occur, the visual field deviation alarm level is highest, the fatigue is second, and the dangerous driving is lowest.
The server module superimposes the judgment result of the information fusion onto the original image as text, and transmits the image to the server for display. Recorded results are shown in fig. 5 to 8 (fatigue, visual field deviation, making a call and smoking, respectively), with the result displayed in the lower right corner of the image.
Further, the main positions of the face key point detection include eyes, nose, mouth and face shape.
Furthermore, the fatigue state judgment is mainly based on the ratio of the horizontal and vertical distances of each eye. When the ratio of the horizontal distance to the longitudinal distance is less than 5, the eyes are opened; otherwise, the eyes are judged to be closed. And finally, judging whether the driver is in a fatigue state according to the eye-closing time and the PERCLOS criterion.
Further, the visual field deviation estimation extracts the key points of the left and right corners of the left eye, the left and right corners of the right eye, the middle of the nose, and the left and right corners of the mouth, then estimates the azimuth, pitch and roll angles of the face pose; if any angle is too large, visual field deviation is determined.
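A full azimuth/pitch/roll estimate requires solving a 2D-to-3D correspondence (e.g. a PnP solve against a generic 3D face model), which is beyond a short sketch; as a minimal illustration, the roll angle alone can be read from the line through the two eye corners. The 30-degree threshold below is an assumed example value, not one from the patent:

```python
import math

# Head roll from the inter-eye line; a too-large pose angle would be
# treated as visual field deviation. The threshold is an assumption.

def roll_angle_deg(left_eye, right_eye):
    """Angle of the line through the eye corners, relative to horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def view_deviates(angle_deg, threshold=30.0):
    """Deviation rule: any pose angle that is too large means deviation."""
    return abs(angle_deg) > threshold
```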
Further, the raw image is acquired mainly by an external sensor and processed by the ISP.
Further, ISP processing includes auto exposure, sharpening and enhancement.
Further, the Resnet image classification network is a Resnet50 image classification network.
Further, the Resnet50 image classification network consists of 15 residual blocks, a 7 × 7 convolution and an avgpool layer, and 3400 images of different classes were collected for training.
Further, dangerous driving includes smoking and making a call.
Further, the information fusion is to carry out graded alarm on dangerous driving, visual field deviation and fatigue according to time.
The working principle of the invention is as follows:
1. face detection algorithm
Since their introduction, convolutional neural networks (CNNs) have been widely used in the field of computer vision, e.g. image classification, detection, recognition and segmentation. The power of CNNs lies in extracting features at different scales from images: a shallow convolutional layer has a small receptive field and can extract and learn local features, while a deeper convolutional layer has a larger receptive field and can learn global, abstract features. To improve the face detection effect, the YOLO v3 object detection framework is used.
YOLO v3 is the third object detection algorithm in the YOLO series, after YOLO and YOLOv2; it improves on YOLOv2 with faster speed and higher accuracy. The network structure of YOLO v3 is shown in fig. 2, in which:
DBL: the basic component of yolo_v3, namely convolution + BN + Leaky ReLU. For YOLO v3, BN and Leaky ReLU are inseparable from the convolutional layer (except for the last convolution); together they form the smallest component.
resn: n is a number (res1, res2, ..., res8, etc.) indicating how many res_units (residual units) the res_block contains. This is the large component of yolo_v3; yolo_v3 borrows the residual structure of ResNet, which allows a deeper network (the backbone rises from darknet-19 in v2, which has no residual structure, to darknet-53 in v3). The res_block can be seen in the lower right corner of fig. 2; its basic component is also the DBL.
concat: tensor splicing, which splices the upsampled intermediate layer with a later layer of darknet. Splicing differs from the residual-layer add: splicing expands the tensor dimension, whereas add is a direct element-wise addition that leaves the tensor dimension unchanged.
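The difference between concat and the residual add can be shown on tiny fabricated "feature maps" (plain channel lists standing in for tensors):

```python
# concat grows the channel dimension; residual add keeps it unchanged.

def concat(a, b):
    """Tensor splicing: channel counts add up (e.g. 2 + 2 -> 4)."""
    return a + b

def residual_add(a, b):
    """Residual add: element-wise sum, shape must match and is preserved."""
    assert len(a) == len(b), "add requires identical shapes"
    return [x + y for x, y in zip(a, b)]

upsampled = [1.0, 2.0]  # channels from the upsampled deeper layer
skip = [3.0, 4.0]       # channels from an earlier darknet layer
```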
Compared with other object detection algorithms, YOLO v3 has the following characteristics: 1. a new network structure, Darknet-53, which uses the residual-network approach and sets shortcut connections between some layers; 2. object detection on multi-scale features, greatly improving the detection rate of small targets; 3. prior boxes at 9 scales, so that targets of different scales are detected better. Fig. 3 is a real image of a face detection result using YOLO v3.
2. Face key point detection and fatigue state judgment
The detection of the key points of the human face is to detect the position of a specific area of the human face, such as a face contour, eyebrows, a nose, eyes, a mouth and the like, in a given human face area. Face keypoint detection may also be referred to as face keypoint localization or face alignment.
Whether the driver's eyes are closed is judged from the ratio of the horizontal to vertical spacing of each eye, computed as:

W / H = |x_r - x_l| / |y_d - y_u|

wherein x_l and x_r denote the x-coordinates of the left and right eye-corner feature points, and y_u and y_d denote the y-coordinates of the upper and lower eyelid feature points; when W/H is less than 5, the eyes are open; otherwise, the eyes are judged to be closed.
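The eye open/closed test above can be sketched directly from four landmark coordinates; the coordinates used here are fabricated for illustration:

```python
# W/H test: horizontal eye width over vertical eyelid opening.

def eye_ratio(left_corner, right_corner, upper_lid, lower_lid):
    """Return W/H for one eye from its four feature points."""
    w = abs(right_corner[0] - left_corner[0])
    h = abs(lower_lid[1] - upper_lid[1])
    return w / h if h > 0 else float("inf")  # zero opening -> treat as closed

def eye_is_open(ratio, threshold=5.0):
    """Rule from the text: a ratio below 5 means the eye is open."""
    return ratio < threshold

open_ratio = eye_ratio((10, 50), (40, 50), (25, 44), (25, 56))    # W=30, H=12
closed_ratio = eye_ratio((10, 50), (40, 50), (25, 49), (25, 51))  # W=30, H=2
```

A wide-open eye has a small W/H (the lids are far apart), while a closing eye drives H toward zero and the ratio upward, which is why the rule compares against an upper threshold.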
Whether the driver is fatigued is judged from the eye-closure time and frequency, with the PERCLOS (Percentage of Eyelid Closure over the Pupil over Time) criterion as the detection basis. The PERCLOS criterion, originally proposed by the Carnegie Mellon Research Institute, has been widely used in fatigue detection and is a very mature detection criterion; it is computed as the ratio of the number of closed-eye frames n per unit time to the total number of frames N in that unit time. It is generally accepted that when the PERCLOS value exceeds 0.4, the person being tested is in a state of fatigue. The formula for the PERCLOS criterion is:

f = n / N
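The PERCLOS computation is a straightforward frame-count ratio; the frame labels below are fabricated (True marks a frame judged closed):

```python
# PERCLOS = n / N over a window of per-frame closed-eye flags.

def perclos(closed_flags):
    """Fraction of closed-eye frames in the window; 0.0 for an empty window."""
    if not closed_flags:
        return 0.0
    return sum(closed_flags) / len(closed_flags)

def is_fatigued(closed_flags, threshold=0.4):
    """Rule from the text: PERCLOS above 0.4 indicates fatigue."""
    return perclos(closed_flags) > threshold

window = [True] * 5 + [False] * 5  # 5 of 10 frames closed -> PERCLOS = 0.5
```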
3. Resnet network
The Resnet network is built by concatenating independent residual blocks, each of which has two convolutional layers and a shortcut connection. Assuming x is the input of the residual block, the output F of the two convolutional layers is

F = W_2 σ(W_1 x)

where σ denotes the nonlinear function ReLU and W_1, W_2 are the convolution parameters. F and x are then added through the shortcut connection to obtain the output y:

y = F + x
Experiments show that a residual block usually needs more than two layers; a single-layer residual block brings no improvement. The residual network genuinely solves the degradation problem, with deeper networks shown to achieve lower error rates on both the training and validation sets.
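The residual computation y = F + x with F = W_2·σ(W_1·x) can be illustrated with scalar weights in place of real convolutions; the weight values below are arbitrary:

```python
# One residual block on a 1-D signal, with scalar "convolutions" w1, w2.

def relu(vec):
    return [max(0.0, v) for v in vec]

def residual_block(x, w1=0.5, w2=2.0):
    """y = F + x, where F = w2 * relu(w1 * x)."""
    h = relu([w1 * v for v in x])             # sigma(W1 x)
    f = [w2 * v for v in h]                   # F = W2 sigma(W1 x)
    return [fi + xi for fi, xi in zip(f, x)]  # shortcut add: y = F + x
```

The shortcut add means the block only has to learn a residual on top of the identity, which is what makes very deep stacks of such blocks trainable.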
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (9)

1. A driver fatigue monitoring system based on deep learning is characterized in that: the system comprises an image acquisition module, a face detection module, an image processing module and a server module;
an image acquisition module: used for obtaining an original image;
the face detection module: carrying out face detection by using a YOLO v3 deep learning algorithm to obtain the position of a face;
an image processing module: mainly comprising face key point detection, fatigue state judgment, visual field deviation estimation and dangerous driving behavior identification, wherein fatigue state judgment and visual field deviation estimation both presuppose that the face key points have been extracted; the face key point detection processes the acquired face position with a regression-tree-based face key point detection algorithm: after a frame is obtained, the algorithm places default feature point positions inside the face region, located by the detected face coordinates, as the initial detection positions, then performs iterative regression with mean square error as the loss function to obtain the final face feature point positions; the 68 computed feature points locate the face contour, eyebrows, eyes, nose and jaw, completing the face key point detection;
the formula for a certain frame image iteration is shown as follows:
Figure 551388DEST_PATH_IMAGE001
wherein
Figure 671791DEST_PATH_IMAGE002
Indicating the position of the t + 1 th iteration estimate,
Figure 654790DEST_PATH_IMAGE003
indicating the position estimated at the t-th iteration,
Figure 518841DEST_PATH_IMAGE004
a regressor representing the iteration;
after 68 key points of eyes, a nose, a mouth and a face are obtained, judging whether a driver is in a fatigue state or not according to the ratio of the horizontal distance to the vertical distance of each eye as a judgment basis;
after obtaining 68 key points of eyes, a nose, a mouth and a face, extracting 7 key point coordinates of the left corner of a left eye, the right corner of the left eye, the left corner of a right eye, the right corner of the right eye, the middle part of the nose, the left corner of the mouth and the right corner of the mouth, and estimating the angle of the face posture through matrix transformation so as to judge whether the face deviates;
the dangerous driving behavior recognition adopts a Resnet image classification network based on deep learning to judge normal and dangerous driving;
a server module: comprising information fusion and a back-end server, wherein the information fusion superimposes the information obtained from visual field deviation estimation and dangerous driving behavior identification onto the acquired original image as text, and the back-end server transmits the text-overlaid image to the server and displays it.
2. The deep learning based driver fatigue monitoring system of claim 1, wherein: if the ratio of the horizontal to vertical spacing is less than 5, the eyes are open; otherwise the eyes are judged closed, and finally whether the driver is in a fatigue state is judged from the eye-closure time and the PERCLOS criterion.
3. The deep learning based driver fatigue monitoring system of claim 1, wherein: the angles of the human face pose comprise an azimuth angle, a pitch angle and a roll angle.
4. The deep learning based driver fatigue monitoring system of claim 1, wherein: the raw image is acquired mainly by an external sensor and processed by the ISP.
5. The deep learning based driver fatigue monitoring system of claim 4, wherein: the ISP processing includes auto exposure, sharpening and enhancement.
6. The deep learning based driver fatigue monitoring system of claim 1, wherein: the Resnet image classification network is a Resnet50 image classification network.
7. The deep learning based driver fatigue monitoring system of claim 6, wherein: the Resnet50 image classification network is composed of 15 residual blocks, a 7 × 7 convolution and an avgpool layer, and 3400 images of different classes are collected for training.
8. The deep learning based driver fatigue monitoring system of claim 1, wherein: the dangerous driving comprises smoking and calling.
9. The deep learning based driver fatigue monitoring system of claim 1, wherein: the information fusion issues graded alarms for dangerous driving, view deviation and fatigue according to their duration; when several kinds of abnormal driver behavior occur simultaneously, view deviation has the highest alarm level, fatigue the second, and dangerous driving the lowest.
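The graded alarm of claim 9 amounts to a fixed priority ordering over the three abnormal behaviors; a minimal sketch, with illustrative event names not taken from the patent:

```python
# Alarm levels per claim 9: view deviation > fatigue > dangerous driving
ALARM_LEVEL = {"view_deviation": 3, "fatigue": 2, "dangerous_driving": 1}

def highest_alarm(active_events):
    """Return the active abnormal behavior with the highest alarm level,
    or None when no abnormal behavior is active."""
    if not active_events:
        return None
    return max(active_events, key=ALARM_LEVEL.__getitem__)
```

When multiple detections fire in the same frame, the fusion module would report only the top-priority event, so the driver is not flooded with simultaneous alerts.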
CN202010735477.XA 2020-07-28 2020-07-28 Driver fatigue monitoring system based on deep learning Pending CN111626272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010735477.XA CN111626272A (en) 2020-07-28 2020-07-28 Driver fatigue monitoring system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010735477.XA CN111626272A (en) 2020-07-28 2020-07-28 Driver fatigue monitoring system based on deep learning

Publications (1)

Publication Number Publication Date
CN111626272A true CN111626272A (en) 2020-09-04

Family

ID=72271532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010735477.XA Pending CN111626272A (en) 2020-07-28 2020-07-28 Driver fatigue monitoring system based on deep learning

Country Status (1)

Country Link
CN (1) CN111626272A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668548A (en) * 2021-01-15 2021-04-16 重庆大学 Method and system for detecting driver's fool
CN112699768A (en) * 2020-12-25 2021-04-23 哈尔滨工业大学(威海) Fatigue driving detection method and device based on face information and readable storage medium
CN112966589A (en) * 2021-03-03 2021-06-15 中润油联天下网络科技有限公司 Behavior identification method in dangerous area
CN113313012A (en) * 2021-05-26 2021-08-27 北京航空航天大学 Dangerous driving behavior identification method based on convolution generation countermeasure network
WO2022001091A1 (en) * 2020-06-29 2022-01-06 北京百度网讯科技有限公司 Dangerous driving behavior recognition method and apparatus, and electronic device and storage medium
CN114162119A (en) * 2021-10-27 2022-03-11 广州广日电气设备有限公司 Lateral control method, equipment, medium and product of automobile advanced driving auxiliary system
CN114596687A (en) * 2020-12-01 2022-06-07 咸瑞科技股份有限公司 In-vehicle driving monitoring system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080253610A1 (en) * 2007-04-10 2008-10-16 Denso Corporation Three dimensional shape reconstitution device and estimation device
CN107697069A (en) * 2017-10-31 2018-02-16 上海汽车集团股份有限公司 Fatigue of automobile driver driving intelligent control method
CN109919049A (en) * 2019-02-21 2019-06-21 北京以萨技术股份有限公司 Fatigue detection method based on deep learning human face modeling
CN110427830A (en) * 2019-07-08 2019-11-08 太航常青汽车安全***(苏州)股份有限公司 Driver's abnormal driving real-time detection system for state and method
CN110991353A (en) * 2019-12-06 2020-04-10 中国科学院自动化研究所 Early warning method for recognizing driving behaviors of driver and dangerous driving behaviors
CN111145496A (en) * 2020-01-03 2020-05-12 建德市公安局交通警察大队 Driver behavior analysis early warning system


Similar Documents

Publication Publication Date Title
CN111626272A (en) Driver fatigue monitoring system based on deep learning
Dong et al. Fatigue detection based on the distance of eyelid
CN108309311A (en) A kind of real-time doze of train driver sleeps detection device and detection algorithm
CN110119676A (en) A kind of Driver Fatigue Detection neural network based
CN104013414A (en) Driver fatigue detecting system based on smart mobile phone
CN106250801A (en) Based on Face datection and the fatigue detection method of human eye state identification
CN105956548A (en) Driver fatigue state detection method and device
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN109740477A (en) Study in Driver Fatigue State Surveillance System and its fatigue detection method
CN106548132A (en) The method for detecting fatigue driving of fusion eye state and heart rate detection
CN105913026A (en) Passenger detecting method based on Haar-PCA characteristic and probability neural network
CN109948433A (en) A kind of embedded human face tracing method and device
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
CN113989788A (en) Fatigue detection method based on deep learning and multi-index fusion
Simić et al. Driver monitoring algorithm for advanced driver assistance systems
Saif et al. Robust drowsiness detection for vehicle driver using deep convolutional neural network
CN116935361A (en) Deep learning-based driver distraction behavior detection method
CN110232327B (en) Driving fatigue detection method based on trapezoid cascade convolution neural network
CN111563468A (en) Driver abnormal behavior detection method based on attention of neural network
Rani et al. Development of an Automated Tool for Driver Drowsiness Detection
CN116740792A (en) Face recognition method and system for sightseeing vehicle operators
CN114792437A (en) Method and system for analyzing safe driving behavior based on facial features
CN113361441B (en) Sight line area estimation method and system based on head posture and space attention
CN114973214A (en) Unsafe driving behavior identification method based on face characteristic points
CN114037979A (en) Lightweight driver fatigue state detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200904