CN110879973A - Driver fatigue state facial feature recognition and detection method - Google Patents
- Publication number: CN110879973A
- Application number: CN201911051146.8A
- Authority
- CN
- China
- Prior art keywords
- eye
- face
- human
- state
- fatigue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a method for recognizing and detecting the facial features of a driver in a fatigue state, and relates to the technical field of driver fatigue early warning. The method comprises the following steps: S01, acquiring real-time monitoring video of the driver and detecting the face in the video; S02, locating the eyes within the face region; S03, recognizing and determining the eye state, making a comprehensive judgment of the fatigue degree according to the eye state, and sending out corresponding alarm information according to the fatigue degree. The method is suited to cab environments with complex, changeable illumination conditions and high-frequency, low-amplitude vibration during operation, improves recognition accuracy, and detects driver fatigue accurately and effectively.
Description
Technical field:
The invention relates to the technical field of driver fatigue early warning, and in particular to a method for recognizing and detecting the facial features of a driver in a fatigue state.
Background art:
Driving fatigue refers to the decline in a driver's reaction capability caused by insufficient sleep or prolonged continuous driving, manifested as drowsiness, driving operation errors, or even complete loss of driving ability. Investigations into the causes of traffic accidents have found that 85% of accidents are associated with the driver, while vehicle and environmental factors account for only 15%. Accidents are directly triggered by the driver's actions and errors in the moments before the accident, including delayed perception, misjudgment of the environment, and mishandling of dangerous situations. Among all driver errors, the most common are the perceptual delays and decision errors that produce inattention, misjudgment, and mishandling, and their root cause is driving fatigue.
Detecting fatigue in time and reminding the driver can relieve the driver's mental stress, prevent and reduce traffic accidents to a great extent, and make travel safer. Although research on fatigue-driving early warning has gradually attracted the attention of many countries, practical products have not yet been released, and the accuracy, reliability, and effectiveness of system monitoring urgently need improvement; the implementation of fatigue detection systems requires further research to improve both the speed and the accuracy of detection. Because the in-cab environment changes greatly during operation, existing fatigue detection and recognition methods are not suited to cab environments with complex, changeable illumination conditions and high-frequency, low-amplitude vibration.
Summary of the invention:
The invention aims to provide a method for recognizing and detecting the facial features of a driver in a fatigue state, solving the problem that existing fatigue detection and recognition methods are not suited to cab environments with complex, changeable illumination conditions and high-frequency, low-amplitude vibration during operation.
In order to achieve the purpose, the invention adopts the following technical scheme:
according to a first aspect of the application, a driver fatigue state facial feature recognition detection method is provided, and comprises the following steps:
s01, acquiring a real-time monitoring video of the driver, and identifying and detecting the face in the video;
s02, positioning human eyes in the human face area;
s03, recognizing the eye state, determining the eye state, making a comprehensive judgment on the fatigue degree according to the eye state, and sending out corresponding alarm information according to the fatigue degree.
In the above recognition and detection method, further, in step S01, the face detection method includes:
a face detection algorithm based on the Adaboost algorithm is adopted to detect the face by statistical learning, comprising training on face samples and performing face detection with the obtained sample features to obtain the face region.
In the above face detection method, further, the face sample training method includes:
selecting a face library;
preprocessing a face library picture;
generating an integral graph;
generating Haar-like feature values, using the integral image data to generate 69,120 feature values, and generating weak classifiers;
verifying the correctness of the previous work and the feasibility of the Adaboost algorithm by repeating the generation of the weak classifier and the generation of the strong classifier for multiple times;
and generating a cascade classifier.
In the above face detection method, further, the face detection comprises: preprocessing the image, generating the integral image, calculating the feature values, judging with the cascade classifier, and determining the final face region.
In the above face detection method, further, the positioning the human eye in the face region includes:
training a human eye sample, including collecting the human eye sample, establishing a human eye sample library, and obtaining a human eye cascade classifier;
and detecting the human eye region in the human face region by a human eye cascade classifier obtained by training a human eye sample.
In the above recognition and detection method, further, in step S03, determining the eye state comprises:
determining the candidate eye region by gray-level integral projection: after the face is accurately located, the eyes lie in the upper half of the face according to the distribution of the facial organs, so the upper half of the face region is first cropped out for processing; since the gray values of the eye regions in a face image are usually lower than those of the surrounding areas, the integral projection method exploits this characteristic to locate the eyes;
accurately locating the eyes by template matching: let the image S to be searched have size W×H and the template T have size M×N; the template matching algorithm searches S for the M×N sub-image most similar to the template T and determines its coordinate position.
In the above method for determining the eye state, further, the template matching algorithm takes eye images in various states, manually extracted from face images, as templates, one pair of eyes forming one template; each eye image is stored as a two-dimensional matrix, and correlation matching is performed between the eye template and the face image, with the correlation coefficient taken in the normalized form

E(i, j) = Σ_{m=1..M} Σ_{n=1..N} S^{i,j}(m, n) · T(m, n) / sqrt( Σ_{m=1..M} Σ_{n=1..N} [S^{i,j}(m, n)]² · Σ_{m=1..M} Σ_{n=1..N} [T(m, n)]² )

where S^{i,j} denotes the M×N sub-image of S whose top-left corner lies at (i, j). When the correlation coefficient E(i, j) exceeds a threshold, the searched sub-image is judged to match the template, thereby determining the eye state.
In the above recognition and detection method, the comprehensive judgment of the fatigue degree in step S03 comprises: judging the fatigue degree comprehensively from PERCLOS, eye-closure duration, blink frequency, mouth opening degree, and head movement, the corresponding fatigue level being reached when a computed result reaches its preset threshold; wherein PERCLOS is the percentage of time within a given period for which the eyes are closed; the mouth state is one of three states, closed, speaking, or yawning: the lower half of the face is binarized, the pixels of the connected region are counted upward and downward from the gap between the lips to measure the degree of mouth opening, and the mouth state is classified from that measure; the eye-closure duration is the time from eye closing to eye opening; the blink frequency is the number of blinks accumulated over a period of time, one blink being a cycle in which the eye closes beyond one third of its normal opening degree D and then reopens beyond D/3; head movements, nodding and leaning forward, serve as further bases for the fatigue judgment.
According to a second aspect of the present invention, the present invention also provides a driver fatigue state facial feature recognition detection apparatus, comprising:
the image acquisition module is used for acquiring a real-time monitoring video of a driver;
the face recognition module is used for recognizing and detecting the face in the video;
the human eye positioning module is used for positioning human eyes in the human face area;
the human eye state identification module is used for identifying the human eye state and determining the human eye state;
and the judgment and alarm module is used for making a comprehensive judgment of the fatigue degree according to the eye state and sending out corresponding alarm information according to the fatigue degree.
According to a third aspect of the present application, there is provided a computer-readable storage medium storing a computer program for executing the driver fatigue state facial feature recognition and detection method of the first aspect described above.
According to a fourth aspect of the present application, there is provided an electronic apparatus comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to execute the driver fatigue state facial feature recognition and detection method according to the first aspect.
The method for recognizing and detecting the facial features of a driver in a fatigue state has the following beneficial effects: it is suited to cab environments with complex, changeable illumination conditions and high-frequency, low-amplitude vibration during operation, improves recognition accuracy, and detects driver fatigue accurately and effectively. The strategy of detecting the face first and then the eyes reduces the amount of computation of the detection algorithm while improving the accuracy of eye detection. Determining the candidate eye region by gray-level integral projection gives the eye-state analysis continuity and parameter adjustability; being based on knowledge modeling, it needs no precise geometric model, and the state analysis is therefore robust.
Description of the drawings:
the following detailed description of embodiments of the invention is provided in conjunction with the appended drawings:
FIG. 1 is a schematic diagram of a method for identifying and detecting facial features of a driver in a fatigue state according to the present invention;
fig. 2 is a schematic view of training a human face sample in the method for identifying and detecting facial features of a fatigue state of a driver according to the present invention.
Detailed description of the embodiments:
it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
An exemplary method:
as shown in fig. 1 and fig. 2, a method for identifying and detecting facial features of a fatigue state of a driver includes:
s01, acquiring a real-time monitoring video of the driver, and identifying and detecting the face in the video;
s02, positioning human eyes in the human face area;
s03, recognizing the eye state, determining the eye state, making a comprehensive judgment on the fatigue degree according to the eye state, and sending out corresponding alarm information according to the fatigue degree.
With this scheme, the strategy of detecting the face first and then the eyes reduces the amount of computation of the detection algorithm while improving the accuracy of eye detection.
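The computational saving of the face-first, eyes-second strategy can be illustrated by counting candidate search windows at a single scale; the frame size, face-region size, and window size below are illustrative assumptions, not values from the patent.

```python
# Rough count of candidate detection windows (stride 1, single scale) to show
# why restricting the eye search to a detected face region cuts computation.
def num_windows(img_h, img_w, win_h, win_w):
    # number of window positions a sliding-window detector must evaluate
    return (img_h - win_h + 1) * (img_w - win_w + 1)

whole_frame = num_windows(480, 640, 24, 24)   # eye search over the full frame
face_region = num_windows(120, 120, 24, 24)   # search inside a detected face
upper_half  = num_windows(60, 120, 24, 24)    # eyes lie in the upper half

print(whole_frame, face_region, upper_half)
```

Under these assumed sizes, searching the upper half of a detected face evaluates roughly two orders of magnitude fewer windows than scanning the whole frame.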
Specifically, in step S01, the face detection method includes:
a face detection algorithm based on the Adaboost algorithm is adopted to detect the face by statistical learning, comprising training on face samples and performing face detection with the obtained sample features to obtain the face region.
Specifically, the face sample training method includes:
selecting a face library, specifically adopting a CBCL face library;
preprocessing the face library pictures, specifically normalizing the gray-level mean and variance and normalizing the size;
generating the integral images, specifically generating the upright and tilted integral image data of the 5000 positive and negative samples;
generating Haar-like feature values, using the integral image data to generate 69,120 feature values, and generating weak classifiers with good classification performance;
verifying the correctness of the previous work and the feasibility of the Adaboost algorithm by repeating the generation of the weak classifier and the generation of the strong classifier for multiple times;
and generating the cascade classifier, which reduces the face detection time.
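The training loop above, weak classifiers built from thresholded Haar-like feature values and re-weighted into a strong classifier, can be sketched with a minimal AdaBoost example; the one-dimensional synthetic feature responses, thresholds, and round count are illustrative assumptions, not the patent's training data.

```python
# Minimal AdaBoost sketch: each weak classifier is a threshold on one
# Haar-like feature value; misclassified samples are re-weighted each round
# and the weak classifiers are combined into a strong classifier.
import math

def train_adaboost(feats, labels, rounds=5):
    n = len(feats)
    w = [1.0 / n] * n                       # sample weights
    strong = []                             # (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for thr in sorted(set(feats)):
            for pol in (1, -1):
                # weak classifier: +1 if pol * (feat - thr) >= 0, else -1
                err = sum(wi for wi, f, y in zip(w, feats, labels)
                          if (1 if pol * (f - thr) >= 0 else -1) != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)               # avoid log(0) for perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        strong.append((thr, pol, alpha))
        # boost the weight of misclassified samples, then renormalise
        for i, (f, y) in enumerate(zip(feats, labels)):
            pred = 1 if pol * (f - thr) >= 0 else -1
            w[i] *= math.exp(-alpha * y * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return strong

def predict(strong, f):
    score = sum(alpha * (1 if pol * (f - thr) >= 0 else -1)
                for thr, pol, alpha in strong)
    return 1 if score >= 0 else -1

# synthetic feature responses: "face" samples (+1) give high values
feats  = [0.9, 0.8, 0.75, 0.7, 0.2, 0.15, 0.1, 0.05]
labels = [1, 1, 1, 1, -1, -1, -1, -1]
clf = train_adaboost(feats, labels, rounds=5)
print(all(predict(clf, f) == y for f, y in zip(feats, labels)))
```

In the real cascade, thousands of such stumps over the 69,120 Haar-like features are trained per stage; this sketch only shows the weighting and combination mechanism.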
In the above face detection method, specifically, the face detection comprises: preprocessing the image, generating the integral image, calculating the feature values, judging with the cascade classifier, and determining the final face region.
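The integral-image step above can be sketched as follows: each entry stores the cumulative sum of the pixels above and to the left, so any rectangular sum, and hence any Haar-like feature value, costs only four table lookups. The 3×4 sample image is an illustrative assumption.

```python
# Integral image: ii[y][x] = sum of img[0:y][0:x] (extra zero row/column
# keeps the recurrence and the rectangle-sum formula branch-free).
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    # sum over the w x h rectangle with top-left pixel (x, y): four lookups
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
ii = integral_image(img)
top_left = rect_sum(ii, 0, 0, 2, 2)       # 1 + 2 + 5 + 6
# a two-rectangle (left minus right) Haar-like feature over the whole image
haar = rect_sum(ii, 0, 0, 2, 3) - rect_sum(ii, 2, 0, 2, 3)
print(top_left, haar)
```

Because every feature value reduces to a handful of lookups, evaluating tens of thousands of Haar-like features per window stays cheap.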
In the above face detection method, specifically, the positioning the human eyes in the face region in the above step includes:
training a human eye sample, including collecting the human eye sample, establishing a human eye sample library, and obtaining a human eye cascade classifier;
and detecting the human eye region in the human face region by a human eye cascade classifier obtained by training a human eye sample.
In the above recognition and detection method, in step S03, specifically, determining the eye state comprises:
determining the candidate eye region by gray-level integral projection: after the face is accurately located, the eyes lie in the upper half of the face according to the distribution of the facial organs, so the upper half of the face region is first cropped out for processing; since the gray values of the eye regions in a face image are usually lower than those of the surrounding areas, the integral projection method exploits this characteristic to locate the eyes;
accurately locating the eyes by template matching: let the image S to be searched have size W×H and the template T have size M×N; the template matching algorithm searches S for the M×N sub-image most similar to the template T and determines its coordinate position.
In the method for determining the eye state, specifically, the template matching algorithm takes eye images in various states, manually extracted from face images, as templates, one pair of eyes forming one template; each eye image is stored as a two-dimensional matrix, and correlation matching is performed between the eye template and the face image, with the correlation coefficient taken in the normalized form

E(i, j) = Σ_{m=1..M} Σ_{n=1..N} S^{i,j}(m, n) · T(m, n) / sqrt( Σ_{m=1..M} Σ_{n=1..N} [S^{i,j}(m, n)]² · Σ_{m=1..M} Σ_{n=1..N} [T(m, n)]² )

where S^{i,j} denotes the M×N sub-image of S whose top-left corner lies at (i, j). When the correlation coefficient E(i, j) exceeds a threshold, the searched sub-image is judged to match the template, thereby determining the eye state.
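The two-stage localisation above, a coarse eye band from gray-level integral projection followed by fine positioning via correlation matching, can be sketched on a tiny synthetic gray image. The array values, the 2×4 eye template, and the use of the zero-mean normalised correlation are illustrative assumptions, not the patent's exact data or formula.

```python
# Toy eye localisation: (1) row-wise gray projection finds the dark eye band,
# (2) sliding-window correlation against an eye template refines the position.
import math

def row_projection(img):
    # mean gray value of each row; rows crossing the eyes project to a minimum
    return [sum(row) / len(row) for row in img]

def zncc(sub, T):
    # zero-mean normalised correlation between a sub-image and the template
    n = len(sub) * len(sub[0])
    ms = sum(v for row in sub for v in row) / n
    mt = sum(v for row in T for v in row) / n
    num = sum((s - ms) * (t - mt)
              for srow, trow in zip(sub, T)
              for s, t in zip(srow, trow))
    ds = math.sqrt(sum((v - ms) ** 2 for row in sub for v in row))
    dt = math.sqrt(sum((v - mt) ** 2 for row in T for v in row))
    return num / (ds * dt) if ds and dt else 0.0

def match(S, T):
    # slide the M x N template over S, keep the best-scoring position
    M, N = len(T), len(T[0])
    best = (-1.0, 0, 0)
    for i in range(len(S) - M + 1):
        for j in range(len(S[0]) - N + 1):
            sub = [row[j:j + N] for row in S[i:i + M]]
            e = zncc(sub, T)
            if e > best[0]:
                best = (e, i, j)
    return best

# synthetic upper-half face: bright skin (200) with dark eye pixels (40)
face = [[200] * 8 for _ in range(6)]
for y in (2, 3):
    for x in (1, 2, 5, 6):
        face[y][x] = 40

proj = row_projection(face)
band = proj.index(min(proj))              # topmost row of the dark eye band
T = [[200, 40, 40, 200],
     [200, 40, 40, 200]]                  # hypothetical eye template
score, i, j = match(face, T)
print(band, i, j)
```

The projection alone gives only a horizontal band; the correlation score then pins down the exact window, mirroring the coarse-to-fine order described above.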
In the above recognition and detection method, the comprehensive judgment of the fatigue degree in step S03 specifically comprises: judging the fatigue degree comprehensively from PERCLOS, eye-closure duration, blink frequency, mouth opening degree, and head movement, the corresponding fatigue level being reached when a computed result reaches its preset threshold; wherein PERCLOS is the percentage of time within a given period for which the eyes are closed; the mouth state is one of three states, closed, speaking, or yawning: the lower half of the face is binarized, the pixels of the connected region are counted upward and downward from the gap between the lips to measure the degree of mouth opening, and the mouth state is classified from that measure; the eye-closure duration is the time from eye closing to eye opening; the blink frequency is the number of blinks accumulated over a period of time, one blink being a cycle in which the eye closes beyond one third of its normal opening degree D and then reopens beyond D/3; head movements, nodding and leaning forward, serve as further bases for the fatigue judgment.
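A minimal sketch of the eye-based measures described above, PERCLOS, blink counting, and a threshold-based fatigue judgment, applied to a synthetic sequence of per-frame eye states; the thresholds, labels, and frame data are illustrative assumptions, not values from the patent.

```python
# PERCLOS = fraction of frames in the window with the eye judged closed;
# a blink is one closed-to-open cycle; the fatigue levels are illustrative.
def perclos(closed_flags):
    # closed_flags: per-frame booleans, True = eye judged closed
    return sum(closed_flags) / len(closed_flags)

def count_blinks(closed_flags):
    blinks, prev = 0, False
    for c in closed_flags:
        if prev and not c:        # closed -> open transition ends one blink
            blinks += 1
        prev = c
    return blinks

def fatigue_level(closed_flags, perclos_thr=0.4, blink_thr=2):
    # hypothetical thresholds: high PERCLOS dominates the judgment
    if perclos(closed_flags) >= perclos_thr:
        return "severe"
    if count_blinks(closed_flags) >= blink_thr:
        return "mild"
    return "awake"

# 20 frames: two short blinks, eyes otherwise open
frames = [False]*5 + [True]*2 + [False]*5 + [True]*2 + [False]*6
print(perclos(frames), count_blinks(frames), fatigue_level(frames))
```

A production system would compute these over a sliding time window and fuse them with the mouth and head cues before deciding the alarm level.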
An exemplary apparatus:
a driver fatigue state facial feature recognition detection device includes:
the image acquisition module is used for acquiring a real-time monitoring video of a driver;
the face recognition module is used for recognizing and detecting the face in the video;
the human eye positioning module is used for positioning human eyes in the human face area;
the human eye state identification module is used for identifying the human eye state and determining the human eye state;
and the judgment and alarm module is used for making a comprehensive judgment of the fatigue degree according to the eye state and sending out corresponding alarm information according to the fatigue degree.
Exemplary computer program products and computer-readable storage media:
a computer-readable storage medium storing a computer program for executing the driver fatigue state facial feature recognition detecting method of the first aspect described above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
An exemplary electronic device:
an electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to execute the driver fatigue state facial feature recognition and detection method according to the first aspect.
The processor may be a central processing unit (CPU) or another form of processing unit having data processing and/or instruction execution capability, and may control other components in the electronic device to perform desired functions.
The memory may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by a processor to implement the methods of the various embodiments of the application described above and/or other desired functions.
The foregoing shows and describes the general principles and essential features of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.
Claims (10)
1. A method for recognizing and detecting the facial features of a driver in a fatigue state, characterized by comprising the following steps:
s01, acquiring a real-time monitoring video of the driver, and identifying and detecting the face in the video;
s02, positioning human eyes in the human face area;
s03, recognizing the eye state, determining the eye state, making a comprehensive judgment on the fatigue degree according to the eye state, and sending out corresponding alarm information according to the fatigue degree.
2. The driver fatigue state facial feature recognition detection method according to claim 1, characterized in that: in step S01, the face detection method includes:
a face detection algorithm based on an Adaboost algorithm is adopted to carry out statistical learning, identification and detection on a face, wherein the statistical learning, identification and detection comprise face sample training and face detection on obtained sample characteristics so as to obtain a face region.
3. The driver fatigue state facial feature recognition detection method according to claim 2, characterized in that: the face sample training method comprises the following steps:
selecting a face library;
preprocessing a face library picture;
generating an integral graph;
generating Haar-like feature values, using the integral image data to generate 69,120 feature values, and generating weak classifiers;
verifying the correctness of the previous work and the feasibility of the Adaboost algorithm by repeating the generation of the weak classifier and the generation of the strong classifier for multiple times;
and generating a cascade classifier.
4. The driver fatigue state facial feature recognition detection method according to claim 1, characterized in that: the face detection comprises: preprocessing the image, generating the integral image, calculating the feature values, judging with the cascade classifier, and determining the final face region.
5. The driver fatigue state facial feature recognition detection method according to claim 1, characterized in that: locating the human eye in the face region includes:
training a human eye sample, including collecting the human eye sample, establishing a human eye sample library, and obtaining a human eye cascade classifier;
and detecting the human eye region in the human face region by a human eye cascade classifier obtained by training a human eye sample.
6. The driver fatigue state facial feature recognition detection method according to claim 1, characterized in that: in step S03, determining the eye state comprises:
determining the candidate eye region by gray-level integral projection: after the face is accurately located, the eyes lie in the upper half of the face according to the distribution of the facial organs, so the upper half of the face region is first cropped out for processing; since the gray values of the eye regions in a face image are usually lower than those of the surrounding areas, the integral projection method exploits this characteristic to locate the eyes;
accurately locating the eyes by template matching: let the image S to be searched have size W×H and the template T have size M×N; the template matching algorithm searches S for the M×N sub-image most similar to the template T and determines its coordinate position;
the template matching algorithm takes eye images in various states, manually extracted from face images, as templates, one pair of eyes forming one template; each eye image is stored as a two-dimensional matrix, and correlation matching is performed between the eye template and the face image, with the correlation coefficient taken in the normalized form

E(i, j) = Σ_{m=1..M} Σ_{n=1..N} S^{i,j}(m, n) · T(m, n) / sqrt( Σ_{m=1..M} Σ_{n=1..N} [S^{i,j}(m, n)]² · Σ_{m=1..M} Σ_{n=1..N} [T(m, n)]² )

where S^{i,j} denotes the M×N sub-image of S whose top-left corner lies at (i, j); when the correlation coefficient E(i, j) exceeds a threshold, the searched sub-image is judged to match the template, thereby determining the eye state.
7. The driver fatigue state facial feature recognition detection method according to claim 1, characterized in that: in step S03, the comprehensive judgment of the fatigue degree comprises: judging the fatigue degree comprehensively from PERCLOS, eye-closure duration, blink frequency, mouth opening degree, and head movement, the corresponding fatigue level being reached when a computed result reaches its preset threshold; wherein PERCLOS is the percentage of time within a given period for which the eyes are closed; the mouth state is one of three states, closed, speaking, or yawning: the lower half of the face is binarized, the pixels of the connected region are counted upward and downward from the gap between the lips to measure the degree of mouth opening, and the mouth state is classified from that measure; the eye-closure duration is the time from eye closing to eye opening; the blink frequency is the number of blinks accumulated over a period of time, one blink being a cycle in which the eye closes beyond one third of its normal opening degree D and then reopens beyond D/3; head movements, nodding and leaning forward, serve as further bases for the fatigue judgment.
8. A driver fatigue state facial feature recognition and detection device, characterized by comprising:
the image acquisition module is used for acquiring a real-time monitoring video of a driver;
the face recognition module is used for recognizing and detecting the face in the video;
the human eye positioning module is used for positioning human eyes in the human face area;
the human eye state identification module is used for identifying and determining the human eye state;
and the determination and alarm module is used for making a comprehensive judgment of the fatigue degree according to the human eye state and issuing corresponding alarm information according to the fatigue degree.
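The five modules of claim 8 form a linear per-frame pipeline. A minimal sketch wiring them together as pluggable callables (the class name and stage signatures are illustrative, not from the patent):

```python
class FatigueDetector:
    """Sketch of the claimed module pipeline; each stage is injectable."""

    def __init__(self, capture, detect_face, locate_eyes, eye_state, judge):
        self.capture = capture          # image acquisition module
        self.detect_face = detect_face  # face recognition module
        self.locate_eyes = locate_eyes  # human eye positioning module
        self.eye_state = eye_state      # human eye state identification module
        self.judge = judge              # determination and alarm module

    def step(self):
        """Process one frame; return the alarm decision, or None if no face."""
        frame = self.capture()
        face = self.detect_face(frame)
        if face is None:
            return None                 # no driver face found in this frame
        eyes = self.locate_eyes(face)
        state = self.eye_state(eyes)
        return self.judge(state)
```

This mirrors the coarse-to-fine flow of the claims: video frame, face region, eye region, eye state, fatigue judgment and alarm.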
9. A computer-readable storage medium, characterized in that: the storage medium stores a computer program for executing the driver fatigue state facial feature recognition and detection method according to any one of claims 1 to 7.
10. An electronic device, characterized in that: the electronic device includes:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the driver fatigue state facial feature recognition and detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911051146.8A CN110879973A (en) | 2019-10-31 | 2019-10-31 | Driver fatigue state facial feature recognition and detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911051146.8A CN110879973A (en) | 2019-10-31 | 2019-10-31 | Driver fatigue state facial feature recognition and detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110879973A true CN110879973A (en) | 2020-03-13 |
Family
ID=69728217
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911051146.8A Pending CN110879973A (en) | 2019-10-31 | 2019-10-31 | Driver fatigue state facial feature recognition and detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110879973A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111476122A (en) * | 2020-03-26 | 2020-07-31 | 杭州鸿泉物联网技术股份有限公司 | Driving state monitoring method and device and storage medium |
CN111860098A (en) * | 2020-04-21 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | Fatigue driving detection method and device, electronic equipment and medium |
CN112419672A (en) * | 2020-09-17 | 2021-02-26 | 阜阳师范大学 | Vehicle-mounted fatigue driving prevention system |
CN112699807A (en) * | 2020-12-31 | 2021-04-23 | 车主邦(北京)科技有限公司 | Driver state information monitoring method and device |
CN113076801A (en) * | 2021-03-04 | 2021-07-06 | 广州铁路职业技术学院(广州铁路机械学校) | Train on-road state intelligent linkage detection system and method |
CN113591533A (en) * | 2021-04-27 | 2021-11-02 | 浙江工业大学之江学院 | Anti-fatigue driving method, device, equipment and storage medium based on road monitoring |
CN113655877A (en) * | 2020-05-12 | 2021-11-16 | 华为技术有限公司 | Display adjustment method and device |
CN113971796A (en) * | 2021-09-27 | 2022-01-25 | 上海赫千电子科技有限公司 | Intelligent vehicle-mounted box and driving fatigue monitoring method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104063700A (en) * | 2014-07-04 | 2014-09-24 | 武汉工程大学 | Method for locating central points of eyes in natural lighting front face image |
CN104240446A (en) * | 2014-09-26 | 2014-12-24 | 长春工业大学 | Fatigue driving warning system on basis of human face recognition |
CN106250801A (en) * | 2015-11-20 | 2016-12-21 | 北汽银翔汽车有限公司 | Based on Face datection and the fatigue detection method of human eye state identification |
CN106530623A (en) * | 2016-12-30 | 2017-03-22 | 南京理工大学 | Fatigue driving detection device and method |
CN109344802A (en) * | 2018-10-29 | 2019-02-15 | 重庆邮电大学 | A kind of human-body fatigue detection method based on improved concatenated convolutional nerve net |
CN109902560A (en) * | 2019-01-15 | 2019-06-18 | 浙江师范大学 | A kind of fatigue driving method for early warning based on deep learning |
CN109934199A (en) * | 2019-03-22 | 2019-06-25 | 扬州大学 | A kind of Driver Fatigue Detection based on computer vision and system |
Non-Patent Citations (1)
Title |
---|
张建保: "《中国生物医学工程进展 上》" (Zhang Jianbao, "Advances in Biomedical Engineering in China, Vol. I"), 30 April 2007 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110879973A (en) | Driver fatigue state facial feature recognition and detection method | |
CN108875833B (en) | Neural network training method, face recognition method and device | |
CN107704805B (en) | Method for detecting fatigue driving, automobile data recorder and storage device | |
Gupta et al. | Implementation of motorist weariness detection system using a conventional object recognition technique | |
WO2019028798A1 (en) | Method and device for monitoring driving condition, and electronic device | |
Tadesse et al. | Driver drowsiness detection through HMM based dynamic modeling | |
CN110765807A (en) | Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium | |
Ragab et al. | A visual-based driver distraction recognition and detection using random forest | |
Yuen et al. | Looking at faces in a vehicle: A deep CNN based approach and evaluation | |
CN111434553B (en) | Brake system, method and device, and fatigue driving model training method and device | |
Jie et al. | Analysis of yawning behaviour in spontaneous expressions of drowsy drivers | |
CN107832721B (en) | Method and apparatus for outputting information | |
CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis | |
Lashkov et al. | Driver dangerous state detection based on OpenCV & dlib libraries using mobile video processing | |
CN115331205A (en) | Driver fatigue detection system with cloud edge cooperation | |
Stan et al. | Eye-gaze tracking method driven by raspberry PI applicable in automotive traffic safety | |
CN112926364B (en) | Head gesture recognition method and system, automobile data recorder and intelligent cabin | |
Gatea et al. | Deep learning neural network for driver drowsiness detection using eyes recognition | |
US20170309040A1 (en) | Method and device for positioning human eyes | |
CN115641570B (en) | Driving behavior determination method, driving behavior determination device, electronic equipment and storage medium | |
CN114998874A (en) | Driver abnormal behavior detection method based on deep learning | |
KR20170028631A (en) | Method and Apparatus for Detecting Carelessness of Driver Using Restoration of Front Face Image | |
Ma et al. | A real-time fatigue driving detection system design and implementation | |
CN114792437A (en) | Method and system for analyzing safe driving behavior based on facial features | |
CN109657550B (en) | Fatigue degree detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200313 |