CN117576668B - Multi-feature perception driving fatigue state detection method and system based on video frame - Google Patents

Multi-feature perception driving fatigue state detection method and system based on video frame Download PDF

Info

Publication number
CN117576668B
CN117576668B CN202410068812.3A CN202410068812A CN117576668B CN 117576668 B CN117576668 B CN 117576668B CN 202410068812 A CN202410068812 A CN 202410068812A CN 117576668 B CN117576668 B CN 117576668B
Authority
CN
China
Prior art keywords
fatigue
eye
mouth
analysis information
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410068812.3A
Other languages
Chinese (zh)
Other versions
CN117576668A (en
Inventor
徐志展
占思琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi University of Technology
Original Assignee
Jiangxi University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi University of Technology filed Critical Jiangxi University of Technology
Priority to CN202410068812.3A priority Critical patent/CN117576668B/en
Publication of CN117576668A publication Critical patent/CN117576668A/en
Application granted granted Critical
Publication of CN117576668B publication Critical patent/CN117576668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/811Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Image Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention is applicable to the technical field of fatigue driving detection, and provides a multi-feature perceived driving fatigue state detection method and system based on video frames, wherein the method comprises the following steps: when the starting of the vehicle is detected, acquiring a face video of a main driving position, and determining a calibrated face image according to the face video of the main driving position; acquiring a section of face video of a main driving position at each set time value, and extracting video frames of the face video to obtain a set number of face images; identifying eye features and mouth features of the facial image to obtain facial analysis information, wherein the facial analysis information comprises eye feature data and mouth feature data; calculating an eye calibration value and a mouth calibration value of the calibrated face image, comparing the face analysis information with the eye calibration value and the mouth calibration value of the calibrated face image, and judging whether fatigue signs exist in the face analysis information; according to the fatigue condition of all facial analysis information, whether the driver has fatigue driving can be accurately judged.

Description

Multi-feature perception driving fatigue state detection method and system based on video frame
Technical Field
The invention relates to the technical field of fatigue driving detection, in particular to a multi-feature perception driving fatigue state detection method and system based on video frames.
Background
Fatigue driving is an important factor affecting the driving safety of vehicles, and it is significant to prevent fatigue driving by detecting fatigue driving of drivers, such as traffic accidents. The method for implementing fatigue driving detection at present is mainly realized by detecting the blink frequency and the driving duration of eyes of a driver. Some drivers have high blink frequency, and some drivers still feel full after long-time driving or feel tired after short-time driving, so that the detection of fatigue driving is not accurate enough. Therefore, it is desirable to provide a method and a system for detecting a multi-feature perceived driving fatigue state based on video frames, so as to solve the above problems.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention aims to provide a multi-feature perception driving fatigue state detection method and system based on video frames, so as to solve the problems existing in the background art.
The invention is realized in such a way that the method for detecting the multi-feature perceived driving fatigue state based on the video frame comprises the following steps:
when the starting of the vehicle is detected, acquiring a face video of a main driving position, and determining a calibrated face image according to the face video of the main driving position;
acquiring a section of face video of a main driving position at each set time value, and extracting video frames of the face video to obtain a set number of face images;
identifying eye features and mouth features of the facial image to obtain facial analysis information, wherein the facial analysis information comprises eye feature data and mouth feature data;
calculating an eye calibration value and a mouth calibration value of the calibrated face image, comparing the face analysis information with the eye calibration value and the mouth calibration value of the calibrated face image, and judging whether fatigue signs exist in the face analysis information;
and judging whether the driver has fatigue driving risk according to the fatigue conditions of all the facial analysis information, and generating early warning information when the driver has fatigue driving risk.
As a further scheme of the invention: the step of determining the calibrated face image according to the face video of the main driving position specifically comprises the following steps:
matching the face video of the main driver position with a face image library, and determining the matched face image as a calibrated face image when the matching is successful;
and when the matching fails, determining a calibrated face image according to the face video of the main driving position.
As a further scheme of the invention: the step of identifying the eye features and the mouth features of the facial image to obtain facial analysis information specifically comprises the following steps:
the eye feature is subjected to edge tracing to determine an eye contour, so that left and right endpoints of the eye contour are obtained, the eye contour is divided into an upper contour and a lower contour by the left and right endpoints, and the midpoint of the upper contour and the midpoint of the lower contour are determined;
the eye characteristic data is obtained through calculation, and the calculation formula of the eye characteristic data is as follows:wherein->Calculated value representing eye characteristic data, +.>Reference value representing eye characteristic data, +.>Represents the distance between the left and right end points of the eye contour, < >>Representing the distance between the upper contour midpoint and the lower contour midpoint;
the mouth feature is subjected to edge drawing to determine a lip arc line, so that left and right end points of the lip arc line are obtained, the lip arc line is divided into an upper arc line and a lower arc line by the left and right end points, and the midpoint of the upper arc line and the midpoint of the lower arc line are determined;
calculating to obtain the mouth characteristic data, wherein the calculation formula of the mouth characteristic data is as follows:wherein->Calculated value representing the mouth characteristic data, +.>Reference value representing mouth characteristic data, +.>Correction factor representing a mouth characteristic data item, +.>Represents the distance between the left and right end points of the lip arc,/->Representing the distance between the midpoint of the upper arc and the midpoint of the lower arc;
and integrating the eye characteristic data and the mouth characteristic data to obtain facial analysis information.
As a further scheme of the invention: comparing the facial analysis information with an eye calibration value and a mouth calibration value of a calibrated face image, and judging whether the facial analysis information has fatigue signs or not, wherein the method specifically comprises the following steps of:
calculated value of eye characteristic dataAnd eye calibration value->Comparison is made when->,/>Representing an eye fatigue coefficient, and determining that the corresponding facial analysis information has eye characteristic item fatigue signs;
calculated value of the mouth characteristic dataAnd the mouth calibration value->Comparison is made when->,/>Representing the mouth fatigue coefficient, determining that the corresponding facial analysis information has the fatigue signs of the mouth characteristic items.
As a further scheme of the invention: the step of judging whether the driver has the risk of fatigue driving according to the fatigue condition of all the facial analysis information specifically comprises the following steps:
determining a fatigue comprehensive value of fatigue signs in a group of facial analysis information corresponding to the facial video of the main driver;
and judging the fatigue comprehensive value, and determining that the driver has fatigue driving risk when the fatigue comprehensive value is larger than a preset fatigue value.
Another object of the present invention is to provide a multi-feature perceived driving fatigue state detection system based on video frames, the system comprising:
the calibration face image module is used for acquiring face videos of the main driving position when the vehicle is detected to start, and determining a calibration face image according to the face videos of the main driving position;
the face image acquisition module is used for acquiring a section of face video of the main driving position at each set time value, and extracting video frames of the face video to obtain a set number of face images;
the facial analysis information module is used for identifying eye characteristics and mouth characteristics of the facial image to obtain facial analysis information, and the facial analysis information comprises eye characteristic data and mouth characteristic data;
the fatigue sign judging module is used for calculating an eye calibration value and a mouth calibration value of the calibrated face image, comparing the face analysis information with the eye calibration value and the mouth calibration value of the calibrated face image, and judging whether the face analysis information has fatigue signs or not;
and the fatigue driving judging module is used for judging whether the driver has the risk of fatigue driving according to the fatigue conditions of all the facial analysis information, and generating early warning information when the driver has the risk of fatigue driving.
As a further scheme of the invention: the face image calibrating module comprises:
the first calibration image unit is used for matching the face video of the main driver position with the face image library, and when the matching is successful, the matched face image is determined to be the calibration face image;
and the second calibration image unit is used for determining a calibration face image according to the face video of the main driving position when the matching fails.
As a further scheme of the invention: the facial analysis information module includes:
the eye contour recognition unit is used for carrying out edge tracing on the eye characteristics to determine the eye contour, so as to obtain left and right endpoints of the eye contour, wherein the left and right endpoints divide the eye contour into an upper contour and a lower contour, and a midpoint of the upper contour and a midpoint of the lower contour are determined;
the eye characteristic data unit is used for calculating eye characteristic data, and the calculation formula of the eye characteristic data is as follows:wherein->Calculated value representing eye characteristic data, +.>Reference value representing eye characteristic data, +.>Represents the distance between the left and right end points of the eye contour, < >>Representing the distance between the upper contour midpoint and the lower contour midpoint;
the lip arc line identification unit is used for carrying out edge drawing on the mouth characteristics to determine lip arc lines, so as to obtain left and right end points of the lip arc lines, wherein the left and right end points divide the lip arc lines into an upper arc line and a lower arc line, and the midpoint of the upper arc line and the midpoint of the lower arc line are determined;
the calculating formula for calculating the mouth characteristic data is as follows:wherein->Calculated value representing the mouth characteristic data, +.>Reference value representing mouth characteristic data, +.>Correction factor representing a mouth characteristic data item, +.>Represents the distance between the left and right end points of the lip arc,/->Representing the distance between the midpoint of the upper arc and the midpoint of the lower arc;
and the facial analysis information unit is used for integrating the eye characteristic data and the mouth characteristic data to obtain facial analysis information.
As a further scheme of the invention: the fatigue sign determination module includes:
a first fatigue sign unit for calculating an eye characteristic dataAnd eye calibration value->Comparison is made when,/>Representing an eye fatigue coefficient, and determining that the corresponding facial analysis information has eye characteristic item fatigue signs;
a second fatigue sign unit for calculating a value of the mouth characteristic dataAnd the mouth calibration value->Comparison is made when,/>Representing the mouth fatigue coefficient, determining that the corresponding facial analysis information has the fatigue signs of the mouth characteristic items.
As a further scheme of the invention: the fatigue driving determination module includes:
the fatigue comprehensive value calculation unit is used for determining a fatigue comprehensive value with fatigue signs in a group of facial analysis information corresponding to the facial video of the main driver;
and the fatigue comprehensive value judging unit is used for judging the fatigue comprehensive value, and determining that the driver has fatigue driving risk when the fatigue comprehensive value is larger than a preset fatigue value.
Compared with the prior art, the invention has the beneficial effects that:
the method comprises the steps of acquiring a section of face video of a main driving position, extracting video frames of the face video to obtain a set number of face images, identifying eye features and mouth features of the face images to obtain face analysis information, comparing the face analysis information with eye calibration values and mouth calibration values of calibrated face images, and judging whether the face analysis information has fatigue signs or not; and judging whether the driver has fatigue driving risk according to the fatigue conditions of all the facial analysis information. Therefore, whether the driver has fatigue driving can be accurately judged, and accidental factors are avoided.
Drawings
Fig. 1 is a flowchart of a method for detecting a multi-feature perceived driving fatigue state based on a video frame.
Fig. 2 is a schematic structural diagram of a multi-feature perceived driving fatigue state detection system based on video frames.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Specific implementations of the invention are described in detail below in connection with specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides a method for detecting a multi-feature perceived driving fatigue state based on a video frame, the method comprising the following steps:
s100, when the starting of the vehicle is detected, acquiring a face video of a main driving position, and determining a calibrated face image according to the face video of the main driving position;
s200, acquiring a section of face video of a main driving position at each set time value, and extracting video frames of the face video to obtain a set number of face images;
s300, identifying eye features and mouth features of the facial image to obtain facial analysis information, wherein the facial analysis information comprises eye feature data and mouth feature data;
s400, calculating an eye calibration value and a mouth calibration value of the calibrated face image, comparing the face analysis information with the eye calibration value and the mouth calibration value of the calibrated face image, and judging whether fatigue signs exist in the face analysis information;
s500, judging whether the driver has the risk of fatigue driving according to the fatigue conditions of all the facial analysis information, and generating early warning information when the driver has the risk of fatigue driving.
In the embodiment of the invention, in order to ensure that the driver is focused in driving and fatigue driving is avoided, when the vehicle is detected to start, the face video of the main driving position is automatically acquired, the calibrated face image is determined according to the face video of the main driving position, and the calibrated face image is the face image of the driver in a wakefulness state. Then, in the running process of the vehicle, a section of face video of a main driving position is acquired every set time value (for example, every 20 minutes), video frame extraction is carried out on the face video to obtain a set number of face images, for example, every two minutes of face video is acquired, 10 face images are obtained through extraction on the two minutes of face video, then the eye features and the mouth features of the face images are recognized to obtain face analysis information, one face image corresponds to one face analysis information, the face analysis information comprises eye feature data and mouth feature data, then an eye calibration value and a mouth calibration value of a calibrated face image are calculated, the face analysis information is compared with the eye calibration value and the mouth calibration value of the calibrated face image, whether fatigue signs exist in the face analysis information is judged, finally, whether the fatigue driving risk exists in the driver is judged according to the fatigue conditions of all the face analysis information, and when the fatigue driving risk exists in the driver is judged, the early warning information is generated, so that whether the driver has fatigue driving can be accurately judged, and the occurrence of danger can be timely prevented.
As a preferred embodiment of the present invention, the step of determining the calibration face image according to the face video of the main driver position specifically includes:
s101, matching a face video of a main driver seat with a face image library, and determining the matched face image as a calibrated face image when the matching is successful;
s102, when matching fails, determining a calibrated face image according to the face video of the main driver seat.
In the embodiment of the invention, a face image library is established, the face image library contains face images of people who possibly drive the vehicle in a waking state, the face video of a main driving position is matched with the face image library, and when the matching is successful, the matched face images are determined to be calibrated face images; when the matching fails, the driver driving the vehicle is not commonly used, such as driving, and a calibration face image is determined according to the face video of the main driver seat.
As a preferred embodiment of the present invention, the step of identifying the eye feature and the mouth feature of the face image to obtain the face analysis information specifically includes:
s301, performing edge tracing on the eye features to determine eye contours, so as to obtain left and right endpoints of the eye contours, wherein the left and right endpoints divide the eye contours into an upper contour and a lower contour, and a midpoint of the upper contour and a midpoint of the lower contour are determined;
s302, calculating to obtain eye characteristic data, wherein a calculation formula of the eye characteristic data is as follows:wherein->Calculated value representing eye characteristic data, +.>Reference value representing eye characteristic data, +.>Represents the distance between the left and right end points of the eye contour, < >>Representing the distance between the upper contour midpoint and the lower contour midpoint;
s303, conducting edge drawing on the mouth characteristics to determine lip arcs, obtaining left and right end points of the lip arcs, dividing the lip arcs into an upper arc and a lower arc by the left and right end points, and determining the middle point of the upper arc and the middle point of the lower arc;
s304, calculating to obtain the mouth characteristic data, wherein a calculation formula of the mouth characteristic data is as follows:wherein, the method comprises the steps of, wherein,calculated value representing the mouth characteristic data, +.>Reference value representing mouth characteristic data, +.>Correction factor representing a mouth characteristic data item, +.>Represents the distance between the left and right end points of the lip arc,/->Representing the distance between the midpoint of the upper arc and the midpoint of the lower arc;
s305, facial analysis information is obtained according to the integration of the eye characteristic data and the mouth characteristic data.
In the embodiment of the invention, in order to determine facial analysis information, firstly, the eye features are subjected to edge drawing to determine the eye contours, the left and right endpoints of the eye contours are obtained, and eye feature data are obtained through calculation; and then, the mouth feature is subjected to edge drawing to determine lip arcs, left and right end points of the lip arcs are obtained, and mouth feature data are obtained through calculation, so that facial analysis information can be obtained through integration according to the eye feature data and the mouth feature data.
As a preferred embodiment of the present invention, the step of comparing the facial analysis information with the eye calibration value and the mouth calibration value for calibrating the face image to determine whether the facial analysis information has fatigue signs, specifically includes:
s401, calculating the value of the eye characteristic dataAnd eye calibration value->Comparison is made when->,/>Representing an eye fatigue coefficient, and determining that the corresponding facial analysis information has eye characteristic item fatigue signs;
s402, calculating the value of the mouth characteristic dataAnd the mouth calibration value->Comparison is made when->,/>Representing the mouth fatigue coefficient, determining that the corresponding facial analysis information has the fatigue signs of the mouth characteristic items.
In the embodiments of the present invention, it is easily understood that, in a fatigue state, when eyelid sags, eye characteristic data is reduced, at this timeShould be less than +.>,/>The eye fatigue coefficient is a preset constant value; in addition, in fatigue state, when yawning is performed, the mouth characteristic data is greatly increased, and the degree of the increase is much higher than that of normal speaking, so that the user can distinguish +.>Should be greater than +.>,/>The mouth fatigue coefficient is also a preset constant value.
As a preferred embodiment of the present invention, the step of determining whether the driver is at risk of fatigue driving according to the fatigue condition of all the facial analysis information specifically includes:
s501, determining a fatigue comprehensive value of a fatigue sign in a group of facial analysis information corresponding to the facial video of the main driver seat;
the step S501 specifically includes the following sub-steps:
s5011, when judging that the facial analysis information has the fatigue signs of the eye characteristic items, calculating to obtain the fatigue indexes of the eye characteristic items;
in this step, the calculation formula of the eye characteristic term fatigue index is expressed as:
wherein,representing the fatigue index of the eye characteristic item->Reference value representing the fatigue index of the eye characteristic item, +.>A correction factor representing an eye feature term.
S5012, when judging that the facial analysis information has fatigue signs of the mouth characteristic items, calculating to obtain fatigue indexes of the mouth characteristic items;
in this step, the calculation formula of the fatigue index of the mouth characteristic term is expressed as:
wherein,representing the fatigue index of the mouth characteristic item->A reference value representing the fatigue index of the mouth characteristic item.
S5013, calculating according to the eye characteristic item fatigue index and the mouth characteristic item fatigue index to obtain a fatigue comprehensive value;
in this step, the calculation formula of the fatigue integrated value is expressed as:
wherein,representing the fatigue syndrome, < >>Reference value representing fatigue integrated value, +.>Weight factors representing eye feature items, +.>A weighting factor representing the mouth feature term.
S502, judging the fatigue comprehensive value, and determining that the driver has fatigue driving risk when the fatigue comprehensive value is larger than a preset fatigue value.
In the embodiment of the invention, it is easy to understand that the fatigue state is a long-term state, in order to avoid the influence of accidental factors, the fatigue comprehensive value of the fatigue signs in a group of facial analysis information corresponding to the facial video of the main driver is determined, and only when the fatigue comprehensive value is greater than the preset fatigue value, the risk of the fatigue driving of the driver is determined, so that the risk of misjudgment is further reduced, and the result is more accurate.
As shown in fig. 2, the embodiment of the invention further provides a multi-feature perceived driving fatigue state detection system based on video frames, which comprises:
the calibration face image module 100 is configured to collect a face video of a main driving position when the vehicle is detected to be started, and determine a calibration face image according to the face video of the main driving position;
the facial image acquisition module 200 is used for acquiring a section of facial video of a main driving position at each set time value, and extracting video frames of the facial video to obtain a set number of facial images;
a facial analysis information module 300, configured to identify eye features and mouth features of the facial image, and obtain facial analysis information, where the facial analysis information includes eye feature data and mouth feature data;
the fatigue sign judging module 400 is used for calculating an eye calibration value and a mouth calibration value of the calibrated face image, comparing the face analysis information with the eye calibration value and the mouth calibration value of the calibrated face image, and judging whether the face analysis information has fatigue signs or not;
the fatigue driving determination module 500 is configured to determine whether the driver has a risk of fatigue driving according to the fatigue conditions of all the facial analysis information, and generate early warning information when the driver has a risk of fatigue driving.
As a preferred embodiment of the present invention, the calibration face image module 100 includes:
the first calibration image unit is used for matching the face video of the main driver position with the face image library, and when the matching is successful, the matched face image is determined to be the calibration face image;
and the second calibration image unit is used for determining a calibration face image according to the face video of the main driving position when the matching fails.
As a preferred embodiment of the present invention, the facial analysis information module 300 includes:
the eye contour recognition unit is used for carrying out edge tracing on the eye characteristics to determine the eye contour, so as to obtain left and right endpoints of the eye contour, wherein the left and right endpoints divide the eye contour into an upper contour and a lower contour, and a midpoint of the upper contour and a midpoint of the lower contour are determined;
the eye characteristic data unit is used for calculating eye characteristic data, and the calculation formula of the eye characteristic data is as follows:wherein->Calculated value representing eye characteristic data, +.>Reference value representing eye characteristic data, +.>Represents the distance between the left and right end points of the eye contour, < >>Representing the distance between the upper contour midpoint and the lower contour midpoint;
the lip arc line identification unit is used for carrying out edge drawing on the mouth characteristics to determine lip arc lines, so as to obtain left and right end points of the lip arc lines, wherein the left and right end points divide the lip arc lines into an upper arc line and a lower arc line, and the midpoint of the upper arc line and the midpoint of the lower arc line are determined;
the mouth characteristic data unit is used for calculating to obtain mouth characteristic data, and the calculation formula of the mouth characteristic data is as follows:wherein->Calculated value representing the mouth characteristic data, +.>Reference value representing mouth characteristic data, +.>Correction factor representing a mouth characteristic data item, +.>Represents the distance between the left and right end points of the lip arc,/->Representing the distance between the midpoint of the upper arc and the midpoint of the lower arc;
and the facial analysis information unit is used for integrating the eye characteristic data and the mouth characteristic data to obtain facial analysis information.
As a preferred embodiment of the present invention, the fatigue sign determination module 400 includes:
a first fatigue sign unit for calculating an eye characteristic dataAnd eye calibration value->Comparison is made when,/>Representing an eye fatigue coefficient, and determining that the corresponding facial analysis information has eye characteristic item fatigue signs;
a second fatigue sign unit for calculating a value of the mouth characteristic dataAnd the mouth calibration value->Comparison is made when,/>Representing the mouth fatigue coefficient, determining that the corresponding facial analysis information has the fatigue signs of the mouth characteristic items.
As a preferred embodiment of the present invention, the fatigue driving determination module 500 includes:
the fatigue comprehensive value calculation unit is used for determining a fatigue comprehensive value with fatigue signs in a group of facial analysis information corresponding to the facial video of the main driver;
and the fatigue comprehensive value judging unit is used for judging the fatigue comprehensive value, and determining that the driver has fatigue driving risk when the fatigue comprehensive value is larger than a preset fatigue value.
The foregoing description of the preferred embodiments of the present invention should not be taken as limiting the invention, but rather should be understood to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, nor do the order in which the sub-steps or stages are performed necessarily performed in sequence, but may be performed alternately or alternately with at least a portion of the sub-steps or stages of other steps or other steps.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include processes in the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (8)

1. The multi-feature perceived driving fatigue state detection method based on the video frames is characterized by comprising the following steps of:
when the starting of the vehicle is detected, acquiring a face video of a main driving position, and determining a calibrated face image according to the face video of the main driving position;
acquiring a section of face video of a main driving position at each set time value, and extracting video frames of the face video to obtain a set number of face images;
identifying eye features and mouth features of the facial image to obtain facial analysis information, wherein the facial analysis information comprises eye feature data and mouth feature data;
calculating an eye calibration value and a mouth calibration value of the calibrated face image, comparing the face analysis information with the eye calibration value and the mouth calibration value of the calibrated face image, and judging whether fatigue signs exist in the face analysis information;
judging whether a driver has fatigue driving risks according to the fatigue conditions of all the facial analysis information, and generating early warning information when the driver has fatigue driving risks;
the step of identifying the eye features and the mouth features of the facial image to obtain facial analysis information specifically comprises the following steps:
the eye feature is subjected to edge tracing to determine an eye contour, so that left and right endpoints of the eye contour are obtained, the eye contour is divided into an upper contour and a lower contour by the left and right endpoints, and the midpoint of the upper contour and the midpoint of the lower contour are determined;
the eye characteristic data is obtained through calculation, and the calculation formula of the eye characteristic data is as follows:wherein->Calculated value representing eye characteristic data, +.>Reference value representing eye characteristic data, +.>Represents the distance between the left and right end points of the eye contour, < >>Representing the distance between the upper contour midpoint and the lower contour midpoint;
the mouth feature is subjected to edge drawing to determine a lip arc line, so that left and right end points of the lip arc line are obtained, the lip arc line is divided into an upper arc line and a lower arc line by the left and right end points, and the midpoint of the upper arc line and the midpoint of the lower arc line are determined;
calculating to obtain the mouth characteristic data, wherein the calculation formula of the mouth characteristic data is as follows:wherein->Calculated value representing the mouth characteristic data, +.>Reference value representing mouth characteristic data, +.>Correction factor representing a mouth characteristic data item, +.>Represents the distance between the left and right end points of the lip arc,/->Representing the distance between the midpoint of the upper arc and the midpoint of the lower arcA distance;
integrating the eye characteristic data and the mouth characteristic data to obtain facial analysis information;
the step of determining the calibrated face image according to the face video of the main driving position specifically comprises the following steps:
matching the face video of the main driver position with a face image library, and determining the matched face image as a calibrated face image when the matching is successful;
when the matching fails, determining a calibrated face image according to the face video of the main driving position;
comparing the facial analysis information with an eye calibration value and a mouth calibration value of a calibrated face image, and judging whether the facial analysis information has fatigue signs or not, wherein the method specifically comprises the following steps of:
calculated value of eye characteristic dataAnd eye calibration value->Comparison is made when->,/>Representing an eye fatigue coefficient, and determining that the corresponding facial analysis information has eye characteristic item fatigue signs;
calculated value of the mouth characteristic dataAnd the mouth calibration value->Comparison is made when->,/>Representing the mouth fatigue coefficient, determining that the corresponding facial analysis information has the fatigue signs of the mouth characteristic items.
2. The method for detecting the fatigue state of the multi-feature perceived driving based on the video frame according to claim 1, wherein the step of determining whether the driver is at risk of fatigue driving according to the fatigue condition of all the facial analysis information specifically comprises:
determining a fatigue comprehensive value of fatigue signs in a group of facial analysis information corresponding to the facial video of the main driver;
and judging the fatigue comprehensive value, and determining that the driver has fatigue driving risk when the fatigue comprehensive value is larger than a preset fatigue value.
3. The method for detecting the fatigue state of the multi-feature perceived driving based on the video frame according to claim 2, wherein the method for determining the fatigue integrated value of the fatigue sign in the group of face analysis information corresponding to the face video of the main driver comprises the steps of:
when judging that the facial analysis information has the fatigue signs of the eye characteristic items, calculating to obtain the fatigue indexes of the eye characteristic items;
when judging that the facial analysis information has the fatigue signs of the mouth characteristic items, calculating to obtain the fatigue indexes of the mouth characteristic items;
and calculating according to the eye characteristic item fatigue index and the mouth characteristic item fatigue index to obtain a fatigue comprehensive value.
4. The method for detecting the fatigue state of multi-feature perceived driving based on video frames according to claim 3, wherein the calculation formula of the fatigue index of the eye feature term is expressed as:
wherein,representing the fatigue index of the eye characteristic item->Reference value representing the fatigue index of the eye characteristic item, +.>Correction factors representing eye feature items;
the calculation formula of the fatigue index of the mouth characteristic item is expressed as follows:
wherein,representing the fatigue index of the mouth characteristic item->A reference value representing a fatigue index of the mouth feature item;
the calculation formula of the fatigue comprehensive value is expressed as follows:
wherein,representing the fatigue syndrome, < >>Reference value representing fatigue integrated value, +.>The weight factors representing the eye feature items,representing mouth characteristicsWeight factors for the terms.
5. A video frame-based multi-feature-aware driving fatigue state detection system, characterized in that the video frame-based multi-feature-aware driving fatigue state detection method according to any one of claims 1 to 4 is applied, the system comprising:
the calibration face image module is used for acquiring face videos of the main driving position when the vehicle is detected to start, and determining a calibration face image according to the face videos of the main driving position;
the face image acquisition module is used for acquiring a section of face video of the main driving position at each set time value, and extracting video frames of the face video to obtain a set number of face images;
the facial analysis information module is used for identifying eye characteristics and mouth characteristics of the facial image to obtain facial analysis information, and the facial analysis information comprises eye characteristic data and mouth characteristic data;
the fatigue sign judging module is used for calculating an eye calibration value and a mouth calibration value of the calibrated face image, comparing the face analysis information with the eye calibration value and the mouth calibration value of the calibrated face image, and judging whether the face analysis information has fatigue signs or not;
the fatigue driving judging module is used for judging whether the driver has the risk of fatigue driving according to the fatigue conditions of all the facial analysis information, and generating early warning information when the driver has the risk of fatigue driving;
the facial analysis information module includes:
the eye contour recognition unit is used for carrying out edge tracing on the eye characteristics to determine the eye contour, so as to obtain left and right endpoints of the eye contour, wherein the left and right endpoints divide the eye contour into an upper contour and a lower contour, and a midpoint of the upper contour and a midpoint of the lower contour are determined;
the eye characteristic data unit is used for calculating eye characteristic data, and the calculation formula of the eye characteristic data is as follows:wherein->Calculated value representing eye characteristic data, +.>Reference value representing eye characteristic data, +.>Represents the distance between the left and right end points of the eye contour, < >>Representing the distance between the upper contour midpoint and the lower contour midpoint;
the lip arc line identification unit is used for carrying out edge drawing on the mouth characteristics to determine lip arc lines, so as to obtain left and right end points of the lip arc lines, wherein the left and right end points divide the lip arc lines into an upper arc line and a lower arc line, and the midpoint of the upper arc line and the midpoint of the lower arc line are determined;
the mouth characteristic data unit is used for calculating to obtain mouth characteristic data, and the calculation formula of the mouth characteristic data is as follows:wherein->Calculated value representing the mouth characteristic data, +.>Reference value representing mouth characteristic data, +.>Correction factor representing a mouth characteristic data item, +.>Represents the distance between the left and right end points of the lip arc,/->Representing the distance between the midpoint of the upper arc and the midpoint of the lower arc;
and the facial analysis information unit is used for integrating the eye characteristic data and the mouth characteristic data to obtain facial analysis information.
6. The video frame-based multi-feature perceived driving fatigue state detection system of claim 5, wherein the nominal face image module comprises:
the first calibration image unit is used for matching the face video of the main driver position with the face image library, and when the matching is successful, the matched face image is determined to be the calibration face image;
and the second calibration image unit is used for determining a calibration face image according to the face video of the main driving position when the matching fails.
7. The video frame-based multi-feature perceived driving fatigue status detection system of claim 6, wherein the fatigue sign determination module comprises:
a first fatigue sign unit for calculating an eye characteristic dataAnd eye calibration value->Comparison is made when,/>Representing an eye fatigue coefficient, and determining that the corresponding facial analysis information has eye characteristic item fatigue signs;
a second fatigue sign unit for calculating a value of the mouth characteristic dataMouth markConstant value->Comparison is made when,/>Representing the mouth fatigue coefficient, determining that the corresponding facial analysis information has the fatigue signs of the mouth characteristic items.
8. The video frame-based multi-feature aware driving fatigue status detection system of claim 7, wherein the fatigue driving determination module comprises:
the fatigue comprehensive value calculation unit is used for determining a fatigue comprehensive value with fatigue signs in a group of facial analysis information corresponding to the facial video of the main driver;
and the fatigue comprehensive value judging unit is used for judging the fatigue comprehensive value, and determining that the driver has fatigue driving risk when the fatigue comprehensive value is larger than a preset fatigue value.
CN202410068812.3A 2024-01-17 2024-01-17 Multi-feature perception driving fatigue state detection method and system based on video frame Active CN117576668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410068812.3A CN117576668B (en) 2024-01-17 2024-01-17 Multi-feature perception driving fatigue state detection method and system based on video frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410068812.3A CN117576668B (en) 2024-01-17 2024-01-17 Multi-feature perception driving fatigue state detection method and system based on video frame

Publications (2)

Publication Number Publication Date
CN117576668A CN117576668A (en) 2024-02-20
CN117576668B true CN117576668B (en) 2024-04-05

Family

ID=89890441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410068812.3A Active CN117576668B (en) 2024-01-17 2024-01-17 Multi-feature perception driving fatigue state detection method and system based on video frame

Country Status (1)

Country Link
CN (1) CN117576668B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877051A (en) * 2009-10-30 2010-11-03 江苏大学 Driver attention state monitoring method and device
CN110334600A (en) * 2019-06-03 2019-10-15 武汉工程大学 A kind of multiple features fusion driver exception expression recognition method
CN111860437A (en) * 2020-07-31 2020-10-30 苏州大学 Method and device for judging fatigue degree based on facial expression
CN115965950A (en) * 2023-01-12 2023-04-14 成都信息工程大学 Driver fatigue detection method based on multi-feature fusion state recognition network
CN116386277A (en) * 2022-11-28 2023-07-04 中国电信股份有限公司 Fatigue driving detection method and device, electronic equipment and medium
CN117392644A (en) * 2023-10-07 2024-01-12 无锡学院 Fatigue detection method and system based on machine vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI715958B (en) * 2019-04-08 2021-01-11 國立交通大學 Assessing method for a driver's fatigue score

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877051A (en) * 2009-10-30 2010-11-03 江苏大学 Driver attention state monitoring method and device
CN110334600A (en) * 2019-06-03 2019-10-15 武汉工程大学 A kind of multiple features fusion driver exception expression recognition method
CN111860437A (en) * 2020-07-31 2020-10-30 苏州大学 Method and device for judging fatigue degree based on facial expression
CN116386277A (en) * 2022-11-28 2023-07-04 中国电信股份有限公司 Fatigue driving detection method and device, electronic equipment and medium
CN115965950A (en) * 2023-01-12 2023-04-14 成都信息工程大学 Driver fatigue detection method based on multi-feature fusion state recognition network
CN117392644A (en) * 2023-10-07 2024-01-12 无锡学院 Fatigue detection method and system based on machine vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robust driver fatigue recognition using image processing;Rafi Ahmed;2014 International Conference on Informatics, Electronics & Vision (ICIEV);20140710;第1-6页 *
视频数据处理方法、设备及存储介质;徐青云;中国优秀硕士学位论文全文数据库工程科技Ⅱ辑;20200715;第2020卷(第07期);第C035-239页 *

Also Published As

Publication number Publication date
CN117576668A (en) 2024-02-20

Similar Documents

Publication Publication Date Title
Kumar et al. Driver drowsiness monitoring system using visual behaviour and machine learning
JP6925615B2 (en) Identity verification document authenticity system, method and program
EP2754393A1 (en) Dozing-off detection method and device
US11861916B2 (en) Driver alertness monitoring system
Doshi et al. A comparative exploration of eye gaze and head motion cues for lane change intent prediction
CN112754498B (en) Driver fatigue detection method, device, equipment and storage medium
BRPI0712837A2 (en) Method and apparatus for determining and analyzing a location of visual interest.
CN109800984B (en) Driving level evaluation method, driving level evaluation device, computer device, and storage medium
KR101473957B1 (en) Apparatus and method for determining insurance premium based on driving pattern recognition
Svärd et al. Computational modeling of driver pre-crash brake response, with and without off-road glances: Parameterization using real-world crashes and near-crashes
US20200134729A1 (en) Information processing device, information processing system, information processing method, and program
US20210070306A1 (en) Closed eye determination device
CN117576668B (en) Multi-feature perception driving fatigue state detection method and system based on video frame
CN114220158A (en) Fatigue driving detection method based on deep learning
Girish et al. Driver fatigue detection
JP7172968B2 (en) Driving analysis device and driving analysis method
Jimenez et al. Detection of the tiredness level of drivers using machine vision techniques
CN112418314B (en) Method and device for setting threshold value in spectrum similarity matching classification
CN115107786A (en) Driving behavior correction system and method for intelligent automobile
CN111775948B (en) Driving behavior analysis method and device
CN114639089A (en) Driver fatigue detection method and device
CN115512337A (en) Fatigue driving detection method, device, equipment and storage medium
Ibnouf et al. Drowsy Driver Detection System For Poor Light Conditions
CN118247775B (en) Intelligent fatigue driving monitoring and early warning method and system based on camera
US20190266389A1 (en) Collator and method for displaying result of collation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant