CN111950371B - Fatigue driving early warning method and device, electronic equipment and storage medium

Info

Publication number
CN111950371B
Authority
CN
China
Prior art keywords
eyelid
early warning
fatigue
eye feature
closing
Prior art date
Legal status
Active
Application number
CN202010662686.6A
Other languages
Chinese (zh)
Other versions
CN111950371A (en)
Inventor
周坤
Current Assignee
Shanghai Qiyu Information and Technology Co Ltd
Original Assignee
Shanghai Qiyu Information and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Qiyu Information and Technology Co Ltd
Priority to CN202010662686.6A
Publication of CN111950371A
Application granted
Publication of CN111950371B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0818 Inactivity or incapacity of driver
    • B60W2040/0827 Inactivity or incapacity of driver due to sleepiness
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a fatigue driving early warning method based on an intelligent mobile terminal, which comprises the following steps: an image acquisition device of the intelligent mobile terminal acquires a head image of the driver in real time; the intelligent mobile terminal preprocesses the periodically sampled head image to obtain eye feature variables; the intelligent mobile terminal generates a fatigue index for the current period according to the eye feature variables and a pre-trained local model; and the intelligent mobile terminal performs real-time early warning based on the fatigue index. Correspondingly, the invention also provides a fatigue driving early warning device, an electronic device and a computer-readable storage medium.

Description

Fatigue driving early warning method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of computers, and in particular to a fatigue driving early warning method based on an intelligent mobile terminal, together with a corresponding device, electronic device and computer-readable storage medium.
Background
Fatigue driving refers to the phenomenon in which a driver's physiological and psychological functions decline after long periods of continuous driving, so that driving skill and reaction objectively deteriorate. A driver who continues driving while fatigued may feel drowsy and sleepy, lose concentration and suffer impaired judgment, or even become absent-minded with momentary lapses of memory; unsafe behaviors such as delayed or premature actions and paused or mistimed corrective operations then appear, making road traffic accidents likely.
A number of fatigue driving detection methods already exist, for example: electroencephalogram detection, electrocardiogram detection, pulse detection, electromyographic signal detection, head position detection, sight-line direction detection, and the like. However, most of these methods have shortcomings in algorithm efficiency, real-time performance and so on, and the fatigue early warning devices currently on the market are all add-on devices that must be purchased and installed separately and are expensive.
Against this background, how to perform fatigue driving detection and early warning more conveniently and at lower cost has become an important subject, and the invention therefore provides a fatigue driving early warning method and device based on an intelligent mobile terminal.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure, and it may therefore contain information that does not form part of the prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the foregoing, the present specification aims to provide a fatigue driving early warning method based on an intelligent mobile terminal, together with a corresponding apparatus, electronic device and computer-readable storage medium, which overcome or at least partially solve the foregoing problems.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
In a first aspect, the invention discloses a fatigue driving early warning method based on an intelligent mobile terminal, which comprises the following steps:
the image acquisition device of the intelligent mobile terminal acquires the head image of the driver in real time;
the intelligent mobile terminal preprocesses the periodically sampled head image to obtain eye feature variables;
the intelligent mobile terminal generates a fatigue index for the current period according to the eye feature variables, the mouth feature variable and a pre-trained local model;
and the intelligent mobile terminal performs real-time early warning based on the fatigue index.
In one exemplary embodiment of the present disclosure, the eye feature variables comprise: eyelid closing percentage, eyelid closing amplitude, and gaze angle offset time.
In one exemplary embodiment of the present disclosure, the local model is obtained through multiple rounds of training based on the eye feature variables and corresponding fatigue indexes obtained from training sample head images.
In an exemplary embodiment of the present disclosure, the step of performing real-time early warning based on the fatigue index specifically includes:
determining a risk score for the driver's driving behavior based on the fatigue index threshold interval in which the fatigue index currently falls;
and determining corresponding early warning measures according to the determined risk score to perform real-time early warning.
In an exemplary embodiment of the present disclosure, the step of preprocessing the head image to obtain an eye feature variable specifically includes:
extracting eye feature points from the head image by adopting a face recognition technology, wherein the eye feature points comprise: eye corner feature points, upper eyelid feature points, and lower eyelid feature points;
calculating the eye corner angle from the eye corner feature points, the upper eyelid feature points and the lower eyelid feature points to determine the closing amplitude of the upper and lower eyelids;
calculating the percentage of time within the current period during which the upper and lower eyelids are completely closed to obtain the driver's eyelid closing percentage for the current period;
judging whether the closing amplitude of the upper and lower eyelids remains below a preset closing threshold for a duration greater than or equal to a preset duration, and if so, acquiring the eyelid closing amplitude within that duration.
In an exemplary embodiment of the present disclosure, the step of preprocessing the head image to obtain an eye feature variable further includes:
estimating the gaze angle from the head image through a gaze angle estimation deep neural network framework to obtain the actual gaze angle;
and counting the time during which the driver's actual gaze angle deviates beyond a preset gaze angle threshold to obtain the driver's gaze angle offset time for the current period.
In a second aspect, the present invention provides a fatigue driving early warning device, comprising:
the image acquisition module is used for acquiring the head image of the driver in real time;
the data processing module is used for periodically acquiring the head image and preprocessing it to obtain eye feature variables;
the fatigue index calculation module is used for generating a fatigue index in the current period based on the eye feature variable and a pre-trained deep neural network model;
and the early warning module is used for carrying out real-time early warning according to the fatigue index output by the fatigue index calculation module.
In one exemplary embodiment of the present disclosure, the eye feature variables comprise: eyelid closing percentage, eyelid closing amplitude, and gaze angle offset time.
In an exemplary embodiment of the present disclosure, the fatigue driving early warning device further includes:
the database is used for storing training sample images for constructing the local model and fatigue indexes corresponding to each training sample image;
the model construction module is used for acquiring training sample head images from the database and performing multiple rounds of training with the eye feature variables and corresponding fatigue indexes obtained from the training sample head images to obtain the deep neural network model.
In an exemplary embodiment of the disclosure, the early warning module specifically includes:
the storage unit is used for storing a plurality of preset fatigue index threshold intervals and risk scores corresponding to each fatigue index threshold interval;
the judging unit is used for determining the fatigue index threshold interval in which the fatigue index calculated by the fatigue index calculation module currently falls;
the first matching unit is used for determining the corresponding risk score according to the fatigue index threshold interval determined by the judging unit;
the second matching unit is used for matching corresponding early warning measures according to the risk score determined by the first matching unit;
and the early warning unit is used for carrying out real-time early warning according to the early warning measures matched by the second matching unit.
In an exemplary embodiment of the present disclosure, the data processing module specifically includes:
a first feature point extraction unit, configured to extract eye feature points from the head image using face recognition technology, where the eye feature points include: eye corner feature points, upper eyelid feature points, and lower eyelid feature points;
a first calculation unit, configured to calculate the eye corner angle from the eye corner feature points, the upper eyelid feature points and the lower eyelid feature points to determine the closing amplitude of the upper and lower eyelids;
a second calculation unit, configured to calculate the percentage of time within the current period during which the upper and lower eyelids are completely closed, to obtain the driver's eyelid closing percentage for the current period;
and a third calculation unit, configured to judge whether the closing amplitude of the upper and lower eyelids remains below a preset closing threshold for a duration greater than or equal to a preset duration, and if so, to acquire the eyelid closing amplitude within that duration.
In an exemplary embodiment of the present disclosure, the data processing module further includes:
the model construction unit is used for constructing a gaze angle estimation deep neural network model in advance;
a fourth calculation unit, configured to input the head image into the gaze angle estimation deep neural network model for gaze angle estimation, so as to obtain the driver's actual gaze angle;
and a fifth calculation unit, configured to count the time during which the driver's actual gaze angle deviates beyond the preset gaze angle threshold, to obtain the driver's gaze angle offset time for the current period.
In a third aspect, the present specification provides an electronic device comprising a processor and a memory: the memory is used for storing a program for any one of the above methods; the processor is configured to execute the program stored in the memory to implement the steps of any one of the above methods.
In a fourth aspect, embodiments of the present description provide a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of any of the methods described above.
The beneficial effects of the invention are as follows:
The invention collects the driver's head image in real time through the intelligent mobile terminal and preprocesses it to obtain eye feature variables, and then performs real-time early warning according to a pre-trained local model, achieving low-cost real-time early warning. Because the trained model is stored locally on the intelligent mobile terminal in advance, everything from data acquisition to real-time early warning is realized locally, unaffected by the network environment, so the method has a wide range of application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention; a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flow chart illustrating a fatigue driving pre-warning method based on an intelligent mobile terminal according to an exemplary embodiment;
FIG. 2 is a block diagram of a fatigue driving warning device, according to another exemplary embodiment;
fig. 3 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments are described below.
However, the exemplary embodiments described below can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the concepts of the present disclosure. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Those skilled in the art will appreciate that the drawings are schematic representations of example embodiments and that the modules or flows in the drawings are not necessarily required to practice the present disclosure, and therefore, should not be taken to limit the scope of the present disclosure.
The invention provides a fatigue driving early warning method based on an intelligent mobile terminal, which is intended to solve the problems in the prior art of high early warning cost and the inconvenience caused by the need for a dedicated detection device. To solve these problems, the general idea of the invention is as follows:
the image acquisition device of the intelligent mobile terminal acquires the head image of the driver in real time;
the intelligent mobile terminal preprocesses the periodically sampled head image to obtain eye feature variables;
the intelligent mobile terminal generates a fatigue index for the current period according to the eye feature variables and the pre-trained local model;
and the intelligent mobile terminal performs real-time early warning based on the fatigue index.
It should first be noted that the following terms are involved in the various embodiments of the present invention:
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The technical solution of the invention is described and illustrated in detail below through several specific embodiments.
Referring to fig. 1, the fatigue driving early warning method based on the intelligent mobile terminal in this embodiment includes:
s101, an image acquisition device of the intelligent mobile terminal acquires head images of a driver in real time.
In this embodiment, the driver's head position is relatively fixed and image capture is relatively stable. With the wide adoption of intelligent mobile terminals and the popularity of mounting fixtures, the intelligent mobile terminal can be fixed at a position in front of the driver, so that the driver's head image can conveniently be collected in real time through the image acquisition device of the intelligent mobile terminal, such as a camera.
In a specific embodiment, in order to capture facial feature information accurately, timely and effectively, the camera of the intelligent mobile terminal should be fixed within the following range: taking the direction straight ahead of the driver's level gaze as the horizontal baseline, within an elevation angle of 30 degrees, and within 45 degrees to the left and right of the horizontal direction, i.e. a 90-degree included angle.
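As a concrete illustration, the check below encodes the mounting range just described; it is a minimal sketch, and the function name and sign conventions are assumptions for illustration rather than part of the patent.

    # A minimal sketch (names and conventions assumed, not from the patent) that
    # checks whether a camera mounting direction lies within the range described
    # above: at most 30 degrees of elevation above the driver's horizontal gaze,
    # and within 45 degrees to either side of straight ahead (a 90-degree cone).
    def camera_position_ok(elevation_deg: float, azimuth_deg: float) -> bool:
        """elevation_deg: angle above the horizontal line of sight;
        azimuth_deg: signed horizontal offset from straight ahead (negative = left)."""
        return 0.0 <= elevation_deg <= 30.0 and abs(azimuth_deg) <= 45.0

    print(camera_position_ok(20.0, -30.0))  # True: inside the capture range
    print(camera_position_ok(20.0, 60.0))   # False: too far to the side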
S103, the intelligent mobile terminal preprocesses the periodically sampled head images to obtain eye feature variables.
In this embodiment, the eye feature variables include: eyelid closing percentage, eyelid closing amplitude, and gaze angle offset time.
In one embodiment, the step of preprocessing the head image to obtain the eye feature variable specifically includes:
the intelligent mobile terminal adopts a face recognition technology to extract eye feature points from the head image; specifically, extracting the head image based on opencv face recognition technology includes: face image acquisition and detection, face image preprocessing and face image feature extraction to obtain eye feature value information, namely eye feature points, comprising: corner of eye feature points, upper eyelid feature points, and lower eyelid feature points;
the intelligent mobile terminal calculates the included angle of the canthus according to the canthus feature point, the upper eyelid feature point and the lower eyelid feature point to determine the closing amplitude of the upper eyelid and the lower eyelid;
the intelligent mobile terminal calculates the complete closing percentage of the upper eyelid and the lower eyelid in the current period to obtain the closing percentage of the eyelid of the driver in the current period;
the intelligent mobile terminal judges whether the closing amplitude of the upper eyelid and the lower eyelid is smaller than a preset closing threshold value or not, the time duration of which is continuously smaller than the preset closing threshold value is longer than or equal to the preset time duration, and if yes, the eyelid closing amplitude in the time duration is obtained.
In another specific embodiment, the step of preprocessing the head image to obtain the eye feature variable includes, in addition to the steps described above:
estimating the gaze angle from the head image through a gaze angle estimation deep neural network framework to obtain the actual gaze angle; and then counting the time during which the driver's actual gaze angle deviates beyond a preset gaze angle threshold, obtaining the driver's gaze angle offset time for the current period.
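The patent does not disclose the gaze estimation network itself; the fragment below only illustrates the subsequent bookkeeping step, assuming per-frame gaze angles from such a network are already available, and the threshold value shown is a placeholder.

    def gaze_angle_offset_time(gaze_angles_deg, frame_interval_s,
                               gaze_threshold_deg=30.0):
        """Total time within the current period during which the estimated gaze
        angle deviates beyond the preset threshold (threshold value assumed)."""
        return sum(frame_interval_s for a in gaze_angles_deg
                   if abs(a) > gaze_threshold_deg)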
S105, the intelligent mobile terminal generates a fatigue index in the current period according to the eye feature variable and the pre-trained local model.
In this embodiment, the local model is obtained through multiple rounds of training based on the eye feature variables and corresponding fatigue indexes obtained from the training sample head images.
In one embodiment, the local model is a deep neural network model comprising multiple layers of neurons. Each neuron has n inputs, and each input corresponds to a weight w; inside the neuron, the inputs are multiplied by their weights and summed, the bias is subtracted from the weighted sum, and the result is passed to an activation function, which gives the neuron's final output. The input layer consists of the three eye feature variables, namely the eyelid closing percentage, the eyelid closing amplitude and the gaze angle offset time, and the output layer is the fatigue index to be calculated.
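A minimal sketch of such a model in PyTorch follows. The patent does not publish the layer sizes, activation function or bias convention; those choices below, and the use of PyTorch itself, are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FatigueIndexModel(nn.Module):
        """Input layer: the three eye feature variables (eyelid closing
        percentage, eyelid closing amplitude, gaze angle offset time);
        output layer: the fatigue index. Hidden size and activation are
        assumptions for illustration."""
        def __init__(self, hidden=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden),  # weighted sum of inputs combined with a bias term
                nn.ReLU(),             # activation function gives the neuron output
                nn.Linear(hidden, 1),
            )

        def forward(self, x):
            return self.net(x)

    model = FatigueIndexModel()
    features = torch.tensor([[0.35, 4.2, 1.8]])  # one example feature vector
    fatigue_index = model(features)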
In this embodiment, the three eye feature variables are used as the input layer of the local model for local calculation, so as to obtain the corresponding fatigue index.
Further, before the fatigue index is generated with the local model, the local model may be optimized; that is, new training samples are continually added according to the results. This specifically includes refining the algorithms for the eyelid closing percentage, the eyelid closing amplitude and the gaze angle offset time, and adjusting the contribution of the different variables to the fatigue index, i.e. adjusting the weights of these three variables in the local model.
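A sketch of this continued optimization is shown below, assuming newly collected (eye feature, fatigue index) pairs as tensors and reusing the illustrative model above; the loss function and optimizer are assumptions, not specified by the patent.

    def fine_tune(model, new_features, new_targets, epochs=10, lr=1e-3):
        """Continue training the local model on newly added samples; the
        optimizer adjusts the weights, that is, the contribution of each eye
        feature variable to the fatigue index. new_features: (N, 3) tensor;
        new_targets: (N, 1) tensor of fatigue indexes."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = torch.nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(new_features), new_targets)
            loss.backward()
            opt.step()
        return model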
In this embodiment, the local model is sized to the performance and computing power of the intelligent mobile terminal device, which greatly facilitates local, offline real-time detection: the local model does not depend on a 3G/4G/5G network and remains highly available even in complex network environments.
S107, the intelligent mobile terminal performs real-time early warning based on the fatigue index obtained in step S105.
In this embodiment, step S107 specifically includes: determining the risk score of the driver's driving behavior based on the fatigue index threshold interval in which the fatigue index obtained in step S105 currently falls, and determining the corresponding early warning measures according to that risk score to perform real-time early warning.
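The interval-to-score-to-measure lookup of step S107 can be sketched as below; the interval boundaries, scores and measures are illustrative assumptions, since the patent only states that such preset mappings exist.

    # Preset fatigue index threshold intervals and the risk score for each
    # (all concrete values below are assumed for illustration; the fatigue
    # index is assumed here to lie in [0, 1]).
    FATIGUE_INTERVALS = [(0.0, 0.3, 1), (0.3, 0.6, 2), (0.6, 1.01, 3)]
    WARNING_MEASURES = {1: "no action",
                        2: "voice prompt",
                        3: "voice prompt plus light alert"}

    def real_time_warning(fatigue_index: float) -> str:
        for low, high, risk_score in FATIGUE_INTERVALS:
            if low <= fatigue_index < high:
                return WARNING_MEASURES[risk_score]
        return WARNING_MEASURES[3]  # treat out-of-range values as highest risk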
In a specific embodiment, the early warning can be given by voice or by light signals; that is, the intelligent mobile terminal automatically generates the corresponding voice prompt or light signal according to the fatigue index. Alternatively, the intelligent mobile terminal can communicate with the vehicle-mounted equipment, thereby controlling the vehicle-mounted equipment to give the corresponding voice prompt and light control based on the fatigue index.
Based on the same inventive concept as the fatigue driving early warning method based on the intelligent mobile terminal in the foregoing embodiment, the invention further provides a fatigue driving early warning device storing a computer program which, when executed by a processor, implements the steps of any one of the foregoing fatigue driving early warning methods.
The following are apparatus embodiments of the disclosure, which may be used to perform the method embodiments of the invention. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the disclosure.
Referring to fig. 2, the fatigue driving early warning device of this embodiment includes:
an image acquisition module 201, configured to acquire a head image of a driver in real time;
a data processing module 202, configured to periodically acquire the head image and preprocess it to obtain eye feature variables and a mouth feature variable; wherein the eye feature variables include: eyelid closing percentage, eyelid closing amplitude, and gaze angle offset time; and the mouth feature variable includes: yawning frequency;
a fatigue index calculation module 203, configured to generate a fatigue index for the current period based on the eye feature variables, the mouth feature variable, and a pre-trained deep neural network model;
the early warning module 204 is configured to perform real-time early warning according to the fatigue index output by the fatigue index calculation module.
In one embodiment, the data processing module specifically includes:
a first feature point extraction unit, configured to extract eye feature points from the head image using face recognition technology; wherein the eye feature points include: eye corner feature points, upper eyelid feature points, and lower eyelid feature points;
a first calculation unit, configured to calculate the eye corner angle from the eye corner feature points, the upper eyelid feature points and the lower eyelid feature points to determine the closing amplitude of the upper and lower eyelids;
a second calculation unit, configured to calculate the percentage of time within the current period during which the upper and lower eyelids are completely closed, to obtain the driver's eyelid closing percentage for the current period;
and a third calculation unit, configured to judge whether the closing amplitude of the upper and lower eyelids remains below a preset closing threshold for a duration greater than or equal to a preset duration, and if so, to acquire the eyelid closing amplitude within that duration.
Further, the data processing module further includes:
a model construction unit, configured to construct a gaze angle estimation deep neural network model in advance;
a fourth calculation unit, configured to input the head image into the gaze angle estimation deep neural network model for gaze angle estimation, so as to obtain the driver's actual gaze angle;
a fifth calculation unit, configured to count the time during which the driver's actual gaze angle deviates beyond the preset gaze angle threshold, to obtain the driver's gaze angle offset time for the current period;
a second feature point extraction unit, configured to extract mouth feature points from the head image using face recognition technology; wherein the mouth feature points include: mouth corner feature points, an upper-lip highest feature point, and a lower-lip lowest feature point;
a sixth calculation unit, configured to calculate the mouth corner angle from the mouth corner feature points, the upper-lip highest feature point and the lower-lip lowest feature point, so as to determine the mouth closing amplitude;
and a seventh calculation unit, configured to calculate the driver's yawning frequency in the current period from the complete mouth closing amplitude within the current period, as sketched below.
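For the mouth feature variable, a yawning-frequency sketch analogous to the eye computations is given below; the transition-counting rule and the per-minute scaling are assumptions for illustration, since the patent does not specify the counting formula.

    def yawning_frequency(mouth_open_flags, period_s):
        """mouth_open_flags: per-frame booleans indicating that the mouth corner
        angle exceeds an assumed yawning threshold. A yawn is counted on each
        closed-to-open transition; the result is yawns per minute."""
        yawns = sum(1 for prev, cur in
                    zip([False] + list(mouth_open_flags), mouth_open_flags)
                    if cur and not prev)
        return yawns * 60.0 / period_s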
In one embodiment, the early warning module includes:
the storage unit is used for storing a plurality of preset fatigue index threshold intervals and risk scores corresponding to each fatigue index threshold interval;
the judging unit is used for determining the fatigue index threshold interval in which the fatigue index calculated by the fatigue index calculation module currently falls;
the first matching unit is used for determining the corresponding risk score according to the fatigue index threshold interval determined by the judging unit;
the second matching unit is used for matching corresponding early warning measures according to the risk score determined by the first matching unit;
and the early warning unit is used for carrying out real-time early warning according to the early warning measures matched by the second matching unit.
Further, the fatigue driving early warning device of this embodiment also includes:
the database is used for storing training sample images for constructing the local model and fatigue indexes corresponding to each training sample image;
the model construction module is used for acquiring training sample head images from the database and performing multiple rounds of training with the eye feature variables, the mouth feature variable and the corresponding fatigue indexes obtained from the training sample head images to obtain the deep neural network model.
A third embodiment of the present specification also provides an electronic device comprising a memory 302, a processor 301 and a computer program stored on the memory 302 and executable on the processor 301, the processor 301 implementing the steps of the method described above when executing the program. For convenience of description, only the parts related to the embodiments of the present specification are shown; for specific technical details not disclosed, please refer to the method parts of the embodiments. The server may be a server device formed by various electronic devices, such as a PC, a network cloud server, or even a server function provided on any electronic device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, or a desktop computer.
In particular, in the server component block diagram shown in FIG. 3, which relates to the solution provided by the embodiments of the present specification, the bus 300 may comprise any number of interconnected buses and bridges linking together various circuits, including one or more processors, represented by the processor 301, and memory, represented by the memory 302. Bus 300 may also link together various other circuits such as peripheral devices, voltage regulators and power management circuits, as is well known in the art, and therefore these are not described further herein. The communication interface 303 provides an interface between the bus 300 and a receiver and/or transmitter 304; the receiver and/or transmitter 304 may be a separate stand-alone receiver or transmitter, or one and the same element such as a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 301 is responsible for managing the bus 300 and general processing, while the memory 302 may be used to store data used by the processor 301 when performing operations.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer readable storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, or a network device, etc.) to perform the above-described method according to the embodiments of the present disclosure.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The computer-readable medium carries one or more programs, which when executed by one of the devices, cause the computer-readable medium to perform the functions of: the image acquisition device of the intelligent mobile terminal acquires the head image of the driver in real time; the intelligent mobile terminal preprocesses the head image obtained by periodic sampling to obtain an eye feature variable; the intelligent mobile terminal generates a fatigue index in a current period according to the eye feature variable and the pre-trained local model; and the intelligent mobile terminal performs real-time early warning based on the fatigue index.
Those skilled in the art will appreciate that the modules may be distributed throughout several devices as described in the embodiments, and that corresponding variations may be implemented in one or more devices that are unique to the embodiments. The modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solutions according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and include several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
While preferred embodiments of the present description have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that this disclosure is not limited to the particular arrangements, instrumentalities and methods of implementation described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. In addition, the structures, proportions and sizes shown in the drawings are merely for understanding and reading the disclosure, and are not intended to limit its applicability; any structural modifications, changes in proportion or adjustments of size that do not affect the technical effects and objectives achieved by the disclosure fall within its scope. Meanwhile, terms such as "upper", "first", "second" and "a" recited in the present specification are for convenience of description only and are not intended to limit the scope of the disclosure; changes or adjustments of their relative relationships, without substantial change to the technical content, are likewise regarded as within the scope of the disclosure.

Claims (12)

1. The fatigue driving early warning method based on the intelligent mobile terminal is characterized by comprising the following steps of:
the image acquisition device of the intelligent mobile terminal acquires the head image of the driver in real time;
the intelligent mobile terminal performs face recognition on the periodically sampled head image, extracts eye feature points and obtains eye feature variables, wherein the eye feature variables comprise: eyelid closing percentage, eyelid closing amplitude, and gaze angle offset time;
the eyelid closing percentage of the driver in the current period is obtained from the calculated percentage of time within the current period during which the upper and lower eyelids are completely closed; whether the closing amplitude of the upper and lower eyelids remains below a preset closing threshold for a duration greater than or equal to a preset duration is judged, and if so, the eyelid closing amplitude within the current period is acquired; the time during which the driver's actual gaze angle deviates beyond a preset gaze angle threshold is counted to obtain the driver's gaze angle offset time in the current period;
the weights of the eye feature variables in the local model are adjusted, that is, the algorithms for the eyelid closing percentage, the eyelid closing amplitude and the gaze angle offset time are optimized and the contribution of the different variables to the fatigue index is adjusted; the local model is a deep neural network model;
the intelligent mobile terminal generates a fatigue index in a current period according to the eye feature variable and the pre-trained local model;
and the intelligent mobile terminal performs real-time early warning based on the fatigue index.
2. The fatigue driving early warning method according to claim 1, wherein the local model is stored in the intelligent mobile terminal after multiple rounds of training based on the eye feature variables and corresponding fatigue indexes obtained from local training sample head images.
3. The fatigue driving early warning method according to claim 2, wherein the step of performing real-time early warning based on the fatigue index specifically comprises:
determining a risk score of the driving behavior of the driver based on a fatigue index threshold interval in which the fatigue index is currently located;
and determining corresponding early warning measures according to the determined risk score to perform real-time early warning.
4. The fatigue driving early warning method according to claim 3, wherein the step of performing face recognition on the head image, extracting eye feature points and obtaining eye feature variables comprises the following steps:
extracting eye feature points from the head image by adopting a face recognition technology, wherein the eye feature points comprise: eye corner feature points, upper eyelid feature points, and lower eyelid feature points;
and calculating the eye corner angle from the eye corner feature points, the upper eyelid feature points and the lower eyelid feature points so as to determine the closing amplitude of the upper and lower eyelids.
5. The fatigue driving early warning method according to claim 4, wherein the step of performing face recognition on the head image, extracting eye feature points and obtaining eye feature variables further comprises:
estimating the gaze angle from the head image through a gaze angle estimation deep neural network framework to obtain the actual gaze angle.
6. A fatigue driving early warning device, characterized by comprising:
the image acquisition module is used for acquiring the head image of the driver in real time;
the data processing module is used for periodically acquiring the head image, performing face recognition, extracting eye feature points and obtaining eye feature variables, wherein the eye feature variables comprise: eyelid closing percentage, eyelid closing amplitude, and gaze angle offset time;
the fatigue index calculation module is used for adjusting the weights of the eye feature variables in the local model, that is, optimizing the algorithms for the eyelid closing percentage, the eyelid closing amplitude and the gaze angle offset time and adjusting the contribution of the different variables to the fatigue index; the local model is a deep neural network model;
generating a fatigue index in a current period based on the eye feature variable and a pre-trained deep neural network model;
the early warning module is used for carrying out real-time early warning according to the fatigue index output by the fatigue index calculation module;
wherein the data processing module comprises:
the second calculation unit is used for obtaining the driver's eyelid closing percentage in the current period from the calculated percentage of time within the current period during which the upper and lower eyelids are completely closed;
the third calculation unit is used for judging whether the closing amplitude of the upper and lower eyelids remains below a preset closing threshold for a duration greater than or equal to a preset duration, and if so, acquiring the eyelid closing amplitude within the current period;
and a fifth calculation unit, configured to count the time during which the driver's actual gaze angle deviates beyond the preset gaze angle threshold, to obtain the driver's gaze angle offset time in the current period.
7. The fatigue driving warning device of claim 6, further comprising:
the database is used for storing training sample images for constructing the local model and fatigue indexes corresponding to each training sample image;
the model construction module is used for acquiring a training sample head image from the database, and training the eye feature variable and the corresponding fatigue index acquired from the training sample head image for a plurality of times to obtain a deep neural network model.
8. The fatigue driving early warning device of claim 7, wherein the early warning module specifically comprises:
the storage unit is used for storing a plurality of preset fatigue index threshold intervals and risk scores corresponding to each fatigue index threshold interval;
the judging unit is used for determining the fatigue index threshold interval in which the fatigue index calculated by the fatigue index calculation module currently falls;
the first matching unit is used for determining the corresponding risk score according to the fatigue index threshold interval determined by the judging unit;
the second matching unit is used for matching corresponding early warning measures according to the risk score determined by the first matching unit;
and the early warning unit is used for carrying out real-time early warning according to the early warning measures matched by the second matching unit.
9. The fatigue driving early warning device according to claim 8, wherein the data processing module specifically comprises:
a first feature point extraction unit, configured to extract eye feature points from the head image using face recognition technology, where the eye feature points include: eye corner feature points, upper eyelid feature points, and lower eyelid feature points;
and a first calculation unit, configured to calculate the eye corner angle from the eye corner feature points, the upper eyelid feature points and the lower eyelid feature points so as to determine the closing amplitude of the upper and lower eyelids.
10. The fatigue driving warning device of claim 9, wherein the data processing module further comprises:
the model construction unit is used for constructing a gaze angle estimation deep neural network model in advance;
and the fourth calculation unit is used for inputting the head image into the gaze angle estimation deep neural network model to perform gaze angle estimation, so as to obtain the driver's actual gaze angle.
11. An electronic device comprising at least one processor, at least one memory, a communication interface, and a bus; wherein
the processor, the memory and the communication interface communicate with one another through the bus;
the memory is used for storing a program for executing the method of any one of claims 1 to 5;
the processor is configured to execute a program stored in the memory.
12. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 5.
CN202010662686.6A 2020-07-10 2020-07-10 Fatigue driving early warning method and device, electronic equipment and storage medium Active CN111950371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010662686.6A CN111950371B (en) 2020-07-10 2020-07-10 Fatigue driving early warning method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010662686.6A CN111950371B (en) 2020-07-10 2020-07-10 Fatigue driving early warning method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111950371A CN111950371A (en) 2020-11-17
CN111950371B (en) 2023-05-19

Family

ID=73341745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010662686.6A Active CN111950371B (en) 2020-07-10 2020-07-10 Fatigue driving early warning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111950371B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI739675B (en) * 2020-11-25 2021-09-11 友達光電股份有限公司 Image recognition method and apparatus
CN114596687A (en) * 2020-12-01 2022-06-07 咸瑞科技股份有限公司 In-vehicle driving monitoring system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408878A (en) * 2014-11-05 2015-03-11 唐郁文 Vehicle fleet fatigue driving early warning monitoring system and method
WO2019190561A1 (en) * 2018-03-30 2019-10-03 Tobii Ab Deep learning for three dimensional (3d) gaze prediction
CN109063545A (en) * 2018-06-13 2018-12-21 五邑大学 A kind of method for detecting fatigue driving and device
CN111079476A (en) * 2018-10-19 2020-04-28 上海商汤智能科技有限公司 Driving state analysis method and device, driver monitoring system and vehicle
CN109697831A (en) * 2019-02-25 2019-04-30 湖北亿咖通科技有限公司 Fatigue driving monitoring method, device and computer readable storage medium
CN110188655A (en) * 2019-05-27 2019-08-30 上海蔚来汽车有限公司 Driving condition evaluation method, system and computer storage medium
CN110378315A (en) * 2019-07-29 2019-10-25 成都航空职业技术学院 A kind of sight angle estimation method based on face image

Also Published As

Publication number Publication date
CN111950371A (en) 2020-11-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant