CN112690794A - Driver state detection method, system and device

Info

Publication number
CN112690794A
Authority
CN
China
Prior art keywords
driver
cockpit
alarm
body parts
calibrated
Prior art date
2020-12-30
Legal status
Granted
Application number
CN202011612628.9A
Other languages
Chinese (zh)
Other versions
CN112690794B (en)
Inventor
杨国栋
宋士佳
孙超
王文伟
Current Assignee
Shenzhen Automotive Research Institute of Beijing University of Technology
Original Assignee
Shenzhen Automotive Research Institute of Beijing University of Technology
Priority date
2020-12-30
Filing date
2020-12-30
Publication date
2021-04-23
Application filed by Shenzhen Automotive Research Institute of Beijing University of Technology
Priority to CN202011612628.9A
Publication of CN112690794A
Application granted
Publication of CN112690794B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/746 Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of safe driving, and in particular to a method, a system and a device for detecting a driver's state. The method comprises the following steps: acquiring, in real time and by a computer vision method, the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit; calculating the angular relationship between each pre-calibrated body part and the cockpit from the positions of the body parts and the position of the cockpit; and then detecting with a trained neural network model whether the driver is driving while fatigued. Because the angular relationships between several body parts and the cockpit are all taken into account, the detection result is more accurate. In addition, compared with existing hand-on-wheel detection methods, the method requires little modification of existing vehicles; compared with existing eye- and face-tracking methods, it needs no high-precision sensor and works even when the driver wears a mask or glasses, again making detection more accurate.

Description

Driver state detection method, system and device
Technical Field
The invention relates to the technical field of safe driving, and in particular to a method, a system and a device for detecting a driver's state.
Background
In recent years vehicles have become widespread and traffic accidents have increased. The driver's state strongly affects the safety of the vehicle: a driver who uses a mobile phone while driving, bends over to pick up an object, or is fatigued or sleep-deprived is likely to ignore road conditions and cause an accident. It is therefore necessary to detect the driver's state and to warn and intervene when that state is poor. In particular, Advanced Driver Assistance Systems (ADAS) have been popularized in recent years, and drivers of such vehicles are more easily distracted, which greatly increases driving risk.
Current prior-art detection systems use two main kinds of methods and apparatus. The first requires the driver to grip the steering wheel at designated sensor positions and infers the driver's state from the grip; such methods can hardly recognize a fatigue state. The second tracks the driver's eyes and face, which demands a high-precision camera; because small changes in the eyes and face are difficult to capture, the false-alarm rate is high and the fatigue state is hard to recognize accurately. Moreover, with these methods the driver cannot wear a mask or glasses, which limits the usable environments to a certain extent.
Disclosure of Invention
The invention mainly addresses the technical problem that existing detection techniques cannot accurately detect a driver's fatigue state.
A driver state detection method, comprising:
acquiring, in real time and by a computer vision method, the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit;
calculating the angular relationship between each pre-calibrated body part and the cockpit from the positions of the body parts and the position of the cockpit;
inputting the angular relationship between each pre-calibrated body part and the cockpit into a pre-trained neural network model to obtain a score of the driver's degree of fatigue;
and judging whether the fatigue score meets a preset alarm-triggering condition, and if so, sending an alarm signal to remind the driver.
In one embodiment, acquiring the positions of the plurality of pre-calibrated body parts of the driver and the position of the cockpit in real time by the computer vision method comprises:
acquiring an image of the cockpit in real time, the targets in the image at least comprising the driver and the cockpit;
and performing feature extraction on the image to obtain the positions of the plurality of pre-calibrated body parts of the driver and the position of the cockpit.
In one embodiment, the method further comprises: the driver judging whether the current alarm signal is a false alarm and, if so, manually triggering an upload process so that the currently acquired image is added to a training set for further training of the neural network model.
In one embodiment, the plurality of pre-calibrated body parts comprises: left arm, right arm, torso, head, and neck.
A driver state detection system comprising:
a position information acquisition unit, configured to acquire in real time, by a computer vision method, the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit;
an angular relationship calculation unit, configured to calculate the angular relationship between each pre-calibrated body part and the cockpit from the positions of the body parts and the position of the cockpit;
a score calculation unit, configured to input the angular relationship between each pre-calibrated body part and the cockpit into a pre-trained neural network model to obtain a score of the driver's degree of fatigue;
and an alarm reminding unit, configured to judge whether the fatigue score meets a preset alarm-triggering condition and, if so, send an alarm signal to remind the driver.
In one embodiment, the position information acquisition unit comprises:
an image acquisition module, configured to acquire images of the cockpit in real time, the targets in the images at least comprising the driver and the cockpit;
and a feature extraction module, configured to perform feature extraction on the images to obtain the positions of the plurality of pre-calibrated body parts of the driver and the position of the cockpit.
In one embodiment, the system further comprises:
a false alarm processing module, configured to trigger an upload process when the driver judges the current alarm signal to be a false alarm, so that the currently acquired image is added to a training set for further training of the neural network model.
A driver state detection device comprising:
a camera, configured to acquire images of the cockpit in real time, the targets in the images at least comprising the driver and the cockpit;
a processor, configured to perform feature extraction on the images to obtain the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit; calculate the angular relationship between each pre-calibrated body part and the cockpit from those positions; input the angular relationships into a pre-trained neural network model to obtain a score of the driver's degree of fatigue; and judge whether the fatigue score meets a preset alarm-triggering condition and, if so, send an alarm triggering signal to an alarm;
and an alarm, configured to issue alarm information when triggered by the alarm triggering signal.
A vehicle comprising a driver state detection system as described above or a driver state detection apparatus as described above.
A computer readable storage medium comprising a program executable by a processor to implement the method as described above.
The driver state detection method of the above embodiments comprises: acquiring, in real time and by a computer vision method, the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit; calculating the angular relationship between each pre-calibrated body part and the cockpit from those positions; inputting the angular relationships into a pre-trained neural network model to obtain a score of the driver's degree of fatigue; and judging whether the fatigue score meets a preset alarm-triggering condition and, if so, sending an alarm signal to remind the driver. Because the angular relationships between several pre-calibrated body parts and the cockpit are all taken into account when the trained neural network model judges whether the driver is driving while fatigued, the detection result is more accurate. In addition, compared with existing hand-on-wheel detection methods, the method requires little modification of existing vehicles; compared with existing eye- and face-tracking methods, it needs no high-precision sensor and works even when the driver wears a mask or glasses, again making detection more accurate.
Drawings
FIG. 1 is a flow chart of a driver state detection method according to an embodiment of the present application;
FIG. 2 is a block diagram of a driver state detection system according to an embodiment of the present application;
FIG. 3 is a block diagram of a position information acquisition unit according to an embodiment of the present application;
FIG. 4 is a block diagram of a driver state detection device according to an embodiment of the present application;
FIG. 5 is a schematic view of the installation of the detection device in the cockpit according to an embodiment of the present application.
Detailed Description
The present invention is described in further detail below with reference to the detailed description and the accompanying drawings, with like elements in different embodiments given like reference numerals. In the following description numerous details are set forth to provide a better understanding of the present application; however, those skilled in the art will readily recognize that some of these features may, in different instances, be omitted or replaced by other elements, materials or methods. In some instances certain operations related to the present application are not shown or described in detail, in order to avoid burying the core of the application in excessive description; a detailed account of these operations is unnecessary for those skilled in the art, who can fully understand them from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations and characteristics described in the specification may be combined in any suitable manner to form various embodiments, and the steps or actions in the described methods may be reordered or transposed in ways obvious to those skilled in the art. The sequences in the specification and drawings therefore serve only to describe particular embodiments and do not imply a required order, unless it is otherwise stated that a certain sequence must be followed.
Computer Vision refers to implementing human visual functions with a computer: the perception, recognition and understanding of three-dimensional scenes in the objective world. The research goal of computer vision technology is thus to give computers the ability to recognize three-dimensional environmental information from two-dimensional images. A machine is therefore needed that not only senses the geometric information (shape, position, pose, motion, etc.) of objects in a three-dimensional environment, but can also describe, store, recognize and understand that information. Computer vision differs from the study of human or animal vision: it builds models by means of geometric, physical and learning techniques, and processes data statistically. The complete closed loop of artificial intelligence comprises perception, cognition and reasoning, feeding back into perception, and vision occupies the major part of the perception process; studying vision is thus an important step in studying machine perception.
A neural network is a computational model composed of a large number of nodes (or "neurons") and the connections between them. Each node represents a particular output function, called an activation (excitation) function. Each connection between two nodes carries a weight for the signal passing through it, which serves as the memory of the artificial neural network. The output of the network depends on its connection pattern, its weights and its activation functions. The network itself usually approximates some algorithm or function, or expresses a logical strategy.
Embodiment 1:
referring to fig. 1, the present application provides a method for detecting a driver state, comprising:
step 101: the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit are obtained in real time through a computer vision method.
Specifically, in this embodiment an image of the cockpit is acquired in real time, the targets in the image at least comprising the driver and the cockpit; feature extraction is then performed on the image to obtain the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit. The image acquisition module (such as a camera) of this embodiment is installed in the cockpit to acquire images of it. A reference coordinate system is established in the cockpit in advance, and the position of the cockpit and the positions of the driver's pre-calibrated body parts are expressed as coordinates in this system; for example, the calibration parameters of the cockpit coordinate system are obtained by scaling, rotating and stretching the images or video acquired in real time. The plurality of pre-calibrated body parts in this embodiment comprises the left arm, right arm, torso, head and neck. After the images are acquired, feature extraction identifies the cockpit (i.e. the driver's seat) and the driver's left arm, right arm, torso, head and neck, and their coordinates in the pre-established coordinate system are obtained.
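The patent leaves the feature extractor unspecified; purely as an illustration, step 101 could be sketched with an off-the-shelf pose estimator. The sketch below assumes MediaPipe Pose, an invented landmark-to-body-part mapping, and a hypothetical cockpit reference point; none of these are taken from the patent.

```python
# A minimal sketch of the feature extraction in step 101. MediaPipe Pose is
# used only as a stand-in for the patent's (unspecified) extractor; the
# landmark-to-body-part mapping and the cockpit reference point below are
# illustrative assumptions.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
_pose = mp_pose.Pose(static_image_mode=False)  # reused across video frames

# Hypothetical cockpit calibration: a seat-backrest reference point in
# normalized image coordinates, measured once for this vehicle and camera.
COCKPIT_REF = (0.50, 0.55)

def get_body_positions(frame_bgr):
    """Return normalized (x, y) positions of the pre-calibrated body parts,
    or None if no driver is detected in the frame."""
    results = _pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    lm = results.pose_landmarks.landmark
    P = mp_pose.PoseLandmark
    mid = lambda a, b: ((lm[a].x + lm[b].x) / 2, (lm[a].y + lm[b].y) / 2)
    return {
        "head": (lm[P.NOSE].x, lm[P.NOSE].y),
        "neck": mid(P.LEFT_SHOULDER, P.RIGHT_SHOULDER),  # shoulder midpoint
        "left_arm": (lm[P.LEFT_ELBOW].x, lm[P.LEFT_ELBOW].y),
        "right_arm": (lm[P.RIGHT_ELBOW].x, lm[P.RIGHT_ELBOW].y),
        "torso": mid(P.LEFT_HIP, P.RIGHT_HIP),           # hip midpoint
    }
```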
Step 102: the angular relationship between each pre-calibrated body part and the cockpit is calculated from the positions of the body parts and the position of the cockpit. That is, the spatial angular relationships between the body parts and the cockpit are computed from the coordinates of the left arm, right arm, torso, head and neck and the coordinates of the cockpit.
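A minimal sketch of the angle computation in step 102, under two stated assumptions: positions are 2-D normalized image coordinates (as in the previous sketch), and the seat-backrest direction serves as the cockpit reference axis.

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in degrees between two 2-D vectors."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def body_cockpit_angles(parts, cockpit_ref, backrest_dir=(0.0, -1.0)):
    """For each body part, the angle between the vector from the cockpit
    reference point to the part and the backrest direction ('up' in image
    coordinates, since image y grows downward)."""
    return {
        name: angle_between((x - cockpit_ref[0], y - cockpit_ref[1]),
                            backrest_dir)
        for name, (x, y) in parts.items()
    }
```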
Based on publications such as SAE J1517 (driver-selected seat position), SAE J287 (driver hand control reach) and SAE J1052 (driver and passenger head position), SAE International (the Society of Automotive Engineers) imposes strict parameterization requirements on automobile manufacturers when designing and producing vehicle cabins. A driver who correctly uses the seat, seat belt and related equipment generally satisfies the quantitative requirements of these documents and enjoys a good view, convenient operation and comfort. Everyday experience shows that a fatigued or distracted driver lowers or raises the head, turns the body or head, sits askew, and so on; quantitative indices such as the angle between the neck and the torso, the angle between the torso and the cabin, and the position of the head relative to the head restraint then no longer meet the requirements, and the algorithm can recognize the fatigued or distracted state accordingly. In short, international standards constrain cockpit design and thereby the relationship between the head, neck, back, arms and the cockpit: as long as the driver has adjusted the seat correctly and is in a normal driving state, no body part deviates from the design requirements.
Step 103: the angular relationship between each pre-calibrated body part and the cockpit is input into a pre-trained neural network model to obtain scores for the driver's degree of fatigue and degree of distraction.
The neural network model of this embodiment is obtained by training on a large amount of data and images: for example, many images of drivers driving normally and of drivers driving while fatigued are collected, and the angular relationships between the driver's body parts and the cockpit extracted from them serve as the input of the initial model. For the angular relationships corresponding to the many fatigue images, each is labeled with a fatigue-degree score and a distraction-degree score; the angular relationships are used as the training input and the fatigue and distraction scores as the training output, so as to train the neural network model. Training yields a weight parameter W and an evaluation function f; in this embodiment

(y1, y2) = f(x) = W · x,

where y1 and y2 are the fatigue and distraction scores respectively, W is a weight matrix, and x is the vector of parameters obtained by computer vision recognition. For example:

W = [w11, w12, w13; w21, w22, w23; ...],
x = [x1, x2, x3, ...]ᵀ,

with, say,
x1 = 1: the rotation angle between the neck and the seat backrest about the x axis (head tilted sideways);
x2 = 2: the rotation angle about the y axis (head raised or lowered);
x3 = 3: the rotation angle about the z axis (head turned);
W = [10, 1, 1; 1, 1, 10].

A larger entry of W means that the corresponding index carries more weight in the final score: here a sideways head tilt weighs heavily on the fatigue score y1, and a turned head weighs heavily on the distraction score y2. Then y = (y1, y2) = (15, 33).
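The worked example can be checked numerically; the W and x below are the illustrative values above, not learned parameters.

```python
import numpy as np

# Illustrative values from the example: row 0 of W scores fatigue (y1),
# row 1 scores distraction (y2); the columns weight the x/y/z rotation
# angles between the neck and the seat backrest.
W = np.array([[10.0, 1.0, 1.0],
              [ 1.0, 1.0, 10.0]])
x = np.array([1.0, 2.0, 3.0])  # head tilt, head up/down, head turn

y1, y2 = W @ x
print(y1, y2)                  # 15.0 33.0, matching (y1, y2) = (15, 33)
```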
Step 104: whether the fatigue and/or distraction score meets a preset alarm-triggering condition is judged; if so, an alarm signal is sent to remind the driver.
If the calculated fatigue or distraction score of the current driver is higher than a set value, the alarm-triggering condition is met and an alarm signal is sent to remind the driver. The alarm signal may be any one, or a combination, of a voice signal, a light signal and a vibration signal.
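The alarm condition of step 104 then reduces to a threshold check; the threshold values below are assumptions for illustration, since the patent only requires the score to be "higher than a set value".

```python
# Hypothetical set values; the patent does not fix concrete thresholds.
FATIGUE_THRESHOLD = 12.0
DISTRACTION_THRESHOLD = 30.0

def check_alarm(y1, y2):
    """Return which alarm conditions are met (empty list: no alarm)."""
    alerts = []
    if y1 > FATIGUE_THRESHOLD:
        alerts.append("fatigue")
    if y2 > DISTRACTION_THRESHOLD:
        alerts.append("distraction")
    return alerts  # a non-empty result triggers voice/light/vibration
```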
Further, in one embodiment, the driver additionally judges whether the current alarm signal is a false alarm, and if so manually triggers an upload process so that the currently acquired image is added to a training set for further training of the neural network model. Because human body types differ, a vehicle cannot be designed to fit everyone; for a driver who is unusually tall or slim, the natural sitting posture differs greatly from that of an average driver, and false alarms can result. For a falsely-alarmed image used in model training, the input to the training set is the coordinates of the driver's left arm, right arm, torso, head and neck, the coordinates of the cockpit, and a false-alarm label; the output is the adjusted weight parameter W. For example, a false-alarm button is provided on the driver's control panel: after an alarm signal has been issued, if the driver judges the current signal to be a false alarm, pressing the button adds the currently acquired image to the training set. Repeated training makes the neural network model's detection more accurate.
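The false-alarm upload path could be sketched as follows; the file format and field names are assumptions, as the patent only requires that the sample (coordinates plus a false-alarm label) reach the training set.

```python
import json
import time

def report_false_alarm(body_coords, cockpit_coords,
                       path="training_set.jsonl"):
    """Append the current sample to the training set when the driver
    presses the false-alarm button; retraining then adjusts W."""
    sample = {
        "timestamp": time.time(),
        "body_coords": body_coords,        # e.g. {"head": [x, y], ...}
        "cockpit_coords": cockpit_coords,  # cockpit reference coordinates
        "label": "false_alarm",
    }
    with open(path, "a") as f:
        f.write(json.dumps(sample) + "\n")
```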
In another embodiment, a report button is provided on the operation panel and connected to the processor through a data line; the driver can press the report button to report the information and add the currently acquired image to the training set.
In yet another embodiment, the report button communicates with the processor wirelessly, for example over Bluetooth or Wi-Fi. Alternatively, the reporting instruction can be issued through voice-recognition control, avoiding a manual button press.
Further, another embodiment guards against a driver maliciously disabling the monitoring through false-alarm feedback. Since it is difficult for the computer to judge whether the driver's feedback really marks a false alarm, adding it to the training set directly would train the neural network model on wrong data, possibly harming its recognition accuracy so that the driver's state can no longer be recognized correctly. Therefore, after the driver judges the current alarm signal to be a false alarm and triggers the upload of the current image, a false-alarm judging module further determines whether the alarm really was false, and only then is the currently acquired image added to the training set. The false-alarm judging module can be connected to a back-end server: back-end staff remotely analyze, from the images captured by the in-cab camera, whether the driver was in fact driving while fatigued. If the back end confirms the false alarm, it returns a false-alarm instruction to the module, which then adds the currently acquired image to the training set; if the back end judges that it was not a false alarm, the currently acquired image is labeled as a fatigue state.
In this embodiment, the angular relationships between a plurality of pre-calibrated body parts and the cockpit are obtained, and a trained neural network model then judges whether the driver is driving while fatigued; because the angular relationships of several body parts to the cockpit are all taken into account, the detection result is more accurate. In addition, compared with existing hand-on-wheel detection methods, the method requires little modification of existing vehicles; compared with existing eye- and face-tracking methods, it needs no high-precision sensor and works even when the driver wears a mask or glasses, again making detection more accurate.
Embodiment 2:
referring to fig. 2, the present embodiment provides a driver state detection system, which includes a position information obtaining unit 201, an angle relation calculating unit 202, a score calculating unit 203, and an alarm reminding unit 204. The position information acquiring unit 201 is used for acquiring the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit in real time by a computer vision method; the angular relation calculation unit 202 is configured to calculate an angular relation between each pre-calibrated body part and the cockpit according to the positions of the plurality of body parts and the position of the cockpit; the score calculating unit 203 is configured to input the angle relationship between each pre-calibrated body part and the cockpit into a pre-trained neural network model, so as to obtain a score of the fatigue degree and the distraction degree of the driver. The alarm reminding unit 204 is configured to determine whether the score of the fatigue degree and/or the distraction degree meets a preset alarm reminding triggering condition, and if so, send an alarm reminding signal to remind the driver.
The position information acquisition unit 201, angular relationship calculation unit 202, score calculation unit 203 and alarm reminding unit 204 each comprise a processor and a storage module; programs stored in the storage module can be executed by the processor to realize the data-processing functions of the respective unit.
As shown in fig. 3, the position information acquisition unit 201 comprises an image acquisition module 2011 and a feature extraction module 2012. The image acquisition module 2011 acquires images of the cockpit in real time, the targets in the images at least comprising the driver and the cockpit. In this embodiment the image acquisition module 2011 comprises a plurality of cameras arranged in the cockpit, for example in front of, beside, behind and above the driver. The feature extraction module 2012 performs feature extraction on the images to obtain the positions of the plurality of pre-calibrated body parts of the driver and the position of the cockpit.
The alarm reminding unit 204 further comprises an alarm for sending the alarm signal, which in this embodiment may be any one, or a combination, of a voice signal, a light signal and a vibration signal.
Further, the driver state detection system of this embodiment also comprises a false alarm processing module 205, which triggers an upload process when the driver judges the current alarm signal to be a false alarm, so that the currently acquired image is added to the training set for further training of the neural network model.
The system of this embodiment detects whether the driver is driving while fatigued by obtaining the angular relationships between a plurality of pre-calibrated body parts and the cockpit and then applying the trained neural network model; because the angular relationships of several body parts to the cockpit are all taken into account, the detection result is more accurate. In addition, compared with existing hand-on-wheel detection methods, the method requires little modification of existing vehicles; compared with existing eye- and face-tracking methods, it needs no high-precision sensor and works even when the driver wears a mask or glasses, again making detection more accurate.
Embodiment 3:
a driver state detection device, as shown in fig. 4 and 5, comprising: camera 301, processor 302 and alarm 303.
The camera 301 acquires images of the cockpit in real time, the targets in the images at least comprising the driver and the cockpit. The processor 302 performs feature extraction on the images to obtain the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit; calculates the angular relationship between each pre-calibrated body part and the cockpit from those positions; inputs the angular relationships into a pre-trained neural network model to obtain scores for the driver's degree of fatigue and degree of distraction; and judges whether the fatigue and/or distraction score meets the preset alarm-triggering condition, sending an alarm triggering signal to the alarm if so. The alarm 303 issues alarm information when triggered by the alarm triggering signal.
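Tying the earlier sketches together, the device's camera-to-alarm loop might look like the following; it reuses get_body_positions, body_cockpit_angles, COCKPIT_REF, W and check_alarm from the sketches above, and the camera index, the three-feature selection and the alarm stub are assumptions for illustration.

```python
import cv2
import numpy as np

def trigger_alarm(kind):
    print(f"ALARM ({kind})")  # stand-in for voice/light/vibration output

def run_detection_loop(camera_index=0):
    cap = cv2.VideoCapture(camera_index)   # in-cockpit camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        parts = get_body_positions(frame)  # step 101: feature extraction
        if parts is None:
            continue                       # no driver detected in frame
        angles = body_cockpit_angles(parts, COCKPIT_REF)  # step 102
        # Step 103: an illustrative 3-feature input matching W's columns.
        x = np.array([angles["head"], angles["neck"], angles["torso"]])
        y1, y2 = W @ x                     # fatigue / distraction scores
        for kind in check_alarm(y1, y2):   # step 104
            trigger_alarm(kind)
    cap.release()
```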
With the driver state detection device of this embodiment, the driver's fatigued or distracted state can be detected accurately and alarm information sent in time as a reminder, preventing the consequences of driver fatigue and safeguarding driving safety.
Embodiment 4:
the present embodiment provides a computer-readable storage medium that includes a program executable by a processor to implement the driver state detection method provided in the first embodiment above.
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by computer programs. When all or part of the functions are implemented by a computer program, the program may be stored in a computer-readable storage medium, such as a read-only memory, a random-access memory, a magnetic disk, an optical disc or a hard disk, and executed by a computer to realize those functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above are realized. The program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash disk or a removable hard disk, and be downloaded or copied into the memory of the local device or installed as a system update of the local device; when the program in that memory is executed by the processor, all or part of the functions of the above embodiments are realized.
The invention has been described above with reference to specific examples, which are intended only to aid understanding and not to limit the invention. Those skilled in the art to which the invention pertains may make several simple deductions, variations or substitutions according to the idea of the invention.

Claims (10)

1. A driver state detection method, characterized by comprising:
acquiring, in real time and by a computer vision method, the positions of a plurality of pre-calibrated body parts of a driver and the position of the cockpit;
calculating the angular relationship between each pre-calibrated body part and the cockpit from the positions of the body parts and the position of the cockpit;
inputting the angular relationship between each pre-calibrated body part and the cockpit into a pre-trained neural network model to obtain a score of the driver's degree of fatigue;
and judging whether the fatigue score meets a preset alarm-triggering condition, and if so, sending an alarm signal to remind the driver.
2. The driver state detection method according to claim 1, wherein acquiring the positions of the plurality of pre-calibrated body parts of the driver and the position of the cockpit in real time by the computer vision method comprises:
acquiring an image of the cockpit in real time, targets in the image at least comprising the driver and the cockpit;
and performing feature extraction on the image to obtain the positions of the plurality of pre-calibrated body parts of the driver and the position of the cockpit.
3. The driver state detection method according to claim 1, further comprising: the driver judging whether the current alarm signal is a false alarm and, if so, manually triggering an upload process so that the currently acquired image is added to a training set for further training of the neural network model.
4. The driver state detection method according to claim 1, wherein the plurality of pre-calibrated body parts includes: left arm, right arm, torso, head, and neck.
5. A driver state detection system, characterized by comprising:
a position information acquisition unit, configured to acquire in real time, by a computer vision method, the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit;
an angular relationship calculation unit, configured to calculate the angular relationship between each pre-calibrated body part and the cockpit from the positions of the body parts and the position of the cockpit;
a score calculation unit, configured to input the angular relationship between each pre-calibrated body part and the cockpit into a pre-trained neural network model to obtain a score of the driver's degree of fatigue;
and an alarm reminding unit, configured to judge whether the fatigue score meets a preset alarm-triggering condition and, if so, send an alarm signal to remind the driver.
6. The driver state detection system according to claim 5, wherein the position information acquisition unit comprises:
an image acquisition module, configured to acquire images of the cockpit in real time, targets in the images at least comprising the driver and the cockpit;
and a feature extraction module, configured to perform feature extraction on the images to obtain the positions of the plurality of pre-calibrated body parts of the driver and the position of the cockpit.
7. The driver state detection system according to claim 5, further comprising: a false alarm processing module, configured to trigger an upload process when the driver judges the current alarm signal to be a false alarm, so that the currently acquired image is added to a training set for further training of the neural network model.
8. A driver state detection device, characterized by comprising:
a camera, configured to acquire images of the cockpit in real time, targets in the images at least comprising the driver and the cockpit;
a processor, configured to perform feature extraction on the images to obtain the positions of a plurality of pre-calibrated body parts of the driver and the position of the cockpit; calculate the angular relationship between each pre-calibrated body part and the cockpit from the positions of the body parts and the position of the cockpit; input the angular relationship between each pre-calibrated body part and the cockpit into a pre-trained neural network model to obtain a score of the driver's degree of fatigue; and judge whether the fatigue score meets a preset alarm-triggering condition and, if so, send an alarm triggering signal to an alarm;
and the alarm, configured to issue alarm information when triggered by the alarm triggering signal.
9. A vehicle characterized by comprising a driver state detection system according to any one of claims 5 to 7 or a driver state detection device according to claim 8.
10. A computer-readable storage medium, comprising a program executable by a processor to implement the method of any one of claims 1-4.
Application CN202011612628.9A (priority date 2020-12-30, filing date 2020-12-30): Driver state detection method, system and device. Active; granted as CN112690794B.

Priority Applications (1)

CN202011612628.9A (priority date 2020-12-30, filing date 2020-12-30): Driver state detection method, system and device; granted as CN112690794B.


Publications (2)

CN112690794A, published 2021-04-23
CN112690794B, published 2022-08-30

Family

ID=75512641

Family Applications (1)

CN202011612628.9A (priority date 2020-12-30, filing date 2020-12-30): Driver state detection method, system and device. Active; granted as CN112690794B.

Country Status (1)

CN: CN112690794B

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113624514A (en) * 2021-08-17 2021-11-09 中国汽车技术研究中心有限公司 Test method, system, electronic device and medium for driver state monitoring product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204229593U (en) * 2014-10-28 2015-03-25 寻俭青 A kind of anti-weariness working early warning system
CN105354988A (en) * 2015-12-11 2016-02-24 东北大学 Driver fatigue driving detection system based on machine vision and detection method
CN107697069A (en) * 2017-10-31 2018-02-16 上海汽车集团股份有限公司 Fatigue of automobile driver driving intelligent control method
CN108482380A (en) * 2018-03-06 2018-09-04 知行汽车科技(苏州)有限公司 The driving monitoring system of automatic adjusument sample frequency
CN109063686A (en) * 2018-08-29 2018-12-21 安徽华元智控科技有限公司 A kind of fatigue of automobile driver detection method and system
CN111301280A (en) * 2018-12-11 2020-06-19 北京嘀嘀无限科技发展有限公司 Dangerous state identification method and device
WO2020161610A2 (en) * 2019-02-04 2020-08-13 Jungo Connectivity Ltd. Adaptive monitoring of a vehicle using a camera


Also Published As

CN112690794B, published 2022-08-30

Similar Documents

Publication Publication Date Title
CN108137050B (en) Driving control device and driving control method
CN109919049A (en) Fatigue detection method based on deep learning human face modeling
US11042766B2 (en) Artificial intelligence apparatus and method for determining inattention of driver
US20190370580A1 (en) Driver monitoring apparatus, driver monitoring method, learning apparatus, and learning method
WO2018085804A1 (en) System and method for driver distraction determination
CN108229345A (en) A kind of driver's detecting system
KR101276770B1 (en) Advanced driver assistance system for safety driving using driver adaptive irregular behavior detection
CN102975718B (en) In order to determine that vehicle driver is to method, system expected from object state and the computer-readable medium including computer program
García et al. Driver monitoring based on low-cost 3-D sensors
CN110663042B (en) Communication flow of traffic participants in the direction of an automatically driven vehicle
Jha et al. Probabilistic estimation of the driver's gaze from head orientation and position
US20060062472A1 (en) Method for detecting a person in a space
CN112690794B (en) Driver state detection method, system and device
CN109964184A (en) By comparing the autonomous vehicle control of transition prediction
CN116194342A (en) Computer-implemented method for analyzing a vehicle interior
WO2019137913A1 (en) Method for crash prevention for an automotive vehicle comprising a driving support system
CN116189153A (en) Method and device for identifying sight line of driver, vehicle and storage medium
Bergasa et al. Visual monitoring of driver inattention
Louie et al. Towards a driver monitoring system for estimating driver situational awareness
Jha et al. Driver visual attention estimation using head pose and eye appearance information
US20230184024A1 (en) Device and method for controlling door of vehicle
WO2021024905A1 (en) Image processing device, monitoring device, control system, image processing method, computer program, and recording medium
Srivastava Driver's drowsiness identification using eye aspect ratio with adaptive thresholding
JP2021009503A (en) Personal data acquisition system, personal data acquisition method, face sensing parameter adjustment method for image processing device and computer program
KR102597068B1 (en) Vehicle device for determining a driver's gaze state using artificial intelligence and control method thereof

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant