CN113191212A - Driver road rage risk early warning method and system - Google Patents


Publication number
CN113191212A
CN113191212A (application CN202110388686.6A)
Authority
CN
China
Prior art keywords
data
driver
road rage
fusion
risk
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202110388686.6A
Other languages
Chinese (zh)
Other versions
CN113191212B (en)
Inventor
孙晓
汪萌
Current Assignee (the listed assignee may be inaccurate)
Hefei Zhongjuyuan Intelligent Technology Co ltd
Original Assignee
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Application filed by Institute of Artificial Intelligence of Hefei Comprehensive National Science Center filed Critical Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority to CN202110388686.6A priority Critical patent/CN113191212B/en
Publication of CN113191212A publication Critical patent/CN113191212A/en
Application granted granted Critical
Publication of CN113191212B publication Critical patent/CN113191212B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08 - Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W40/09 - Driving style or behaviour
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08 - Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W2040/0872 - Driver physiology
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143 - Alarm means
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00 - Input parameters relating to occupants
    • B60W2540/22 - Psychological state; Stress level or workload
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Transportation (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mechanical Engineering (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a driver road rage risk early warning method and system in the technical field of intelligent driving assistance, comprising the following steps: acquiring real-time driving state data of a driver to be detected, the state data comprising facial activity data, head posture data, limb action data and heart rate data; fusing the state data within a minimum interval of the emotion change time length to obtain fused data; and taking the fused data as the input of a pre-trained road rage grade recognition time series model to obtain the road rage risk grade of the driver. The method can continuously detect and warn of the driver's dangerous driving risk emotions throughout the driving period, and can issue a warning before the driver enters a road rage state, prompting the driver to regulate his or her own emotions.

Description

Driver road rage risk early warning method and system
Technical Field
The invention relates to the technical field of intelligent driving assistance, in particular to a driver road rage risk early warning method and system.
Background
With the accelerating pace of automobile intelligence, vehicle control systems are becoming increasingly complex, and friction between the driver and the vehicle control system is increasingly prominent. Meanwhile, "road rage" and the driver's driving tendencies have a crucial influence on driving safety.
At present, technical schemes for detecting and warning of anger fall mainly into the following two categories:
(1) Emotion recognition and detection based on multi-modal fusion
Multi-modal emotion recognition extracts data from multiple modalities (expression, text, speech, posture, and physiological signals) out of multimedia data such as video, text, and audio using a deep learning model, fuses these data, and feeds the fused data into a pre-trained deep learning or machine learning model for emotion recognition and detection.
(2) Emotion detection based on text data and natural language processing
This approach works mainly on text data, including but not limited to dialogue transcripts and single-utterance expressions. Natural language processing techniques analyse the text content and match it against a specific emotion-keyword dictionary to obtain emotional features of local utterances and of the whole conversation, on which emotion (sentiment) recognition is then performed.
The existing solutions above cannot directly provide an early warning scheme for the driver road rage risk problem; the specific deficiencies are as follows:
(1) Existing driver risk early warning mainly targets pre-shift screening, but driving is a continuous activity; pre-shift screening alone cannot measure or estimate how risk changes during the drive.
(2) Existing emotion recognition uses labels such as "normal, angry, happy, sad" or "positive, neutral, negative" and does not consider graded emotional features such as "level-1 anger" and "level-2 anger", although in practice the driver's driving behaviour is strongly affected by the degree of emotional activation.
(3) Existing emotion recognition mainly identifies the current emotional state but cannot predict future states, in particular negative states such as anger, sadness and anxiety; in practice, negative states must be predicted so that risks can be prevented in advance.
(4) Existing emotion recognition devices do not consider deployment in an in-vehicle environment: they lack designs that account for vehicle hardware and environmental constraints, and for data storage and subsequent retrieval, so they may be difficult to run in an in-vehicle environment.
Disclosure of Invention
The present invention aims to overcome the above deficiencies in the background art, so as to continuously detect and warn of a driver's dangerous driving emotions during continuous driving.
In order to achieve the above object, on one hand, a driver road rage risk early warning method is adopted, which comprises the following steps:
acquiring real-time driving state data of a driver to be detected, wherein the state data comprises facial activity data, head posture data, limb action data and heart rate data;
fusing the state data in a minimum interval of the emotion change time length to obtain fused data;
and taking the fused data as the input of a pre-trained road rage grade recognition time series model to obtain the road rage risk grade of the driver.
Further, fusing the state data in a minimum interval of the emotion change time length to obtain fused data comprises:
defining the facial activity data, head posture data and limb action data as Face-t, Head-t and Body-t respectively;
fusing the facial activity data Face-t at time t = ti with the heart rate data at time ti to obtain data Fusion Set 1;
fusing the data Fusion Set 1 with the head posture data Head-t at time t = ti + Δt to obtain data Fusion Set 2;
fusing the data Fusion Set 2 with the limb action data Body-t at time t = ti + Δt' to obtain fusion data Fusion Set 3;
and constructing {Fusion Set 3}t, ti ∈ [t1, tn], as the fused data over the continuous acquisition time of the minimum emotion change interval [t1, tn].
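The staged fusion described in these steps can be sketched as follows. This is a minimal sketch that assumes concatenation as the fusion operator and placeholder feature vectors; the patent does not specify either.

```python
# Hedged sketch of the staged fusion (Fusion Set 1 -> 2 -> 3).
# Concatenation is used as the fusion operator and the per-modality
# feature vectors are placeholders; the patent fixes neither.

def fuse(*vectors):
    """Fuse feature vectors by simple concatenation (assumed operator)."""
    out = []
    for v in vectors:
        out.extend(v)
    return out

def build_fusion_set3(face_t, heart_t, head_t_dt, body_t_dt2):
    """face_t, heart_t sampled at t = ti; head at ti + dt; body at ti + dt'."""
    fusion_set1 = fuse(face_t, heart_t)          # Fusion Set 1
    fusion_set2 = fuse(fusion_set1, head_t_dt)   # Fusion Set 2
    fusion_set3 = fuse(fusion_set2, body_t_dt2)  # Fusion Set 3
    return fusion_set3

def build_window(samples):
    """Fused data over the minimum emotion change interval [t1, tn];
    samples is a list of (face, heart, head, body) tuples, one per ti."""
    return [build_fusion_set3(f, hr, hd, b) for f, hr, hd, b in samples]
```

A concatenation operator keeps the sketch transparent; in practice any learned fusion layer could replace `fuse` without changing the staged structure.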
Further, before the real-time driving state data of the driver to be tested is obtained, the method further comprises the following steps:
acquiring driving state data of sample drivers under anger stimulus sources in different driving scenes as sample data;
fusing the sample data in a minimum interval of the emotion change time length to obtain training data;
and taking the training data and the road rage grades corresponding to the different anger stimulus sources as the input of the road rage grade recognition time series model, so as to train it and obtain the trained road rage grade recognition time series model.
Further, the road rage stimulus scene sources include: level 1, vehicle congestion scene stimulus; level 2, vehicle queuing scene stimulus; level 3, the vehicle being maliciously crowded; level 4, the vehicle being maliciously blocked; and level 5, the driver being verbally abused by others.
Further, the road rage grade recognition time series model is parameterized as:
ot = g(V·st)
st = f(U·xt + W·st-1)
where ot is the output at time t, st is the hidden state at time t, V is the hidden-to-output weight matrix, U is the input-to-hidden weight matrix, W is the weight matrix acting on the previous hidden state st-1, g is the output activation function, and f is the hidden-state activation function.
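Read as a vanilla recurrent network, the two recurrence equations above can be sketched numerically as below; the tanh and softmax activation choices and all dimensions are illustrative assumptions (the 5 outputs mirror the 5 road rage grades), since the patent specifies none of them.

```python
import numpy as np

# Vanilla RNN step implementing s_t = f(U x_t + W s_{t-1}), o_t = g(V s_t).
# Dimensions and the tanh/softmax choices for f and g are assumptions.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 16, 5          # 5 outputs ~ road rage grades 1-5

U = rng.standard_normal((n_hidden, n_in)) * 0.1      # input-to-hidden
W = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # hidden-to-hidden
V = rng.standard_normal((n_out, n_hidden)) * 0.1     # hidden-to-output

def rnn_step(x_t, s_prev):
    s_t = np.tanh(U @ x_t + W @ s_prev)            # f = tanh (assumed)
    z = V @ s_t
    o_t = np.exp(z - z.max()); o_t /= o_t.sum()    # g = softmax (assumed)
    return o_t, s_t

def run_sequence(xs):
    """Run the cell over a sequence of fused feature vectors."""
    s = np.zeros(n_hidden)
    outs = []
    for x in xs:
        o, s = rnn_step(x, s)
        outs.append(o)
    return outs
```

Each output is a distribution over the 5 grades; the argmax would give the predicted road rage grade at that state interval.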
Further, the method further comprises:
acquiring state data of sample drivers before the anger grade data occurs, using the pre-trained road rage grade recognition time series model;
adding labels and transition paths to the state data preceding the anger grade data, the labels comprising anxiety, sadness and depression;
fusing the anxiety-labelled data, sadness-labelled data and depression-labelled data in a minimum interval of the emotion change time length to obtain anxiety fusion data, sadness fusion data and depression fusion data;
and training with the anxiety fusion data, sadness fusion data and depression fusion data to obtain an anxiety recognition time series model, a sadness recognition time series model and a depression recognition time series model respectively.
Further, the method further comprises:
separately calculating the conditional probability that anger appears after different time lengths given EC, where EC denotes an anxiety, sadness or depression emotion appearing at time t0;
designing early warning trigger conditions according to these conditional probabilities, the trigger conditions being:
if EC1 is triggered and persists for a first time length, issuing a road rage risk warning;
if EC2 is triggered and persists for a second time length, issuing a road rage risk warning;
if EC3 is triggered and persists for a third time length, issuing a road rage risk warning;
where EC1, EC2 and EC3 denote the anxiety, sadness and depression emotions respectively.
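The three trigger conditions can be expressed as a small rule check. The concrete first/second/third time lengths are filled in with the average transition times reported later in the description (30 s, 67 s, 54 s); treating those averages as the thresholds here is an assumption for illustration.

```python
# Rule check for the three early warning trigger conditions.
# Thresholds follow the average anger-shift times reported in the
# description; using them as the first/second/third time lengths
# is an assumption.

THRESHOLDS = {"EC1": 30.0, "EC2": 67.0, "EC3": 54.0}  # seconds

def should_warn(ec, duration_s, thresholds=THRESHOLDS):
    """Return True if emotion ec (EC1 = anxiety, EC2 = sadness,
    EC3 = depression) has persisted at least its time length."""
    return duration_s >= thresholds[ec]
```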
Further, the road rage risk grades comprise 5 grades, and when the driver's road rage risk grade is greater than grade 3, a road rage risk warning is issued.
Further, the method further comprises:
starting a storage-medium capacity detection task at regular intervals;
and when the storage usage of the vehicle-mounted storage device reaches 80%, deleting the driver's original data acquisition records.
On the other hand, a driver road rage risk early warning system is adopted, comprising a data acquisition module, a data fusion module and a road rage risk calculation module, wherein:
the data acquisition module is used for acquiring real-time driving state data of a driver to be detected, wherein the state data comprises facial activity data, head posture data, limb action data and heart rate data;
the data fusion module is used for fusing the state data in a minimum interval of the emotion change time length to obtain fused data;
and the road rage risk calculation module is used for taking the fusion data as the input of a pre-trained road rage grade recognition time sequence model to obtain the road rage risk grade of the driver.
Compared with the prior art, the invention has the following technical effects: the invention provides an emotion detection scheme suitable for driving scenes, which can continuously detect and warn of a driver's dangerous driving risk emotions during the driving period, can warn before the driver enters a road rage state, and prompts the driver to regulate his or her own emotions; the road rage test accuracy is above 95%, exceeding the average level of emotion recognition models.
Drawings
The following detailed description of embodiments of the invention refers to the accompanying drawings in which:
FIG. 1 is a flow chart of a driver road rage risk early warning method;
fig. 2 is a structural diagram of a driver road rage risk early warning system.
Detailed Description
To further illustrate the features of the present invention, refer to the following detailed description of the invention and the accompanying drawings. The drawings are for reference and illustration purposes only and are not intended to limit the scope of the present disclosure.
As shown in fig. 1, the present embodiment discloses a driver road rage risk early warning method, which includes the following steps S1 to S3:
s1, acquiring real-time driving state data of the driver to be detected, wherein the state data comprise facial activity data, head posture data, limb action data and heart rate data;
s2, fusing the state data in a minimum interval of the emotion change time length to obtain fused data;
and S3, taking the fusion data as the input of the pre-trained road rage grade recognition time series model to obtain the road rage risk grade of the driver.
It should be noted that this embodiment can continuously detect and prevent the driver's dangerous driving psychological state during the driving period. The principle is as follows: according to emotional psychology and cognitive psychology, when an individual is in a negative emotional state such as anger, sadness or anxiety, the disturbance of negative emotion interferes with cognitive abilities, in particular attention and behaviour control, causing functional impairment and leading to wrong behavioural decisions. In a driving scene in particular, after negative emotions accumulate for a period of time, the driver's attention control and action execution are affected and driving errors may result. The early warning system warns in advance, before the driver enters a road rage state, prompts the driver to regulate his or her own emotions, predicts the driver's anger level in a graded manner, and prompts the driver to drive safely.
As a more preferable technical solution, step S2, fusing the state data in a minimum interval of the emotion change time length to obtain fused data, comprises the following sub-steps S21 to S25:
S21, defining the facial activity data, head posture data and limb action data as Face-t, Head-t and Body-t respectively;
S22, fusing the facial activity data Face-t at time t = ti with the heart rate data at time ti to obtain data Fusion Set 1;
S23, fusing the data Fusion Set 1 with the head posture data Head-t at time t = ti + Δt to obtain data Fusion Set 2;
S24, fusing the data Fusion Set 2 with the limb action data Body-t at time t = ti + Δt' to obtain fusion data Fusion Set 3;
S25, constructing {Fusion Set 3}t, ti ∈ [t1, tn], as the fused data over the continuous acquisition time of the minimum emotion change interval [t1, tn].
The data fusion approach adopted in this application has two main advantages. First, it avoids the high missed-alarm and false-alarm rates caused by single-source, single-modality warning; fusing modalities improves alarm precision and recall simultaneously. Second, because the driving scene is a complex dynamic scene with substantial dynamic noise, multi-source data fusion mitigates the influence of that noise and reduces its disturbance of the alarm signal as far as possible.
It should be noted that the minimum interval of the emotion change time length in this embodiment is obtained experimentally. The purpose is to issue a warning as soon as an emotional state endangering driving safety appears, while eliminating false alarms caused by random error and reducing the false alarm rate to a minimum. Specifically: 200 subjects self-reported the point in time at which they could subjectively perceive a change in their own emotion; researchers recorded the time from the start of timing until each subject first reported an emotional change, and the average duration over the 200 subjects, rounded to the minimum unit, was 30 seconds. In this embodiment a sliding window of one 30-second unit is placed over the facial activity data stream collected by the camera, dividing the stream into groups of picture vectors {T0-T30}; head activity data and limb activity data are processed in the same way. For head activity, one feature point each is determined at the top of the head, the temple and the centre of the forehead to measure head position; for limb activity, one feature point each is determined at the subject's wrist, elbow and shoulder to measure limb position.
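The 30-second windowing described above can be sketched as follows; the 10 Hz sampling rate and the non-overlapping step are assumptions, since the text only fixes the 30-second unit.

```python
# Split a sampled data stream into consecutive 30-second windows,
# mirroring the {T0-T30} grouping described above. The 10 Hz sampling
# rate is an assumed value for illustration.

def split_windows(stream, window_s=30, rate_hz=10):
    """stream: list of per-frame feature vectors; full windows only."""
    n = window_s * rate_hz
    return [stream[i:i + n] for i in range(0, len(stream) - n + 1, n)]
```

The same splitter would be applied identically to the facial, head and limb streams so their windows stay aligned in time.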
It should be noted that in this embodiment Δt = 0.24 s and Δt' = 1.15 s; the values were obtained experimentally, specifically:
in pre-experiment 2, after 200 subjects started an angry stimulus, facial data, Head postures and limb movements were all changed, and the facial data was defined as Face-t, the Head postures as Head-t and the limb movements as Body-t, and the average time difference between two pairs was obtained as shown in table 1 below:
TABLE 1
Face-t and Head-t: 0.83 s
Face-t and Body-t: 1.15 s
Head-t and Body-t: 0.91 s
It should be noted that these time differences provide the design basis for how the various data are fused; for example, head and face data are fused by combining the head data appearing 0.83 seconds after the face data time point with the current face data, and the other fusion pairs are handled likewise.
As a further preferable technical solution, before acquiring the real-time driving state data of the driver to be tested in step S1, the method further comprises:
(1) acquiring driving state data of sample drivers under anger stimulus sources in different driving scenes as sample data, specifically:
1-1) recruiting 200 subjects covering both sexes, three age groups (18-30 / 30-40 / 40-50 years) and education levels from junior middle school through high school, college and undergraduate; the subjects are asked to subjectively rate 5 levels of anger-inducing driving scene stimulus sources, and the stimulus sources are designed in order of the highest ranking consistency.
1-2) after step 1-1, the 5 levels of stimulus are designed as: level 1, vehicle congestion scene stimulus; level 2, vehicle queuing scene stimulus; level 3, the vehicle being maliciously crowded; level 4, the vehicle being maliciously blocked; level 5, the driver being verbally abused by others.
1-3) after step 1-2, 200 experimental subjects are recruited who subjectively accept the above 5 levels of anger stimulus and participate in the experiment and development of the road rage emotion detection module.
1-4) the 200 subjects recruited in step 1-3 are asked to enter the simulated cockpit; after confirming entry, their psychological state is queried and recorded, and the level 1-5 stimulus source scenes are then started in turn, each lasting 15 minutes.
1-5) after each subject is exposed to the 5 levels of road rage stimulus sources, the driver's facial activity, head activity and limb activity are continuously recorded with a vehicle-mounted camera, the subject's heart rate is collected with a wristband, and corresponding records are formed.
(2) fusing the sample data in a minimum interval of the emotion change time length to obtain training data; the specific process is the same as step S2.
(3) taking the training data and the road rage grades corresponding to the different anger stimulus sources as the input of the road rage grade recognition time series model, so as to train it and obtain the trained road rage grade recognition time series model.
It should be noted that the output of the model is a score of grade 1-5 corresponding to the stimulus source, and an RNN model is trained. The main parameters of the RNN model are as follows:
ot = g(V·st)
st = f(U·xt + W·st-1)
The state interval between st and st-1 is 1.74 s, i.e. one state interval is formed every 1.74 s; ot is the output at time t, st is the hidden state at time t, V is the hidden-to-output weight matrix, U is the input-to-hidden weight matrix, W is the weight matrix acting on the previous hidden state, g is the output activation function, and f is the hidden-state activation function.
As a further preferable technical solution, the method further comprises:
randomly extracting 20 of the 200 sample subjects for model verification, obtaining an RNN driving-scene-specific road rage anger grade recognition time series model with a test accuracy of 95.5% or higher.
As a further preferable technical solution, the road rage stimulus scene sources include: level 1, vehicle congestion scene stimulus; level 2, vehicle queuing scene stimulus; level 3, the vehicle being maliciously crowded; level 4, the vehicle being maliciously blocked; and level 5, the driver being verbally abused by others.
As a further preferable technical solution, the method further comprises:
acquiring the state data of the sample drivers before the anger grade data occurs, using the pre-trained road rage grade recognition time series model;
specifically: the 200 subjects are continuously observed with the road rage grade recognition time series model, and the emotion time series data preceding the appearance of the 5 anger grades is collected retrospectively;
adding labels and transition paths to the state data preceding the anger grade data, the labels comprising anxiety, sadness and depression;
specifically: the subjects are interviewed about the obtained data and their self-reports collected, detailing their psychological experience, to obtain the emotional states, comprising the three labels anxiety, sadness and depression, and the transition path preceding grade-5 anger;
fusing the anxiety-labelled, sadness-labelled and depression-labelled data in a minimum interval of the emotion change time length to obtain anxiety fusion data, sadness fusion data and depression fusion data;
it should be noted that the calculation of the fused data follows step S2 above;
and training with the anxiety, sadness and depression fusion data to obtain an anxiety recognition time series model, a sadness recognition time series model and a depression recognition time series model respectively.
As a further preferable technical solution, the method further comprises:
separately calculating the conditional probability that anger appears after different time lengths given EC, where EC denotes an anxiety, sadness or depression emotion appearing at time t0;
designing early warning trigger conditions according to these conditional probabilities, the trigger conditions being:
if EC1 is triggered and persists for a first time length, issuing a road rage risk warning;
if EC2 is triggered and persists for a second time length, issuing a road rage risk warning;
if EC3 is triggered and persists for a third time length, issuing a road rage risk warning;
where EC1, EC2 and EC3 denote the anxiety, sadness and depression emotions respectively. Specifically, taking the type with the largest number of samples among the 200 samples, the pre-anger emotional state transition probabilities are calculated as shown in Table 2 below:
TABLE 2
EC: Anxiety - transition probability to anger 76.73%, average anger shift time 30 s
EC: Sadness - transition probability to anger 47.65%, average anger shift time 67 s
EC: Depression - transition probability to anger 54.32%, average anger shift time 54 s
Accordingly, the first time length, the second time length and the third time length take the values 30s, 67s and 54s respectively.
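As an illustration of how the Table 2 statistics could be computed, the following sketch estimates the anger transition probability and the average transition time from labelled episodes (the episode data, format and function name are invented for illustration; the patent's 200-sample study is not reproduced here):

```python
import numpy as np

# Hypothetical labelled episodes: (initial_emotion, became_angry, seconds_to_anger).
episodes = [
    ("anxiety", True, 28.0), ("anxiety", True, 32.0), ("anxiety", False, None),
    ("sadness", True, 67.0), ("sadness", False, None),
    ("depression", True, 54.0), ("depression", False, None),
]

def transition_stats(episodes, emotion):
    """P(anger | emotion) and mean time-to-anger over the transitioning episodes,
    i.e. one row pair of Table 2 for the given initial emotion."""
    subset = [e for e in episodes if e[0] == emotion]
    angry = [e for e in subset if e[1]]
    prob = len(angry) / len(subset) if subset else 0.0
    avg_t = float(np.mean([e[2] for e in angry])) if angry else None
    return prob, avg_t
```

Run once per initial emotion (anxiety, sadness, depression) to fill the table; the averages then become the first, second and third time lengths.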
This embodiment considers the probability of transition from other negative emotions to anger and establishes a state-transition early warning path, reflecting the guiding value of psychological theory.
As a further preferable technical scheme, the road rage risk scale comprises 5 levels, and when the road rage risk level of the driver is greater than level 3, the road rage risk early warning is performed.
The embodiment can perform graded prediction on the anger level of the driver and prompt the driver to drive safely.
As a further preferred technical solution, this embodiment integrates the emotion-transition early warning rules with the anger level early warning rule; the designed early warning rules are as follows:
if EC1 is triggered and the duration reaches 30s, warning;
if EC2 is triggered and the duration reaches 67s, warning;
if EC3 is triggered and the duration reaches 54s, an early warning is given.
And if the angry level is greater than 3 grades, early warning.
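The combined rule set above can be sketched as a single predicate (the thresholds follow Table 2; the function name and signature are ours, not the patent's):

```python
# Emotion-duration thresholds from Table 2: EC1 = anxiety, EC2 = sadness, EC3 = depression.
THRESHOLDS = {"EC1": 30, "EC2": 67, "EC3": 54}  # seconds

def should_warn(emotion_code, duration_s, anger_level):
    """True if either warning condition of the embodiment fires: the anger level
    exceeds grade 3 on the 5-level scale, or the triggered emotion has persisted
    for at least its Table 2 threshold."""
    if anger_level > 3:
        return True
    limit = THRESHOLDS.get(emotion_code)
    return limit is not None and duration_s >= limit
```

The grade check is evaluated first so that an acute anger episode warns immediately, without waiting out an emotion-duration timer.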
As a further preferable technical scheme, since the storage capacity of the vehicle-mounted storage device generally ranges from 8 GB to 32 GB, the storage capacity of the storage card needs to be monitored; when the detection model finds that the used storage reaches 80%, a conditional task is started and the driver's original data acquisition records are deleted.
As a further preferable technical solution, a timed task is set in this embodiment; experiments found that the storage capacity reaches 80% after about 1h on average, so the storage medium capacity detection task is started once every hour.
It should be noted that this embodiment takes the storage limitations of the vehicle-mounted environment into account, and designs a storage medium usage detection model suited to vehicle-mounted conditions together with a task for automatically deleting the original data, ensuring that the system can adapt to various vehicle environments.
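A minimal sketch of the hourly capacity-detection task, using Python's standard-library filesystem query (the path and the deletion callback are placeholders, not taken from the patent):

```python
import shutil
import threading

USAGE_LIMIT = 0.80     # the embodiment's 80% trigger
CHECK_PERIOD_S = 3600  # the embodiment's 1 h timer

def storage_usage(path="/"):
    """Fraction of the storage medium currently used."""
    total, used, _free = shutil.disk_usage(path)
    return used / total

def capacity_task(delete_records, path="/", usage=storage_usage):
    """One run of the detection task: delete the raw driver-data records when
    usage crosses the limit. `delete_records` is a placeholder callback."""
    if usage(path) >= USAGE_LIMIT:
        delete_records()
    # Re-arm for the next hourly check, e.g.:
    # threading.Timer(CHECK_PERIOD_S, capacity_task, args=(delete_records, path)).start()
```

The `usage` parameter is injected so the trigger can be tested without a real storage card; a production version would leave it at the default.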
As shown in fig. 2, this embodiment discloses a driver road rage risk early warning system, including data acquisition module, data fusion module and road rage risk calculation module, wherein:
the data acquisition module is used for acquiring real-time driving state data of a driver to be detected, wherein the state data comprises facial activity data, head posture data, limb action data and heart rate data;
the data fusion module is used for fusing the state data in a minimum interval of the emotion change time length to obtain fused data;
and the road rage risk calculation module is used for taking the fusion data as the input of a pre-trained road rage grade recognition time sequence model to obtain the road rage risk grade of the driver.
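The time series model used by the road rage risk calculation module follows the recurrent update given elsewhere in this specification, s_t = f(U x_t + W s_{t-1}), o_t = g(V s_t). A minimal numpy sketch of one forward pass is given below; the dimensions and the tanh/softmax activation choices are our assumptions, since the patent does not fix f and g:

```python
import numpy as np

def rnn_step(x_t, s_prev, U, W, V):
    """One recurrent update: hidden state s_t = tanh(U x_t + W s_{t-1}),
    output o_t = softmax(V s_t) over the 5 road rage grades."""
    s_t = np.tanh(U @ x_t + W @ s_prev)
    logits = V @ s_t
    o_t = np.exp(logits - logits.max())
    o_t /= o_t.sum()
    return s_t, o_t

def predict_grade(fused_sequence, U, W, V):
    """Run a fused feature sequence through the RNN; argmax of the final
    output gives the predicted grade on the 1-5 scale."""
    s = np.zeros(U.shape[0])
    o = None
    for x_t in fused_sequence:
        s, o = rnn_step(x_t, s, U, W, V)
    return int(np.argmax(o)) + 1
```

With untrained random weights the output is of course arbitrary; the sketch only shows how the fused data flows through the recurrence.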
As a further preferred technical solution, the data fusion module is specifically configured to:
respectively denoting the facial activity data, the head posture data and the limb action data as Face-t, Head-t and Body-t;
fusing the facial activity data Face-t at time t = ti with the heart rate data at time ti to obtain data Fusion Set1;
fusing the data Fusion Set1 with the head posture data Head-t at time t = ti + Δt to obtain data Fusion Set2;
fusing the data Fusion Set2 with the limb action data Body-t at time t = ti + Δt' to obtain fusion data Fusion Set3;
and constructing Fusion Set3-t, ti ∈ [t1, tn], as the fused data over the continuous acquisition time of the minimum emotion change interval [t1, tn].
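The patent does not say how each "fusion" is computed; as one plausible reading, the staged fusion can be sketched as feature-vector concatenation (function names and the concatenation choice are our assumptions):

```python
def fuse(face_t, heart_t, head_t, body_t):
    """Staged fusion of one sample: Face-t + heart rate at ti, then Head-t at
    ti + Δt, then Body-t at ti + Δt'. Each argument is a feature list."""
    fusion_set1 = face_t + heart_t       # Fusion Set1: face ⊕ heart rate
    fusion_set2 = fusion_set1 + head_t   # Fusion Set2: ⊕ head posture
    fusion_set3 = fusion_set2 + body_t   # Fusion Set3: ⊕ limb action
    return fusion_set3

def fuse_window(samples):
    """Fusion Set3 sequence over the minimum emotion-change interval [t1, tn];
    samples is a list of (face, heart, head, body) tuples, one per ti."""
    return [fuse(*s) for s in samples]
```

The resulting sequence of Fusion Set3 vectors is what the time series model consumes as input.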
As a further preferred technical solution, the system further comprises a sample data construction module, a sample data fusion module and a model training module, wherein:
the sample data construction module is used for acquiring driving state data of a sample driver under angry stimulus sources in different driving scenes as sample data;
the sample data fusion module is used for fusing the sample data in a minimum interval of the emotion change time length to obtain training data;
and the model training module is used for taking training data and road rage grades corresponding to different angry stimulus sources as the input of the road rage grade recognition time sequence model so as to train the road rage grade recognition time sequence model and obtain the trained road rage grade recognition time sequence model.
As a further preferred technical solution, the system further comprises an emotion recognition model construction module, specifically configured to:
acquiring state data of a sample driver before angry grade data occurs by using the pre-trained road rage grade recognition time series model;
adding labels and transfer paths to state data before the angry level data occurs, wherein the labels comprise anxiety, sadness and depression;
fusing the anxiety tag data, the sad tag data and the depression tag data in a minimum interval of the mood change time length to obtain anxiety fusion data, sad fusion data and depression fusion data;
and training by utilizing the anxiety fusion data, the sadness fusion data and the depression fusion data to obtain an anxiety recognition time series model, a sadness recognition time series model and a sadness recognition time series model respectively.
As a further preferred technical solution, the system further comprises an emotional state transition path prediction module, specifically configured to:
respectively calculating the conditional probability that the anger emotion appears after different time lengths following EC, wherein EC denotes the anxiety, sadness or depression emotion appearing at time t0;
according to the conditional probability, designing an early warning trigger condition to carry out road rage risk early warning, wherein the early warning trigger condition is as follows:
if the EC1 is triggered and the duration reaches a first time length, road rage risk early warning is carried out;
if the EC2 is triggered and the duration reaches a second time length, carrying out road rage risk early warning;
if the EC3 is triggered and the duration reaches a third time length, carrying out road rage risk early warning;
wherein EC1, EC2 and EC3 respectively represent anxiety emotion, sad emotion and depressed emotion.
As a further preferred technical scheme, the system further comprises an early warning module for performing early warning according to set early warning rules, wherein the early warning rules are as follows:
if EC1 is triggered and the duration reaches 30s, warning;
if EC2 is triggered and the duration reaches 67s, warning;
if EC3 is triggered and the duration reaches 54s, an early warning is given.
And if the angry level is greater than 3 grades, early warning.
As a further preferred technical solution, the system further includes a storage medium capacity detection module, specifically configured to:
starting a storage medium capacity detection task at regular time;
and when the storage capacity of the vehicle-mounted storage equipment reaches 80%, deleting the original data acquisition record for the driver.
It should be noted that the driver road rage risk early warning system disclosed in this embodiment and the driver road rage risk early warning method disclosed in the foregoing embodiment have corresponding technical features and effects, and details are not repeated here.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A driver road rage risk early warning method is characterized by comprising the following steps:
acquiring real-time driving state data of a driver to be detected, wherein the state data comprises facial activity data, head posture data, limb action data and heart rate data;
fusing the state data in a minimum interval of the emotion change time length to obtain fused data;
and (4) taking the fusion data as the input of a pre-trained road rage grade recognition time series model to obtain the road rage risk grade of the driver.
2. The driver road rage risk early warning method according to claim 1, wherein the fusing the state data within a minimum interval of the emotion change time length to obtain fused data comprises:
respectively denoting the facial activity data, the head posture data and the limb action data as Face-t, Head-t and Body-t;
fusing the facial activity data Face-t at time t = ti with the heart rate data at time ti to obtain data Fusion Set1;
fusing the data Fusion Set1 with the head posture data Head-t at time t = ti + Δt to obtain data Fusion Set2;
fusing the data Fusion Set2 with the limb action data Body-t at time t = ti + Δt' to obtain fusion data Fusion Set3;
and constructing Fusion Set3-t, ti ∈ [t1, tn], as the fused data over the continuous acquisition time of the minimum emotion change interval [t1, tn].
3. The driver road rage risk early warning method as claimed in claim 1, wherein before the obtaining of the real-time driving state data of the driver to be tested, the method further comprises:
acquiring driving state data of a sample driver under an angry stimulus source in different driving scenes as sample data;
fusing the sample data in the minimum interval of the emotion change time length to obtain training data;
and taking training data and road rage grades corresponding to different angry stimulus sources as the input of the road rage grade recognition time series model, so as to train the road rage grade recognition time series model and obtain the trained road rage grade recognition time series model.
4. The driver road rage risk warning method as claimed in claim 3, wherein the road rage stimulus scenario sources comprise a level 1 stimulus (vehicle congestion scenario), a level 2 stimulus (vehicle being cut in on), a level 3 stimulus (vehicle being maliciously crowded), a level 4 stimulus (vehicle being maliciously obstructed from travelling) and a level 5 stimulus (driver being verbally abused by others).
5. The driver road rage risk early warning method as claimed in claim 3, wherein the road rage level recognition time series model has the parameters:
o_t = g(V·s_t)
s_t = f(U·x_t + W·s_{t-1})
where o_t denotes the output at time t, s_t denotes the hidden state at time t, x_t denotes the input at time t, V denotes the weight matrix from the hidden layer to the output layer, U denotes the weight matrix from the input layer to the hidden layer, W denotes the weight matrix applied to the previous hidden state s_{t-1}, g denotes the activation function of the output layer, and f denotes the activation function of the hidden layer.
6. The driver road rage risk warning method as claimed in any one of claims 1 to 5, further comprising:
acquiring state data of a sample driver before angry grade data occurs by using the pre-trained road rage grade recognition time series model;
adding labels and transfer paths to state data before the angry level data occurs, wherein the labels comprise anxiety, sadness and depression;
fusing the anxiety tag data, the sad tag data and the depression tag data in a minimum interval of the mood change time length to obtain anxiety fusion data, sad fusion data and depression fusion data;
and training by utilizing the anxiety fusion data, the sadness fusion data and the depression fusion data to obtain an anxiety recognition time series model, a sadness recognition time series model and a sadness recognition time series model respectively.
7. The driver road rage risk warning method as claimed in claim 6, further comprising:
respectively calculating the conditional probability that the anger emotion appears after different time lengths following EC, wherein EC denotes the anxiety, sadness or depression emotion appearing at time t0;
according to the conditional probability, designing an early warning trigger condition to carry out road rage risk early warning, wherein the early warning trigger condition is as follows:
if the EC1 is triggered and the duration reaches a first time length, road rage risk early warning is carried out;
if the EC2 is triggered and the duration reaches a second time length, carrying out road rage risk early warning;
if the EC3 is triggered and the duration reaches a third time length, carrying out road rage risk early warning;
wherein EC1, EC2 and EC3 respectively represent anxiety emotion, sad emotion and depressed emotion.
8. The driver road rage risk warning method as claimed in claim 7, wherein the road rage risk level comprises 5 grades, and when the road rage risk level of the driver is greater than 3 grades, the road rage risk warning is performed.
9. The driver road rage risk early warning method as set forth in claim 1, further comprising:
starting a storage medium capacity detection task at regular time;
and when the storage capacity of the vehicle-mounted storage equipment reaches 80%, deleting the original data acquisition record for the driver.
10. A driver road rage risk early warning system, characterized by comprising a data acquisition module, a data fusion module and a road rage risk calculation module, wherein:
the data acquisition module is used for acquiring real-time driving state data of a driver to be detected, wherein the state data comprises facial activity data, head posture data, limb action data and heart rate data;
the data fusion module is used for fusing the state data in a minimum interval of the emotion change time length to obtain fused data;
and the road rage risk calculation module is used for taking the fusion data as the input of a pre-trained road rage grade recognition time sequence model to obtain the road rage risk grade of the driver.
CN202110388686.6A 2021-04-12 2021-04-12 Driver road rage risk early warning method and system Active CN113191212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110388686.6A CN113191212B (en) 2021-04-12 2021-04-12 Driver road rage risk early warning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110388686.6A CN113191212B (en) 2021-04-12 2021-04-12 Driver road rage risk early warning method and system

Publications (2)

Publication Number Publication Date
CN113191212A true CN113191212A (en) 2021-07-30
CN113191212B CN113191212B (en) 2022-06-07

Family

ID=76975357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110388686.6A Active CN113191212B (en) 2021-04-12 2021-04-12 Driver road rage risk early warning method and system

Country Status (1)

Country Link
CN (1) CN113191212B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113715833A (en) * 2021-09-09 2021-11-30 重庆金康赛力斯新能源汽车设计院有限公司 Road rage preventing method, device and system
CN113997939A (en) * 2021-11-08 2022-02-01 清华大学 Road rage detection method and device for driver
CN116035564A (en) * 2022-12-06 2023-05-02 北京顺源辰辰科技发展有限公司 Dysphagia and aspiration intelligent detection method and device and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140196529A1 (en) * 2009-12-31 2014-07-17 John Edward Cronin System and method for sensing and managing pothole location and pothole characteristics
CN103956028A (en) * 2014-04-23 2014-07-30 山东大学 Automobile multielement driving safety protection method
CN105740767A (en) * 2016-01-22 2016-07-06 江苏大学 Driver road rage real-time identification and warning method based on facial features
CN107822623A (en) * 2017-10-11 2018-03-23 燕山大学 A kind of driver fatigue and Expression and Action method based on multi-source physiologic information
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information
CN109299253A (en) * 2018-09-03 2019-02-01 华南理工大学 A kind of social text Emotion identification model construction method of Chinese based on depth integration neural network
CN109993093A (en) * 2019-03-25 2019-07-09 山东大学 Road anger monitoring method, system, equipment and medium based on face and respiratory characteristic
CN110751381A (en) * 2019-09-30 2020-02-04 东南大学 Road rage vehicle risk assessment and prevention and control method
US20200242421A1 (en) * 2019-01-30 2020-07-30 Cobalt Industries Inc. Multi-sensor data fusion for automotive systems

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140196529A1 (en) * 2009-12-31 2014-07-17 John Edward Cronin System and method for sensing and managing pothole location and pothole characteristics
CN103956028A (en) * 2014-04-23 2014-07-30 山东大学 Automobile multielement driving safety protection method
CN105740767A (en) * 2016-01-22 2016-07-06 江苏大学 Driver road rage real-time identification and warning method based on facial features
CN107822623A (en) * 2017-10-11 2018-03-23 燕山大学 A kind of driver fatigue and Expression and Action method based on multi-source physiologic information
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information
CN109299253A (en) * 2018-09-03 2019-02-01 华南理工大学 A kind of social text Emotion identification model construction method of Chinese based on depth integration neural network
US20200242421A1 (en) * 2019-01-30 2020-07-30 Cobalt Industries Inc. Multi-sensor data fusion for automotive systems
CN109993093A (en) * 2019-03-25 2019-07-09 山东大学 Road anger monitoring method, system, equipment and medium based on face and respiratory characteristic
CN110751381A (en) * 2019-09-30 2020-02-04 东南大学 Road rage vehicle risk assessment and prevention and control method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FLORIAN EYBEN et al.: "Emotion on the road—necessity, acceptance, and feasibility of affective computing in the car", ADVANCES IN HUMAN-COMPUTER INTERACTION *
WAN PING: "Research on Driving Anger Identification Method Based on Information Fusion", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II) *
YU SHENHAO: "Research on Road Rage Emotion Recognition Based on Deep Learning and Information Fusion", China Master's Theses Full-text Database (Information Science and Technology) *
XU HAN: "Big Data, Artificial Intelligence and Network Public Opinion Governance", 31 October 2018 *


Also Published As

Publication number Publication date
CN113191212B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN113191212B (en) Driver road rage risk early warning method and system
CN108664932B (en) Learning emotional state identification method based on multi-source information fusion
CN101583313B (en) Awake state judging model making device, awake state judging device, and warning device
Vhaduri et al. Estimating drivers' stress from GPS traces
CN105877766A (en) Mental state detection system and method based on multiple physiological signal fusion
CN110796207A (en) Fatigue driving detection method and system
Ye et al. A combined motion-audio school bullying detection algorithm
Bakhtiyari et al. Fuzzy model on human emotions recognition
CN111199205A (en) Vehicle-mounted voice interaction experience evaluation method, device, equipment and storage medium
CN112233800B (en) Disease prediction system based on abnormal behaviors of children
El Masri et al. Toward self-policing: Detecting drunk driving behaviors through sampling CAN bus data
Zhao et al. Research on fatigue detection based on visual features
Parab et al. Stress and emotion analysis using IoT and deep learning
CN114626818A (en) Big data-based sentry mood comprehensive evaluation method
Saruchi et al. Modeling of occupant’s head movement behavior in motion sickness study via time delay neural network
CN111724896B (en) Drug addiction evaluation system based on multi-stimulus image or video ERP
Yu et al. A LSTM network-based learners’ monitoring model for academic self-efficacy evaluation using EEG signal analysis
Wei et al. A driver distraction detection method based on convolutional neural network
Wang Campus intelligence mental health searching system based on face recognition technology
CN115905977A (en) System and method for monitoring negative emotion in family sibling interaction process
CN112948554B (en) Real-time multi-mode dialogue emotion analysis method based on reinforcement learning and domain knowledge
Miyajima et al. Behavior signal processing for vehicle applications
CN114938958A (en) Driver emotion recognition method and system based on smart bracelet and thermal infrared camera
Salous et al. Visual and memory-based hci obstacles: Behaviour-based detection and user interface adaptations analysis
Nor et al. Pre-post accident analysis relates to pre-cursor emotion for driver behavior understanding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220209

Address after: 230000 No. 5089, high tech Zone, Hefei, Anhui

Applicant after: Hefei zhongjuyuan Intelligent Technology Co.,Ltd.

Address before: No. 5089, Wangjiang West Road, Hefei City, Anhui Province, 230000, b1205-b1208, future center, Institute of advanced technology, University of science and technology of China

Applicant before: Artificial Intelligence Research Institute of Hefei comprehensive national science center (Artificial Intelligence Laboratory of Anhui Province)

GR01 Patent grant
GR01 Patent grant