WO2008127465A1 - Real-time driving danger level prediction - Google Patents

Real-time driving danger level prediction

Info

Publication number
WO2008127465A1
Authority
WO
WIPO (PCT)
Prior art keywords
driver
danger
driving
driving danger
learning
Prior art date
Application number
PCT/US2007/087337
Other languages
French (fr)
Inventor
Jinjun Wang
Wei Xu
Yihong Gong
Original Assignee
Nec Laboratories America, Inc.
Priority date
Filing date
Publication date
Application filed by Nec Laboratories America, Inc. filed Critical Nec Laboratories America, Inc.
Publication of WO2008127465A1 publication Critical patent/WO2008127465A1/en

Classifications

    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W40/09 Driving style or behaviour
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • B60W2050/146 Display means
    • B60W2520/10 Longitudinal speed
    • B60W2520/105 Longitudinal acceleration
    • B60W2540/18 Steering angle
    • B60W2540/22 Psychological state; Stress level or workload
    • B60W2540/221 Physiology, e.g. weight, heartbeat, health or special needs

Definitions

  • the present invention relates to driving danger prediction.
  • the first is to monitor the drivers' visual behavior using remote camera(s) and apply computer vision techniques to extract features that are correlated to their fatigue state. For example, the driver's head pose and face direction were recognized from multiple cameras using 3D stereo matching or from a single camera using template matching. In one head/eye tracking system, a single camera monitors the driver's drowsiness level. To cope with different lighting conditions, infrared LEDs are used for illumination. To reduce uncertainty or ambiguity from a single visual cue, multiple visual features could be utilized to improve accuracy and reliability.
  • the task of predicting current driving danger level can be regarded as an anomaly detection problem.
  • Anomaly detection has many important real-world applications, ranging from security, finance, biology, manufacturing and astrophysics, each domain with a huge volume of literature.
  • the rule-based methods can be used where any violation of the rule(s) is regarded as an anomaly.
  • a complex rule-based approach has been used to characterize the anomalous pattern for disease outbreak detection.
  • Each rule is carefully evaluated using Fisher's Exact Test and a randomization test.
  • For more complex anomaly detection tasks such as the driver danger level prediction addressed here, defining rules becomes extremely difficult.
  • Hence many other studies have applied statistical modeling methods for anomaly detection.
  • the Fisher projection and linear classifier can model the low/medium/high stress level using physiological features.
  • newly arriving data was classified using the Bayesian approach.
  • a two-category classifier using SVM classifies the incoming data as normal or anomalous.
  • these methods overlooked the spatial correlation between features.
  • the Bayesian Network can fuse different features for inference.
  • Systems and methods are disclosed to predict driving danger by capturing vehicle dynamic parameter, driver physiological data and driver behavior feature; applying a learning algorithm to the features; and predicting driving danger.
  • Implementations of the above systems and methods may include one or more of the following.
  • the learning algorithm includes one of: Hidden Markov Model, Conditional Random Field and Reinforcement Learning.
  • the vehicle dynamic parameter includes one or more of: driver's lateral lane position, steering wheel angle, longitudinal acceleration, longitudinal velocity, distance between vehicles.
  • the driver's physiological data includes one or more of: respiration, heart rate, blood volume, skin temperature, skin conductance.
  • the driver behavior feature can be a PERCLOSE feature.
  • the driver behavior feature can capture fatigue, vision, distraction.
  • the method includes training the learning algorithm and performing off line cross-validation.
  • the system can predict driving danger in real time.
  • the system can communicate a reason that caused it to predict driving danger to help a user understand the risk(s).
  • the system can use one or more features to dynamically monitor the vehicle and the driver during driving, specifically the vehicle dynamic parameters, the driver's physiological data and the driver's behavior.
  • the system uses the vehicle dynamic parameter features which serve as a highly informative feature for driving danger level prediction in a real-time system.
  • sequential learning algorithms such as Hidden Markov Model, Conditional Random Field, and Reinforcement Learning can be used, with the Reinforcement Learning based method using a non-linear value function achieving the best results during cross-validation.
  • the resulting live danger level prediction system gives real-time danger prediction for the driver to prevent a set of potential risks, including speed exceedance, sudden acceleration/deceleration/turning, going off-road, and crashes with cars or pedestrians, among others.
  • the real time danger prediction provides an automated driving assistant, leading to a safer driving environment.
  • FIG. 1 shows one exemplary process for a driving danger level prediction system.
  • FIG. 2 shows an exemplary danger level curve generated using a Hidden Markov Model method.
  • FIG. 3 shows an exemplary danger level curve generated using a Reinforcement Learning method.
  • FIG. 4 shows prediction performance using Vehicle's Dynamic parameters where each feature is extracted from 5 seconds long raw input.
  • FIG. 5 shows a prediction performance using Vehicle's Dynamic parameters where each feature is extracted from 15 seconds long raw input.
  • FIG. 6 shows a prediction performance using Vehicle's Dynamic parameters and driver's physiological data where each feature is extracted from 5 seconds long raw input.
  • FIG. 7 shows a prediction performance using Vehicle's Dynamic parameters and driver's physiological data where each feature is extracted from 15 seconds long raw input.
  • FIG. 8 shows prediction performance using Vehicle's Dynamic parameters, driver's physiological data and driver's visual behavior feature where each feature is extracted from 5 seconds long raw input.
  • FIG. 9 shows prediction performance using Vehicle's Dynamic parameters, driver's physiological data and driver's visual behavior feature where each feature is extracted from 15 seconds long raw input.
  • FIG. 10 shows an exemplary user interface for communicating dangerous driving conditions.
  • FIG. 1 shows one exemplary process for a driving danger level prediction system.
  • the process collects driving condition data from a plurality of sensor inputs (10).
  • the system uses multiple sensor inputs and statistical modeling to predict the driving risk level.
  • three types of features were collected, specifically the vehicle dynamic parameters, the driver's physiological data and the driver's behavior feature.
  • one or more sequential supervised learning algorithms are used, including Hidden Markov Model, Conditional Random Field and Reinforcement Learning (20).
  • Reinforcement Learning based method with the vehicle dynamic parameters feature is used to predict risk level.
  • Reinforcement Learning is used with the other two features to further improve the prediction accuracy.
  • alarms can be generated to help the driver recover from the danger (30).
  • vehicle dynamic parameters feature are applied to the Reinforcement Learning module.
  • the system analyzes the sensor readings and outputs a numerical danger level value in real-time.
  • when a predefined threshold based on training data is exceeded, an acoustic warning is sent to the driver to prevent potential driving risks.
  • the live system is non-intrusive to the driver, and hence highly desirable for driving danger prevention applications.
  • the system consists of three major modules: 1) The data acquisition module that captures the vehicle's dynamic parameters and driver behaviors in real-time; 2) The feature extraction that converts the raw sensor readings into defined statistical features as described above; and 3) The danger level prediction module that uses the Reinforcement Learning algorithm to generate a numerical danger level score. The score is used to trigger the warning interface if a predefined threshold learned from training samples is exceeded.
  • the system captures the driver's physiological data, the driver's visual behavior and the vehicle's dynamic state.
  • the Driver's Physiological Data F 1
  • the driver's behavior is a critical risk-increasing factor.
  • a driver's behavior is affected by conditions such as fatigue, poor vision and major distraction, and this is reflected in the variation of his/her physiological data.
  • the physiological data provides the most accurate technique for monitoring the driver's vigilance level.
  • a physiological sensing system called "FlexComp Infiniti" from Thought Technology connects five sensors to the driver and records the sensor readings at the terminal in a continuous way without interrupting the user (FIG. 1).
  • each sample from FlexComp Infiniti is denoted as F1, which is an R^9 column vector. Table 1 lists the physical meanings of each exemplary dimension of F1.
  • the percentage of eye closure feature is used because, firstly, it is closely related to the driver's physiological state. For example, when people begin to drowse, their eye-blinks slow down, there are fewer of them, and their eyes stay closed for a longer time. Secondly and more importantly, with certain equipment, the PERCLOSE feature can be extracted very reliably.
  • the "Eye Alert Fatigue Warning System" from EyeAlert, Inc. is used to collect the PERCLOSE feature.
  • the extracted PERCLOSE feature can be denoted as F2, which is an R^6 column vector. Table 2 gives the meaning of each dimension of F2.
  • the third set of features collected is the vehicle's dynamic parameters (F3), including speed, acceleration/deceleration, steering angle, lane position and other physical data from the vehicle.
  • the advantages of using the vehicle's dynamic data are firstly, collecting vehicle dynamic data is non-intrusive to the driver, and secondly, the dynamic data is a direct reflection of the vehicle's state, hence it's more sensitive to the change of driving danger level due to either the changes of driver's physiological state or the vehicle/environment condition.
  • the vehicle dynamic parameters are collected from a driving simulator called "STISIM" by Systems Technology, Inc.
  • STISIM is a PC based interactive driving simulator that allows the user to control all aspects of driving such as throttling, braking and steering.
  • the whole system includes a computer with the STISIM software, a projector displaying the driving scenarios, a steering wheel, and brake and throttle pedals.
  • the driving scenario including weather, road condition, traffic light/sign, pedestrian, buildings and so on, was carefully designed to make the simulation as close to reality as possible.
  • "STISIM" outputs the vehicle's dynamic parameters simultaneously.
  • the set of features can be denoted as F 3 , and Table 3 lists the 7 selected parameters.
  • F1, F2 and F3 data are all time-stamped for synchronization, but they have different sample rates (32, 3 and 30 samples per second, respectively). In addition, dropped samples are detected.
  • to synchronize them, the following statistical features are derived using a fixed sliding window size Tw and step size Ts:
  • F = [max(Fn), min(Fn), mean(Fn), variance(Fn)], and n can be any combination of 1, 2 and 3.
  • the max , min , mean and variance operators measure the corresponding statistics over all the samples in the window.
  • the next system module uses sequential supervised learning algorithms to mine the specific patterns for the driving risk prediction task.
  • Sequential Supervised Learning process will be discussed.
  • the problem of discovering feature patterns that result in safe/dangerous driving from continuous sensor readings can be regarded as a supervised learning problem.
  • any dangerous situation e.g. crash
  • any dangerous situation is caused by a sequence of actions rather than a single action.
  • the danger level prediction problem can be better modeled as a sequential supervised learning problem.
  • There are many algorithms that are suitable for the problem such as Recurrent Sliding Windows, Maximum Entropy Markov Models, etc. Three algorithms: Hidden Markov Model, Conditional Random Field, and Reinforcement Learning, have been used.
  • HMM Hidden Markov Model
  • b_q(x) is modeled by a Gaussian Mixture Model.
  • the whole input feature vector sequence is segmented into smaller sequences (frames) with fixed length and step size.
  • each frame of features is fed to both the "safe" and "crashed" HMMs.
  • the danger level DL at time t is selected to be the logarithm likelihood of the frame being generated by the "crashed" HMM over that generated by the "safe” HMM, which is computed as follows:
  • FIG. 2 shows a computed danger level curve for a 21 minutes long sequence. There are 7 computed dangerous points as indicated by red circles. When playing back the driving session, it is found that 4 out of the 7 points are crashes, and the rest are also dangerous situations such as sudden braking, getting close to a pedestrian, etc.
  • each observed x_n in HMM is only conditioned on the state q_n, and the transition probability of states p(Q | Θ) is independent of the observation X.
  • HMM imposes strong assumptions on the independence amongst the observed features x , which the collected features for danger level prediction may not follow.
  • MEMM Maximum Entropy Markov Model
  • IOHMM Input/Output HMM
  • CRF Conditional Random Field
  • the CRF computes the conditional probability p(Q | X, Θ) according to
  • M_n(x_n) is the (N + 2) × (N + 2) matrix of potentials for all possible pairs of labels for q_{n-1} and q_n, such that the normalizer becomes a necessary term to make p(Q | X, Θ) a probability score.
  • the entire feature sequences are fed to the trained CRF model, and a probability score of each feature vector being in either of the two states is computed. Then, similar to the HMM based method, a numerical danger level score for each x can be computed as
  • FIG. 3 gives a computed danger level curve for the same sequence as that used in FIG. 2.
  • algorithms based on iterative scaling and gradient descent have been developed both for optimizing p(Q | X) and also for separately optimizing p(q_n | x_n) for loss functions that depend only on the individual labels.
  • RL Reinforcement learning
  • a penalty negative value
  • a reward positive value
  • the RL propagates the penalty/reward in the feature space by trial-and-error interactions along these trajectories, and thus the obtained value function has values in the entire feature space.
  • the value function converts a feature vector into a penalty value, which can be used as the danger-level indicator.
  • the RL usually involves two major tasks: how to select an approximation architecture to represent the value function, and how to train the parameters for the selected architecture.
  • the value function can be simply represented by a look-up table and a training algorithm approximates the function by iteratively updating the table.
  • the above least square problem can be solved by an incremental gradient method.
  • one trajectory is considered for each iteration, and ⁇ is updated iteratively by
  • the temporal difference provides an indication as to whether or how much the estimate ⁇ should be increased or decreased.
  • d_s is multiplied by a weight λ^{s-n}, 0 < λ < 1, to decrease the influence of farther temporal differences on ∇_Θ DL(x_n^i, Θ).
  • both a linear and a non-linear form for DL(x_n, Θ) are considered.
  • a linear danger level function
  • for the linear form, any random initialization of Θ is applicable because convergence is guaranteed. However, for the non-linear form, the initialization is crucial to the quality of the function approximation and even to the convergence.
  • the Θ for the non-linear value function includes the center μ and the weight a of each RBF as well as the constant β. As all the trajectories for the dangerous training sequences sink to the crash state, an intuitive choice of μ could be the cluster centers of all the dangerous training samples.
  • FIG. 3 gives a generated danger level curve for the same sequence used in FIG. 2. More dangerous situations were identified in FIG. 3 as compared to FIG. 2, and the curve is more dynamic, which shows that the RL based method is more sensitive to the input.
  • Figures 4-9 show additional prediction performance using various combinations of data features.
  • the system generated a numerical danger-level value for every time instance rather than a binary danger/safety classification only. Although this is very desirable for a live prediction system, it is difficult to obtain ground truth labels for every time instance based on features such as speed and respiration rate.
  • the system collected sequences that ended with crashes as danger samples, and the rest as safe driving samples. All the sequences have the same length (60 seconds in the current setup). Note that such a scheme would introduce noise into the safe sample sequences, because dangerous driving patterns that did not result in a crash might be selected as safe samples. Hence more safe sequences are collected than crash samples to reduce the influence of such noisy sequences.
  • t_p ∩ t_r is the total amount of time for which the predicted danger time is really dangerous.
  • FIGS. 4-9 show exemplary prediction performance using one or more sensor inputs.
  • FIG. 4 shows prediction performance using Vehicle's Dynamic parameters (Each feature is extracted from 5 seconds long raw input).
  • driving danger prediction using only the vehicle dynamic parameters can achieve satisfactory accuracy, and the additional driver's physiological data and behavior feature improve the performance only to a limited extent. Due to the intrusive nature of the driver's physiological data and the large computational expense to achieve accurate driver's visual behavior measurement, the vehicle's dynamic parameter feature is more desirable for driving risk prevention applications. Of the three danger level prediction methods, the Reinforcement Learning algorithm achieves the best performance.
  • FIG. 5 shows a prediction performance using Vehicle's Dynamic parameters (Each feature is extracted from 15 seconds long raw input).
  • FIG. 6 shows a prediction performance using Vehicle's Dynamic parameters and driver's physiological data (Each feature is extracted from 5 seconds long raw input).
  • FIG. 7 shows a prediction performance using Vehicle's Dynamic parameters and driver's physiological data (Each feature is extracted from 15 seconds long raw input).
  • FIG. 8 shows prediction performance using Vehicle's Dynamic parameters, driver's physiological data and driver's visual behavior feature (Each feature is extracted from 5 seconds long raw input).
  • FIG. 9 shows prediction performance using Vehicle's Dynamic parameters, driver's physiological data and driver's visual behavior feature (Each feature is extracted from 15 seconds long raw input).
  • FIG. 10 shows a prototype system (the right screen) which works as an add-on to the "STISIM" simulator (the left screen). 11 participants were invited to operate the system. Results show that the system can accurately predict driving risks due to events such as sharp turning, sudden acceleration/deceleration, continuous weaving and approaching objects. It is sensitive to changes in the vehicle's condition resulting from the driver's emotional state change, e.g. fatigue, or from road condition changes, e.g. windy and slippery roads. However, if the participant wants to crash the vehicle on purpose, such as suddenly turning into the opposite lane to hit the incoming vehicle, the system won't generate an alarm because these types of dangerous situations do not occur in the collected training samples.
  • the selected non-linear value function for the RL algorithm has 5 RBFs. Their trained weights are {4.0732, 1.7731, 2.8959, 0.9888, -5.0044} respectively. It can be seen that only the 5th RBF has a negative weight value, and hence the closer a feature vector is to the 5th RBF's center, the more dangerous it is.
  • any feature vector that is close to the 1st RBF's center represents safe, as the 1st RBF has the greatest positive weight value.
  • the dimension that differs most between the 1st and 5th RBFs' centers is the most distinguishing feature.
  • Table 4 lists the top-10 features that differ most between safe and crash.
  • the driver's physiological data, the driver's visual behavior and the vehicle's dynamic parameter features can be used for driving risk prediction by analytic engines such as the Hidden Markov Model, the Conditional Random Field and the Reinforcement Learning (RL) algorithm, including the RL algorithm with a non-linear value function.
  • while a real-time driving danger level prediction system has been discussed above, the inventors contemplate that other systems can be added, including a risk reason analysis method for the driver when a potential driving risk has been predicted.
  • the system can incorporate the driver's visual behavior based features to further improve performance.
  • the system can be applied to larger data sets so that more safe/dangerous driving patterns can be modeled. These features improve the reliability of the predictive system and maximize the users' confidence in the driving risk prediction system.
  • the invention may be implemented in hardware, firmware or software, or a combination of the three.
  • the invention is implemented in a computer program executed on a programmable computer having a processor, a data storage system, volatile and non-volatile memory and/or storage elements, at least one input device and at least one output device.
  • the computer preferably includes a processor, random access memory (RAM), a program memory (preferably a writable read-only memory (ROM) such as a flash ROM) and an input/output (I/O) controller coupled by a CPU bus.
  • RAM random access memory
  • program memory preferably a writable read-only memory (ROM) such as a flash ROM
  • I/O controller coupled by a CPU bus.
  • the computer may optionally include a hard drive controller which is coupled to a hard disk and CPU bus. Hard disk may be used for storing application programs, such as the present invention, and data. Alternatively, application programs may be stored in RAM or ROM.
  • I/O controller is coupled by means of an I/O bus to an I/O interface.
  • I/O interface receives and transmits data in analog or digital form over communication links such as a serial link, local area network, wireless link, and parallel link.
  • a display, a keyboard and a pointing device may also be connected to I/O bus.
  • separate connections may be used for I/O interface, display, keyboard and pointing device.
  • Programmable processing system may be preprogrammed or it may be programmed (and reprogrammed) by downloading a program from another source (e.g., a floppy disk, CD-ROM, or another computer).
  • Each computer program is tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein.
  • the inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems and methods are disclosed to predict driving danger by capturing vehicle dynamic parameter, driver physiological data and driver behavior feature; applying a learning algorithm to the features; and predicting driving danger.

Description

REAL-TIME DRIVING DANGER LEVEL PREDICTION
The present application claims priority to Provisional Application Serial No. 60/911,092, filed 4/11/2007, the content of which is incorporated by reference.
The present invention relates to driving danger prediction.
BACKGROUND
The availability of on-board electronics and in-vehicle information systems has demanded the development of more intelligent vehicles. One important capability is the ability to evaluate the driving danger level to prevent potential driving risks. Although protocols to measure the driver's workload have been developed by both the government and industry, such as eye-glance and on-road metrics, they have been criticized as too costly and difficult to obtain. In addition, existing uniform heuristics, such as the 15-Second rule for Total Risk Time, do not account for changes in the individual driver's and vehicle's environment. Hence understanding the driver's and the vehicle's frustration to prevent potential driving risks has been listed by many international companies as one of the key areas for improving intelligent transportation systems.
In the past decades, most reported work sought to discover effective physiological and bio-behavioral measures to detect the diminished driver vigilance level due to stress, fatigue or drowsiness to prevent potential risks. The most accurate techniques for monitoring human vigilance level are based on physiological features like brain waves, heart rate, blood volume pulse and respiration. Examples based on physiological measures include the ASV (Advanced Safety Vehicle) system and the SmartCar project from MIT. However, acquiring physiological data is intrusive because some electrodes or sensors must be attached to the drivers, which causes annoyance to them. For example, to obtain the electroencephalograph (EEG) signal for the "Mind Switch" technique, a head-band device embedded with electrodes must be worn to make contact with the driver's scalp so as to measure the brain waves. Good results have also been reported with techniques that monitor pupil response, eye blinking/closure/gaze and eyelid/face/head movement using head-mounted devices. These techniques, though less intrusive, are still not practically acceptable.
To develop a non-intrusive driving risk monitoring and alerting system, two sets of features are available. The first is to monitor the drivers' visual behavior using remote camera(s) and apply computer vision techniques to extract features that are correlated to their fatigue state. For example, the driver's head pose and face direction were recognized from multiple cameras using 3D stereo matching or from a single camera using template matching. In one head/eye tracking system, a single camera monitors the driver's drowsiness level. To cope with different lighting conditions, infrared LEDs are used for illumination. To reduce uncertainty or ambiguity from a single visual cue, multiple visual features could be utilized to improve accuracy and reliability.
However, systems relying on visual cues may exhibit difficulty when the required visual features cannot be acquired accurately or reliably. For example, drivers with sunglasses could pose serious problems to those techniques based on detecting eye characteristics. Although multiple visual cues can be combined systematically, how to select a suitable model to fuse these features to improve the overall accuracy remains challenging. Hence another set of non-intrusive features based on the vehicle's dynamic state has been examined, such as lateral position, steering wheel movement and throttle acceleration/brake deceleration. In fact, the vehicle's dynamic state is a direct reflection of the state of the driving, while research focusing on modeling driver vigilance has assumed a close correlation between fatigue/stress and driving danger. Hence many researchers used this set of features for driver safety monitoring. Some important examples include the Spanish TCD (Tech. Co. Driver) project and the ASV system. However, although the extraction of these vehicle dynamic parameters can be blind to the driver, it is argued that their quality is subject to limitations such as the vehicle type, driver experience, geometric characteristics and state of the road.
On the other hand, from a pattern recognition point of view, the task of predicting the current driving danger level can be regarded as an anomaly detection problem. Anomaly detection has many important real-world applications, ranging from security, finance, biology and manufacturing to astrophysics, each domain with a huge volume of literature. To detect anomalies in simple scenarios, rule-based methods can be used where any violation of the rule(s) is regarded as an anomaly. For example, a complex rule-based approach has been used to characterize the anomalous pattern for disease outbreak detection. Each rule is carefully evaluated using Fisher's Exact Test and a randomization test. For a more complex anomaly detection task such as the driver danger level prediction addressed here, defining rules becomes extremely difficult. Hence many other studies applied statistical modeling methods for anomaly detection. For example, the Fisher projection and a linear classifier can model the low/medium/high stress level using physiological features. Newly arriving data was classified using the Bayesian approach. In another example, a two-category classifier using SVM classifies the incoming data as normal or anomalous. However, these methods overlooked the spatial correlation between features. To cope with this limitation, the Bayesian Network can fuse different features for inference.
SUMMARY
Systems and methods are disclosed to predict driving danger by capturing vehicle dynamic parameter, driver physiological data and driver behavior feature; applying a learning algorithm to the features; and predicting driving danger.
Implementations of the above systems and methods may include one or more of the following. The learning algorithm includes one of: Hidden Markov Model, Conditional Random Field and Reinforcement Learning. The vehicle dynamic parameter includes one or more of: driver's lateral lane position, steering wheel angle, longitudinal acceleration, longitudinal velocity, distance between vehicles. The driver's physiological data includes one or more of: respiration, heart rate, blood volume, skin temperature, skin conductance. The driver behavior feature can be a PERCLOSE feature. The driver behavior feature can capture fatigue, vision, distraction. The method includes training the learning algorithm and performing offline cross-validation. The system can predict driving danger in real time. The system can communicate a reason that caused it to predict driving danger to help a user understand the risk(s).
Advantages of the above systems and methods may include one or more of the following. The system can use one or more features to dynamically monitor the vehicle and the driver during driving, specifically the vehicle dynamic parameters, the driver's physiological data and the driver's behavior. The system uses the vehicle dynamic parameter features, which serve as a highly informative feature for driving danger level prediction in a real-time system. To discover the temporal patterns that lead to safe/dangerous driving situations, sequential learning algorithms such as Hidden Markov Model, Conditional Random Field, and Reinforcement Learning can be used, with the Reinforcement Learning based method using a non-linear value function achieving the best results during cross-validation. The resulting live danger level prediction system gives real-time danger prediction for the driver to prevent a set of potential risks, including speed exceedance, sudden acceleration/deceleration/turning, going off-road, and crashes with cars or pedestrians, among others. The real time danger prediction provides an automated driving assistant, leading to a safer driving environment.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows one exemplary process for a driving danger level prediction system.
FIG. 2 shows an exemplary danger level curve generated using a Hidden Markov Model method.
FIG. 3 shows an exemplary danger level curve generated using a Reinforcement Learning method.
FIG. 4 shows prediction performance using Vehicle's Dynamic parameters where each feature is extracted from 5 seconds long raw input.
FIG. 5 shows a prediction performance using Vehicle's Dynamic parameters where each feature is extracted from 15 seconds long raw input.
FIG. 6 shows a prediction performance using Vehicle's Dynamic parameters and driver's physiological data where each feature is extracted from 5 seconds long raw input.
FIG. 7 shows a prediction performance using Vehicle's Dynamic parameters and driver's physiological data where each feature is extracted from 15 seconds long raw input.
FIG. 8 shows prediction performance using Vehicle's Dynamic parameters, driver's physiological data and driver's visual behavior feature where each feature is extracted from 5 seconds long raw input.
FIG. 9 shows prediction performance using Vehicle's Dynamic parameters, driver's physiological data and driver's visual behavior feature where each feature is extracted from 15 seconds long raw input.
FIG. 10 shows an exemplary user interface for communicating dangerous driving conditions.
DESCRIPTION
FIG. 1 shows one exemplary process for a driving danger level prediction system. First, the process collects driving condition data from a plurality of sensor inputs (10). The system uses multiple sensor inputs and statistical modeling to predict the driving risk level. In one implementation, three types of features were collected, specifically the vehicle dynamic parameters, the driver's physiological data and the driver's behavior feature. Next, to model the temporal patterns that lead to a safe/dangerous driving state, one or more sequential supervised learning algorithms are used, including Hidden Markov Model, Conditional Random Field and Reinforcement Learning (20). In the preferred embodiment, a Reinforcement Learning based method with the vehicle dynamic parameters feature is used to predict the risk level. In another embodiment, Reinforcement Learning is used with the other two features to further improve the prediction accuracy. Finally, if the process detects dangerous driving conditions, alarms can be generated to help the driver recover from the danger (30).
In a live driving danger level prediction embodiment, the vehicle dynamic parameters feature is applied to the Reinforcement Learning module. The system analyzes the sensor readings and outputs a numerical danger level value in real-time. When a predefined threshold based on training data is exceeded, an acoustic warning is sent to the driver to prevent potential driving risks. Compared to many previous studies that focused on monitoring the driver's vigilance level to infer the possibility of potential driving risk, the live system is non-intrusive to the driver, and hence highly desirable for driving danger prevention applications. In this driving prediction embodiment, the system consists of three major modules: 1) the data acquisition module that captures the vehicle's dynamic parameters and driver behaviors in real-time; 2) the feature extraction module that converts the raw sensor readings into defined statistical features as described above; and 3) the danger level prediction module that uses the Reinforcement Learning algorithm to generate a numerical danger level score. The score is used to trigger the warning interface if a predefined threshold learned from training samples is exceeded.
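The following minimal Python sketch illustrates how these three modules could be wired together in a live loop. The helper names (read_vehicle_dynamics, danger_level), the window and step settings, and the warning hook are hypothetical placeholders rather than part of the disclosed system; danger_level stands in for the trained Reinforcement Learning value function.

```python
import time
import numpy as np

def extract_statistics(window: np.ndarray) -> np.ndarray:
    """Windowed statistics [max, min, mean, variance] for each raw dimension."""
    return np.concatenate([window.max(axis=0), window.min(axis=0),
                           window.mean(axis=0), window.var(axis=0)])

def monitor(read_vehicle_dynamics, danger_level, threshold,
            window_sec=5.0, step_sec=1.0, sample_rate=30):
    """Live loop: 1) acquire raw F3 samples, 2) convert them into statistical
    features, 3) score them and warn when the learned threshold is crossed."""
    buf = []
    win = int(window_sec * sample_rate)
    step = int(step_sec * sample_rate)
    while True:
        buf.extend(read_vehicle_dynamics())            # newest raw samples, shape [k, D]
        if len(buf) >= win:
            x = extract_statistics(np.asarray(buf[-win:]))
            dl = danger_level(x)                        # numerical danger level score
            if dl < threshold:                          # the danger level drops before danger
                print(f"WARNING: predicted driving danger (DL = {dl:.3f})")
            del buf[:step]                              # advance the sliding window
        time.sleep(step_sec)
```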
The system captures the driver's physiological data, the driver's visual behavior and the vehicle's dynamic state. With respect to the Driver's Physiological Data (F1), although a driver's knowledge, skill, perceptual and cognitive abilities are almost constant during any driving session, the driver's behavior is a critical risk-increasing factor. A driver's behavior is affected by conditions such as fatigue, poor vision and major distraction, and this is reflected in the variation of his/her physiological data. Hence, the physiological data provides the most accurate technique for monitoring the driver's vigilance level. In one embodiment, a physiological sensing system called "FlexComp Infiniti" from Thought Technology connects five sensors to the driver and records the sensor readings at the terminal in a continuous way without interrupting the user (FIG. 1). Each sample from FlexComp Infiniti is denoted as F1, which is an R^9 column vector. Table 1 lists the physical meanings of each exemplary dimension of F1.
Table 1: Driver's Physiological Data (F1)
As to the Driver's Visual Behavior (F2), although the driver's physiological data gives important information about the wearer's drowsiness, fatigue, emotional and other states, its acquisition is intrusive to the wearer. Hence people have been searching for other physiological indicators that can be collected non-intrusively. Along this line, there is much research using computer vision based techniques to analyze the drivers' visual behavior to infer their physiological state. Reported visual behavior features include head orientation, eye movement, eyelid closure rate, etc.
In one embodiment, the percentage of eye closure feature (PERCLOSE) is used because, firstly, it is closely related to the driver's physiological state. For example, when people begin to drowse, their eye-blinks slow down, there are fewer of them, and their eyes stay closed for a longer time. Secondly and more importantly, with certain equipment, the PERCLOSE feature can be extracted very reliably. In one embodiment, the "Eye Alert Fatigue Warning System" from EyeAlert, Inc. is used to collect the PERCLOSE feature. The extracted PERCLOSE feature can be denoted as F2, which is an R^6 column vector. Table 2 gives the meaning of each dimension of F2.
Table 2: Driver's Percentage of Eyelid Closure (PERCLOSE, F2)
The third set of features collected is the vehicle's dynamic parameters (F3), including speed, acceleration/deceleration, steering angle, lane position and other physical data from the vehicle. The advantages of using the vehicle's dynamic data are, firstly, that collecting vehicle dynamic data is non-intrusive to the driver, and secondly, that the dynamic data is a direct reflection of the vehicle's state; hence it is more sensitive to changes in the driving danger level due to either changes in the driver's physiological state or the vehicle/environment condition. In the study, the vehicle dynamic parameters are collected from a driving simulator called "STISIM" by Systems Technology, Inc. STISIM is a PC based interactive driving simulator that allows the user to control all aspects of driving such as throttling, braking and steering. The whole system includes a computer with the STISIM software, a projector displaying the driving scenarios, a steering wheel, and brake and throttle pedals. The driving scenario, including weather, road condition, traffic lights/signs, pedestrians, buildings and so on, was carefully designed to make the simulation as close to reality as possible. During simulation, "STISIM" outputs the vehicle's dynamic parameters simultaneously. The set of features can be denoted as F3, and Table 3 lists the 7 selected parameters.
Table 3: Vehicle Dynamic Parameters (F3)
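As a concrete illustration only, a container for one F3 sample might look like the sketch below. The exact seven parameters are those of Table 3, so the field names here (taken from the parameters mentioned elsewhere in this description, plus placeholders) are assumptions rather than the table's actual contents.

```python
from dataclasses import dataclass, astuple
import numpy as np

@dataclass
class VehicleDynamics:
    """One F3 sample from the simulator. Field names are illustrative; the
    authoritative list of the 7 selected parameters is Table 3."""
    lateral_lane_position: float
    steering_wheel_angle: float
    longitudinal_acceleration: float
    longitudinal_velocity: float
    distance_to_lead_vehicle: float
    extra_param_6: float   # placeholder for a remaining Table 3 parameter
    extra_param_7: float   # placeholder for a remaining Table 3 parameter

    def to_vector(self) -> np.ndarray:
        """Return the R^7 column vector used for feature extraction."""
        return np.asarray(astuple(self), dtype=float)
```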
Next, the process to derive Statistical Features is discussed. Although the F1, F2 and F3 data are all time-stamped for synchronization, they have different sample rates (32, 3 and 30 samples per second, respectively). In addition, dropped samples are detected. Hence, to synchronize F1, F2 and F3, the following statistical features are derived using a fixed sliding window size Tw and step size Ts:
[F; ΔF; F²; Δ²F] where F = [max(Fn), min(Fn), mean(Fn), variance(Fn)] and n can be any combination of 1, 2 and 3. The max, min, mean and variance operators measure the corresponding statistics over all the samples in the window.
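A possible implementation of this windowing step is sketched below, assuming the raw streams have been loaded into time-indexed pandas DataFrames; appending the first and second differences (ΔF, Δ²F) of the resulting rows would complete the feature vector. The library choice and helper names are assumptions, not part of the disclosure.

```python
import numpy as np
import pandas as pd

def window_features(df: pd.DataFrame, t_w: float, t_s: float) -> pd.DataFrame:
    """F = [max, min, mean, variance] over sliding windows of length t_w seconds,
    advanced every t_s seconds. `df` holds time-stamped raw samples (F1, F2 or F3)."""
    rows, stamps = [], []
    t, end = df.index[0], df.index[-1]
    while t + pd.Timedelta(seconds=t_w) <= end:
        w = df[t: t + pd.Timedelta(seconds=t_w)]
        if len(w):                                    # skip windows hit by dropped samples
            rows.append(np.concatenate([w.max(), w.min(), w.mean(), w.var()]))
            stamps.append(t)
        t += pd.Timedelta(seconds=t_s)
    return pd.DataFrame(rows, index=stamps)

# Because every stream is time-stamped, windows for F1, F2 and F3 can be computed on a
# common grid of start times and concatenated column-wise into one feature vector per
# window, even though the raw sample rates differ (32, 3 and 30 Hz respectively).
```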
With the obtained feature sequences, the next system module uses sequential supervised learning algorithms to mine the specific patterns for the driving risk prediction task. Next, the Sequential Supervised Learning process will be discussed. The problem of discovering feature patterns that result in safe/dangerous driving from continuous sensor readings can be regarded as a supervised learning problem. In addition, it is believed that any dangerous situation, e.g. a crash, is caused by a sequence of actions rather than a single action. Hence there exist both short-term and long-term interactions between features, and thus the danger level prediction problem can be better modeled as a sequential supervised learning problem. There are many algorithms that are suitable for the problem, such as Recurrent Sliding Windows, Maximum Entropy Markov Models, etc. Three algorithms have been used: Hidden Markov Model, Conditional Random Field, and Reinforcement Learning.
The Hidden Markov Model (HMM) based classifier has the ability to model both the generative patterns of any single hidden state and the temporal transition patterns across different states. HMM has been proved robust and accurate for many problems, such as Automatic Speech Recognition, image processing, communications, signal processing, finance, traffic modeling, etc.
To apply HMM to the problem, two HMMs were trained for the safe and dangerous sequences respectively. The HMM classifier works in the following manner: given a set of states S = {s_1, s_2, ..., s_K} and an observation sequence X = {x_1, x_2, ..., x_N}, the likelihood of X with respect to an HMM with parameters Θ expands as

p(X | Θ) = Σ_{all Q} p(X, Q | Θ), where p(X, Q | Θ) = p(X | Q, Θ) p(Q | Θ) (Bayes).

Then

p(X | Q, Θ) = b_{q_1}(x_1) b_{q_2}(x_2) ... b_{q_N}(x_N)

and

p(Q | Θ) = π_{q_1} a_{q_1 q_2} a_{q_2 q_3} ... a_{q_{N-1} q_N}

Here Q = {q_1, q_2, ..., q_N} is a (hidden) state sequence where each q_t ∈ S; π_i = p(q_1 = s_i) is the prior probability of s_i being the first state of a state sequence; a_{ij} denotes the transition probability to go from state i to state j; and b_q(x) is the emission probability. b_q(x) is modeled by a Gaussian Mixture Model.
During the danger level prediction phase, the whole input feature vector sequence is segmented into smaller sequences (frames) with a fixed length and step size. Each frame of features is fed to both the "safe" and "crashed" HMMs. The danger level DL at time t is selected to be the logarithm of the likelihood of the frame being generated by the "crashed" HMM over that generated by the "safe" HMM, which is computed as follows:
DL_t = log(p(X_t | Θ_crashed)) - log(p(X_t | Θ_safe))

where X_t is the observed frame at time t, Θ_crashed is the parameters of the "crashed" HMM, and Θ_safe the parameters of the "safe" HMM. Ideally, the danger level DL during driving should remain constant for most of the time, and drop before instances of danger. FIG. 2 shows a computed danger level curve for a 21 minutes long sequence. There are 7 computed dangerous points as indicated by red circles. When playing back the driving session, it is found that 4 out of the 7 points are crashes, and the rest are also dangerous situations such as sudden braking, getting close to a pedestrian, etc.
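A compact sketch of this two-HMM scoring scheme is given below. It assumes the third-party hmmlearn package for the GMM-emission HMMs, which is not part of the disclosure, and the state count, mixture count and frame parameters are illustrative only.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM   # assumed third-party dependency (pip install hmmlearn)

def train_hmm(frames, n_states=3, n_mix=2):
    """Fit a GMM-emission HMM to a list of feature frames, each of shape [T, D]."""
    X = np.vstack(frames)
    lengths = [len(f) for f in frames]
    model = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

def hmm_danger_level(frame, hmm_crashed, hmm_safe):
    """DL_t = log p(X_t | crashed HMM) - log p(X_t | safe HMM) for one frame."""
    return hmm_crashed.score(frame) - hmm_safe.score(frame)

def danger_curve(features, hmm_crashed, hmm_safe, frame_len=10, step=1):
    """Segment the feature sequence into fixed-length frames and score each one."""
    return [hmm_danger_level(features[t:t + frame_len], hmm_crashed, hmm_safe)
            for t in range(0, len(features) - frame_len + 1, step)]
```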
However, as a generative model, each observed x_n in HMM is only conditioned on the state q_n, and the transition probability of states p(Q | Θ) is independent of the observation X.
Hence HMM imposes strong assumptions on the independence amongst the observed features x, which the collected features for danger level prediction may not follow. To overcome this limitation, several directions have been explored, including the Maximum Entropy Markov Model (MEMM), Input/Output HMM (IOHMM), and Conditional Random Field (CRF). The MEMM and IOHMM have the so-called label bias problem, where the contribution of certain observations in the likelihood computation might be weakened. Hence the CRF algorithm is evaluated next for the danger level prediction task.
In the Conditional Random Field (CRF) algorithm, the way in which the adjacent q values influence each other is determined by the observed features. Specifically, CRF models the relationship among adjacent pairs q_{n-1} and q_n as a Markov Random Field (MRF) conditioned on the observation X. In other words, the CRF is represented by a set of potentials M_n(q_{n-1}, q_n | x_n) defined as

M_n(q_{n-1}, q_n | x_n) = exp( Σ_a λ_a f_a(q_{n-1}, q_n, x_n) + Σ_b μ_b g_b(q_n, x_n) )

where the f_a are boolean features that encode some information about q_{n-1}, q_n and arbitrary information about x_n, and the g_b are boolean features that encode some information about q_n and x_n.

The CRF computes the conditional probability p(Q | X, Θ) according to

p(Q | X, Θ) = ( Π_{n=1}^{N+1} M_n(q_{n-1}, q_n | x_n) ) / Z(X)

where q_0 = 0 and q_{N+1} = N + 1, and the normalizer Z(X) is obtained from the matrices M_n(x_n). M_n(x_n) is the (N + 2) × (N + 2) matrix of potentials for all possible pairs of labels for q_{n-1} and q_n, such that the normalizer becomes a necessary term to make p(Q | X, Θ) a probability score.
To apply CRF, the selected state space S contains only two states, {s_1 = dangerous, s_2 = safe}. The entire feature sequences are fed to the trained CRF model, and a probability score of each feature vector being in either of the two states is computed. Then, similar to the HMM based method, a numerical danger level score for each x can be computed as

DL_t = log(p(q_t = s_1 | x_t, Θ)) - log(p(q_t = s_2 | x_t, Θ))
FIG. 3 gives a computed danger level curve for the same sequence as that used in FIG. 2. To train the CRF model, algorithms based on iterative scaling and gradient descent have been developed, both for optimizing p(Q | X) and also for separately optimizing p(q_n | x_n) for loss functions that depend only on the individual labels.
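Once per-time-step marginal probabilities p(q_t = s | X, Θ) are available from a trained CRF (for example via the forward-backward procedure of whatever CRF package is used, which is not specified here), the danger level score above reduces to a log-ratio of the two marginals, as in this small, library-agnostic sketch:

```python
import numpy as np

def crf_danger_level(marginals: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Compute DL_t = log p(q_t = dangerous | x_t) - log p(q_t = safe | x_t).

    `marginals` is a [T, 2] array whose columns hold the CRF's marginal
    probabilities for the 'dangerous' and 'safe' states at each time step."""
    m = np.clip(np.asarray(marginals, dtype=float), eps, 1.0)
    return np.log(m[:, 0]) - np.log(m[:, 1])
```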
The Reinforcement learning (RL) algorithm was originally proposed to solve complex planning and sequential decision making problems under uncertainty. It draws on the theory of function approximation, dynamic programming, and iterative optimization. More importantly, RL combines both dynamic programming and supervised learning to successfully solve problems that neither discipline can address individually.
To apply RL in the system, a penalty (negative value) is given at the end of the crash sequences while a reward (positive value) is given for safe sequences. As these training sequences can be regarded as sparse trajectories in the feature space, the RL propagates the penalty/reward in the feature space by trial-and-error interactions along these trajectories, and thus the obtained value function has values in the entire feature space. During prediction, the value function converts a feature vector into a penalty value, which can be used as the danger-level indicator.
The RL usually involves two major tasks: how to select an approximation architecture to represent the value function, and how to train the parameters for the selected architecture. In a simple case of RL, the value function can be represented by a look-up table and a training algorithm approximates the function by iteratively updating the table. However, as the danger level function takes continuous values over a high-dimensional feature space, a look-up table representation would require large memory and long searching time. Therefore, a continuous danger level function DL_n = DL(x_n, Θ) with parameters Θ is used to approximate the actual danger level DL*(x_n) at time instance n.
In RL, the value function D*(x_n) implicitly gives the maximum probability that the system will collapse from the current state x_n. If the transition from state x_{n-1} to x_n incurs a reward r(x_{n-1}, x_n), then D*(x_n) should satisfy Bellman's equation

D*(x_n) = min_{x_{n+1}} [ r(x_n, x_{n+1}) + D*(x_{n+1}) ]
To train the parameters Θ of the value function, suppose there are K training trajectories, denoted as X^i, i = 1, ..., K, and each trajectory X^i contains T_i feature vectors.
For a single trajectory X^i = {x_n^i}, n = 1, ..., T_i, the actual danger level at time n should be DL*(x_n^i), and accordingly:

DL*(x_n^i) = Σ_{s=n}^{T_i-1} r(x_s^i, x_{s+1}^i) + R^i

where R^i is the penalty/reward given at the end of the i-th trajectory. Now the approximated value function can be obtained by solving a least square optimization problem where

Θ* = argmin_Θ Σ_i Σ_n ( DL(x_n^i, Θ) - DL*(x_n^i) )²
The above least square problem can be solved by an incremental gradient method. In the implementation, one trajectory is considered for each iteration, and Θ is updated iteratively by

ΔΘ = -γ Σ_{n=1}^{T_i-1} ∇_Θ DL(x_n^i, Θ) ( DL(x_n^i, Θ) - DL*(x_n^i) )

where {x_1, x_2, ..., x_{T_i}} is a trajectory, ∇_Θ DL(x_n^i, Θ) is the partial differentiation with respect to Θ, and γ is a step size. ΔΘ can be rewritten as

ΔΘ = γ Σ_{n=1}^{T_i-1} ∇_Θ DL(x_n^i, Θ) Σ_{s=n}^{T_i-1} d_s

where the quantities d_s are called temporal differences and are defined as

d_s = r(x_s, x_{s+1}) + DL(x_{s+1}, Θ) - DL(x_s, Θ)
Here DL(x_{T_i}, Θ) is arbitrarily given as the penalty/reward of that trajectory, because r(x_s, x_{s+1}) + DL(x_{s+1}, Θ), which is a sample of DL(x_s, Θ), is more likely to be correct since it is closer to DL(x_{T_i}, Θ).
The temporal difference provides an indication as to whether or how much the estimate Θ should be increased or decreased. Usually d_s is multiplied by a weight λ^{s-n}, 0 < λ < 1, to decrease the influence of farther temporal differences on ∇_Θ DL(x_n^i, Θ). Hence:

ΔΘ = γ Σ_{n=1}^{T_i-1} ∇_Θ DL(x_n^i, Θ) Σ_{s=n}^{T_i-1} λ^{s-n} d_s
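The update above can be written out directly. The sketch below performs one incremental-gradient pass over a single trajectory, with the danger level function DL and its gradient supplied as callables; the argument names and default step sizes are illustrative assumptions.

```python
import numpy as np

def td_lambda_update(traj, rewards, R_end, dl, grad_dl, theta, gamma=0.01, lam=0.9):
    """One incremental-gradient iteration over a single trajectory.

    traj    : list of feature vectors x_1 ... x_T
    rewards : r(x_s, x_{s+1}) for s = 1 ... T-1 (length T-1)
    R_end   : penalty/reward given at the end of the trajectory
    dl      : callable DL(x, theta) -> float
    grad_dl : callable grad_theta DL(x, theta) -> array shaped like theta
    """
    T = len(traj)
    values = [dl(x, theta) for x in traj]
    values[-1] = R_end                       # DL(x_T, theta) fixed to the trajectory's reward
    # temporal differences d_s = r(x_s, x_{s+1}) + DL(x_{s+1}) - DL(x_s)
    d = [rewards[s] + values[s + 1] - values[s] for s in range(T - 1)]
    delta = np.zeros_like(theta)
    for n in range(T - 1):
        # lambda-discounted sum of the temporal differences ahead of time n
        ahead = sum(lam ** (s - n) * d[s] for s in range(n, T - 1))
        delta += gamma * grad_dl(traj[n], theta) * ahead
    return theta + delta
```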
In one implementation, both a linear and a non-linear form for DL(x_n, Θ) are considered. Specifically, for a linear danger level function,

DL(x_n, Θ) = x_n^T Θ

and for a non-linear danger level function

DL(x_n, Θ) = Σ_c a_c exp( -||x_n - μ_c||² ) + β

which is a weighted summation of several RBF functions.
Both forms of the danger level function can be trained. For the linear form, any random initialization of Θ is applicable because convergence is guaranteed. However, for the non-linear form, the initialization is crucial to the quality of the function approximation and even to the convergence. The Θ for the non-linear value function includes the center μ and the weight a of each RBF as well as the constant β. As all the trajectories for the dangerous training sequences sink to the crash state, an intuitive choice of μ could be the cluster centers of all the dangerous training samples.
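A sketch of the non-linear (RBF) value function and the cluster-center initialization is shown below. It assumes scikit-learn's KMeans for the clustering step and, for simplicity, treats the centers as fixed after initialization, whereas the description above also trains the centers as part of Θ.

```python
import numpy as np
from sklearn.cluster import KMeans   # assumed dependency for the cluster-center initialization

class RBFDangerLevel:
    """Non-linear value function DL(x, Theta) = sum_c a_c * exp(-||x - mu_c||^2) + beta."""

    def __init__(self, centers, weights=None, beta=0.0):
        self.mu = np.asarray(centers)                      # RBF centers
        self.a = np.zeros(len(self.mu)) if weights is None else np.asarray(weights)
        self.beta = beta

    def __call__(self, x):
        phi = np.exp(-np.sum((self.mu - x) ** 2, axis=1))  # RBF activations
        return float(self.a @ phi + self.beta)

    def grad(self, x):
        """Gradient with respect to the weights a and the constant beta only
        (centers held fixed here), returned as a flat vector [a..., beta]."""
        phi = np.exp(-np.sum((self.mu - x) ** 2, axis=1))
        return np.concatenate([phi, [1.0]])

def init_centers(dangerous_samples, n_rbf=5):
    """Intuitive initialization: cluster centers of all the dangerous training samples."""
    km = KMeans(n_clusters=n_rbf, n_init=10, random_state=0).fit(dangerous_samples)
    return km.cluster_centers_
```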
FIG. 3 shows a generated danger level curve for the same sequence used in FIG. 2. More dangerous situations were identified in FIG. 3 than in FIG. 2, and the curve is more dynamic, which shows that the RL-based method is more sensitive to the input. FIGS. 4-9 show additional prediction performance using various combinations of data features.
To evaluate the three algorithms, 14 participants test-drove a "STISIM" simulator. Each subject drove two to three sessions, and 40 sessions were conducted in total. All participants were familiar with the simulator. Each simulation session lasted around 20 minutes. The driver's physiological data and visual behavior features, as well as the vehicle dynamic parameters, were recorded and synchronized.
The system generated a numerical danger-level value for every time instance rather than only a binary danger/safe classification. Although this is very desirable for a live prediction system, it is difficult to obtain ground truth labels for every time instance based on features such as speed and respiration rate. Hence, to have an objective safe/dangerous evaluation for training, the system collected sequences that ended with crashes as danger samples and treated the rest as safe driving samples. All the sequences have the same length (60 seconds in the current setup). Note that such a scheme introduces noise into the safe samples, because dangerous driving patterns that did not result in a crash may be selected as safe samples. Hence more safe sequences were collected than crash sequences to reduce the influence of such noisy sequences. In this manner, 370 sequences were obtained in total, 85 crash sequences and 285 safe sequences. To separate the sequences into training/testing partitions, the leave-one-out method was used, i.e., in every round the sessions from one driver were left out for testing and the rest used for training.
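A small sketch of this per-driver leave-one-out partition is shown below; the field names "driver", "features" and "label" are illustrative and not taken from the description.

```python
def leave_one_driver_out(sequences):
    """Yield (held_out_driver, train, test) folds, leaving all sequences
    from one driver out for testing in each round."""
    drivers = sorted({s["driver"] for s in sequences})
    for held_out in drivers:
        train = [s for s in sequences if s["driver"] != held_out]
        test = [s for s in sequences if s["driver"] == held_out]
        yield held_out, train, test
```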
An evaluation metric was used to measure the performance of the system against random guess. The intuitive idea of the metric is to measure how often a predicted danger really corresponds to an accident when the danger level is below a defined threshold; the higher the precision, the better the performance. To illustrate the metric, let the predicted danger time be the time instances where the computed danger value is below a threshold, with the threshold selected so that the total predicted danger time takes up a fraction $\omega$ of the sequence length $T$ (where $\omega \in (0, 0.2]$ can be regarded as the sensitivity of the predictor); hence the predicted danger time adds up to $t_p = \omega \times T$. Let the $t_d$ seconds before each crash point, up to that crash, be real danger time, so that the total real danger time adds up to $t_r$. The prediction precision can then be expressed as

$$\text{precision} = \frac{t_p \cap t_r}{t_p}$$

where $t_p \cap t_r$ is the total amount of predicted danger time that is really danger.
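The following sketch evaluates one test sequence under this metric. Because the exact expression was lost to formatting, it assumes precision = (t_p ∩ t_r) / t_p and compares it against the random-guess baseline t_r / T; the parameter names are illustrative.

```python
import numpy as np

def prediction_precision(danger_values, crash_times, omega=0.1, td=10.0, dt=1.0):
    """Return (precision, precision / random guess) for one sequence.

    danger_values -- predicted danger level per time instance (lower = riskier)
    crash_times   -- indices of the crash points in the sequence
    omega         -- fraction of the sequence flagged as predicted danger time
    td, dt        -- real-danger window before each crash and sampling period (s)
    """
    values = np.asarray(danger_values, dtype=float)
    T = len(values)
    n_pred = max(1, int(round(omega * T)))
    # Flag the omega fraction of instances with the lowest danger values.
    predicted = np.zeros(T, dtype=bool)
    predicted[np.argsort(values)[:n_pred]] = True
    # Mark the td seconds before each crash (up to the crash) as real danger.
    real = np.zeros(T, dtype=bool)
    for c in crash_times:
        real[max(0, c - int(td / dt)):c + 1] = True
    precision = np.sum(predicted & real) / np.sum(predicted)
    random_guess = np.sum(real) / T
    return precision, precision / random_guess if random_guess > 0 else float("inf")
```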
Different feature combinations were examined for each algorithm. As the goal is a real-time, non-intrusive system, combinations with F3 were given priority because F3 is both non-intrusive and sensitive to driving state changes. Specifically, the combinations F1+F2+F3, F1+F3 and F3 alone were evaluated. FIGS. 4-9 show the corresponding performance. In each figure, the Y axis represents the ratio between prediction accuracy and random guess, and the X axis is the $\omega$ value ranging from 0 to 0.2. $t_d$ is selected to be 10 seconds.
FIGS. 4-9 show exemplary prediction performance using one or more sensor inputs. FIG. 4 shows prediction performance using the vehicle's dynamic parameters (each feature is extracted from 5 seconds of raw input). As can be seen from FIG. 4, driving danger prediction using only the vehicle dynamic parameters can achieve satisfactory accuracy, and the additional driver physiological data and behavior features improve the performance only to a limited extent. Due to the intrusive nature of the driver's physiological data and the large computational expense required for accurate measurement of the driver's visual behavior, the vehicle's dynamic parameter feature is more desirable for driving risk prevention applications. Of the three danger level prediction methods, the Reinforcement Learning algorithm gives the best performance. FIG. 5 shows prediction performance using the vehicle's dynamic parameters (each feature is extracted from 15 seconds of raw input). FIG. 6 shows prediction performance using the vehicle's dynamic parameters and the driver's physiological data (each feature is extracted from 5 seconds of raw input). FIG. 7 shows prediction performance using the vehicle's dynamic parameters and the driver's physiological data (each feature is extracted from 15 seconds of raw input). FIG. 8 shows prediction performance using the vehicle's dynamic parameters, the driver's physiological data and the driver's visual behavior feature (each feature is extracted from 5 seconds of raw input). FIG. 9 shows prediction performance using the vehicle's dynamic parameters, the driver's physiological data and the driver's visual behavior feature (each feature is extracted from 15 seconds of raw input).
Based on the off-line cross validation results, a live driving danger level prediction system was built. It uses only the vehicle's dynamic parameter features and the Reinforcement Learning algorithm with a non-linear value function for prediction. FIG. 10 shows a prototype system (the right screen) which works as an add-on to the "STISIM" simulator (the left screen). Eleven participants were invited to operate the system. Results show that the system can accurately predict driving risks due to events such as sharp turning, sudden acceleration/deceleration, continuous weaving and approaching objects. It is sensitive to changes in the vehicle's condition resulting from the driver's emotional state, e.g. fatigue, or from road condition changes, e.g. windy and slippery roads. However, if the participant wants to crash the vehicle on purpose, such as by suddenly turning into the opposite lane to hit an oncoming vehicle, the system will not generate an alarm, because these types of dangerous situations do not occur in the collected training samples.
One piece of feedback from the trial was that, when the danger level was above the threshold, drivers were often unclear about which action caused the risk. Although a precise danger-reason probe module is not available at the current stage, the trained parameters can be analyzed to roughly determine which features might be the reason. As mentioned above, the selected non-linear value function for the RL algorithm has 5 RBFs. Their trained weights are {4.0732, 1.7731, 2.8959, 0.9888, -5.0044}, respectively. It can be seen that only the 5th RBF has a negative weight, and hence the closer a feature vector is to the 5th RBF's center, the more dangerous it is. Similarly, any feature vector that is close to the 1st RBF's center represents a safe state, as the 1st RBF has the greatest positive weight. In this manner, the dimension that differs most between the 1st and 5th RBFs' centers is the most distinguishing feature. Table 4 lists the top-10 features that differ most between safe and crash, and a sketch of this analysis is given after the table.
Table 4 - Top-10 features that differ most between safe and crash
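As a rough illustration of the reason analysis described above, the sketch below ranks feature dimensions by how much the "safe" RBF center (largest positive weight) and the "danger" RBF center (most negative weight) differ. The centers and feature names are hypothetical placeholders, since only the trained weights are reproduced in the text.

```python
import numpy as np

def most_distinguishing_features(centers, weights, feature_names, top_k=10):
    """Rank feature dimensions by the absolute difference between the RBF
    center with the largest positive weight (safe) and the one with the
    most negative weight (danger)."""
    centers = np.asarray(centers, dtype=float)
    weights = np.asarray(weights, dtype=float)
    safe_center = centers[np.argmax(weights)]
    danger_center = centers[np.argmin(weights)]
    diff = np.abs(safe_center - danger_center)
    order = np.argsort(diff)[::-1][:top_k]
    return [(feature_names[i], float(diff[i])) for i in order]

# Example call with the trained weights quoted above (centers and feature
# names are hypothetical placeholders):
# weights = [4.0732, 1.7731, 2.8959, 0.9888, -5.0044]
# most_distinguishing_features(centers, weights, feature_names)
```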
Unexpectedly, speed is not among the top 5 reasons for dangerous driving behavior. This is because speeding alone is seldom the direct cause of a crash. When the vehicle is running above 60 mph, for example, a greater steering input is required to make a left or right turn. If this results in an off-road crash, the system learns the steering input angle pattern rather than the speed.
The driver's physiological data, the driver's visual behavior and the vehicle's dynamic parameter features can be used for driving risk prediction by analytic engines such as the Hidden Markov Model, the Conditional Random Field, and the Reinforcement Learning (RL) algorithm, including the RL algorithm with a non-linear value function.
Although a real-time driving danger level prediction system has been discussed above, the inventors contemplate that other components can be added, including a risk-reason analysis method that informs the driver when a potential driving risk has been predicted. The system can incorporate features based on the driver's visual behavior to further improve performance. The system can also be applied to larger data sets so that more safe/dangerous driving patterns can be modeled. These features improve the reliability of the predictive system and maximize the users' confidence in the driving risk prediction system.
The invention may be implemented in hardware, firmware or software, or a combination of the three. Preferably the invention is implemented in a computer program executed on a programmable computer having a processor, a data storage system, volatile and non-volatile memory and/or storage elements, at least one input device and at least one output device.
By way of example, a block diagram of a computer to support the system is discussed next. The computer preferably includes a processor, random access memory (RAM), a program memory (preferably a writable read-only memory (ROM) such as a flash ROM) and an input/output (I/O) controller coupled by a CPU bus. The computer may optionally include a hard drive controller which is coupled to a hard disk and CPU bus. Hard disk may be used for storing application programs, such as the present invention, and data. Alternatively, application programs may be stored in RAM or ROM. I/O controller is coupled by means of an I/O bus to an I/O interface. I/O interface receives and transmits data in analog or digital form over communication links such as a serial link, local area network, wireless link, and parallel link. Optionally, a display, a keyboard and a pointing device (mouse) may also be connected to I/O bus. Alternatively, separate connections (separate buses) may be used for I/O interface, display, keyboard and pointing device. Programmable processing system may be preprogrammed or it may be programmed (and reprogrammed) by downloading a program from another source (e.g., a floppy disk, CD-ROM, or another computer).
Each computer program is tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
The invention has been described herein in considerable detail in order to comply with the patent Statutes and to provide those skilled in the art with the information needed to apply the novel principles and to construct and use such specialized components as are required. However, it is to be understood that the invention can be carried out by specifically different equipment and devices, and that various modifications, both as to the equipment details and operating procedures, can be accomplished without departing from the scope of the invention itself.
Although specific embodiments of the present invention have been illustrated in the accompanying drawings and described in the foregoing detailed description, it will be understood that the invention is not limited to the particular embodiments described herein, but is capable of numerous rearrangements, modifications, and substitutions without departing from the scope of the invention. The following claims are intended to encompass all such modifications.

Claims

1. A method to predict driving danger, comprising: a. capturing vehicle dynamic parameter, driver physiological data and driver behavior feature; b. applying a learning algorithm to the features; and c. predicting driving danger.
2. The method of claim 1, wherein the learning algorithm includes one of: Hidden Markov Model, Conditional Random Field and Reinforcement Learning.
3. The method of claim 1, wherein the vehicle dynamic parameter includes one or more of: driver's lateral lane position, steering wheel angle, longitudinal acceleration, longitudinal velocity, distance between vehicles.
4. The method of claim 1, wherein the driver's physiological data includes one or more of: respiration, heart rate, blood volume, skin temperature, skin conductance.
5. The method of claim 1, wherein the driver behavior feature comprises a PERCLOSE feature.
6. The method of claim 1, wherein the driver behavior feature comprises fatigue, vision, distraction.
7. The method of claim 1, comprising training the learning algorithm.
8. The method of claim 1, comprising performing off line cross-validation.
9. The method of claim 7, comprising predicting driving danger in real time.
10. The method of claim 1, comprising communicating a reason for a predicted driving danger to a user.
11. A system to predict driving danger, comprising: a. a data acquisition unit to capture vehicle dynamic parameter, driver physiological data and driver behavior feature; b. a learning processor coupled to the data acquisition unit to process the features; and c. a user interface coupled to the learning processor to indicate driving danger.
12. The system of claim 11, wherein the learning processor includes one of: Hidden Markov Model, Conditional Random Field and Reinforcement Learning.
13. The system of claim 11, wherein the vehicle dynamic parameter includes one or more of: driver's lateral lane position, steering wheel angle, longitudinal acceleration, longitudinal velocity, distance between vehicles.
14. The system of claim 11, wherein the driver's physiological data includes one or more of: respiration, heart rate, blood volume, skin temperature, skin conductance.
15. The system of claim 11, wherein the driver behavior feature comprises a PERCLOSE feature.
16. The system of claim 11, wherein the driver behavior feature comprises fatigue, vision, distraction.
17. The system of claim 11, wherein the learning processor is trained using previously captured data.
18. The system of claim 11, wherein the learning processor cross-validates predicted driving dangers.
19. The system of claim 17, wherein the learning processor predicts driving danger in real time.
20. The system of claim 11, comprising a display to communicate a reason for a predicted driving danger to a user.
PCT/US2007/087337 2007-04-11 2007-12-13 Real-time driving danger level prediction WO2008127465A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US91109207P 2007-04-11 2007-04-11
US60/911,092 2007-04-11
US11/950,765 2007-12-05
US11/950,765 US7839292B2 (en) 2007-04-11 2007-12-05 Real-time driving danger level prediction

Publications (1)

Publication Number Publication Date
WO2008127465A1 true WO2008127465A1 (en) 2008-10-23

Family

ID=39864222

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/087337 WO2008127465A1 (en) 2007-04-11 2007-12-13 Real-time driving danger level prediction

Country Status (2)

Country Link
US (1) US7839292B2 (en)
WO (1) WO2008127465A1 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2182501A1 (en) * 2008-10-30 2010-05-05 Aisin Aw Co., Ltd. Safe driving evaluation system and safe driving evaluation program
ITBO20090514A1 (en) * 2009-07-31 2011-02-01 T E Systems And Advanced Tec Hnologies Engi Sa METHOD OF ANALYSIS OF THE CONDUCT OF THE DRIVER OF A ROAD VEHICLE
CN101987017A (en) * 2010-11-18 2011-03-23 上海交通大学 Electroencephalo-graph (EEG) signal identification and detection method for measuring alertness of driver
CN102568200A (en) * 2011-12-21 2012-07-11 辽宁师范大学 Method for judging vehicle driving states in real time
EP2615598A1 (en) 2012-01-11 2013-07-17 Honda Research Institute Europe GmbH Vehicle with computing means for monitoring and predicting traffic participant objects
CN103544850A (en) * 2013-09-13 2014-01-29 中国科学技术大学苏州研究院 Collision prediction method based on vehicle distance probability distribution for internet of vehicles
US9189897B1 (en) 2014-07-28 2015-11-17 Here Global B.V. Personalized driving ranking and alerting
EP3006297A1 (en) * 2014-10-09 2016-04-13 Hitachi, Ltd. Driving characteristics diagnosis device, driving characteristics diagnosis system, driving characteristics diagnosis method, information output device, and information output method
US9628565B2 (en) 2014-07-23 2017-04-18 Here Global B.V. Highly assisted driving platform
US9766625B2 (en) 2014-07-25 2017-09-19 Here Global B.V. Personalized driving of autonomously driven vehicles
US9824505B2 (en) 2014-02-25 2017-11-21 Ford Global Technologies, Llc Method for triggering a vehicle system monitor
WO2018014953A1 (en) 2016-07-20 2018-01-25 Toyota Motor Europe Control device, system and method for determining a comfort level of a driver
CN108009587A (en) * 2017-12-01 2018-05-08 驭势科技(北京)有限公司 A kind of method and apparatus based on intensified learning and the definite driving strategy of rule
WO2018211301A1 (en) 2017-05-15 2018-11-22 Toyota Motor Europe Control device, system, and method for determining a comfort level of a driver
EP3444160A1 (en) * 2014-01-28 2019-02-20 Volvo Truck Corporation A vehicle driver feedback system and corresponding method
US10679143B2 (en) 2016-07-01 2020-06-09 International Business Machines Corporation Multi-layer information fusing for prediction
CN112382068A (en) * 2020-11-02 2021-02-19 陈松山 Station waiting line crossing detection system based on BIM and DNN
EP3895949A1 (en) 2020-04-17 2021-10-20 Toyota Jidosha Kabushiki Kaisha Method and device for evaluating user discomfort
US11401847B2 (en) 2019-09-09 2022-08-02 Ford Global Technologies, Llc Methods and systems for an exhaust tuning valve
CN115376316A (en) * 2022-08-19 2022-11-22 北京航空航天大学 Hidden Markov chain-based dangerous following behavior identification method

Families Citing this family (178)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8269617B2 (en) 2009-01-26 2012-09-18 Drivecam, Inc. Method and system for tuning the effect of vehicle characteristics on risk prediction
US8508353B2 (en) 2009-01-26 2013-08-13 Drivecam, Inc. Driver risk assessment system and method having calibrating automatic event scoring
US20090089108A1 (en) * 2007-09-27 2009-04-02 Robert Lee Angell Method and apparatus for automatically identifying potentially unsafe work conditions to predict and prevent the occurrence of workplace accidents
JP4551439B2 (en) * 2007-12-17 2010-09-29 株式会社沖データ Image processing device
US9665910B2 (en) * 2008-02-20 2017-05-30 Hartford Fire Insurance Company System and method for providing customized safety feedback
JP5272605B2 (en) * 2008-09-18 2013-08-28 日産自動車株式会社 Driving operation support device and driving operation support method
US8854199B2 (en) * 2009-01-26 2014-10-07 Lytx, Inc. Driver risk assessment system and method employing automated driver log
CN103258409B (en) * 2010-07-29 2016-05-25 福特全球技术公司 Based on the system and method for driver's work load scheduling driver interface task
JP2013539572A (en) 2010-07-29 2013-10-24 フォード グローバル テクノロジーズ、リミテッド ライアビリティ カンパニー Method for managing driver interface tasks and vehicle
US8972106B2 (en) 2010-07-29 2015-03-03 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US9213522B2 (en) 2010-07-29 2015-12-15 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US8698639B2 (en) 2011-02-18 2014-04-15 Honda Motor Co., Ltd. System and method for responding to driver behavior
US9292471B2 (en) 2011-02-18 2016-03-22 Honda Motor Co., Ltd. Coordinated vehicle response system and method for driver behavior
US8731736B2 (en) 2011-02-22 2014-05-20 Honda Motor Co., Ltd. System and method for reducing driving skill atrophy
TWI434233B (en) 2011-05-17 2014-04-11 Ind Tech Res Inst Predictive drowsiness alarm method
DE102011078641A1 (en) 2011-07-05 2013-01-10 Robert Bosch Gmbh Radar system for motor vehicles and motor vehicle with a radar system
US9208203B2 (en) * 2011-10-05 2015-12-08 Christopher M. Ré High-speed statistical processing in a database
US8930227B2 (en) * 2012-03-06 2015-01-06 State Farm Mutual Automobile Insurance Company Online system for training novice drivers and rating insurance products
US9129519B2 (en) * 2012-07-30 2015-09-08 Massachussetts Institute Of Technology System and method for providing driver behavior classification at intersections and validation on large naturalistic data sets
JP6215944B2 (en) * 2012-08-20 2017-10-18 オートリブ ディベロップメント エービー Processing related to eyelid movement to detect drowsiness
US20150258996A1 (en) * 2012-09-17 2015-09-17 Volvo Lastvagnar Ab Method for providing a context based coaching message to a driver of a vehicle
US9085262B2 (en) 2012-10-08 2015-07-21 Microsoft Technology Licensing, Llc Tinting indication of environmental conditions
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US8981942B2 (en) 2012-12-17 2015-03-17 State Farm Mutual Automobile Insurance Company System and method to monitor and reduce vehicle operator impairment
US8930269B2 (en) 2012-12-17 2015-01-06 State Farm Mutual Automobile Insurance Company System and method to adjust insurance rate based on real-time data about potential vehicle operator impairment
US20140240132A1 (en) * 2013-02-28 2014-08-28 Exmovere Wireless LLC Method and apparatus for determining vehicle operator performance
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
US9751534B2 (en) 2013-03-15 2017-09-05 Honda Motor Co., Ltd. System and method for responding to driver state
US10445758B1 (en) 2013-03-15 2019-10-15 Allstate Insurance Company Providing rewards based on driving behaviors detected by a mobile computing device
US9352751B2 (en) 2014-06-23 2016-05-31 Honda Motor Co., Ltd. System and method for determining the information transfer rate between a driver and vehicle
US10499856B2 (en) 2013-04-06 2019-12-10 Honda Motor Co., Ltd. System and method for biological signal processing with highly auto-correlated carrier sequences
AU2013206671B2 (en) * 2013-07-03 2015-05-14 Safemine Ag Operator drowsiness detection in surface mines
US11182859B2 (en) * 2013-12-04 2021-11-23 State Farm Mutual Automobile Insurance Company Assigning mobile device data to a vehicle
US10417486B2 (en) 2013-12-30 2019-09-17 Alcatel Lucent Driver behavior monitoring systems and methods for driver behavior monitoring
TWI603213B (en) 2014-01-23 2017-10-21 國立交通大學 Method for selecting music based on face recognition, music selecting system and electronic apparatus
KR102051142B1 (en) * 2014-06-13 2019-12-02 현대모비스 주식회사 System for managing dangerous driving index for vehicle and method therof
US10077055B2 (en) 2014-06-23 2018-09-18 Honda Motor Co., Ltd. System and method for determining the information transfer rate between a driver and vehicle
EP3177204A1 (en) 2014-09-09 2017-06-14 Torvec, Inc. Methods and apparatus for monitoring alertness of an individual utilizing a wearable device and providing notification
US9771081B2 (en) * 2014-09-29 2017-09-26 The Boeing Company System for fatigue detection using a suite of physiological measurement devices
US9573600B2 (en) * 2014-12-19 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Method and apparatus for generating and using driver specific vehicle controls
WO2016109635A1 (en) * 2014-12-30 2016-07-07 Robert Bosch Gmbh Adaptive user interface for an autonomous vehicle
LU92628B1 (en) * 2015-01-02 2016-07-04 Univ Luxembourg Vehicular motion monitoring method
US10137902B2 (en) * 2015-02-12 2018-11-27 Harman International Industries, Incorporated Adaptive interactive voice system
US9507974B1 (en) 2015-06-10 2016-11-29 Hand Held Products, Inc. Indicia-reading systems having an interface with a user's nervous system
US20160362111A1 (en) * 2015-06-12 2016-12-15 Jaguar Land Rover Limited Driver awareness sensing and indicator control
US10131362B1 (en) * 2015-06-23 2018-11-20 United Services Automobile Association (Usaa) Automobile detection system
US9493118B1 (en) * 2015-06-24 2016-11-15 Delphi Technologies, Inc. Cognitive driver assist with variable warning for automated vehicles
DE102015218306A1 (en) * 2015-09-23 2017-03-23 Robert Bosch Gmbh A method and apparatus for determining a drowsiness condition of a driver
US10373143B2 (en) 2015-09-24 2019-08-06 Hand Held Products, Inc. Product identification using electroencephalography
US9892464B2 (en) * 2015-10-08 2018-02-13 Blackbird Holdings, LLC System and method of real time detection of aerial vehicle flight patterns and insurance policy updates
JP6696679B2 (en) * 2015-11-11 2020-05-20 株式会社デンソーテン Driving support device
US10308256B1 (en) 2015-12-01 2019-06-04 State Farm Mutual Automobile Insurance Company Technology for notifying vehicle operators of incident-prone locations
CA3014812A1 (en) 2016-02-18 2017-08-24 Curaegis Technologies, Inc. Alertness prediction system and method
WO2017170086A1 (en) * 2016-03-31 2017-10-05 日本電気株式会社 Information processing system, information processing device, simulation method, and recording medium containing simulation program
US10296796B2 (en) 2016-04-06 2019-05-21 Nec Corporation Video capturing device for predicting special driving situations
US10304335B2 (en) 2016-04-12 2019-05-28 Ford Global Technologies, Llc Detecting available parking spaces
JP6778872B2 (en) * 2016-06-28 2020-11-04 パナソニックIpマネジメント株式会社 Driving support device and driving support method
US9919648B1 (en) 2016-09-27 2018-03-20 Robert D. Pedersen Motor vehicle artificial intelligence expert system dangerous driving warning and control system and method
CN106251583B (en) * 2016-09-30 2018-09-25 江苏筑磊电子科技有限公司 Fatigue driving discrimination method based on driving behavior and eye movement characteristics
US9739627B1 (en) 2016-10-18 2017-08-22 Allstate Insurance Company Road frustration index risk mapping and mitigation
US10830605B1 (en) 2016-10-18 2020-11-10 Allstate Insurance Company Personalized driving risk modeling and estimation system and methods
EP3316227A1 (en) * 2016-10-28 2018-05-02 Thomson Licensing LLC Method and intelligent system for generating a predictive outcome of a future event
US10346697B2 (en) * 2016-12-21 2019-07-09 Hyundai America Technical Center, Inc Driver state monitoring using corneal reflection detection
US10228693B2 (en) 2017-01-13 2019-03-12 Ford Global Technologies, Llc Generating simulated sensor data for training and validation of detection models
US10322727B1 (en) * 2017-01-18 2019-06-18 State Farm Mutual Automobile Insurance Company Technology for assessing emotional state of vehicle operator
US10311312B2 (en) 2017-08-31 2019-06-04 TuSimple System and method for vehicle occlusion detection
US10147193B2 (en) 2017-03-10 2018-12-04 TuSimple System and method for semantic segmentation using hybrid dilated convolution (HDC)
US9953236B1 (en) 2017-03-10 2018-04-24 TuSimple System and method for semantic segmentation using dense upsampling convolution (DUC)
US11587304B2 (en) 2017-03-10 2023-02-21 Tusimple, Inc. System and method for occluding contour detection
US10671873B2 (en) 2017-03-10 2020-06-02 Tusimple, Inc. System and method for vehicle wheel detection
US10710592B2 (en) 2017-04-07 2020-07-14 Tusimple, Inc. System and method for path planning of autonomous vehicles based on gradient
US9952594B1 (en) 2017-04-07 2018-04-24 TuSimple System and method for traffic data collection using unmanned aerial vehicles (UAVs)
US10471963B2 (en) 2017-04-07 2019-11-12 TuSimple System and method for transitioning between an autonomous and manual driving mode based on detection of a drivers capacity to control a vehicle
KR102287316B1 (en) * 2017-04-14 2021-08-09 현대자동차주식회사 Apparatus and method for autonomous driving control, vehicle system
US10552691B2 (en) 2017-04-25 2020-02-04 TuSimple System and method for vehicle position and velocity estimation based on camera and lidar data
US11055605B2 (en) 2017-04-25 2021-07-06 Nec Corporation Detecting dangerous driving situations by parsing a scene graph of radar detections
US10481044B2 (en) 2017-05-18 2019-11-19 TuSimple Perception simulation for improved autonomous vehicle control
US10558864B2 (en) 2017-05-18 2020-02-11 TuSimple System and method for image localization based on semantic segmentation
CN107215307A (en) * 2017-05-24 2017-09-29 清华大学深圳研究生院 Driver identity recognition methods and system based on vehicle sensors correction data
JP2020522798A (en) 2017-05-31 2020-07-30 ベイジン ディディ インフィニティ テクノロジー アンド ディベロップメント カンパニー リミティッド Device and method for recognizing driving behavior based on motion data
US10474790B2 (en) 2017-06-02 2019-11-12 TuSimple Large scale distributed simulation for realistic multiple-agent interactive environments
US10762635B2 (en) 2017-06-14 2020-09-01 Tusimple, Inc. System and method for actively selecting and labeling images for semantic segmentation
US10303522B2 (en) 2017-07-01 2019-05-28 TuSimple System and method for distributed graphics processing unit (GPU) computation
US10737695B2 (en) 2017-07-01 2020-08-11 Tusimple, Inc. System and method for adaptive cruise control for low speed following
US10493988B2 (en) 2017-07-01 2019-12-03 TuSimple System and method for adaptive cruise control for defensive driving
US10308242B2 (en) 2017-07-01 2019-06-04 TuSimple System and method for using human driving patterns to detect and correct abnormal driving behaviors of autonomous vehicles
US10752246B2 (en) 2017-07-01 2020-08-25 Tusimple, Inc. System and method for adaptive cruise control with proximate vehicle detection
CN107352497B (en) 2017-07-21 2018-10-12 北京图森未来科技有限公司 A kind of automatic oiling methods, devices and systems of vehicle
CN107403206A (en) 2017-07-21 2017-11-28 北京图森未来科技有限公司 Realize method and system, the relevant device of vehicle automatic loading and unloading goods
CN107393074B (en) 2017-07-21 2019-01-18 北京图森未来科技有限公司 Realize the automatic method and system for crossing card of vehicle, relevant device
CN107421615A (en) 2017-07-21 2017-12-01 北京图森未来科技有限公司 Realize the method and system, relevant device that vehicle weighs automatically
CN107416754B (en) 2017-07-21 2018-11-02 北京图森未来科技有限公司 A kind of automatic oiling methods, devices and systems of long-distance vehicle
CN107272657B (en) 2017-07-21 2020-03-10 北京图森未来科技有限公司 Method and system for realizing automatic overhaul of vehicle and related equipment
CN107381488B (en) 2017-07-21 2018-12-18 北京图森未来科技有限公司 A kind of automatic oiling methods, devices and systems of vehicle
CN107369218B (en) 2017-07-21 2019-02-22 北京图森未来科技有限公司 Realize method and system, the relevant device of vehicle automatic fee
US11029693B2 (en) 2017-08-08 2021-06-08 Tusimple, Inc. Neural network based vehicle dynamics model
US10360257B2 (en) 2017-08-08 2019-07-23 TuSimple System and method for image annotation
US10816354B2 (en) 2017-08-22 2020-10-27 Tusimple, Inc. Verification module system and method for motion-based lane detection with multiple sensors
US10762673B2 (en) 2017-08-23 2020-09-01 Tusimple, Inc. 3D submap reconstruction system and method for centimeter precision localization using camera-based submap and LiDAR-based global map
US10565457B2 (en) 2017-08-23 2020-02-18 Tusimple, Inc. Feature matching and correspondence refinement and 3D submap position refinement system and method for centimeter precision localization using camera-based submap and LiDAR-based global map
US10303956B2 (en) 2017-08-23 2019-05-28 TuSimple System and method for using triplet loss for proposal free instance-wise semantic segmentation for lane detection
US10678234B2 (en) 2017-08-24 2020-06-09 Tusimple, Inc. System and method for autonomous vehicle control to minimize energy cost
US10783381B2 (en) 2017-08-31 2020-09-22 Tusimple, Inc. System and method for vehicle occlusion detection
US10953881B2 (en) 2017-09-07 2021-03-23 Tusimple, Inc. System and method for automated lane change control for autonomous vehicles
US10782693B2 (en) 2017-09-07 2020-09-22 Tusimple, Inc. Prediction-based system and method for trajectory planning of autonomous vehicles
US10656644B2 (en) 2017-09-07 2020-05-19 Tusimple, Inc. System and method for using human driving patterns to manage speed control for autonomous vehicles
US10953880B2 (en) 2017-09-07 2021-03-23 Tusimple, Inc. System and method for automated lane change control for autonomous vehicles
US10649458B2 (en) 2017-09-07 2020-05-12 Tusimple, Inc. Data-driven prediction-based system and method for trajectory planning of autonomous vehicles
US10782694B2 (en) 2017-09-07 2020-09-22 Tusimple, Inc. Prediction-based system and method for trajectory planning of autonomous vehicles
US10671083B2 (en) 2017-09-13 2020-06-02 Tusimple, Inc. Neural network architecture system for deep odometry assisted by static scene optical flow
US10552979B2 (en) 2017-09-13 2020-02-04 TuSimple Output of a neural network method for deep odometry assisted by static scene optical flow
US10387736B2 (en) 2017-09-20 2019-08-20 TuSimple System and method for detecting taillight signals of a vehicle
US10733465B2 (en) 2017-09-20 2020-08-04 Tusimple, Inc. System and method for vehicle taillight state recognition
US10962979B2 (en) 2017-09-30 2021-03-30 Tusimple, Inc. System and method for multitask processing for autonomous vehicle computation and control
US10768626B2 (en) 2017-09-30 2020-09-08 Tusimple, Inc. System and method for providing multiple agents for decision making, trajectory planning, and control for autonomous vehicles
US10970564B2 (en) 2017-09-30 2021-04-06 Tusimple, Inc. System and method for instance-level lane detection for autonomous vehicle control
US10410055B2 (en) 2017-10-05 2019-09-10 TuSimple System and method for aerial video traffic analysis
WO2019074478A1 (en) * 2017-10-09 2019-04-18 Vivek Anand Sujan Autonomous safety systems and methods for vehicles
US10739775B2 (en) 2017-10-28 2020-08-11 Tusimple, Inc. System and method for real world autonomous vehicle trajectory simulation
US10812589B2 (en) 2017-10-28 2020-10-20 Tusimple, Inc. Storage architecture for heterogeneous multimedia data
US10666730B2 (en) 2017-10-28 2020-05-26 Tusimple, Inc. Storage architecture for heterogeneous multimedia data
US10657390B2 (en) 2017-11-27 2020-05-19 Tusimple, Inc. System and method for large-scale lane marking detection using multimodal sensor data
US10528823B2 (en) 2017-11-27 2020-01-07 TuSimple System and method for large-scale lane marking detection using multimodal sensor data
US10528851B2 (en) 2017-11-27 2020-01-07 TuSimple System and method for drivable road surface representation generation using multimodal sensor data
US10860018B2 (en) 2017-11-30 2020-12-08 Tusimple, Inc. System and method for generating simulated vehicles with configured behaviors for analyzing autonomous vehicle motion planners
US10877476B2 (en) 2017-11-30 2020-12-29 Tusimple, Inc. Autonomous vehicle simulation system for analyzing motion planners
EP3495223A1 (en) 2017-12-11 2019-06-12 Volvo Car Corporation Driving intervention in vehicles
US11312334B2 (en) 2018-01-09 2022-04-26 Tusimple, Inc. Real-time remote control of vehicles with high redundancy
CN111989716B (en) 2018-01-11 2022-11-15 图森有限公司 Monitoring system for autonomous vehicle operation
JP7118757B2 (en) * 2018-01-22 2022-08-16 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Server, program and method
CN108256489B (en) * 2018-01-24 2020-09-25 清华大学 Behavior prediction method and device based on deep reinforcement learning
US11009356B2 (en) 2018-02-14 2021-05-18 Tusimple, Inc. Lane marking localization and fusion
US11009365B2 (en) 2018-02-14 2021-05-18 Tusimple, Inc. Lane marking localization
US10322728B1 (en) * 2018-02-22 2019-06-18 Futurewei Technologies, Inc. Method for distress and road rage detection
US10685244B2 (en) 2018-02-27 2020-06-16 Tusimple, Inc. System and method for online real-time multi-object tracking
US10685239B2 (en) 2018-03-18 2020-06-16 Tusimple, Inc. System and method for lateral vehicle detection
EP3543985A1 (en) * 2018-03-21 2019-09-25 dSPACE digital signal processing and control engineering GmbH Simulation of different traffic situations for a test vehicle
CN110378184A (en) 2018-04-12 2019-10-25 北京图森未来科技有限公司 A kind of image processing method applied to automatic driving vehicle, device
CN116129376A (en) 2018-05-02 2023-05-16 北京图森未来科技有限公司 Road edge detection method and device
WO2019213763A1 (en) 2018-05-10 2019-11-14 Beauchamp Bastien Method and system for vehicle-to-pedestrian collision avoidance
US11104334B2 (en) 2018-05-31 2021-08-31 Tusimple, Inc. System and method for proximate vehicle intention prediction for autonomous vehicles
US20210272020A1 (en) * 2018-06-29 2021-09-02 Sony Corporation Information processing apparatus and information processing method
US10839234B2 (en) 2018-09-12 2020-11-17 Tusimple, Inc. System and method for three-dimensional (3D) object detection
US11518380B2 (en) * 2018-09-12 2022-12-06 Bendix Commercial Vehicle Systems, Llc System and method for predicted vehicle incident warning and evasion
CN118289018A (en) 2018-09-13 2024-07-05 图森有限公司 Remote safe driving method and system
US11620494B2 (en) 2018-09-26 2023-04-04 Allstate Insurance Company Adaptable on-deployment learning platform for driver analysis output generation
US11518382B2 (en) * 2018-09-26 2022-12-06 Nec Corporation Learning to simulate
US10796402B2 (en) 2018-10-19 2020-10-06 Tusimple, Inc. System and method for fisheye image processing
US10942271B2 (en) 2018-10-30 2021-03-09 Tusimple, Inc. Determining an angle between a tow vehicle and a trailer
US11279344B2 (en) 2018-11-30 2022-03-22 International Business Machines Corporation Preemptive mitigation of collision risk
US11940790B2 (en) * 2018-12-12 2024-03-26 Allstate Insurance Company Safe hand-off between human driver and autonomous driving system
CN111319629B (en) 2018-12-14 2021-07-16 北京图森智途科技有限公司 Team forming method, device and system for automatically driving fleet
US11235776B2 (en) * 2019-01-31 2022-02-01 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for controlling a vehicle based on driver engagement
US11285963B2 (en) * 2019-03-10 2022-03-29 Cartica Ai Ltd. Driver-based prediction of dangerous events
US11249184B2 (en) 2019-05-07 2022-02-15 The Charles Stark Draper Laboratory, Inc. Autonomous collision avoidance through physical layer tracking
US20200380354A1 (en) * 2019-05-30 2020-12-03 International Business Machines Corporation Detection of operation tendency based on anomaly detection
US10740634B1 (en) * 2019-05-31 2020-08-11 International Business Machines Corporation Detection of decline in concentration based on anomaly detection
US11823460B2 (en) 2019-06-14 2023-11-21 Tusimple, Inc. Image fusion for autonomous vehicle operation
US11132562B2 (en) 2019-06-19 2021-09-28 Toyota Motor Engineering & Manufacturing North America, Inc. Camera system to detect unusual circumstances and activities while driving
US10875537B1 (en) 2019-07-12 2020-12-29 Toyota Research Institute, Inc. Systems and methods for monitoring the situational awareness of a vehicle according to reactions of a vehicle occupant
CN110705628B (en) * 2019-09-26 2022-07-19 长安大学 Method for detecting risk level of driver based on hidden Markov model
CN111942396B (en) * 2020-02-17 2021-12-24 采埃孚汽车***(上海)有限公司 Automatic driving control device and method and automatic driving system
CN111353636A (en) * 2020-02-24 2020-06-30 交通运输部水运科学研究所 Multi-mode data based ship driving behavior prediction method and system
US11170649B2 (en) 2020-02-26 2021-11-09 International Business Machines Corporation Integrated collision avoidance and road safety management system
CN111341102B (en) * 2020-03-02 2021-04-23 北京理工大学 Motion primitive library construction method and device and motion primitive connection method and device
US11263896B2 (en) 2020-04-06 2022-03-01 B&H Licensing Inc. Method and system for detecting jaywalking of vulnerable road users
EP3893150A1 (en) 2020-04-09 2021-10-13 Tusimple, Inc. Camera pose estimation techniques
CN111775948B (en) * 2020-06-09 2022-07-19 浙江吉利汽车研究院有限公司 Driving behavior analysis method and device
AU2021203567A1 (en) 2020-06-18 2022-01-20 Tusimple, Inc. Angle and orientation measurements for vehicles with multiple drivable sections
CN111803065B (en) * 2020-06-23 2023-12-26 北方工业大学 Dangerous traffic scene identification method and system based on electroencephalogram data
US11481607B2 (en) 2020-07-01 2022-10-25 International Business Machines Corporation Forecasting multivariate time series data
JP7272338B2 (en) * 2020-09-24 2023-05-12 トヨタ自動車株式会社 Autonomous driving system
CN112336349B (en) * 2020-10-12 2024-05-14 易显智能科技有限责任公司 Method and related device for identifying psychological state of driver
CN112450950B (en) * 2020-12-10 2021-10-22 南京航空航天大学 Brain-computer aided analysis method and system for aviation accident
US20220297726A1 (en) * 2021-03-17 2022-09-22 Pony Ai Inc. Computerized detection of unsafe driving scenarios
US11820387B2 (en) * 2021-05-10 2023-11-21 Qualcomm Incorporated Detecting driving behavior of vehicles
CN115063766B (en) * 2022-06-17 2024-05-24 公安部交通管理科学研究所 Automatic driving automobile operation safety assessment and early warning method
CN116746931B (en) * 2023-06-15 2024-03-19 中南大学 Incremental driver bad state detection method based on brain electricity

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5585785A (en) * 1995-03-03 1996-12-17 Gwin; Ronnie Driver alarm
US5786765A (en) * 1996-04-12 1998-07-28 Mitsubishi Jidosha Kogyo Kabushiki Kaisha Apparatus for estimating the drowsiness level of a vehicle driver
US6130617A (en) * 1999-06-09 2000-10-10 Hyundai Motor Company Driver's eye detection method of drowsy driving warning system
US6599243B2 (en) * 2001-11-21 2003-07-29 Daimlerchrysler Ag Personalized driver stress prediction using geographical databases

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3269153B2 (en) * 1993-01-06 2002-03-25 三菱自動車工業株式会社 Arousal level determination device
US7751960B2 (en) * 2006-04-13 2010-07-06 Gm Global Technology Operations, Inc. Driver workload-based vehicle stability enhancement control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIANG JI ET AL.: "Real Time Nonintrusive Monitoring and Prediction of Driver Fatigue", IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, vol. 53, no. 4, July 2004 (2004-07-01), pages 1052-1068, XP011115287, DOI: 10.1109/TVT.2004.830974 *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2182501A1 (en) * 2008-10-30 2010-05-05 Aisin Aw Co., Ltd. Safe driving evaluation system and safe driving evaluation program
US8258982B2 (en) 2008-10-30 2012-09-04 Aisin Aw Co., Ltd. Safe driving evaluation system and safe driving evaluation program
US8260491B2 (en) 2009-07-31 2012-09-04 Systems and Advances Technologoes Engineering S.r.l. Road vehicle driver behaviour analysis method
ITBO20090514A1 (en) * 2009-07-31 2011-02-01 T E Systems And Advanced Tec Hnologies Engi Sa METHOD OF ANALYSIS OF THE CONDUCT OF THE DRIVER OF A ROAD VEHICLE
CN101987017A (en) * 2010-11-18 2011-03-23 上海交通大学 Electroencephalo-graph (EEG) signal identification and detection method for measuring alertness of driver
CN102568200B (en) * 2011-12-21 2015-04-22 辽宁师范大学 Method for judging vehicle driving states in real time
CN102568200A (en) * 2011-12-21 2012-07-11 辽宁师范大学 Method for judging vehicle driving states in real time
EP2615598A1 (en) 2012-01-11 2013-07-17 Honda Research Institute Europe GmbH Vehicle with computing means for monitoring and predicting traffic participant objects
US9104965B2 (en) 2012-01-11 2015-08-11 Honda Research Institute Europe Gmbh Vehicle with computing means for monitoring and predicting traffic participant objects
CN103544850A (en) * 2013-09-13 2014-01-29 中国科学技术大学苏州研究院 Collision prediction method based on vehicle distance probability distribution for internet of vehicles
CN103544850B (en) * 2013-09-13 2016-01-20 中国科学技术大学苏州研究院 Based on the collision predicting method of vehicle headway probability distribution in car networking
US11302209B2 (en) 2014-01-28 2022-04-12 Volvo Truck Corporation Vehicle driver feedback system and corresponding method
EP3444160A1 (en) * 2014-01-28 2019-02-20 Volvo Truck Corporation A vehicle driver feedback system and corresponding method
RU2670579C2 (en) * 2014-02-25 2018-10-23 ФОРД ГЛОУБАЛ ТЕКНОЛОДЖИЗ, ЭлЭлСи Method for on-board diagnostics of the vehicle (options) and the method for on-borne diagnostics of the vehicle with the hybrid drive
US9824505B2 (en) 2014-02-25 2017-11-21 Ford Global Technologies, Llc Method for triggering a vehicle system monitor
US11343316B2 (en) 2014-07-23 2022-05-24 Here Global B.V. Highly assisted driving platform
US9628565B2 (en) 2014-07-23 2017-04-18 Here Global B.V. Highly assisted driving platform
US10334049B2 (en) 2014-07-23 2019-06-25 Here Global B.V. Highly assisted driving platform
US9766625B2 (en) 2014-07-25 2017-09-19 Here Global B.V. Personalized driving of autonomously driven vehicles
US9754501B2 (en) 2014-07-28 2017-09-05 Here Global B.V. Personalized driving ranking and alerting
US9189897B1 (en) 2014-07-28 2015-11-17 Here Global B.V. Personalized driving ranking and alerting
EP3006297A1 (en) * 2014-10-09 2016-04-13 Hitachi, Ltd. Driving characteristics diagnosis device, driving characteristics diagnosis system, driving characteristics diagnosis method, information output device, and information output method
US9707971B2 (en) 2014-10-09 2017-07-18 Hitachi, Ltd. Driving characteristics diagnosis device, driving characteristics diagnosis system, driving characteristics diagnosis method, information output device, and information output method
US10679143B2 (en) 2016-07-01 2020-06-09 International Business Machines Corporation Multi-layer information fusing for prediction
WO2018014953A1 (en) 2016-07-20 2018-01-25 Toyota Motor Europe Control device, system and method for determining a comfort level of a driver
US11173919B2 (en) 2016-07-20 2021-11-16 Toyota Motor Europe Control device, system and method for determining a comfort level of a driver
WO2018211301A1 (en) 2017-05-15 2018-11-22 Toyota Motor Europe Control device, system, and method for determining a comfort level of a driver
CN108009587A (en) * 2017-12-01 2018-05-08 驭势科技(北京)有限公司 A kind of method and apparatus based on intensified learning and the definite driving strategy of rule
CN108009587B (en) * 2017-12-01 2021-04-16 驭势科技(北京)有限公司 Method and equipment for determining driving strategy based on reinforcement learning and rules
US11401847B2 (en) 2019-09-09 2022-08-02 Ford Global Technologies, Llc Methods and systems for an exhaust tuning valve
EP3895949A1 (en) 2020-04-17 2021-10-20 Toyota Jidosha Kabushiki Kaisha Method and device for evaluating user discomfort
CN112382068A (en) * 2020-11-02 2021-02-19 陈松山 Station waiting line crossing detection system based on BIM and DNN
CN112382068B (en) * 2020-11-02 2022-09-16 鲁班软件股份有限公司 Station waiting line crossing detection system based on BIM and DNN
CN115376316A (en) * 2022-08-19 2022-11-22 北京航空航天大学 Hidden Markov chain-based dangerous following behavior identification method
CN115376316B (en) * 2022-08-19 2023-05-30 北京航空航天大学 Dangerous following behavior identification method based on hidden Markov chain

Also Published As

Publication number Publication date
US7839292B2 (en) 2010-11-23
US20090040054A1 (en) 2009-02-12

Similar Documents

Publication Publication Date Title
US7839292B2 (en) Real-time driving danger level prediction
Braunagel et al. Ready for take-over? A new driver assistance system for an automated classification of driver take-over readiness
Wang et al. Real-time driving danger-level prediction
Craye et al. A multi-modal driver fatigue and distraction assessment system
US20220095975A1 (en) Detection of cognitive state of a driver
Doshi et al. On-road prediction of driver's intent with multimodal sensory cues
Craye et al. Driver distraction detection and recognition using RGB-D sensor
EP2201496B1 (en) Inattentive state determination device and method of determining inattentive state
KR101276770B1 (en) Advanced driver assistance system for safety driving using driver adaptive irregular behavior detection
TW202036465A (zh) Method, device and electronic equipment for monitoring driver's attention
JP5161643B2 (en) Safe driving support system
WO2008133746A1 (en) Systems and methods for detecting unsafe conditions
Doshi et al. A comparative exploration of eye gaze and head motion cues for lane change intent prediction
Wu et al. Reasoning-based framework for driving safety monitoring using driving event recognition
Celona et al. A multi-task CNN framework for driver face monitoring
Lethaus et al. Using pattern recognition to predict driver intent
Tran et al. A driver assistance framework based on driver drowsiness detection
Damousis et al. Fuzzy fusion of eyelid activity indicators for hypovigilance-related accident prediction
Flores-Monroy et al. Visual-based real time driver drowsiness detection system using CNN
Vasudevan et al. Driver drowsiness monitoring by learning vehicle telemetry data
KR20160067681A (en) Method for analyzing driving concentration level of driver
Gong et al. Face detection and status analysis algorithms in day and night enivironments
Zhang et al. Detecting driver distractions using a deep learning approach and multi-source naturalistic driving data
Fasanmade et al. Context-Aware Quantitative Risk Assessment Machine Learning Model for Drivers Distraction
CN117227740B (en) Multi-mode sensing system and method for intelligent driving vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07869194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07869194

Country of ref document: EP

Kind code of ref document: A1