CN113386775A - Driver intention identification method considering human-vehicle-road characteristics


Info

Publication number
CN113386775A
CN113386775A (application CN202110665334.0A)
Authority
CN
China
Prior art keywords
driving
vehicle
data
driver
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110665334.0A
Other languages
Chinese (zh)
Other versions
CN113386775B (en)
Inventor
陈慧勤
陈海龙
刘昊
陈勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202110665334.0A
Publication of CN113386775A
Application granted
Publication of CN113386775B
Legal status: Active (granted)


Classifications

    All classifications fall under B60W (conjoint control of vehicle sub-units of different type or different function; control systems specially adapted for hybrid vehicles; road vehicle drive control systems for purposes not related to the control of a particular sub-unit):
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers, e.g. by using mathematical models
    • B60W40/09: Driving style or behaviour
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0029: Mathematical model of the driver
    • B60W2050/0031: Mathematical model of the vehicle
    • B60W2050/0033: Single-track, 2D vehicle model, i.e. two-wheel bicycle model
    • B60W2520/06: Direction of travel
    • B60W2520/10: Longitudinal speed
    • B60W2540/18: Steering angle
    • B60W2554/4042: Longitudinal speed (characteristics of dynamic objects)
    • B60W2554/80: Spatial relation or speed relative to objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a driver intention identification method considering human-vehicle-road characteristics, comprising the following steps. Step 1: acquire, from a driving simulator, recorded data on the ego vehicle and surrounding vehicles, the driver's behavior and actions, and scene information outside the cab. Step 2: preprocess the vehicle and surrounding-environment data acquired from the driving simulator and input them into a trained GrowNet network to obtain probability values P_i (P_1, P_2, …, P_5) for five categories. Step 3: store and process the video data collected by the two cameras separately to finally obtain probability values P'_i (P'_1, P'_2, …, P'_5) for the five categories. Step 4: take the weighted sum of the P_i and P'_i obtained in steps 2 and 3; the category corresponding to the maximum of the five weighted sums is the finally recognized driving intention. The invention makes full use of a driving simulator, can collect data without depending on on-board sensors, and makes experiments more convenient. In addition, training can be performed offline and testing online, which improves applicability.

Description

Driver intention identification method considering human-vehicle-road characteristics
Technical Field
The invention relates to the field of human-machine cooperative driving, and in particular to a driver intention identification method considering human-vehicle-road characteristics.
Background
Driving safety receives ever more attention as transportation advances. Research shows that most traffic accidents are caused by improper driver operation, and driving safety has long been a focus of regulators and automobile manufacturers. The passive safety systems developed in the past can no longer meet current requirements, while the wide use of advanced driver assistance systems safeguards travel. If the driver's intention can be recognized in advance, an advanced driver assistance system can act more intelligently, better warning the driver of potential danger during driving and further enhancing active vehicle safety.
Although the field of unmanned driving is developing rapidly, many core technologies remain to be broken through before commercial application, so drivers will need to undertake part of the driving task for a long time to come. Under human-vehicle cooperative driving, the driver must be brought into a human-vehicle-road closed loop, yet traditional machine learning methods have difficulty finding suitable and effective features when characterizing driver behavior. Most published driver intention recognition methods use SVMs or hidden Markov models and their variants. For example, the methods disclosed in patents CN 103318181B and CN 104494600B consider only the driver's operation of the vehicle and the vehicle state information, and can hardly recognize driving intention early; the method disclosed in patent CN 106971194B considers only the driver's head posture and does not fully account for the complexity of the surrounding environment; the method disclosed in patent CN 111717217 A uses a series of hand-crafted features that are difficult to correlate with driving intention and may lose important features. A common shortcoming of the above four patents is the difficulty of recognizing intention over a time series. A more reasonable driver intention identification method is therefore needed to provide a safety basis for human-vehicle cooperative driving.
Disclosure of Invention
To address the low reliability, limited applicability, and similar shortcomings of existing driver intention identification methods, the invention provides a driver intention identification method considering human-vehicle-road characteristics, intended to offer a solution for human-machine co-driving research and to improve driving safety.
To this end, the invention adopts the following technical scheme. A driver intention recognition method considering human-vehicle-road characteristics comprises the following steps:
Step 1: acquire, from a driving simulator, recorded data on the ego vehicle and surrounding vehicles, the driver's behavior and actions, and scene information outside the cab;
Step 2: preprocess the vehicle and surrounding-environment data acquired from the driving simulator and input them into a trained GrowNet network to obtain probability values P_i (P_1, P_2, …, P_5) for five categories, namely turning left, turning right, changing lane left, changing lane right, and keeping straight;
Step 3: store and process the video data collected by the two cameras separately; input the driver action video data obtained by camera 1 into the fast channel of the improved two-stream model and the driving scene data obtained by camera 2 into the slow channel; finally obtain probability values P'_i (P'_1, P'_2, …, P'_5) for the five categories through the fully connected layer and the softmax layer;
Step 4: compute the weighted sum ω_1 P_i + ω_2 P'_i of the P_i and P'_i obtained in steps 2 and 3, with weights ω_1 and ω_2; the category corresponding to the maximum weighted sum over the five categories is the finally identified driving intention.
Further, step 1 is realized by the following steps:
Step 1.1: arrange two cameras and display screens, with camera 1 directly in front of the driver and camera 2 facing the driver's screen; at least three display screens are needed, and they must present the view of the vehicle's rear-view mirrors;
Step 1.2: after the hardware is arranged, use the software accompanying the driving simulator to construct driving scenes according to the intention categories to be recognized;
Step 1.3: arrange for no fewer than 5 licensed participants to each complete the full driving task on the driving simulator, operating normally in accordance with traffic rules;
Step 1.4: the vehicle data recorded by the driving simulator include the vehicle speed, the steering wheel angle, the heading angle relative to the lane line, and the index of the lane containing the vehicle's center of mass.
Further, step 2 is realized by the following steps:
Step 2.1: preprocessing consists of intercepting the data from the 5 seconds before the vehicle completes the relevant driving behavior and labeling them, the relevant driving behaviors being the five driving intentions to be identified; the processed data are divided into a training set and a test set, and a Batch Normalization (BN) layer is used to normalize the data before they enter the model;
Step 2.2: use a perceptron network with two hidden layers as the weak learner and train the network with a quasi-Newton iteration method, the loss function being cross-entropy, to obtain a confidence probability value S_k(x_i) for each intention category, where x_i is an input sample; the classifier output passes through a softmax layer, and the weak classifiers are weighted and summed to obtain the final result p_i = Σ_k w_out S_k(x_i), where the w_out are weight parameters generated during training whose values are continually adjusted according to the updated learning rate;
Step 2.3: pre-train the GrowNet with the training set data from step 2.1 and save the model parameters.
Further, step 3 is realized by the following steps:
Step 3.1: processing consists of intercepting the video clip from the 5 seconds before the vehicle completes the driving behavior and labeling it; the data loaded into the model are frame-extracted pictures or video data with an image resolution of 224 × 224;
Step 3.2: use the improved two-stream model to fuse the extracted driver behavior features and driving scene change features;
Step 3.3: initialize the weight parameters with a model pre-trained on the Kinetics-400 action dataset, then train on this basis with the Brain4Cars dataset to accelerate model convergence;
Step 3.4: input the data processed in step 3.1 into the fine-tuned two-stream pre-trained model to obtain the probability value P'_i of each category.
Further, step 4 is realized by the following steps:
Step 4.1: determine the values of the weights ω_1 and ω_2, as shown in Fig. 6; initialize ω_1 to 0, and after the e-th training run set ω_1 = a·e, where a = 0.1, e = 1, 2, …, 10, and ω_2 = 1 - ω_1; over the 10 training runs, retain the weight values ω_1, ω_2 corresponding to the maximum classification accuracy (maxACC);
Step 4.2: with the weight parameters determined in step 4.1, calculate ω_1 P_i + ω_2 P'_i; the driving intention is the output category corresponding to max(ω_1 P_i + ω_2 P'_i).
The beneficial effects of the invention are as follows:
(1) The invention integrates two technical routes: one emphasizes the state of the ego vehicle and its interaction with surrounding vehicles, and the other emphasizes the driver's behavior and action information, such as operating behavior and observation of the environment, thereby enhancing the reliability and accuracy of driving intention recognition.
(2) The invention makes full use of a driving simulator, can collect data without depending on on-board sensors, and makes experiments more convenient. In addition, training can be performed offline and testing online, which improves applicability.
(3) The improved two-stream model proposed by the invention makes full use of the information contained in the driving scene, such as lane changes of surrounding vehicles, and fuses it with the driving behavior, improving recognition accuracy. The end-to-end training mode avoids the low recognition accuracy caused by incomplete or ineffective hand-crafted features, and the Brain4Cars dataset used for model fine-tuning consists of video clips collected during real driving, which compensates for the limitations of data collected only with a driving simulator and enhances applicability.
(4) Finally, the weighted sum of the class probabilities output by the two network models is computed, achieving driving intention recognition that considers human, vehicle, and road characteristics simultaneously.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of an improved GrowNet network structure;
FIG. 3 is a schematic diagram of the process for training the GrowNet network parameters;
FIG. 4 is a schematic structural diagram of the improved two-stream model;
FIG. 5 is a schematic diagram of the two-stream principle;
FIG. 6 is a flowchart of the weight calculation.
Detailed description of the preferred embodiments
The technical scheme of the invention is explained in detail below with reference to Figs. 1-6.
As shown in Fig. 1, this embodiment provides a driver intention recognition method considering human-vehicle-road characteristics, comprising the following steps.
Step 1: obtain, from the driving simulator, the recorded data on the ego vehicle and surrounding vehicles, the driver's behavior and actions, and the scene information outside the cab.
Step 1 is realized as follows.
Step 1.1: arrange two cameras and display screens, with camera 1 directly in front of the driver and camera 2 facing the driver's screen; at least three display screens are needed, and they must present the view of the vehicle's rear-view mirrors.
Step 1.2: after the hardware is arranged, use the software accompanying the driving simulator to construct driving scenes according to the intention categories to be recognized.
The invention identifies 5 driving intentions: turning left, turning right, changing lane left, changing lane right, and keeping straight. The driving scene constructed in the driving simulator is as follows: the vehicle travels on a three-lane road under good weather and lighting conditions; other vehicles merge in intermittently around the ego vehicle; the total road length is 30 km, with eight curves in different directions arranged at random.
Step 1.3: arrange for no fewer than 5 licensed participants to each complete the full driving task on the driving simulator, operating normally in accordance with traffic rules.
To familiarize the participants with the driving simulator, some driving tasks can be completed in advance as a pre-experiment.
Step 1.4: the vehicle data recorded by the driving simulator include the vehicle speed, the steering wheel angle, the heading angle relative to the lane line, and the index of the lane containing the vehicle's center of mass. The environment data output by the simulator include the relative distances between the ego vehicle and the surrounding vehicles (the vehicles ahead, behind, to the left, and to the right require attention), the time to collision, and the speeds of the surrounding vehicles. Camera 1 is mounted directly in front of the driver to record the driver's actions completely, and camera 2 faces the driver's screen to record the scene ahead.
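By way of illustration only, one per-time-step record logged from the simulator could be organized as follows (a minimal Python sketch; the field names, units, and types are assumptions of this example, not requirements of the invention):

    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class SimulatorRecord:
        """One time step of driving-simulator output (illustrative field names)."""
        t: float                          # simulation time, s
        speed: float                      # ego vehicle speed, m/s
        steering_angle: float             # steering wheel angle, deg
        heading_angle: float              # heading angle relative to the lane line, deg
        lane_index: int                   # lane containing the ego vehicle's center of mass
        rel_distance: Dict[str, float]    # distance to 'front'/'rear'/'left'/'right' vehicles, m
        ttc: Dict[str, float]             # time to collision with each neighboring vehicle, s
        neighbor_speed: Dict[str, float]  # speeds of the surrounding vehicles, m/s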
Step 2: preprocess the vehicle and surrounding-environment data acquired from the driving simulator and input them into the trained GrowNet network to obtain the probability values P_i (P_1, P_2, …, P_5) of each category.
Step 2 is specifically realized as follows.
Step 2.1: preprocessing consists of intercepting the data from the 5 seconds before the vehicle completes the relevant driving behavior and labeling them.
The relevant driving behaviors are the five driving intentions to be recognized mentioned in step 1. The completion of a lane change is marked by a change in the index of the lane containing the vehicle's center of mass, and the completion of a turn is marked by the moment the heading angle begins to decrease; for keeping straight, the intercepted data are the 5 seconds after a lane change or turn is completed. The processed data are divided into a training set and a test set, and a Batch Normalization (BN) layer is used to normalize the data before they enter the model.
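By way of illustration, the interception and labeling of step 2.1 can be sketched as follows (Python/NumPy assumed; the 50 Hz logging rate and the helper extract_window are assumptions of this example, not part of the invention):

    import numpy as np

    FS = 50        # assumed simulator logging rate, Hz (not specified by the invention)
    WINDOW_S = 5   # the invention intercepts the 5 s preceding behavior completion
    LABELS = {"turn_left": 0, "turn_right": 1, "change_left": 2,
              "change_right": 3, "keep_straight": 4}

    def extract_window(series: np.ndarray, end_idx: int) -> np.ndarray:
        """Cut the 5 s of multichannel data ending at the completion instant.

        series  -- (T, F) array of logged features
        end_idx -- sample at which the maneuver is judged complete: the lane
                   index changes (lane change) or the heading angle begins to
                   decrease (turn); for keep_straight, the 5 s after a completed
                   maneuver are used instead.
        """
        start = end_idx - FS * WINDOW_S
        if start < 0:
            raise ValueError("not enough history before the maneuver")
        return series[start:end_idx]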
Step 2.2: use a perceptron network with two hidden layers as the weak learner, with the learning rate updated by the Adam optimization algorithm. Because the amount of data obtained from the driving simulator is small, the network is trained with a quasi-Newton iteration method, the loss function being cross-entropy, to obtain a confidence probability value S_k(x_j) for each intention category, where x_j is an input sample. The classifier output passes through a softmax layer, and the weak classifiers are weighted and summed to obtain the final result p_i = Σ_k w_out S_k(x_j), where the w_out are weight parameters generated during training whose values are continually adjusted according to the updated learning rate. The structure is shown schematically in Fig. 2.
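By way of illustration only, the weak learner and the weighted readout of step 2.2 can be sketched as follows (PyTorch assumed; the hidden width, the number of weak learners, and the names WeakLearner and GrowNetEnsemble are inventions of this example, and the GrowNet detail of feeding each learner the penultimate features of its predecessor is omitted for brevity):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WeakLearner(nn.Module):
        """Perceptron with two hidden layers producing scores S_k(x)."""
        def __init__(self, in_dim: int, hidden: int = 32, n_classes: int = 5):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):
            return self.head(self.body(x))

    class GrowNetEnsemble(nn.Module):
        """Weighted sum over weak learners: P_i = softmax(sum_k w_out_k * S_k(x))."""
        def __init__(self, in_dim: int, n_learners: int = 5):
            super().__init__()
            self.bn = nn.BatchNorm1d(in_dim)                    # BN layer at the input
            self.learners = nn.ModuleList(
                WeakLearner(in_dim) for _ in range(n_learners))
            self.w_out = nn.Parameter(torch.ones(n_learners))   # adjusted during training

        def forward(self, x):
            # Returns the weighted sum of weak-learner scores (logits).
            x = self.bn(x)
            return sum(w * l(x) for w, l in zip(self.w_out, self.learners))

        def predict_proba(self, x):
            # Class probabilities P_i after the softmax layer.
            return F.softmax(self.forward(x), dim=1)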
Step 2.3: pre-train the GrowNet with the training set data from step 2.1 (the test set split off there is used to determine the parameters in step 4) and save the model parameters; a schematic is shown in Fig. 3.
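For the quasi-Newton training mentioned in steps 2.2 and 2.3, L-BFGS is one available quasi-Newton implementation; the sketch below pairs it with the cross-entropy loss and saves the pre-trained parameters of a model such as the GrowNetEnsemble above (the optimizer choice, step count, and file name are assumptions of this example):

    import torch
    import torch.nn.functional as F

    def pretrain_grownet(model, x_train, y_train, steps: int = 50):
        """Pre-train on the simulator training set; save parameters for step 4."""
        opt = torch.optim.LBFGS(model.parameters(), lr=0.1)

        def closure():
            opt.zero_grad()
            loss = F.cross_entropy(model(x_train), y_train)  # cross-entropy loss
            loss.backward()
            return loss

        for _ in range(steps):
            opt.step(closure)
        torch.save(model.state_dict(), "grownet_pretrained.pt")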
Step 3: store and process the video data collected by the two cameras separately. Input the driver action video data obtained by camera 1 into the fast channel of the improved two-stream model and the driving scene data obtained by camera 2 into the slow channel; finally, obtain the probability values P'_i (P'_1, P'_2, …, P'_5) of each category through the fully connected (FC) layer and the softmax layer.
Step 3.1: processing consists of intercepting the video clip from the 5 seconds before the vehicle completes the driving behavior and labeling it. The data loaded into the model can be the frame-extracted pictures, or the video data can be loaded directly; the image resolution is 224 × 224.
This step must remain consistent in time with the data intercepted in step 2.
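By way of illustration, the clip interception and frame extraction of step 3.1 can be sketched as follows (OpenCV assumed; the 30 fps recording rate and the 32 extracted frames are assumptions of this example):

    import cv2
    import numpy as np

    def clip_to_frames(video_path: str, end_s: float, fps: int = 30,
                       window_s: int = 5, n_frames: int = 32, size: int = 224):
        """Cut the 5 s before behavior completion and sample 224x224 frames.

        end_s -- completion time of the maneuver inside the recording, chosen
                 to coincide with the completion instant used in step 2.
        """
        cap = cv2.VideoCapture(video_path)
        first = int((end_s - window_s) * fps)
        idxs = np.linspace(first, int(end_s * fps) - 1, n_frames).astype(int)
        frames = []
        for i in idxs:
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(cv2.resize(frame, (size, size)))
        cap.release()
        return np.stack(frames)        # (n_frames, 224, 224, 3)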
Step 3.2: use the improved two-stream model to fuse the extracted driver behavior features and the extracted driving scene change features.
The model draws on the network architecture proposed in SlowFast. The improvement is that the fast channel (a convolutional network with a high frame rate, whose frame rate is α = 8 times that of the slow channel and whose channel ratio is β = 1/8 of it) processes the temporal features of the driver's actions inside the cab, while the slow channel (a convolutional network with a low frame rate, T being the number of sampled frames) processes the semantic features of the external scene, so that the two channels handle the data streams inside and outside the cab separately rather than the single information source processed by the original network. The improved model fuses the external scene features with the driving behavior features by three-dimensional convolution (T-conv), in which the input passes in turn through a three-dimensional convolution with 5 output channels, a BatchNorm3d normalization, and a ReLU activation. The feature layers output by the two channels are concatenated and fed into a fully connected layer and a softmax layer, making full use of the scene change information during driving. The structure of the improved two-stream model is shown schematically in Fig. 4; the backbone used by both channels is based on ResNet-50, with internal modules shown in Fig. 5, where the 3Dconv layers use a ResNet3D structure and the 2Dconv layers a ResNet2D structure.
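By way of illustration, the T-conv lateral fusion can be sketched as follows (PyTorch assumed; the Conv3d, BatchNorm3d, and ReLU sequence with 5 output channels follows the description above, while the kernel size, the stride, the fast-to-slow fusion direction common in SlowFast-style networks, and the classifier head are assumptions of this sketch):

    import torch
    import torch.nn as nn

    class TConvFusion(nn.Module):
        """Fuse features from the high-frame-rate channel into the other channel."""
        def __init__(self, fast_channels: int, out_channels: int = 5, alpha: int = 8):
            super().__init__()
            self.fuse = nn.Sequential(
                # temporal stride alpha aligns the fast channel's higher frame rate
                nn.Conv3d(fast_channels, out_channels, kernel_size=(5, 1, 1),
                          stride=(alpha, 1, 1), padding=(2, 0, 0)),
                nn.BatchNorm3d(out_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, fast_feat, slow_feat):
            # fast_feat: (B, C_f, alpha*T, H, W); slow_feat: (B, C_s, T, H, W)
            return torch.cat([slow_feat, self.fuse(fast_feat)], dim=1)

    class TwoStreamHead(nn.Module):
        """Concatenate the pooled features of both channels, then FC + softmax."""
        def __init__(self, feat_dim: int, n_classes: int = 5):
            super().__init__()
            self.fc = nn.Linear(feat_dim, n_classes)

        def forward(self, slow_pooled, fast_pooled):
            logits = self.fc(torch.cat([slow_pooled, fast_pooled], dim=1))
            return torch.softmax(logits, dim=1)    # probabilities P'_i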
Step 3.3: initialize the weight parameters with a model pre-trained on the Kinetics-400 action dataset, and then train on this basis with the Brain4Cars dataset to accelerate model convergence.
When the Kinetics-400 pre-trained model is used, the final fully connected layer of the classifier must be changed to 5 classes. After the weights are initialized, the Brain4Cars dataset is used to retrain the network, the output classes being the 5 categories discussed in step 1. When processing the data collected from the driving simulator, the model accuracy may drop; to avoid this, the model can be further trained, on the fine-tuned basis, with the data from step 3.1.
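By way of illustration, this transfer-learning step can be sketched as follows, assuming the SlowFast model distributed through PyTorchVideo's torch.hub interface (the hub identifier and the .proj attribute belong to that package, which the invention does not mandate; the attribute path may differ across versions):

    import torch
    import torch.nn as nn

    # Load a SlowFast network pre-trained on Kinetics-400 to initialize the weights.
    model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50",
                           pretrained=True)

    # Replace the 400-way Kinetics classifier head with the 5 intention classes.
    head = model.blocks[-1]
    head.proj = nn.Linear(head.proj.in_features, 5)

    # The network is then retrained on Brain4Cars and, to keep accuracy on
    # simulator footage, further fine-tuned with the clips from step 3.1.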
Step 3.4: input the data processed in step 3.1 into the fine-tuned two-stream pre-trained model to obtain the probability value P'_i of each category.
Step 4: compute the weighted sum ω_1 P_i + ω_2 P'_i of the P_i and P'_i obtained in steps 2 and 3, with weights ω_1 and ω_2; the category corresponding to the maximum weighted sum over the categories is the finally identified driving intention.
Step 4.1: determine the values of the weights ω_1 and ω_2; the process is shown in Fig. 6. Initialize ω_1 to 0; after the e-th training run, ω_1 = a·e, where a = 0.1, e = 1, 2, …, 10, and ω_2 = 1 - ω_1. Over the 10 training runs, retain the weight values ω_1, ω_2 corresponding to the maximum classification accuracy (maxACC).
Step 4.2: with the weight parameters determined in step 4.1, calculate ω_1 P_i + ω_2 P'_i; the driving intention is the output category corresponding to max(ω_1 P_i + ω_2 P'_i).
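By way of illustration, the weight search of step 4.1 and the decision of step 4.2 reduce to the following sketch (NumPy assumed; p_grownet and p_twostream denote the (N, 5) probability arrays produced by the two models on the held-out test split):

    import numpy as np

    def choose_weights(p_grownet, p_twostream, y_true):
        """Try omega_1 = 0.1*e for e = 1..10; keep the pair with maximum accuracy (maxACC)."""
        best_acc, best_w1, best_w2 = 0.0, 0.0, 1.0
        for e in range(1, 11):
            w1, w2 = 0.1 * e, 1.0 - 0.1 * e
            fused = w1 * p_grownet + w2 * p_twostream
            acc = float(np.mean(fused.argmax(axis=1) == y_true))
            if acc > best_acc:
                best_acc, best_w1, best_w2 = acc, w1, w2
        return best_w1, best_w2

    def recognize_intention(p, p_prime, w1, w2) -> int:
        """Step 4.2: the intention is argmax_i (w1 * P_i + w2 * P'_i)."""
        return int(np.argmax(w1 * np.asarray(p) + w2 * np.asarray(p_prime)))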
Although the present application has been disclosed in detail with reference to the accompanying drawings, it should be understood that such description is merely illustrative and does not limit the present application. The scope of the present application is defined by the appended claims and may include various modifications, adaptations, and equivalents of the invention made without departing from its scope and spirit.

Claims (5)

1. A driver intention recognition method considering human-vehicle-road characteristics, characterized by comprising the following steps:
Step 1: acquire, from a driving simulator, recorded data on the ego vehicle and surrounding vehicles, the driver's behavior and actions, and scene information outside the cab;
Step 2: preprocess the vehicle and surrounding-environment data acquired from the driving simulator and input them into a trained GrowNet network to obtain probability values P_i (P_1, P_2, …, P_5) for five categories, the five categories being turning left, turning right, changing lane left, changing lane right, and keeping straight;
Step 3: store and process the video data collected by the two cameras separately, input the driver action video data obtained by camera 1 into the fast channel of the improved two-stream model and the driving scene data obtained by camera 2 into the slow channel, and finally obtain probability values P'_i (P'_1, P'_2, …, P'_5) for the five categories through the fully connected layer and the softmax layer;
Step 4: compute the weighted sum ω_1 P_i + ω_2 P'_i of the P_i and P'_i obtained in steps 2 and 3, with weights ω_1 and ω_2; the category corresponding to the maximum weighted sum over the five categories is the finally identified driving intention.
2. The driver intention recognition method considering human-vehicle-road characteristics according to claim 1, characterized in that step 1 is realized by the following steps:
Step 1.1: arrange two cameras and display screens, with camera 1 directly in front of the driver and camera 2 facing the driver's screen; at least three display screens are needed, and they must present the view of the vehicle's rear-view mirrors;
Step 1.2: after the hardware is arranged, use the software accompanying the driving simulator to construct driving scenes according to the intention categories to be recognized;
Step 1.3: arrange for no fewer than 5 licensed participants to each complete the full driving task on the driving simulator, operating normally in accordance with traffic rules;
Step 1.4: the vehicle data recorded by the driving simulator include the vehicle speed, the steering wheel angle, the heading angle relative to the lane line, and the index of the lane containing the vehicle's center of mass.
3. The driver intention recognition method considering human-vehicle-road characteristics according to claim 1, characterized in that step 2 is realized by the following steps:
Step 2.1: preprocessing consists of intercepting the data from the 5 seconds before the vehicle completes the relevant driving behavior and labeling them, the relevant driving behaviors being the five driving intentions to be identified; the processed data are divided into a training set and a test set, and a Batch Normalization layer is used to normalize the data before they enter the model;
Step 2.2: use a perceptron network with two hidden layers as the weak learner and train the network with a quasi-Newton iteration method, the loss function being cross-entropy, to obtain a confidence probability value S_k(x_i) for each intention category, where x_i is an input sample; the classifier output passes through a softmax layer, and the weak classifiers are weighted and summed to obtain the final result p_i = Σ_k w_out S_k(x_i), where the w_out are weight parameters generated during training whose values are continually adjusted according to the updated learning rate;
Step 2.3: pre-train the GrowNet with the training set data from step 2.1 and save the model parameters.
4. The driver intention recognition method considering human-vehicle-road characteristics according to claim 1, characterized in that step 3 is realized by the following steps:
Step 3.1: processing consists of intercepting the video clip from the 5 seconds before the vehicle completes the driving behavior and labeling it; the data loaded into the model are frame-extracted pictures or video data with an image resolution of 224 × 224;
Step 3.2: use the improved two-stream model to fuse the extracted driver behavior features and driving scene change features;
Step 3.3: initialize the weight parameters with a model pre-trained on the Kinetics-400 action dataset, then train on this basis with the Brain4Cars dataset to accelerate model convergence;
Step 3.4: input the data processed in step 3.1 into the fine-tuned two-stream pre-trained model to obtain the probability value P'_i of each category.
5. The driver intention recognition method considering human-vehicle-road characteristics according to claim 1, characterized in that step 4 is realized by the following steps:
Step 4.1: determine the values of the weights ω_1 and ω_2; initialize ω_1 to 0, and after the e-th training run set ω_1 = a·e, where a = 0.1, e = 1, 2, …, 10, and ω_2 = 1 - ω_1; over the 10 training runs, retain the weight values ω_1, ω_2 corresponding to the maximum classification accuracy;
Step 4.2: with the weight parameters determined in step 4.1, calculate ω_1 P_i + ω_2 P'_i; the driving intention is the output category corresponding to max(ω_1 P_i + ω_2 P'_i).
CN202110665334.0A 2021-06-16 2021-06-16 Driver intention identification method considering human-vehicle-road characteristics Active CN113386775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110665334.0A CN113386775B (en) 2021-06-16 2021-06-16 Driver intention identification method considering human-vehicle-road characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110665334.0A CN113386775B (en) 2021-06-16 2021-06-16 Driver intention identification method considering human-vehicle-road characteristics

Publications (2)

Publication Number Publication Date
CN113386775A true CN113386775A (en) 2021-09-14
CN113386775B CN113386775B (en) 2022-06-17

Family

ID=77621341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110665334.0A Active CN113386775B (en) 2021-06-16 2021-06-16 Driver intention identification method considering human-vehicle-road characteristics

Country Status (1)

Country Link
CN (1) CN113386775B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299473A (en) * 2021-12-24 2022-04-08 杭州电子科技大学 Driver behavior identification method based on multi-source information fusion
CN117485348A (en) * 2023-11-30 2024-02-02 长春汽车检测中心有限责任公司 Driver intention recognition method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971194A (en) * 2017-02-16 2017-07-21 江苏大学 A kind of driving intention recognition methods based on the double-deck algorithms of improvement HMM and SVM
CN108995655A (en) * 2018-07-06 2018-12-14 北京理工大学 A kind of driver's driving intention recognition methods and system
CN110427850A (en) * 2019-07-24 2019-11-08 中国科学院自动化研究所 Driver's super expressway lane-changing intention prediction technique, system, device
CN111717217A (en) * 2020-06-30 2020-09-29 重庆大学 Driver intention identification method based on probability correction
CN112085077A (en) * 2020-08-28 2020-12-15 东软集团股份有限公司 Method and device for determining lane change of vehicle, storage medium and electronic equipment
CN112396120A (en) * 2020-11-25 2021-02-23 浙江天行健智能科技有限公司 SVM algorithm-based vehicle lane change intention recognition modeling method


Also Published As

Publication number Publication date
CN113386775B (en) 2022-06-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant