CN114212105A - Interactive vehicle driving intention prediction method and device with high generalization capability - Google Patents


Info

Publication number
CN114212105A
CN114212105A
Authority
CN
China
Prior art keywords
interactive
vehicle
driving
scene
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111544865.0A
Other languages
Chinese (zh)
Other versions
CN114212105B (en)
Inventor
李峻翔
李晓辉
孙振平
刘大学
叶磊
史美萍
吴涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202111544865.0A priority Critical patent/CN114212105B/en
Publication of CN114212105A publication Critical patent/CN114212105A/en
Application granted granted Critical
Publication of CN114212105B publication Critical patent/CN114212105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W60/0016 Planning or execution of driving tasks specially adapted for safety of the vehicle or its occupants
    • B60W60/0027 Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0019 Control system elements or transfer functions
    • B60W2050/0028 Mathematical models, e.g. for simulation
    • B60W2050/0029 Mathematical model of the driver
    • B60W2050/0031 Mathematical model of the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Optimization (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Algebra (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The application relates to an interactive vehicle driving intention prediction method and device with high generalization capability. The method comprises the following steps: acquiring interactive scene information, comprising course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information, and determining the interactive scene type; extracting features from the interactive scene information with a feature extraction algorithm according to the interactive scene type, and inputting the features into a pre-designed interactive vehicle driving intention prediction model based on a dynamic Bayesian network; training the interactive vehicle driving intention prediction model on pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model; and predicting the driving intention of the interactive vehicle with the trained model, and presenting the prediction result in a man-machine interaction mode.

Description

Interactive vehicle driving intention prediction method and device with high generalization capability
Technical Field
The application relates to the field of automobile active safety and autonomous driving, and in particular to a method and device for predicting the driving intention of an interactive vehicle with high generalization capability.
Background
Automobile active safety technology and autonomous driving technology help prevent traffic accidents and reduce personal injury. Existing active safety technologies include adaptive cruise control, blind-zone monitoring systems, and the like. Adaptive cruise control monitors the road traffic environment ahead of the automobile through sensors such as a vehicle-mounted radar and, once another vehicle is detected ahead in the current driving lane, controls the host vehicle to keep a suitable safe distance from the vehicle in front. A blind-zone monitoring system detects the blind area of the rearview mirror through two millimeter-wave radars mounted at the rear of the vehicle and warns the driver when other road participants are present in the blind area, thereby assisting driving or lane changing.
These technologies improve the safety of driving behavior by using vehicle-mounted sensors to detect the real physical state of surrounding vehicles. However, relying only on the detected physical state without predicting future behavior inevitably leads to misjudgments, which lowers the early-warning level of the active safety system. For a fully autonomous vehicle, the safety and smoothness of autonomous driving are further compromised. Therefore, predicting the driving intention of interactive vehicles has become a hot issue in the field of automobile active safety and autonomous driving.
In a conventional vehicle driving intention prediction method, the vehicle is assumed to run according to a constant velocity (CV), constant acceleration (CA) or constant turn rate (CTR) motion model, and the future trajectory is predicted from the historical trajectory so as to predict a discretized driving intention.
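As a rough illustration of this conventional kinematic approach (a sketch, not taken from the patent; the lane width, velocities, and intention labels are illustrative assumptions), the following propagates a state under a constant-velocity (CV) model and maps the predicted lateral offset to a discretized intention:

```python
import numpy as np

def predict_cv(x, y, vx, vy, horizon, dt=0.1):
    """Propagate a 2-D state under a constant-velocity (CV) motion model."""
    steps = int(horizon / dt)
    t = np.arange(1, steps + 1) * dt
    return np.stack([x + vx * t, y + vy * t], axis=1)  # (steps, 2) trajectory

def discretize_lateral_intent(traj, lane_width=3.5):
    """Map the net lateral offset of the predicted trajectory to a coarse
    intention label (thresholds are illustrative)."""
    dy = traj[-1, 1] - traj[0, 1]
    if dy > lane_width / 2:
        return "left_lane_change"
    if dy < -lane_width / 2:
        return "right_lane_change"
    return "lane_keep"

# A vehicle drifting left at 0.9 m/s over a 3 s horizon.
traj = predict_cv(x=0.0, y=0.0, vx=15.0, vy=0.9, horizon=3.0)
print(discretize_lateral_intent(traj))  # prints left_lane_change
```

As the patent notes next, such a model ignores road structure and interaction, so it degrades over long horizons.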
In addition, with the improvement of the storage capacity and computing power of computing devices, another commonly used technical scheme is to acquire a large number of traffic pictures or traffic vehicle driving trajectories and learn driving-behavior intentions from large samples with deep learning methods based on multilayer neural networks, so as to predict in real time.
The technical scheme that predicts the driving intention with a kinematic model considers only the motion limits of the vehicle and ignores the road structure and environmental interaction factors. Therefore, the accuracy of long-term motion prediction for the vehicle is low.
The technical route that predicts the driving intention with deep learning can model interactive features implicitly, but its prediction mechanism is not intuitive, it requires a large amount of training data, its training power consumption is high, and its generalization performance is poor.
Disclosure of Invention
In view of the above technical problems, it is desirable to provide an interactive vehicle driving intention prediction method, apparatus, computer device, and storage medium with high generalization capability.
A highly generalized interactive vehicle driving intent prediction method, the method comprising:
acquiring interactive scene information of vehicle interactive driving, and determining an interactive scene type according to the interactive scene information; the interactive scene information comprises course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; the interactive scene types comprise same-direction scene driving, reverse scene driving and transverse scene driving;
extracting features from the interactive scene information with a feature extraction algorithm according to the interactive scene type, taking the extracted features as nodes, and inputting the feature values as node states into a pre-designed interactive vehicle driving intention prediction model; the interactive vehicle driving intention prediction model is a dynamic Bayesian network; the dynamic Bayesian network comprises an evidence layer, an intention layer and a diagnosis layer;
evaluating, through the evidence layer, the feasibility of executing a specific intention according to road-structure-related features and multi-vehicle-interaction-related features; inferring, through the intention layer, the driving operation at the current moment according to the output of the evidence layer and the output of the intention layer at the previous moment; and obtaining, through the diagnosis layer, the probability of a specific change in the current vehicle running state according to the output of the intention layer;
training the interactive vehicle driving intention prediction model through pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model;
and predicting the driving intention of the interactive vehicle through the trained interactive vehicle driving intention prediction model, and presenting the prediction result in a man-machine interaction mode.
In one embodiment, the method further comprises the following steps: the driving intentions corresponding to same-direction scene driving comprise: left lane change, right lane change and lane keeping; the driving intentions corresponding to reverse scene driving comprise lane occupation and yielding; and the driving intentions corresponding to transverse scene driving comprise: preemptive passing and yielding.
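This scene-to-intention mapping can be encoded directly; the following is a minimal sketch (the string labels are illustrative names, not identifiers from the patent):

```python
# Candidate driving intentions per interactive scene type (illustrative labels).
SCENE_INTENTIONS = {
    "same_direction": ["left_lane_change", "right_lane_change", "lane_keep"],
    "reverse": ["occupy_lane", "yield"],
    "transverse": ["preemptive_pass", "yield"],
}

def candidate_intentions(scene_type):
    """Return the discrete intention set the per-scene model predicts over."""
    return SCENE_INTENTIONS[scene_type]

print(candidate_intentions("reverse"))  # prints ['occupy_lane', 'yield']
```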
In one embodiment, the method further comprises the following steps: when the interactive scene type is same-direction scene driving, feature extraction is performed on the interactive scene information with a feature extraction algorithm, and the extracted features comprise: the distance feature of the interactive vehicle from the reference line at the previous moment, the distance feature of the interactive vehicle from the reference line at the current moment, the most-lateral-lane feature of the interactive vehicle, the lateral speed feature of the interactive vehicle and the deviation angle feature of the interactive vehicle.
In one embodiment, the method further comprises the following steps: and taking the characteristic of the most lateral lane of the interactive vehicle as an evidence layer node, taking the distance characteristic from the previous moment of the interactive vehicle to a reference line, the distance characteristic from the current moment of the interactive vehicle to the reference line, the transverse speed characteristic of the interactive vehicle and the deviation angle characteristic of the interactive vehicle as diagnosis layer nodes, and taking characteristic values corresponding to the characteristics as node states to input the node states into a pre-designed interactive vehicle driving intention prediction model.
In one embodiment, the method further comprises the following steps: when the interactive scene type is reverse scene driving, feature extraction is performed on the interactive scene information with a feature extraction algorithm, and the extracted features comprise: the distance feature of the interactive vehicle from the host vehicle reference line, the lateral speed feature of the interactive vehicle, the lateral acceleration feature of the interactive vehicle, the longitudinal speed feature of the interactive vehicle, the longitudinal acceleration feature of the interactive vehicle and the deviation angle feature of the interactive vehicle.
In one embodiment, the method further comprises the following steps: and taking the distance characteristic of the interactive vehicle from the reference line of the vehicle as an evidence layer node, taking the lateral speed characteristic of the interactive vehicle, the lateral acceleration characteristic of the interactive vehicle, the longitudinal speed characteristic of the interactive vehicle, the longitudinal acceleration characteristic of the interactive vehicle and the deviation angle characteristic of the interactive vehicle as diagnosis layer nodes, and taking a characteristic value corresponding to the characteristics as a node state to input the node state into a pre-designed interactive vehicle driving intention prediction model.
In one embodiment, the method further comprises the following steps: when the interactive scene type is transverse scene driving, feature extraction is performed on the interactive scene information with a feature extraction algorithm, and the extracted features comprise: the distance feature of the interactive vehicle from the host vehicle reference line, the longitudinal speed feature of the interactive vehicle, the longitudinal acceleration feature of the interactive vehicle and the distance feature of the interactive vehicle from the intersection point with the host vehicle reference line.
In one embodiment, the method further comprises the following steps: and taking the distance characteristic of the interactive vehicle from the reference line of the vehicle as an evidence layer node, taking the longitudinal speed characteristic of the interactive vehicle, the longitudinal acceleration characteristic of the interactive vehicle and the intersection distance characteristic of the interactive vehicle and the reference line of the vehicle as diagnosis layer nodes, and taking characteristic values corresponding to the characteristics as node states to input the node states into a pre-designed interactive vehicle driving intention prediction model.
In one embodiment, the method further comprises the following steps: performing intention prediction simultaneously for interactive vehicles of a plurality of interactive scene types;

performing intention prediction, according to the interactive scene type, for a plurality of interactive vehicles in the same scene; and

selecting, from each type of interactive scene through a preset algorithm, the vehicle whose driving poses the greatest driving risk to the host vehicle, and marking it.
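The patent does not disclose the preset risk algorithm; as one plausible stand-in, a time-to-collision (TTC) proxy can rank interactive vehicles and mark the riskiest one per scene type (the TTC criterion and all field names below are assumptions for illustration):

```python
def time_to_collision(rel_dist, rel_speed):
    """TTC risk proxy: smaller TTC means higher risk (illustrative choice)."""
    if rel_speed <= 0:            # not closing in on the host vehicle
        return float("inf")
    return rel_dist / rel_speed

def most_risky_per_scene(vehicles):
    """For each interactive scene type, pick the vehicle with the smallest
    TTC to the host vehicle (i.e. the greatest driving risk) for marking."""
    best = {}
    for v in vehicles:
        ttc = time_to_collision(v["rel_dist"], v["rel_speed"])
        if v["scene"] not in best or ttc < best[v["scene"]][1]:
            best[v["scene"]] = (v["id"], ttc)
    return {scene: vid for scene, (vid, _) in best.items()}

marked = most_risky_per_scene([
    {"id": "A", "scene": "same_direction", "rel_dist": 40.0, "rel_speed": 2.0},
    {"id": "B", "scene": "same_direction", "rel_dist": 15.0, "rel_speed": 3.0},
    {"id": "C", "scene": "transverse", "rel_dist": 30.0, "rel_speed": 6.0},
])
print(marked)  # prints {'same_direction': 'B', 'transverse': 'C'}
```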
A highly generalized interactive vehicle driving intent prediction apparatus, the apparatus comprising:
the interactive scene information acquisition module is used for acquiring interactive scene information of vehicle interactive driving and determining the interactive scene type according to the interactive scene information; the interactive scene information comprises course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; the interactive scene types comprise same-direction scene driving, reverse scene driving and transverse scene driving;
the feature extraction module is used for extracting features from the interactive scene information with a feature extraction algorithm according to the interactive scene type, taking the extracted features as nodes, and inputting the feature values as node states into a pre-designed interactive vehicle driving intention prediction model; the interactive vehicle driving intention prediction model is a dynamic Bayesian network; the dynamic Bayesian network comprises an evidence layer, an intention layer and a diagnosis layer;
the intention prediction module is used for evaluating, through the evidence layer, the feasibility of executing a specific intention according to road-structure-related features and multi-vehicle-interaction-related features, inferring, through the intention layer, the driving operation at the current moment according to the output of the evidence layer and the output of the intention layer at the previous moment, and obtaining, through the diagnosis layer, the probability of a specific change in the current vehicle running state according to the output of the intention layer;
the model training module is used for training the interactive vehicle driving intention prediction model through pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model;
and the human-computer interaction module is used for predicting the driving intention of the interactive vehicle through the trained interactive vehicle driving intention prediction model and presenting the prediction result in a human-computer interaction mode.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring interactive scene information of vehicle interactive driving, and determining an interactive scene type according to the interactive scene information; the interactive scene information comprises course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; the interactive scene types comprise same-direction scene driving, reverse scene driving and transverse scene driving;
extracting features from the interactive scene information with a feature extraction algorithm according to the interactive scene type, taking the extracted features as nodes, and inputting the feature values as node states into a pre-designed interactive vehicle driving intention prediction model; the interactive vehicle driving intention prediction model is a dynamic Bayesian network; the dynamic Bayesian network comprises an evidence layer, an intention layer and a diagnosis layer;
evaluating, through the evidence layer, the feasibility of executing a specific intention according to road-structure-related features and multi-vehicle-interaction-related features; inferring, through the intention layer, the driving operation at the current moment according to the output of the evidence layer and the output of the intention layer at the previous moment; and obtaining, through the diagnosis layer, the probability of a specific change in the current vehicle running state according to the output of the intention layer;
training the interactive vehicle driving intention prediction model through pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model;
and predicting the driving intention of the interactive vehicle through the trained interactive vehicle driving intention prediction model, and presenting the prediction result in a man-machine interaction mode.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring interactive scene information of vehicle interactive driving, and determining an interactive scene type according to the interactive scene information; the interactive scene information comprises course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; the interactive scene types comprise same-direction scene driving, reverse scene driving and transverse scene driving;
extracting features from the interactive scene information with a feature extraction algorithm according to the interactive scene type, taking the extracted features as nodes, and inputting the feature values as node states into a pre-designed interactive vehicle driving intention prediction model; the interactive vehicle driving intention prediction model is a dynamic Bayesian network; the dynamic Bayesian network comprises an evidence layer, an intention layer and a diagnosis layer;
evaluating, through the evidence layer, the feasibility of executing a specific intention according to road-structure-related features and multi-vehicle-interaction-related features; inferring, through the intention layer, the driving operation at the current moment according to the output of the evidence layer and the output of the intention layer at the previous moment; and obtaining, through the diagnosis layer, the probability of a specific change in the current vehicle running state according to the output of the intention layer;
training the interactive vehicle driving intention prediction model through pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model;
and predicting the driving intention of the interactive vehicle through the trained interactive vehicle driving intention prediction model, and presenting the prediction result in a man-machine interaction mode.
According to the above interactive vehicle driving intention prediction method with high generalization capability, apparatus, computer device and storage medium, interactive scene information of vehicle interactive driving is acquired and the interactive scene type is determined according to the interactive scene information, the interactive scene information comprising course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information. Features are extracted from the interactive scene information with a feature extraction algorithm according to the interactive scene type, the extracted features are taken as nodes, and the feature values are input as node states into a pre-designed interactive vehicle driving intention prediction model based on a dynamic Bayesian network comprising an evidence layer, an intention layer and a diagnosis layer. The evidence layer evaluates the feasibility of executing a specific intention according to road-structure-related features and multi-vehicle-interaction-related features; the intention layer infers the driving operation at the current moment according to the output of the evidence layer and the output of the intention layer at the previous moment; and the diagnosis layer obtains the probability of a specific change in the current vehicle running state according to the output of the intention layer. The interactive vehicle driving intention prediction model is trained on pre-constructed training data to obtain a trained model; the driving intention of the interactive vehicle is predicted with the trained model, and the prediction result is presented in a man-machine interaction mode.
The invention can judge the interactive scene between the host vehicle and an environmental vehicle according to the vehicle body information, road structure, reference route and detected environmental vehicle information, predict the driving intention of the interactive vehicle for each interactive scene, and remind the host vehicle in a man-machine interaction mode when a dangerous interaction occurs.
Drawings
FIG. 1 is a schematic flow diagram of a highly generalized interactive vehicle driving intent prediction method in one embodiment;
FIG. 2 is a feature node causal graph of same-direction scene driving in one embodiment;
FIG. 3 is a feature node causal graph of reverse scene driving in one embodiment;
FIG. 4 is a feature node causal graph of transverse scene driving in one embodiment;
FIG. 5 is a block diagram showing the structure of a highly generalized interactive vehicle driving intention prediction apparatus according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The interactive vehicle driving intention prediction method with high generalization capability provided by the application can be applied in the application environment shown in fig. 1. The terminal executes the method as follows: it acquires interactive scene information of vehicle interactive driving and determines the interactive scene type according to the interactive scene information, the interactive scene information comprising course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; extracts features from the interactive scene information with a feature extraction algorithm according to the interactive scene type, takes the extracted features as nodes, and inputs the feature values as node states into a pre-designed interactive vehicle driving intention prediction model based on a dynamic Bayesian network comprising an evidence layer, an intention layer and a diagnosis layer; evaluates, through the evidence layer, the feasibility of executing a specific intention according to road-structure-related features and multi-vehicle-interaction-related features; infers, through the intention layer, the driving operation at the current moment according to the output of the evidence layer and the output of the intention layer at the previous moment; obtains, through the diagnosis layer, the probability of a specific change in the current vehicle running state according to the output of the intention layer; trains the interactive vehicle driving intention prediction model on pre-constructed training data to obtain a trained model; and predicts the driving intention of the interactive vehicle with the trained model, presenting the prediction result in a man-machine interaction mode. The terminal can be, but is not limited to, an on-board computer or an on-board embedded device.
In one embodiment, as shown in fig. 1, there is provided a highly generalized interactive vehicle driving intent prediction method, comprising the steps of:
Step 102: acquire interactive scene information of vehicle interactive driving, and determine the interactive scene type according to the interactive scene information.
The interactive scene information comprises course relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; the interactive scene types include same-direction scene driving, reverse scene driving and transverse scene driving.
An interactive vehicle refers to a dynamic vehicle, other than the host vehicle, whose driving behavior mutually influences that of an autonomous vehicle or a vehicle with a driver-assistance function during normal driving, such as an overtaking vehicle or an oncoming vehicle.
The invention classifies interactive scenes and adopts a different network mechanism to predict the driving intention for each type of interactive scene.
Step 104: extract features from the interactive scene information with a feature extraction algorithm according to the interactive scene type, take the extracted features as nodes, and input the feature values as node states into the pre-designed interactive vehicle driving intention prediction model.
The interactive vehicle driving intention prediction model is a dynamic Bayesian network; the dynamic Bayesian network includes an evidence layer, an intention layer, and a diagnosis layer.
A dynamic Bayesian network (DBN) is a machine learning method that uses a probabilistic graphical model to represent uncertainty and correlation between variables over time, and can thus be used to predict states.
Because human drivers differ in their driving interaction habits and the intentions to be predicted differ among the three scenes (for example, whether to yield, or whether to change lanes left or right), different features are selected as nodes in the dynamic Bayesian network according to the interaction scene. By classifying and refining the interactive scenes and training a different dynamic Bayesian network for each interactive scene, scene recognition becomes more accurate and the driving intention of the interactive vehicle is predicted more accurately.
In step 106, the evidence layer evaluates the feasibility of executing a specific intention from the road-structure-related features and the multi-vehicle-interaction-related features; the intention layer determines the driving operation at the current moment from the output of the evidence layer and the output of the intention layer at the previous moment; and the diagnosis layer obtains, from the output of the intention layer, the probability of a specific change in the current vehicle running state.
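The interplay of the three layers amounts to a recursive Bayesian filtering step. The sketch below illustrates this for a hypothetical three-intention same-direction scene; the transition matrix, feasibility mask and likelihoods are invented placeholder values, not the trained conditional probability tables of the patent.

```python
import numpy as np

# Hypothetical three-intention example for the same-direction scene:
INTENTIONS = ["keep-lane", "change-left", "change-right"]

# Intention layer: transition model P(I_t | I_{t-1}). Drivers usually keep
# their current intention between adjacent time slices (assumed values).
TRANSITION = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.85, 0.05],
    [0.10, 0.05, 0.85],
])

def dbn_step(belief, feasibility, likelihood):
    """One filtering step of the evidence/intention/diagnosis structure.

    belief      -- P(I_{t-1} | observations so far), shape (3,)
    feasibility -- evidence-layer output: 0/1 mask of intentions the road
                   structure allows (e.g. no left lane => change-left is 0)
    likelihood  -- diagnosis-layer output: P(observed state change | I_t)
    Returns the updated belief P(I_t | observations up to t).
    """
    predicted = TRANSITION.T @ belief          # intention-layer prediction
    posterior = predicted * feasibility * likelihood
    return posterior / posterior.sum()         # normalise

belief = np.array([1 / 3, 1 / 3, 1 / 3])       # uninformative initial belief
# An observed lateral drift to the left makes "change-left" most likely,
# and the evidence layer confirms a left lane exists:
belief = dbn_step(belief,
                  feasibility=np.array([1.0, 1.0, 1.0]),
                  likelihood=np.array([0.2, 0.7, 0.1]))
```

In the patented method the feasibility mask and likelihoods would come from the evidence-layer and diagnosis-layer nodes listed below, not from hand-set constants.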
The dynamic Bayesian network of the present invention includes features related to road structure, features related to multi-vehicle interaction, and features related to vehicle physical state.
1) Features related to road structure
Features related to road structure often determine whether an intention is feasible and also affect the interactive behavior of the dynamic vehicle, but they do not participate in causal inference across time slices. These features include the following nodes.
2) Features relating to multi-vehicle interaction
A vehicle's driving intention is typically influenced by the interactive behavior of surrounding vehicles. Surrounding vehicles are taken to be the vehicles within the region of interest, and it is assumed that the intention of a dynamic vehicle is not influenced by a following vehicle behind it in the same lane. The multi-vehicle interaction features are modeled from the distances and relative speeds between the dynamic vehicle and its surrounding vehicles.
3) Features relating to physical state of vehicle
Features related to the physical state of the vehicle (except the VC node) may change over time and do participate in causal inference across time slices. These features include the following nodes.
In step 108, the interactive vehicle driving intention prediction model is trained on pre-constructed training data, yielding a trained interactive vehicle driving intention prediction model.
The attribute range (state space) of each feature in the dynamic Bayesian network can be obtained through training, or alternatively through analysis of experimental data.
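Either route ends in a set of boundaries that partition each continuous feature into discrete node states. A minimal quantile-based sketch follows; the quantile discretisation itself is an assumption, since the patent does not fix a method.

```python
import numpy as np

def learn_state_bins(samples, n_states=3):
    """Derive a feature's attribute range (discrete node states) from
    training data using quantile boundaries (an illustrative choice)."""
    qs = np.linspace(0.0, 1.0, n_states + 1)[1:-1]
    return np.quantile(np.asarray(samples, dtype=float), qs)

def to_node_state(value, boundaries):
    """Map a continuous feature value to its discrete node state index."""
    return int(np.searchsorted(boundaries, value))

# Hypothetical lateral-speed samples (m/s) from training data:
lateral_speeds = [0.0, 0.1, 0.1, 0.2, 0.4, 0.5, 0.8, 1.2, 1.5, 2.0]
bounds = learn_state_bins(lateral_speeds, n_states=3)
state = to_node_state(1.4, bounds)   # a high lateral speed
```

The resulting state index is what enters the dynamic Bayesian network as a node state in step 104.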
In step 110, the driving intention of the interactive vehicle is predicted by the trained interactive vehicle driving intention prediction model, and the prediction result is presented through human-machine interaction.
For human-machine interaction, the intention prediction result (e.g., the predicted intention and its probability) can be displayed on a computer screen, while special driving intentions of concern (e.g., lane occupation, preemptive passing) are announced by voice. The result can also be sent to the driver's head-up display or the in-vehicle infotainment system, or fed directly into the autonomous driving algorithm as auxiliary information for the autonomous driving system.
In the interactive vehicle driving intention prediction method with high generalization capability, interactive scene information of vehicle interactive driving is acquired and the interactive scene type is determined from it; the interactive scene information comprises heading relation information between the host vehicle and the interactive vehicle, reference route information and road structure information. A feature extraction algorithm extracts features from the interactive scene information according to the interactive scene type; the extracted features serve as nodes, and the feature values are input as node states into a pre-designed interactive vehicle driving intention prediction model based on a dynamic Bayesian network comprising an evidence layer, an intention layer and a diagnosis layer. The evidence layer evaluates the feasibility of executing a specific intention from the road-structure-related features and the multi-vehicle-interaction-related features; the intention layer determines the driving operation at the current moment from the output of the evidence layer and the output of the intention layer at the previous moment; and the diagnosis layer obtains, from the output of the intention layer, the probability of a specific change in the current vehicle running state. The interactive vehicle driving intention prediction model is trained on pre-constructed training data, and the trained model then predicts the driving intention of the interactive vehicle, with the prediction result presented through human-machine interaction.
The invention can determine the interactive scene between the host vehicle and environmental vehicles from the vehicle body information, the road structure, the reference route and the detected environmental vehicle information, predict the driving intention of the interactive vehicle according to the interactive scene, and warn the driver through human-machine interaction when a dangerous interaction occurs.
In one embodiment, the method further comprises the following steps: the driving intentions corresponding to same-direction scene driving include left lane change, right lane change and lane keeping; the driving intentions corresponding to reverse scene driving include lane occupation and yielding; the driving intentions corresponding to transverse scene driving include preemptive passing and yielding.
In one embodiment, the method further comprises the following steps: when the interactive scene type is same-direction scene driving, a feature extraction algorithm extracts from the interactive scene information: the distance of the interactive vehicle from the reference line at the previous moment, the distance of the interactive vehicle from the reference line at the current moment, the most-lateral-lane feature of the interactive vehicle, the lateral speed of the interactive vehicle and the deviation angle of the interactive vehicle. The most-lateral-lane feature of the interactive vehicle serves as an evidence layer node; the distance from the reference line at the previous moment, the distance from the reference line at the current moment, the lateral speed and the deviation angle serve as diagnosis layer nodes; and the feature value corresponding to each feature is input as its node state into the pre-designed interactive vehicle driving intention prediction model.
Fig. 2 shows the node causal relationship diagram when the interactive scene type is same-direction scene driving. In engineering implementation, a distance factor is obtained from the distance to the reference line at time T-1, the distance to the reference line at time T, and the most-lateral lane; a lateral velocity factor is obtained from the lateral velocity; and a deviation angle factor is obtained from the deviation angle. The same-direction driving intention is then inferred from the distance factor, the lateral velocity factor and the deviation angle factor.
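The factor computation of Fig. 2 can be sketched as follows. The sign conventions, dead-band thresholds and the majority vote that stands in for the trained DBN inference are all illustrative assumptions.

```python
def same_direction_factors(d_prev, d_now, in_outermost_lane,
                           lateral_speed, deviation_angle_deg):
    """Compute the three intermediate factors of Fig. 2 for the
    same-direction scene. Assumed convention: positive lateral quantities
    point toward the left of the lane. Each factor is in {-1, 0, +1}:
    +1 hints 'change left', -1 'change right', 0 'keep lane'."""
    def sign_with_deadband(x, deadband):
        if x > deadband:
            return 1
        if x < -deadband:
            return -1
        return 0

    # Distance factor: is the vehicle drifting away from the reference line?
    drift = d_now - d_prev
    distance_factor = sign_with_deadband(drift, 0.05)     # metres per step
    if in_outermost_lane and distance_factor == 1:
        distance_factor = 0        # no further lane on that side
    lateral_factor = sign_with_deadband(lateral_speed, 0.2)        # m/s
    angle_factor = sign_with_deadband(deviation_angle_deg, 2.0)    # degrees
    return distance_factor, lateral_factor, angle_factor

def infer_same_direction_intention(factors):
    """Majority vote over the three factors, a crude stand-in for the
    trained DBN's conditional probability tables."""
    score = sum(factors)
    if score >= 2:
        return "change-left"
    if score <= -2:
        return "change-right"
    return "keep-lane"
```

For example, a vehicle drifting left with positive lateral speed and deviation angle votes "change-left" on all three factors.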
In one embodiment, the method further comprises the following steps: when the interactive scene type is reverse scene driving, a feature extraction algorithm extracts from the interactive scene information: the distance of the interactive vehicle from the host vehicle's reference line and the interactive vehicle's lateral speed, lateral acceleration, longitudinal speed, longitudinal acceleration and deviation angle. The distance of the interactive vehicle from the host vehicle's reference line serves as an evidence layer node; the lateral speed, lateral acceleration, longitudinal speed, longitudinal acceleration and deviation angle serve as diagnosis layer nodes; and the feature value corresponding to each feature is input as its node state into the pre-designed interactive vehicle driving intention prediction model.
For example, fig. 3 is the node causal relationship diagram when the interactive scene type is reverse scene driving. In engineering implementation, a longitudinal factor at time T is obtained from the distance to the reference line, the longitudinal speed and the longitudinal acceleration at time T; a lateral factor at time T is obtained from the lateral speed and the lateral acceleration at time T; and a deviation angle factor at time T is obtained from the deviation angle at time T. The reverse driving intention at time T is then obtained from the longitudinal factor, the lateral factor and the deviation angle factor at time T.
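A rule-based stand-in for the Fig. 3 inference, distinguishing lane occupation from yielding for an oncoming vehicle; the thresholds are invented placeholders for the trained conditional probability tables.

```python
def infer_reverse_intention(dist_to_ref, lateral_speed_toward_ref,
                            longitudinal_speed, longitudinal_accel):
    """Infer 'occupy-lane' vs 'yield' for an oncoming vehicle from the
    Fig. 3 factors. All thresholds (metres, m/s, m/s^2) are illustrative
    assumptions, not values from the patent."""
    # Lateral factor: already near the host reference line, or moving
    # toward it at a noticeable rate.
    lateral_factor = dist_to_ref < 1.0 or lateral_speed_toward_ref > 0.3
    # Longitudinal factor: approaching at speed without braking.
    longitudinal_factor = (longitudinal_speed > 5.0
                           and longitudinal_accel >= 0.0)
    if lateral_factor and longitudinal_factor:
        return "occupy-lane"       # closing in on the host lane at speed
    return "yield"
```

A distant, decelerating oncoming vehicle is thus classified as yielding, while one encroaching on the host lane without braking is flagged as occupying the lane.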
In one embodiment, the method further comprises the following steps: when the interactive scene type is transverse scene driving, a feature extraction algorithm extracts from the interactive scene information: the distance of the interactive vehicle from the host vehicle's reference line, the interactive vehicle's longitudinal speed, its longitudinal acceleration, and the distance to the intersection point of the interactive vehicle's path with the host vehicle's reference line. The distance of the interactive vehicle from the host vehicle's reference line serves as an evidence layer node; the longitudinal speed, the longitudinal acceleration and the intersection distance serve as diagnosis layer nodes; and the feature value corresponding to each feature is input as its node state into the pre-designed interactive vehicle driving intention prediction model.
Fig. 4 is the node causal graph when the interactive scene type is transverse scene driving. The transverse scene can be extended to irregular rural-road intersections, i.e., the heading-angle range recognized as a transverse scene is extended to a larger range, such as 40° to 130°. A longitudinal factor at time T is obtained from the distance to the reference line, the longitudinal speed and the longitudinal acceleration at time T, and a deviation angle factor at time T is obtained from the intersection distance at time T. The transverse driving intention at time T is then determined from the longitudinal factor and the deviation angle factor.
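The Fig. 4 reasoning can be approximated by estimating when the crossing vehicle will reach the conflict point. The constant-acceleration motion model and the 3 s decision horizon below are assumptions, not values from the patent.

```python
def infer_transverse_intention(intersect_dist, long_speed, long_accel,
                               horizon=3.0):
    """Infer 'preemptive-pass' vs 'yield' for a crossing vehicle from the
    intersection distance and the Fig. 4 longitudinal factor, using a
    constant-acceleration time-to-conflict-point estimate (an assumed
    motion model)."""
    # Solve intersect_dist = v*t + 0.5*a*t^2 for the earliest t > 0.
    a, v, d = long_accel, long_speed, intersect_dist
    if abs(a) < 1e-6:
        t = d / v if v > 0 else float("inf")
    else:
        disc = v * v + 2.0 * a * d
        t = (-v + disc ** 0.5) / a if disc >= 0 else float("inf")
    if t < 0:
        t = float("inf")
    # A vehicle that will reach the conflict point within the horizon is
    # treated as intending to pass first.
    return "preemptive-pass" if t < horizon else "yield"
```

A decelerating vehicle that never reaches the conflict point (negative discriminant) is classified as yielding.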
In one embodiment, the method further comprises the following steps: intention prediction is performed simultaneously for interactive vehicles of multiple interactive scene types; intention prediction is performed for multiple interactive vehicles in the same scene according to the interactive scene type; and within each class of interactive scene, a preset algorithm selects and marks the vehicle whose driving poses the greatest risk to the host vehicle.
In one embodiment, the method further comprises the following steps: when multiple interactive vehicles exist simultaneously, the interactive vehicles are classified by interactive scene type; within each class of interactive scene, a preset algorithm selects and marks the vehicle that will be closest to the host vehicle at the next moment; and for the marked interactive vehicles, a feature extraction algorithm extracts features from the interactive scene information according to the interactive scene type, with the extracted features serving as nodes and the feature values input as node states into the pre-designed interactive vehicle driving intention prediction model.
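The classify-then-mark logic for multiple interactive vehicles can be sketched as follows; the dictionary field names are hypothetical.

```python
def mark_key_vehicles(interactive_vehicles):
    """Given a list of dicts such as
        {"id": 7, "scene": "reverse", "predicted_dist": 12.5}
    where predicted_dist is the estimated distance to the host vehicle at
    the next moment (field names are hypothetical), return one marked
    vehicle per interactive scene type: the one that will be closest."""
    marked = {}
    for v in interactive_vehicles:
        best = marked.get(v["scene"])
        if best is None or v["predicted_dist"] < best["predicted_dist"]:
            marked[v["scene"]] = v
    return marked

vehicles = [
    {"id": 1, "scene": "same-direction", "predicted_dist": 20.0},
    {"id": 2, "scene": "reverse", "predicted_dist": 12.5},
    {"id": 3, "scene": "reverse", "predicted_dist": 30.0},
]
marked = mark_key_vehicles(vehicles)
```

Only the marked vehicles then go through feature extraction and intention prediction, which keeps the per-frame inference load bounded.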
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict ordering constraint, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 may comprise multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and not necessarily sequentially; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a highly generalized interactive vehicle driving intention prediction apparatus, including: an interactive scene information obtaining module 502, a feature extraction module 504, an intention prediction module 506, a model training module 508 and a human-computer interaction module 510, wherein:
the interactive scene information acquiring module 502 is configured to acquire interactive scene information of vehicle interactive driving and determine the interactive scene type from it; the interactive scene information comprises heading relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; the interactive scene types include same-direction scene driving, reverse scene driving and transverse scene driving;
the feature extraction module 504 is configured to perform feature extraction on the interactive scene information according to different interactive scene types by using a feature extraction algorithm, use the extracted features as nodes, and input feature values as node states into a pre-designed interactive vehicle driving intention prediction model; the interactive vehicle driving intention prediction model is a dynamic Bayesian network; the dynamic Bayesian network comprises an evidence layer, an intention layer and a diagnosis layer;
the intention prediction module 506 is used for evaluating the feasibility of executing a specific intention according to the road structure related features and the multi-vehicle interaction related features through the evidence layer, executing the driving operation at the current moment according to the output of the evidence layer and the output of the last intention layer through the intention layer, and obtaining the probability of the specific change of the current vehicle running state according to the output of the intention layer through the diagnosis layer;
the model training module 508 is configured to train the interactive vehicle driving intention prediction model through pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model;
and the human-computer interaction module 510 is configured to predict the driving intention of the interactive vehicle through the trained interactive vehicle driving intention prediction model, and present a prediction result in a human-computer interaction manner.
The feature extraction module 504 is further configured to, when the interactive scene type is same-direction scene driving, perform feature extraction on the interactive scene information by using a feature extraction algorithm, where the extracted features include: the distance of the interactive vehicle from the reference line at the previous moment, the distance from the reference line at the current moment, the most-lateral-lane feature, the lateral speed and the deviation angle of the interactive vehicle.
The feature extraction module 504 is further configured to use the most-lateral-lane feature of the interactive vehicle as an evidence layer node, use the distance from the reference line at the previous moment, the distance from the reference line at the current moment, the lateral speed and the deviation angle of the interactive vehicle as diagnosis layer nodes, and input the feature value corresponding to each feature as its node state into the pre-designed interactive vehicle driving intention prediction model.
The feature extraction module 504 is further configured to, when the interactive scene type is reverse scene driving, perform feature extraction on the interactive scene information by using a feature extraction algorithm, where the extracted features include: the distance of the interactive vehicle from the host vehicle's reference line and the interactive vehicle's lateral speed, lateral acceleration, longitudinal speed, longitudinal acceleration and deviation angle.
The feature extraction module 504 is further configured to use a distance feature of the interactive vehicle from the vehicle reference line as an evidence layer node, use an interactive vehicle lateral velocity feature, an interactive vehicle lateral acceleration feature, an interactive vehicle longitudinal velocity feature, an interactive vehicle longitudinal acceleration feature, and an interactive vehicle deviation angle feature as diagnosis layer nodes, and use a feature value corresponding to the feature as a node state to be input into a pre-designed interactive vehicle driving intention prediction model.
The feature extraction module 504 is further configured to, when the interactive scene type is transverse scene driving, perform feature extraction on the interactive scene information by using a feature extraction algorithm, where the extracted features include: the distance of the interactive vehicle from the host vehicle's reference line, the interactive vehicle's longitudinal speed, its longitudinal acceleration, and the distance to the intersection point of the interactive vehicle's path with the host vehicle's reference line.
The feature extraction module 504 is further configured to use a distance feature of the interactive vehicle from the own vehicle reference line as an evidence layer node, use a longitudinal speed feature of the interactive vehicle, a longitudinal acceleration feature of the interactive vehicle, and a distance feature of an intersection point of the interactive vehicle and the own vehicle reference line as a diagnosis layer node, and use a feature value corresponding to the feature as a node state to be input into a pre-designed interactive vehicle driving intention prediction model.
The feature extraction module 504 is further configured to perform intention prediction simultaneously for interactive vehicles of multiple interactive scene types; perform intention prediction for multiple interactive vehicles in the same scene according to the interactive scene type; and select and mark, within each class of interactive scene through a preset algorithm, the vehicle whose driving poses the greatest risk to the host vehicle.
For specific limitations of the interactive vehicle driving intention prediction device with high generalization capability, reference may be made to the above limitations of the interactive vehicle driving intention prediction method with high generalization capability, which are not described herein again. The various modules in the above-described high-generalization-capability interactive vehicle driving-intention prediction apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a highly generalized interactive vehicle driving intent prediction method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus DRAM (RDRAM) and direct Rambus DRAM (DRDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above-mentioned embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A highly generalized interactive vehicle driving intent prediction method, the method comprising:
acquiring interactive scene information of vehicle interactive driving, and determining an interactive scene type according to the interactive scene information; the interactive scene information comprises heading relation information between the host vehicle and the interactive vehicle, reference route information and road structure information; the interactive scene types comprise same-direction scene driving, reverse scene driving and transverse scene driving;
extracting features of the interactive scene information according to different interactive scene types by adopting a feature extraction algorithm, taking the extracted features as nodes, and inputting feature values serving as node states into a pre-designed interactive vehicle driving intention prediction model; the interactive vehicle driving intention prediction model is a dynamic Bayesian network; the dynamic Bayesian network comprises an evidence layer, an intention layer and a diagnosis layer;
the feasibility of executing a specific intention is evaluated according to the road structure related features and the multi-vehicle interaction related features through the evidence layer, the driving operation at the current moment is executed according to the output of the evidence layer and the output of the last intention layer through the intention layer, and the probability of specific change of the current vehicle running state is obtained through the diagnosis layer according to the output of the intention layer;
training the interactive vehicle driving intention prediction model through pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model;
and predicting the driving intention of the interactive vehicle through the trained interactive vehicle driving intention prediction model, and presenting the prediction result in a man-machine interaction mode.
2. The method of claim 1, wherein the driving intent corresponding to the co-directional scene driving comprises: left lane changing, right lane changing and lane keeping; the driving intention corresponding to the reverse scene driving comprises lane occupation and yielding; the driving intent corresponding to the lateral scene driving includes: preemptive pass and yield.
3. The method of claim 2, wherein when the interactive scene type is co-directional scene driving, feature extraction is performed on the interactive scene information according to different interactive scene types by using a feature extraction algorithm, further comprising:
when the interactive scene type is equidirectional scene driving, feature extraction is carried out on the interactive scene information by adopting a feature extraction algorithm, and the extracted features comprise: the distance feature of the interactive vehicle from the reference line at the previous moment, the distance feature of the interactive vehicle from the reference line at the current moment, the most lateral lane feature of the interactive vehicle, the lateral speed feature of the interactive vehicle and the deviation angle feature of the interactive vehicle.
4. The method according to claim 3, wherein the extracted features are taken as nodes, and the feature values are input into a pre-designed interactive vehicle driving intention prediction model as node states, and the method comprises the following steps:
and taking the characteristic of the most lateral lane of the interactive vehicle as an evidence layer node, taking the distance characteristic from the previous moment of the interactive vehicle to a reference line, the distance characteristic from the current moment of the interactive vehicle to the reference line, the transverse speed characteristic of the interactive vehicle and the deviation angle characteristic of the interactive vehicle as diagnosis layer nodes, and taking characteristic values corresponding to the characteristics as node states to input the node states into a pre-designed interactive vehicle driving intention prediction model.
5. The method of claim 2, wherein when the interactive scene type is reverse scene driving, feature extraction is performed on the interactive scene information according to different interactive scene types by using a feature extraction algorithm, further comprising:
when the interactive scene type is reverse scene driving, feature extraction is carried out on the interactive scene information by adopting a feature extraction algorithm, and the extracted features comprise: the distance characteristic of the interactive vehicle from the host vehicle reference line, the interactive vehicle lateral speed characteristic, the interactive vehicle lateral acceleration characteristic, the interactive vehicle longitudinal speed characteristic, the interactive vehicle longitudinal acceleration characteristic and the interactive vehicle deviation angle characteristic.
6. The method according to claim 5, wherein the extracted features are taken as nodes, and the feature values are input into a pre-designed interactive vehicle driving intention prediction model as node states, and the method comprises the following steps:
and taking the distance characteristic of the interactive vehicle from the reference line of the vehicle as an evidence layer node, taking the lateral speed characteristic of the interactive vehicle, the lateral acceleration characteristic of the interactive vehicle, the longitudinal speed characteristic of the interactive vehicle, the longitudinal acceleration characteristic of the interactive vehicle and the deviation angle characteristic of the interactive vehicle as diagnosis layer nodes, and taking a characteristic value corresponding to the characteristics as a node state to input the node state into a pre-designed interactive vehicle driving intention prediction model.
7. The method of claim 2, wherein when the interactive scene type is lateral scene driving, feature extraction is performed on the interactive scene information according to different interactive scene types by using a feature extraction algorithm, further comprising:
when the interactive scene type is transverse scene driving, feature extraction is carried out on the interactive scene information by adopting a feature extraction algorithm, and the extracted features comprise: the distance characteristic of the interactive vehicle from the host vehicle reference line, the interactive vehicle longitudinal speed characteristic, the interactive vehicle longitudinal acceleration characteristic and the distance characteristic of the intersection point of the interactive vehicle with the host vehicle reference line.
8. The method according to claim 7, wherein taking the extracted features as nodes and inputting the feature values into the pre-designed interactive vehicle driving intention prediction model as node states comprises:
taking the distance feature of the interactive vehicle from the host vehicle reference line as an evidence layer node; taking the longitudinal speed, longitudinal acceleration, and reference-line intersection distance features of the interactive vehicle as diagnosis layer nodes; and inputting the feature values corresponding to these features into the pre-designed interactive vehicle driving intention prediction model as node states.
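The feature-to-node assignment in claims 6 and 8 can be sketched as a scene-dependent lookup. The following is an illustrative sketch only, not the patent's implementation: all dictionary keys, the `build_nodes` helper, and the sample values are hypothetical names invented for the example.

```python
# Hypothetical sketch: splitting extracted features into evidence-layer and
# diagnosis-layer node states, per interactive scene type (claims 6 and 8).
REVERSE_SCENE = "reverse"
LATERAL_SCENE = "lateral"

# In both claims, the distance from the host vehicle reference line is the
# evidence-layer node; the remaining extracted features feed the diagnosis layer.
SCENE_FEATURES = {
    REVERSE_SCENE: {
        "evidence": ["distance_to_reference_line"],
        "diagnosis": ["lateral_speed", "lateral_acceleration",
                      "longitudinal_speed", "longitudinal_acceleration",
                      "slip_angle"],
    },
    LATERAL_SCENE: {
        "evidence": ["distance_to_reference_line"],
        "diagnosis": ["longitudinal_speed", "longitudinal_acceleration",
                      "reference_line_intersection_distance"],
    },
}

def build_nodes(scene_type, feature_values):
    """Split measured feature values into evidence and diagnosis node states."""
    layout = SCENE_FEATURES[scene_type]
    evidence = {f: feature_values[f] for f in layout["evidence"]}
    diagnosis = {f: feature_values[f] for f in layout["diagnosis"]}
    return evidence, diagnosis

evidence, diagnosis = build_nodes(LATERAL_SCENE, {
    "distance_to_reference_line": 1.8,            # metres (sample value)
    "longitudinal_speed": 12.4,                   # m/s
    "longitudinal_acceleration": -0.6,            # m/s^2
    "reference_line_intersection_distance": 25.0, # metres
})
```

The node states would then be fed to the dynamic Bayesian network; how the continuous values are discretized into states is not specified in the claims.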
9. The method according to claim 1, wherein, when a plurality of interactive vehicles exist simultaneously, performing feature extraction on the interactive scene information according to the different interactive scene types by using a feature extraction algorithm, taking the extracted features as nodes, and inputting the feature values into a pre-designed interactive vehicle driving intention prediction model as node states further comprises:
performing intention prediction simultaneously on interactive vehicles of a plurality of interactive scene types;
performing intention prediction on a plurality of interactive vehicles in the same scene according to the interactive scene type; and
selecting, from each type of interactive scene by means of a preset algorithm, the vehicle whose driving poses the greatest risk to the host vehicle, and marking it.
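Claim 9 leaves the risk metric as "a preset algorithm". A minimal sketch of the per-scene-type selection step follows; the closing-speed-over-distance score used here is invented purely for illustration and is not the formula the patent uses.

```python
# Hypothetical sketch of claim 9's marking step: within each interactive scene
# type, rank the interactive vehicles by a risk score and mark the maximum.
def risk_score(vehicle):
    # Assumed toy metric: higher closing speed and smaller distance => higher
    # risk (roughly an inverse time-to-reach). Not specified by the patent.
    return max(vehicle["closing_speed"], 1e-6) / max(vehicle["distance"], 1e-6)

def mark_highest_risk(vehicles_by_scene):
    """For each scene type, return the vehicle with the largest risk score."""
    return {scene: max(vehicles, key=risk_score)
            for scene, vehicles in vehicles_by_scene.items() if vehicles}

marked = mark_highest_risk({
    "equidirectional": [
        {"id": "A", "distance": 30.0, "closing_speed": 2.0},  # score ~0.067
        {"id": "B", "distance": 10.0, "closing_speed": 3.0},  # score 0.3
    ],
})
# vehicle "B" is marked for the equidirectional scene in this toy example
```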
10. An interactive vehicle driving intention prediction apparatus with high generalization capability, the apparatus comprising:
the interactive scene information acquisition module is used for acquiring interactive scene information of vehicle interactive driving and determining an interactive scene type according to the interactive scene information; the interactive scene information comprises heading relationship information between the host vehicle and the interactive vehicle, reference route information, and road structure information; the interactive scene types comprise equidirectional scene driving, reverse scene driving, and transverse scene driving;
the feature extraction module is used for performing feature extraction on the interactive scene information according to the different interactive scene types by using a feature extraction algorithm, taking the extracted features as nodes, and inputting the feature values into a pre-designed interactive vehicle driving intention prediction model as node states; the interactive vehicle driving intention prediction model is a dynamic Bayesian network comprising an evidence layer, an intention layer, and a diagnosis layer;
the intention prediction module is used for evaluating, through the evidence layer, the feasibility of executing a specific intention according to the road-structure-related features and the multi-vehicle-interaction-related features; obtaining, through the intention layer, the driving operation executed at the current moment according to the output of the evidence layer and the output of the intention layer at the previous moment; and obtaining, through the diagnosis layer, the probability of a specific change in the current vehicle running state according to the output of the intention layer;
the model training module is used for training the interactive vehicle driving intention prediction model with pre-constructed training data to obtain a trained interactive vehicle driving intention prediction model;
and the human-computer interaction module is used for predicting the driving intention of the interactive vehicle through the trained interactive vehicle driving intention prediction model and presenting the prediction result in a human-computer interaction manner.
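One recursive update of the three-layer dynamic Bayesian network described in claim 10 (evidence layer gating, intention layer transition, diagnosis layer correction) can be sketched as below. This is an assumed, simplified filter-style update: the two intentions, the transition and likelihood tables, and all numeric values are invented for illustration, since the patent does not publish its trained parameters.

```python
import numpy as np

INTENTIONS = ["yield", "proceed"]  # hypothetical two-intention example

def intention_step(prev_intention, evidence_feasibility, transition,
                   observation_likelihood):
    """One time step of a simplified DBN intention update:
    1) predict from the previous intention belief via the temporal transition,
    2) weight by the evidence layer's feasibility of each intention,
    3) correct with the diagnosis layer's likelihood of the observed
       vehicle-state change, then renormalize."""
    predicted = transition.T @ prev_intention       # temporal transition
    weighted = predicted * evidence_feasibility     # evidence-layer gating
    posterior = weighted * observation_likelihood   # diagnosis-layer update
    return posterior / posterior.sum()

prev = np.array([0.5, 0.5])          # uniform prior over intentions
transition = np.array([[0.9, 0.1],   # P(intention_t | intention_{t-1})
                       [0.2, 0.8]])
feasibility = np.array([0.7, 1.0])   # road structure makes "yield" less feasible
likelihood = np.array([0.3, 0.9])    # observed kinematics fit "proceed" better
posterior = intention_step(prev, feasibility, transition, likelihood)
# posterior favors "proceed" and sums to 1
```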
CN202111544865.0A 2021-12-16 2021-12-16 Interactive vehicle driving intention prediction method and device with high generalization capability Active CN114212105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111544865.0A CN114212105B (en) 2021-12-16 2021-12-16 Interactive vehicle driving intention prediction method and device with high generalization capability


Publications (2)

Publication Number Publication Date
CN114212105A true CN114212105A (en) 2022-03-22
CN114212105B CN114212105B (en) 2024-03-05

Family

ID=80703103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111544865.0A Active CN114212105B (en) 2021-12-16 2021-12-16 Interactive vehicle driving intention prediction method and device with high generalization capability

Country Status (1)

Country Link
CN (1) CN114212105B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140195093A1 (en) * 2013-01-04 2014-07-10 Carnegie Mellon University Autonomous Driving Merge Management System
KR20160036968A (en) * 2014-09-26 2016-04-05 국민대학교산학협력단 Integrated assessment apparatus and method of drivers' drowsiness, inattention and workload
US20170190334A1 (en) * 2016-01-06 2017-07-06 GM Global Technology Operations LLC Prediction of driver intent at intersection
CN110304075A (en) * 2019-07-04 2019-10-08 清华大学 Track of vehicle prediction technique based on Mix-state DBN and Gaussian process
CN111079834A (en) * 2019-12-16 2020-04-28 清华大学 Intelligent vehicle safety situation assessment method considering multi-vehicle interaction
CN111367317A (en) * 2020-03-27 2020-07-03 中国人民解放军国防科技大学 Unmanned aerial vehicle cluster online task planning method based on Bayesian learning
CN113348119A (en) * 2020-04-02 2021-09-03 华为技术有限公司 Vehicle blind area identification method, automatic driving assistance system and intelligent driving vehicle comprising system
CN113511222A (en) * 2021-08-27 2021-10-19 清华大学 Scene self-adaptive vehicle interactive behavior decision and prediction method and device
CN113561974A (en) * 2021-08-25 2021-10-29 清华大学 Collision risk prediction method based on vehicle behavior interaction and road structure coupling


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG, HAILUN; FU, RUI: "Driving behavior recognition and intention prediction for adjacent leading vehicles in highway scenarios", Journal of Transportation Systems Engineering and Information Technology, no. 01, 15 February 2020 (2020-02-15) *
HE, HANGEN; SUN, ZHENPING; XU, XIN: "Prospects of autonomous vehicle driving technology under intelligent transportation conditions", Bulletin of National Natural Science Foundation of China, no. 02, 15 March 2016 (2016-03-15) *

Also Published As

Publication number Publication date
CN114212105B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
Khairdoost et al. Real-time driver maneuver prediction using LSTM
Katrakazas et al. A new integrated collision risk assessment methodology for autonomous vehicles
Benterki et al. Artificial intelligence for vehicle behavior anticipation: Hybrid approach based on maneuver classification and trajectory prediction
CN109572550B (en) Driving track prediction method, system, computer equipment and storage medium
CN109711557B (en) Driving track prediction method, computer equipment and storage medium
CN103069466B (en) System for inferring driver's lane change intention
US11462099B2 (en) Control system and control method for interaction-based long-term determination of trajectories for motor vehicles
Doshi et al. Examining the impact of driving style on the predictability and responsiveness of the driver: Real-world and simulator analysis
CN111104969A (en) Method for pre-judging collision possibility between unmanned vehicle and surrounding vehicle
EP3091370A1 (en) Method and arrangement for determining safe vehicle trajectories
CN109969172A (en) Control method for vehicle, equipment and computer storage medium
Li et al. Driving style classification based on driving operational pictures
CN112085165A (en) Decision information generation method, device, equipment and storage medium
US20200307577A1 (en) Interpreting data of reinforcement learning agent controller
CN114446049A (en) Traffic flow prediction method, system, terminal and medium based on social value orientation
Amsalu et al. Driver intention estimation via discrete hidden Markov model
Zhang et al. Long-term prediction for high-resolution lane-changing data using temporal convolution network
CN111814766B (en) Vehicle behavior early warning method and device, computer equipment and storage medium
Wu et al. Driver lane change intention recognition based on Attention Enhanced Residual-MBi-LSTM network
US20220121213A1 (en) Hybrid planning method in autonomous vehicle and system thereof
Griesbach et al. Lane change prediction with an echo state network and recurrent neural network in the urban area
CN116758741A (en) Multi-dimensional uncertainty perception intelligent automobile collision probability prediction method
CN113849971B (en) Driving system evaluation method and device, computer equipment and storage medium
CN114475656A (en) Travel track prediction method, travel track prediction device, electronic device, and storage medium
CN114446046A (en) LSTM model-based weak traffic participant track prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant