GB2599727A - Predicting the behavior of a vehicle using agent-to-agent relations to control an autonomous vehicle - Google Patents
Predicting the behavior of a vehicle using agent-to-agent relations to control an autonomous vehicle
- Publication number
- GB2599727A GB2599727A GB2016157.6A GB202016157A GB2599727A GB 2599727 A GB2599727 A GB 2599727A GB 202016157 A GB202016157 A GB 202016157A GB 2599727 A GB2599727 A GB 2599727A
- Authority
- GB
- United Kingdom
- Prior art keywords
- vehicle
- pedestrian
- predicted
- predicting
- computing unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 claims abstract description 41
- 230000009471 action Effects 0.000 claims abstract description 22
- 239000013598 vector Substances 0.000 claims abstract description 8
- 230000006399 behavior Effects 0.000 description 24
- 239000003795 chemical substances by application Substances 0.000 description 11
- 238000001514 detection method Methods 0.000 description 4
- 238000013459 approach Methods 0.000 description 3
- 238000012800 visualization Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 238000013398 bayesian method Methods 0.000 description 1
- 230000003542 behavioural effect Effects 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000006403 short-term memory Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0027—Planning or execution of driving tasks using trajectory prediction for other traffic participants
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0027—Planning or execution of driving tasks using trajectory prediction for other traffic participants
- B60W60/00276—Planning or execution of driving tasks using trajectory prediction for other traffic participants for two or more other traffic participants
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0027—Planning or execution of driving tasks using trajectory prediction for other traffic participants
- B60W60/00272—Planning or execution of driving tasks using trajectory prediction for other traffic participants relying on extrapolation of current movement
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0027—Planning or execution of driving tasks using trajectory prediction for other traffic participants
- B60W60/00274—Planning or execution of driving tasks using trajectory prediction for other traffic participants considering possible movement changes
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/402—Type
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/402—Type
- B60W2554/4029—Pedestrians
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4045—Intention, e.g. lane change or imminent movement
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/10—Historical data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
Abstract
Method of predicting the trajectory 5’ of a pedestrian 5 so that, based on the predicted pedestrian behaviour 5’, an action (arrow) of vehicle 2b can be predicted. For example, if the pedestrian 5 is predicted to enter the vehicle 2b, the vehicle 2b can be predicted to pull away soon. Depending on the predicted trajectory of the vehicle 2b, the host vehicle 1 may prepare to perform evasive action. Multiple pedestrian trajectories may be predicted. A time series of state vectors may be generated from camera data for predicting the future trajectory 5’ of the pedestrian 5. A graph representing a relationship between the pedestrian and the parked vehicle 2b may be generated. Based on the predicted outcome, the host vehicle 1 may determine a driving trajectory or a parking area.
Description
PREDICTING THE BEHAVIOR OF A VEHICLE USING AGENT-TO-AGENT
RELATIONS TO CONTROL AN AUTONOMOUS VEHICLE
FIELD OF THE INVENTION
[0001] An aspect of the invention relates to a method for predicting a behavior of a vehicle, wherein the method comprises generating camera data representing an environment of the vehicle. A further aspect of the invention relates to a method for controlling an autonomous (or ego) vehicle at least in part automatically. A further aspect of the invention relates to an electronic vehicle guidance system comprising a camera system, which is configured to generate camera data representing an environment of the vehicle.
BACKGROUND INFORMATION
[0002] Parking on a curbside or driving on a lane next to other vehicles parked on the curbside are important situations to handle for an autonomous vehicle or a partially autonomous vehicle. For the purpose of performing such a maneuver, the vehicle has to be able to detect and segmentize preferred stopping zones within its proximity. To accomplish such a task, it is known to take into account a set of predefined rules for detecting and identifying a parkable area.
[0003] When a parked vehicle starts moving while the ego vehicle passes or parks on the curbside, there is a potential risk of collision between the ego vehicle and the standing or parked vehicle. It is known to predict the behavior of vehicles based on their previous behavior or movements. However, these approaches fail in the case of parked or standing vehicles, since there are no relevant historical data to extrapolate.
[0004] Document DE 10 2018 104 270 A1 describes a method for predicting the behavior of a pedestrian. To this end, images of the pedestrian are captured by using a computer vision object detection algorithm. The position of the pedestrian relative to the ego vehicle is determined and a posture of the pedestrian is estimated based on the computer vision object detection algorithm. Then, a trajectory and a movement state of the pedestrian are determined and the intention of the pedestrian is estimated in order to predict the behavior of the pedestrian.
[0005] Therefore, there is a need in the art for an improved method and system for predicting the intention of the vehicle when parked at a certain location.
[0006] This object is achieved by the subject-matter of the independent claims. Further implementations and preferred embodiments are subject-matter of the dependent claims.
SUMMARY OF THE INVENTION
[0007] The present invention is based on the idea to predict a trajectory of a pedestrian external to the stationary vehicle and predict an action of the vehicle depending on the predicted pedestrian trajectory.
[0008] According to a first aspect of the improved concept, a method for predicting a behavior of a vehicle, in particular a stationary vehicle, is provided. The method comprises generating camera data representing an environment of the vehicle, in particular by a camera system of an ego vehicle. The method comprises the steps of predicting a pedestrian trajectory of a pedestrian within the environment by using a computing unit, in particular a computing unit of the ego vehicle, and predicting an action of the vehicle depending on the predicted pedestrian trajectory by using the computing unit.
[0009] The method does not, however, necessarily rely on the camera data alone. Rather, the output of a fusion module, that is, fused data, may be used. The fused data may include the camera data, lidar data, radar data, et cetera.
[0010] The camera system of the ego vehicle may be configured to depict a field of view comprising an environment of the ego vehicle and the camera, respectively. The environment of the vehicle represented by the camera data lies, in particular, within at least the field of view of the camera system.
[0011] The vehicle being a stationary vehicle may, for example, be understood such that the vehicle is parked or otherwise not moving, at least when the action of the vehicle is predicted.
[0012] The predicted pedestrian trajectory may for example comprise a set of predicted coordinates for the pedestrian and/or a set of predicted state vectors of the pedestrian and, optionally, a corresponding probability for the trajectory or the individual coordinates or state vectors.
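Purely as an illustrative sketch, and not as part of the claimed subject-matter, such a predicted trajectory could be represented as follows; all class and field names are assumptions introduced for illustration only:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PedestrianState:
    # One predicted state vector: top-view coordinates, optionally
    # extended by further dynamics such as velocity components.
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

@dataclass
class PredictedTrajectory:
    # A set of predicted state vectors plus an optional probability
    # attached to the trajectory as a whole, as described above.
    states: List[PedestrianState]
    probability: float = 1.0
```

A per-coordinate probability could equally be attached to each individual `PedestrianState` instead of the trajectory as a whole, as the paragraph above allows.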
[0013] In particular, the camera data represent the environment of the vehicle for a certain predefined time period, such that the computing unit can detect and track the pedestrian during the time period and predict the pedestrian trajectory based on the tracking data.
[0014] According to a further aspect of the improved concept, a method for controlling an ego vehicle at least in part automatically is provided. To this end, a behavior of a vehicle is predicted according to a method for predicting a behavior of a vehicle as described above. At least one control signal for controlling the ego vehicle at least in part automatically is generated depending on the predicted action of the vehicle.
[0015] According to a further aspect of the improved concept, an electronic vehicle guidance system comprises a camera system. The camera system is configured to generate camera data representing an environment of a vehicle. The electronic vehicle guidance system further comprises a computing unit, which is configured to predict a pedestrian trajectory of a pedestrian within the environment and to predict an action of the vehicle depending on the predicted pedestrian trajectory.
[0016] By means of the improved concept, the intent of a stationary vehicle may be predicted even before any movement or change in the dynamics of the vehicle is detectable. In particular, the relation between the vehicle and the pedestrian external to the vehicle is analyzed and exploited in order to transfer the information gathered from the behavior of the pedestrian to the expected behavior of the vehicle. Consequently, safety, reliability and smoothness of the partially or fully autonomous driving of the ego vehicle may be improved.
[0017] Further advantages, features, and details of the invention derive from the following description of preferred embodiments as well as from the drawings. The features and feature combinations previously mentioned in the description as well as the features and feature combinations mentioned in the following description of the figures alone can be employed not only in the respectively indicated combination but also in any other combination or taken alone without leaving the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with regard to the accompanying figures, in which:
[0019] Fig. 1 shows an ego vehicle driving next to parking vehicles;
[0020] Fig. 2 shows a schematic visualization of a method step of an exemplary implementation of a method according to the improved concept;
[0021] Fig. 3 shows a schematic visualization of a further method step of an exemplary implementation of a method according to the improved concept; and
[0022] Fig. 4 shows a flow diagram according to a further exemplary implementation of a method according to the improved concept.
DETAILED DESCRIPTION
[0023] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0024] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.
[0025] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "comprises... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0026] The terms "includes", "including", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that includes a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "includes... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0027] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0028] Human drivers leverage a variety of psychological and behavioral considerations while driving a motor vehicle. For instance, observing a group of pedestrians approaching a cross-walk will most probably alert a human driver, since they are likely to cross the intersection. According to the improved concept, the behavior of stationary vehicles is predicted. This predictive information may be utilized in several embodiments during in-lane driving as well as for parking on the curb for autonomous or semi-autonomous vehicles.
[0029] While action prediction for dynamic agents, such as vehicles or pedestrians, is a well-studied problem in the literature, these approaches use the past history of the agent to be predicted as a basis. For example, a time series containing a vehicle's velocity, acceleration, heading direction and so forth is used to predict the same vehicle's future maneuvers or micro actions. The same idea may be applied to vulnerable road users, VRUs, such as pedestrians or cyclists, to predict their behavior.
[0030] In contrast, the improved concept utilizes the behavior of VRUs, in particular pedestrians, for predicting the future actions or maneuvers of vehicles. This approach enables making predictions even before the agent of interest, for example a parked vehicle, shows any movement or detectable behavior, which can potentially lead to an extended prediction horizon and, hence, safer and smoother autonomous driving. According to the improved concept, the past movement history of pedestrians is leveraged in order to predict future actions and behaviors of vehicles that are stationary, for example parked on the curbside, as illustrated in Fig. 1. This prediction may be utilized to enable urban autonomous driving and determine control actions for an autonomous or semi-autonomous vehicle 1. The improved concept has, amongst others, potential applications in the fields of in-lane driving and parking on the curb.
[0031] For example, Fig. 1 shows a curbside 3 next to a lane 4 on which the ego vehicle 1 is driving. A plurality of stationary vehicles 2a, 2b, 2c is parked next to the curbside 3. In the case of in-lane driving, the ego vehicle 1 is driving on the lane 4, so that there exists a danger of collision in case one of the stationary vehicles 2b starts moving into the lane 4 the ego vehicle 1 is driving on. In this case, an earlier prediction of the behavior of the stationary vehicle 2b may enable the ego vehicle 1 to react appropriately at an early time and in a smooth fashion. Using a recorded action history of VRUs in the line of sight of the ego vehicle 1, the ego vehicle 1 may predict whether a stationary vehicle 2b is likely to move out of its parking position or not and thus reduce its speed or stop in order to avoid a dangerous situation.
[0032] Another challenging situation in the domain of urban autonomous or semi-autonomous driving is when the ego vehicle 1 needs to park on the curbside 3, either to end its trip or to handle a safety critical situation, for example, when an emergency vehicle is approaching. At the core of this problem is detecting free spaces where the ego vehicle 1 can potentially park. Free space on the curbside 3 between stationary vehicles 2b, 2c, in which no VRU exists, may be a potentially parkable area. Other factors, such as no-parking signs, parking meters, cross-walks, road curvature and so forth, may be taken into account as well in order to find a parkable area. However, the ego vehicle 1 might detect a vehicle 2b as parked while that vehicle 2b is about to move, for example because a person is entering the vehicle 2b during the time the ego vehicle 1 spends trying to park, which may lead to a potential collision. This situation is indicated in Fig. 1 by the respective arrows in front of the ego vehicle 1 and the vehicle 2b.
[0033] According to the improved concept, a pedestrian may be detected and tracked in a first step. Object detection and tracking is a known problem and has been widely studied in the context of computer vision. Objects, such as pedestrians, are detected and reported using bounding boxes or abstract body models. Object tracking methods, for example based on optical flow, may then be employed in order to track the object of interest in consecutive frames. For this step, data-driven models, for example deep learning methods, may be applied.
[0034] The output of said step may for example be reported in the form of bounding boxes or abstract body models from the perspective of the camera system (not shown in the figures) of the ego vehicle 1, or, in other words, in a first-person view. A more suitable representation for the purpose of prediction is achieved by transforming these detections into a top view representation in order to predict the pedestrian trajectories and high-level actions, as depicted in Fig. 2. In Fig. 2, the positions of a pedestrian 5 on the curbside 3 as well as a plurality of positions of further pedestrians 6 on the curbside 3 in an immediate environment of the stationary vehicle 2b are shown.
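The transformation from the first-person view into the top view mentioned above can be sketched, under the assumption of a flat ground plane, as a planar homography applied to the foot point of a detected bounding box. This is an illustration only; the 3x3 matrix would come from a camera calibration that is not shown here, and the function name is an assumption:

```python
def image_to_top_view(u, v, homography):
    """Project an image point (u, v), e.g. the foot point of a detected
    pedestrian's bounding box, onto the top-view ground plane via a 3x3
    homography given as nested lists (calibration is assumed)."""
    x = homography[0][0] * u + homography[0][1] * v + homography[0][2]
    y = homography[1][0] * u + homography[1][1] * v + homography[1][2]
    w = homography[2][0] * u + homography[2][1] * v + homography[2][2]
    return x / w, y / w
```

In practice the homography would be estimated once, for example from known ground-plane correspondences during calibration of the camera system.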
[0035] The predicted trajectory 5' (see Fig. 3) of the pedestrian 5 enables the ego vehicle 1 to predict whether the pedestrian 5 or a group of pedestrians is approaching the stationary vehicle 2b. This high-level prediction may be used in a further step. At any given time t, the state of the pedestrian 5 may be abstracted as a feature vector containing coordinates of the pedestrian 5 in the top view, together with optional further information, for example velocity, heading direction, age, gender and so forth, of the pedestrian 5. The historic trajectory or, in other words, action history of the pedestrian 5 may be expressed as a time series of state vectors. Various time series prediction methods may be employed to predict the future state of the pedestrian 5, among which one may name non-parametric Bayesian methods, such as hierarchical Dirichlet processes, or data-driven methods, such as long short-term memory deep neural networks. Fig. 3 shows an example of the visualization of the predicted trajectory 5' of the pedestrian 5. It should be noted that the predicted time series may potentially include more information than the position of the pedestrian 5, as also described above.
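As a minimal stand-in for the time series predictors named above (hierarchical Dirichlet processes, long short-term memory networks), a constant-velocity extrapolation of the last two top-view positions illustrates the interface such a predictor would provide; the function name, signature and heuristic are assumptions for illustration only:

```python
def extrapolate_trajectory(history, horizon, dt=0.1):
    """Predict `horizon` future (x, y) positions from a history of
    top-view positions, assuming constant velocity over the step `dt`.
    A learned time series predictor would replace this heuristic."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]
```

The same interface extends to richer state vectors (velocity, heading and so forth) by extrapolating each component of the state.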
[0036] Having predicted the trajectory 5' of the pedestrian 5 and, for example, of further pedestrians 6, the ego vehicle 1 may make an inference on the further action or behavior of the parked vehicle 2b. For example, if a group of pedestrians 5, 6 is approaching the doors of the parked vehicle 2b, the probability that the vehicle 2b will be moving in the near future increases, and the ego vehicle 1 may avoid parking or driving in front of that particular vehicle 2b. For this purpose, the ego vehicle 1 may keep a history of the pedestrians 5, 6 in its proximity at any given time and extend the respective time series by predicting each pedestrian's 5, 6 trajectory in real time. Thus, the ego vehicle 1 has access to the predicted or, in other words, extended trajectories 5' of the pedestrians 5, 6 at each time step. These extended trajectories 5', together with the location of the parked vehicle 2b and other map features, such as cross-walks and traffic signs, may be represented in a dynamically updated top view map. By dynamically updating the map, the predictions may get more precise as time progresses and the ego vehicle 1 may have access to more accurate information about the pedestrians' 5, 6 intents.
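The inference that approaching pedestrians raise the probability of the parked vehicle 2b moving can be sketched with a simple heuristic score; the scoring rule and names are illustrative assumptions, not the claimed method:

```python
import math

def approach_score(pedestrian_track, vehicle_pos):
    """Fraction of consecutive steps in which the pedestrian moved closer
    to the parked vehicle, in [0, 1]; a higher value suggests a higher
    probability that the vehicle will pull away in the near future."""
    d = [math.dist(p, vehicle_pos) for p in pedestrian_track]
    closer = sum(1 for a, b in zip(d, d[1:]) if b < a)
    return closer / max(len(d) - 1, 1)
```

A group-level estimate could aggregate such scores over all pedestrians 5, 6 tracked in the proximity of the vehicle 2b.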
[0037] Fig. 4 shows a flow diagram of an exemplary implementation of a method for predicting a behavior of the vehicle 2b according to the improved concept and of a method for controlling the ego vehicle 1 at least in part automatically according to the improved concept.
[0038] In step S1, a computing unit (not shown in the figures) detects, labels and tracks the pedestrians 5, 6 within the proximity of the ego vehicle 1 based on the camera data as described above. As a result, detected bounding boxes for the pedestrians 5, 6 may be generated, for example. In step S2, the computing unit predicts a pedestrian trajectory 5' for each pedestrian 5, 6 within a preset time horizon. In this way, extended pedestrian trajectories 5' are obtained as described above.
[0039] In step S3, the computing unit may create an agent-to-agent relation graph representing a relation between all the pedestrians 5, 6 and the vehicle 2b depending on the predicted trajectories 5'. Furthermore, the computing unit may filter out predicted trajectories 5' that are irrelevant to the vehicle 2b under consideration.
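Step S3 could be sketched as building an adjacency structure in which only pedestrians whose extended trajectory 5' passes near the vehicle 2b are kept, the rest being filtered out as irrelevant; the data layout and the distance threshold are assumptions for illustration:

```python
import math

def build_relation_graph(predicted_trajectories, vehicle_pos, radius=5.0):
    """Build an agent-to-agent relation graph as an adjacency dict.
    An edge from the vehicle to a pedestrian is kept only if any point of
    that pedestrian's extended trajectory comes within `radius` metres of
    the parked vehicle; other trajectories are filtered out."""
    graph = {"vehicle": []}
    for ped_id, trajectory in predicted_trajectories.items():
        if any(math.dist(p, vehicle_pos) <= radius for p in trajectory):
            graph["vehicle"].append(ped_id)
    return graph
```

Richer relation graphs could carry edge attributes such as the minimum predicted distance or the approach direction.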
[0040] In step S4, the computing unit may use the agent-to-agent relation graph and the predicted pedestrian trajectories 5', which may be considered to be encoded in the agent-to-agent relation graph, in order to predict an intent or a behavior of the parked vehicle 2b. In step S5, the computing unit may use the predicted intent or behavior of the vehicle 2b in order to segmentize parkable areas in the proximity of the ego vehicle 1.
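The segmentation of parkable areas in step S5 might then reduce to excluding curbside gaps adjacent to a vehicle whose predicted intent is to move; the gap representation below is an illustrative assumption only:

```python
def segmentize_parkable(gaps, predicted_moving):
    """Keep only curbside gaps whose adjacent stationary vehicles are not
    predicted to move. `gaps` is a list of dicts, each with an "adjacent"
    list of vehicle identifiers; `predicted_moving` is a set of
    identifiers of vehicles predicted to pull away."""
    return [g for g in gaps if not (predicted_moving & set(g["adjacent"]))]
```

Further exclusion criteria, such as no-parking signs or cross-walks, could be applied to the remaining gaps in the same manner.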
[0041] By means of the improved concept, improved safety and comfort in autonomous or semi-autonomous vehicles may be achieved. The improved concept may, in some embodiments, utilize agent-to-agent relations for the purpose of intent classification. The improved concept may also provide an extended prediction horizon.
[0042] In some embodiments, the parked vehicles' intents may be predicted without observing any direct change in their state by using other agents' behavior for the prediction.
[0043] In some embodiments, the improved concept may be considered as a hybrid prediction framework, where predictions of pedestrian trajectories are used to predict the behavior of the stationary vehicle.
[0044] In some embodiments, a hierarchical prediction scheme is used, meaning that trajectories of pedestrians are predicted in order to predict the behavior of a parked car to control the ego vehicle. Moreover, agent-to-agent relations are used to do so. These agents are, in particular, of different types, for example pedestrians and vehicles.
Reference signs
- 1 Ego vehicle
- 2a, 2b, 2c Stationary vehicles
- 3 Curbside
- 4 Lane
- 5 Pedestrian
- 6 Pedestrians
- 5' Predicted trajectory
- S1, S2, S3, S4, S5 Method steps
Claims (7)
- CLAIMS
- 1. A method for predicting a behavior of a vehicle, wherein the method comprises generating camera data representing an environment of the vehicle, characterized in that the method comprises the following steps: predicting a pedestrian trajectory of a pedestrian within the environment by using a computing unit; and predicting an action of the vehicle depending on the predicted pedestrian trajectory by using the computing unit.
- 2. The method according to claim 1, characterized in predicting at least one further pedestrian trajectory of at least one further pedestrian within the environment by using the computing unit; and predicting the action of the vehicle depending on the at least one further predicted pedestrian trajectory by using the computing unit.
- 3. The method according to one of the preceding claims, characterized in generating a time series of state vectors for the pedestrian based on the camera data by using the computing unit; and predicting at least one further state vector for the pedestrian based on the time series by using the computing unit; and generating the predicted trajectory depending on the at least one further predicted state vector by using the computing unit.
- 4. The method according to one of the preceding claims, characterized in generating a graph representing a relation between the pedestrian and the vehicle depending on the predicted trajectory by using the computing unit; and predicting the action of the vehicle depending on the graph by using the computing unit.
- 5. The method according to one of the preceding claims, characterized in determining a drivable or parkable area for an ego vehicle depending on the predicted action of the vehicle by using the computing unit.
- 6. A method for controlling an ego vehicle at least in part automatically, characterized in that the method comprises the following steps: predicting a behavior of a vehicle by using the method according to one of the preceding claims; and generating at least one control signal for controlling the ego vehicle at least in part automatically depending on the predicted action of the vehicle.
- 7. An electronic vehicle guidance system comprising a camera system, which is configured to generate camera data representing an environment of a vehicle, characterized in that the electronic vehicle guidance system comprises a computing unit, which is configured to predict a pedestrian trajectory of a pedestrian within the environment and to predict an action of the vehicle depending on the predicted pedestrian trajectory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2016157.6A GB2599727A (en) | 2020-10-12 | 2020-10-12 | Predicting the behavior of a vehicle using agent-to-agent relations to control an autonomous vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202016157D0 GB202016157D0 (en) | 2020-11-25 |
GB2599727A true GB2599727A (en) | 2022-04-13 |
Family
ID=73460489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2016157.6A Withdrawn GB2599727A (en) | 2020-10-12 | 2020-10-12 | Predicting the behavior of a vehicle using agent-to-agent relations to control an autonomous vehicle |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2599727A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116061973A (en) * | 2023-03-15 | 2023-05-05 | Anhui NIO Intelligent Driving Technology Co., Ltd. | Vehicle track prediction method, control device, readable storage medium, and vehicle |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018211582A1 (en) * | 2017-05-16 | 2018-11-22 | Nissan Motor Co., Ltd. | Movement prediction method for travel assistance device and movement prediction device |
JP2020019301A (en) * | 2018-07-30 | 2020-02-06 | DENSO TEN Limited | Behavior decision device |
US20200180648A1 (en) * | 2017-11-22 | 2020-06-11 | Uatc, Llc | Object Interaction Prediction Systems and Methods for Autonomous Vehicles |
US20200317225A1 (en) * | 2019-04-08 | 2020-10-08 | Zf Friedrichshafen Ag | Drive behavior estimation of a passenger transport system |
WO2021030476A1 (en) * | 2019-08-13 | 2021-02-18 | Zoox, Inc. | Cost-based path determination |
Worldwide Applications
- 2020-10-12: GB GB2016157.6A, patent GB2599727A (en), not active (Withdrawn)
Also Published As
Publication number | Publication date |
---|---|
GB202016157D0 (en) | 2020-11-25 |
Similar Documents
Publication | Title |
---|---|
EP3091370B1 (en) | Method and arrangement for determining safe vehicle trajectories |
EP3678911B1 (en) | Pedestrian behavior predictions for autonomous vehicles |
US10969789B2 (en) | Verifying predicted trajectories using a grid-based approach |
Broggi et al. | Intelligent vehicles |
US9760806B1 (en) | Method and system for vision-centric deep-learning-based road situation analysis |
Goldhammer et al. | Intentions of vulnerable road users—Detection and forecasting by means of machine learning |
Eidehall et al. | Statistical threat assessment for general road scenes using Monte Carlo sampling |
Cheng et al. | Turn-intent analysis using body pose for intelligent driver assistance |
Aoude et al. | Threat assessment design for driver assistance system at intersections |
US7974748B2 (en) | Driver assistance system with vehicle states, environment and driver intention |
US11120690B2 (en) | Method and device for providing an environmental image of an environment of a mobile apparatus and motor vehicle with such a device |
CN116209611B (en) | Method and system for using other road users' responses to ego-vehicle behavior in automated driving |
CN110316186A (en) | Vehicle collision avoidance prediction method, apparatus, device, and readable storage medium |
CN109318894B (en) | Vehicle driving assistance system, vehicle driving assistance method, and vehicle |
US11945433B1 (en) | Risk mitigation in speed planning |
Bonnin et al. | A generic concept of a system for predicting driving behaviors |
JP7500210B2 (en) | Vehicle driving support method and driving support device |
GB2599727A (en) | Predicting the behavior of a vehicle using agent-to-agent relations to control an autonomous vehicle |
US20240017746A1 (en) | Assessing present intentions of an actor perceived by an autonomous vehicle |
CN116507541A (en) | Method and system for predicting the responses of other road users in automated driving |
EP4137845A1 (en) | Methods and systems for predicting properties of a plurality of objects in a vicinity of a vehicle |
Bartels et al. | Intelligence in the Automobile of the Future |
Rendon-Velez et al. | Progress with situation assessment and risk prediction in advanced driver assistance systems: A survey |
Zhang et al. | Situation analysis and adaptive risk assessment for intersection safety systems in advanced assisted driving |
Vidano et al. | Artificially Intelligent Active Safety Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WAP | Application withdrawn, taken to be withdrawn or refused (after publication under section 16(1)) | |