CN117141474B - Obstacle track prediction method and device, vehicle controller, system and vehicle


Info

Publication number
CN117141474B
CN117141474B (application CN202311415092.5A)
Authority
CN
China
Prior art keywords
scene
current
data set
obstacle
track prediction
Prior art date
Legal status
Active
Application number
CN202311415092.5A
Other languages
Chinese (zh)
Other versions
CN117141474A
Inventor
刘西亚
蓟仲勋
罗衡荣
Current Assignee
Shenzhen Haixing Zhijia Technology Co Ltd
Original Assignee
Shenzhen Haixing Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Haixing Zhijia Technology Co Ltd filed Critical Shenzhen Haixing Zhijia Technology Co Ltd
Priority to CN202311415092.5A
Publication of CN117141474A
Application granted
Publication of CN117141474B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W30/0956 Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W60/0017 Planning or execution of driving tasks specially adapted for safety of other traffic participants
    • B60W2552/00 Input parameters relating to infrastructure
    • B60W2552/50 Barriers

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of vehicles, and discloses a method and a device for predicting an obstacle track, a vehicle controller, a system and a vehicle, wherein the method comprises the following steps: acquiring current scene type and current obstacle information corresponding to a target vehicle; determining current index values under different detection distances based on the current scene type and scene suitability of the track prediction models in a preset model library, wherein the preset model library comprises a plurality of track prediction models, and the scene suitability is used for representing the scene type adapted by the track prediction models; acquiring reference index values of the track prediction model under different detection distances, wherein the reference index values are obtained based on the evaluation of the target evaluation data set; determining a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value and the reference index value corresponding to the track prediction model; and carrying out track prediction on the current obstacle based on the target track prediction model. The method and the device can solve the problem of inaccurate obstacle track prediction.

Description

Obstacle track prediction method and device, vehicle controller, system and vehicle
Technical Field
The invention relates to the technical field of vehicles, in particular to a method and a device for predicting obstacle trajectories, a vehicle controller, a system and a vehicle.
Background
At present, track prediction has become an indispensable link in the field of automatic driving, and the demands on it in terms of accuracy, efficiency, generalization and the like are increasingly urgent and strict. An automatic driving vehicle plans and makes decisions according to track prediction data in order to cope with possible dangerous situations; similar to the defensive driving of an experienced human driver with a certain pre-judging capability, track prediction is widely applied to safe automatic driving. It is also applied to various scenes such as open roads and closed environments, and track prediction has developed from conventional road-property prediction to prediction for materialized, specific scenes.
With the continuous application and development of deep learning technology, more and more neural-network-based track prediction models are used. These models perform well on the evaluation indexes of public data sets and are widely applied to vehicle track prediction. However, in the related art, track prediction for the target vehicle mainly depends on a single type of track prediction model, and a single type of model is often difficult to adapt to changeable driving scenes, so that the track prediction is inaccurate.
Disclosure of Invention
In view of the above, the present invention provides a method, apparatus, vehicle controller, system, vehicle and computer readable storage medium for predicting an obstacle trajectory to solve the problem of inaccurate obstacle trajectory prediction.
In a first aspect, the present invention provides a method for predicting an obstacle trajectory, the method comprising:
acquiring current scene type and current obstacle information corresponding to a target vehicle;
determining current index values under different detection distances based on the current scene type and scene suitability of a track prediction model in a preset model library, wherein the preset model library comprises a plurality of track prediction models, and the scene suitability is used for representing the scene type adapted by the track prediction models;
acquiring reference index values of each track prediction model under different detection distances, wherein the reference index values are obtained based on the evaluation of a target evaluation data set;
determining a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value and the reference index value corresponding to each track prediction model;
and carrying out track prediction on the current obstacle corresponding to the target vehicle based on the target track prediction model.
The beneficial effects are that: determining current index values under different detection distances according to the current scene type and scene suitability of the track prediction model; acquiring reference index values of each track prediction model under different detection distances; then, according to the current obstacle information, the current index value and the reference index value under different detection distances, a track prediction model applicable to the current scene type and the current obstacle information is selected from a preset track model library. Therefore, the adaptive switching of the track prediction model can be realized to adapt to changeable driving scenes, so that the accuracy of track prediction is improved.
In an alternative embodiment, the method further comprises:
acquiring a data set to be trained and a data set to be evaluated under the current scene type, wherein the data set to be trained and the data set to be evaluated are obtained based on real-time scene data of the current scene type;
training the target track prediction model based on the data set to be trained to obtain a trained track prediction model;
evaluating the trained track prediction model based on the target evaluation data set to obtain a target index value, and updating the target evaluation data set into the data set to be evaluated;
Comparing the current index value with the reference index value and the target index value respectively to obtain a comparison result;
and if the comparison result meets a preset model updating condition, updating the target track prediction model into the trained track prediction model.
The beneficial effects are that: training the target track prediction model based on the data set to be trained and the data set to be evaluated under the current scene type to obtain a trained track prediction model; then evaluating the trained track prediction model based on the target evaluation data set to obtain a target index value; and comparing the current index value with the reference index value and the target index value respectively, and updating the target track prediction model into a trained track prediction model if the comparison result meets the preset model updating condition. Therefore, the target track prediction model can be optimized, so that the model performance of the target track prediction model is improved.
In an optional embodiment, the acquiring the data set to be trained and the data set to be evaluated under the current scene type includes:
acquiring real-time scene information acquired by the target vehicle;
dividing the real-time scene information based on scene types to obtain target scene data under different scene types;
Acquiring a data set to be trained and a data set to be evaluated under different scene types;
and expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data and preset data expansion conditions so as to obtain the data set to be trained and the data set to be evaluated under the current scene type.
The beneficial effects are that: because the data to be trained and the data to be evaluated of different scene types are divided and expanded in real time by the scene information acquired by the target vehicle in real time, the data to be trained and the data to be evaluated of different scene types can be updated in real time, so that the adaptive iterative updating of the prediction track model is realized.
In an optional implementation manner, the expanding the to-be-trained data set and the to-be-evaluated data set under the different scene types based on the target scene data and the preset data expansion condition to obtain the to-be-trained data set and the to-be-evaluated data set under the current scene type includes:
and if the target scene data meets the preset data expansion conditions, expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data to obtain the data set to be trained and the data set to be evaluated under the current scene type.
The beneficial effects are that: and when the target scene data meets the preset data expansion conditions, expanding the data set to be trained and the data set to be evaluated under different scene types to obtain the data set to be trained and the data set to be evaluated under the current scene type, so that the reliability of the data set to be trained and the reliability of the data set to be evaluated can be ensured.
In an optional implementation manner, the training the target track prediction model based on the to-be-trained data set to obtain a trained track prediction model includes:
acquiring a perceived motion state of the current obstacle;
inputting the data set to be trained into the target track prediction model to obtain a predicted track and a predicted motion state of the current obstacle;
updating the motion state of the current obstacle based on the perceived motion state, the predicted motion state and a preset state updating condition;
and updating parameters of the target track prediction model based on the predicted track and the updated motion state to obtain a trained track prediction model.
The beneficial effects are that: the parameters of the target track prediction model are updated based on the motion state of the current obstacle, so that the model performance of the target track prediction model can be improved, and the accuracy of track prediction is further improved.
In an alternative embodiment, the current obstacle information includes a risk level of the current obstacle, the risk level of the current obstacle being obtained by:
acquiring the distance and the speed of the current obstacle relative to the target vehicle;
based on the distance and the speed, a risk level of the current obstacle is determined.
The beneficial effects are that: the risk level of the current obstacle is determined based on the distance and speed of the current obstacle relative to the target vehicle. Therefore, a reference can be provided for the selection of a subsequent track prediction model, so that the track prediction model is more attached to the current running condition of the target vehicle, the accuracy of obstacle track prediction is improved, the collision between the target vehicle and an obstacle is avoided, and the running safety is improved.
In an optional embodiment, the acquiring the current scene type and the current obstacle information includes:
determining an obstacle track prediction area corresponding to the target vehicle based on the current position of the target vehicle and a preset range;
and in the obstacle track prediction area, acquiring driving environment information and obstacle perception information so as to acquire the current scene type and the current obstacle information.
The beneficial effects are that: the obstacle track prediction area corresponding to the target vehicle is determined based on the current position of the target vehicle and the preset range, and the scene type and the current obstacle information are determined based on the data in the obstacle track prediction area, so that the load of an automatic driving system and the prediction time consumption of a track prediction model can be reduced.
In a second aspect, the present invention provides an obstacle trajectory prediction device, the device comprising:
the driving data acquisition module is used for acquiring the current scene type and the current obstacle information corresponding to the target vehicle;
the current index determining module is used for determining current index values under different detection distances based on the current scene type and scene suitability of the track prediction models in a preset model library, wherein the preset model library comprises a plurality of track prediction models, and the scene suitability is used for representing the scene type adapted by the track prediction models;
the reference index acquisition module is used for acquiring reference index values of each track prediction model under different detection distances, and the reference index values are obtained based on the evaluation of the target evaluation data set;
A prediction model selection module, configured to determine a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value, and the reference index value corresponding to each track prediction model;
and the target track prediction module is used for predicting the track of the current obstacle corresponding to the target vehicle based on the target track prediction model.
In a third aspect, the present invention provides a vehicle controller, comprising: a memory and a processor, the memory and the processor being communicatively connected to each other, wherein the memory stores computer instructions which, when executed by the processor, implement the obstacle track prediction method of the first aspect or any one of its corresponding embodiments.
In a fourth aspect, the present invention provides an autopilot system comprising:
the sensor module is used for acquiring obstacle perception information around the target vehicle so as to obtain current obstacle information;
the vehicle and environment information module is used for acquiring driving environment information so as to obtain the current scene type;
and the central processing calculation module is connected with the sensor module and the vehicle and environment information module and is used for executing the obstacle track prediction method of the first aspect or any corresponding implementation mode thereof.
In a fifth aspect, the present invention provides a vehicle comprising:
a vehicle body;
the vehicle controller of the above third aspect is provided in the vehicle body.
In a sixth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the obstacle trajectory prediction method of the first aspect or any one of its corresponding embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a first obstacle trajectory prediction method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of a scenario of obstacle trajectory prediction according to an embodiment of the invention;
FIG. 3 is a block diagram of an autopilot system according to one embodiment of the present invention;
FIG. 4 is a block diagram of another autopilot system in accordance with an embodiment of the present invention;
fig. 5 is a flowchart of a second obstacle trajectory prediction method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an obstacle trajectory prediction zone, according to an embodiment of the invention;
fig. 7 is a flowchart of a third obstacle trajectory prediction method according to an embodiment of the invention;
FIG. 8 is a flow chart of iterative updating of an obstacle trajectory model, according to an embodiment of the invention;
FIG. 9 is a flow chart of an obstacle trajectory model switch according to an embodiment of the invention;
FIG. 10 is a flow chart of interaction of the motion state of an obstacle in a trajectory prediction model according to an embodiment of the invention;
fig. 11 is a block diagram showing a configuration of an obstacle trajectory prediction device according to an embodiment of the present invention;
fig. 12 is a block diagram of a vehicle controller according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, track prediction has become an indispensable link in the field of automatic driving, and the demands on it in terms of accuracy, efficiency, generalization and the like are increasingly urgent and strict. An automatic driving vehicle plans and makes decisions according to track prediction data in order to cope with possible dangerous situations; similar to the defensive driving of an experienced human driver with a certain pre-judging capability, track prediction is widely applied to safe automatic driving. It is also applied to various scenes such as open roads and closed environments, and track prediction has developed from conventional road-property prediction to prediction for specific scenes.
With the continuous application and development of deep learning technology, more and more track prediction models based on neural networks are used, and the evaluation of indexes on the public data is better. Therefore, the trajectory prediction model is also widely used in the trajectory prediction of the vehicle. For example: receiving prediction scene information determined by a track prediction model of the mobile device; storing the predicted scene information into a cloud scene library, and carrying out data distribution on all scene information in the cloud scene library to distinguish normal scene information from abnormal scene information; respectively performing perception labeling on normal scene information and abnormal scene information to obtain a marked track prediction training sample; training a track prediction model in the cloud based on the track prediction training sample with the mark; and transmitting the parameters of the trained track prediction model to the mobile device to update and iterate the track prediction model of the mobile device. However, different types of track prediction models, such as a single-target track prediction model, a multi-target track prediction model, and the like, have different performance applicability to different scenes, and in the related art, track prediction is mainly performed on a target vehicle by depending on the single type of track prediction model, and the single type of track prediction model is often difficult to adapt to a variable driving scene, so that the track prediction is inaccurate.
In view of the above, according to an embodiment of the present invention, there is provided an obstacle trajectory prediction method embodiment, it is to be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that herein.
In the present embodiment, there is provided a method for predicting an obstacle trajectory, which may be used for a vehicle, such as: fig. 1 is a flowchart of a first obstacle trajectory prediction method according to an embodiment of the invention, as shown in fig. 1, the flowchart including the steps of:
step S101, obtaining a current scene type and current obstacle information corresponding to a target vehicle.
Specifically, as shown in fig. 2, the autonomous vehicle predicts the trajectory of the obstacle according to the perceived data, and then judges the potential danger according to the obstacle trajectory prediction information, so as to plan and control, thereby enabling the autonomous vehicle to cope with the potential dangerous situation. As shown in fig. 3 and 4, the target vehicle is provided with an automatic driving system, which includes: the system comprises a sensing sensor, a sensor module, a vehicle and environment information module and a central processing and calculating module; the sensing sensor comprises a radar module and a camera module, wherein the radar module comprises a laser radar, a millimeter wave radar, an ultrasonic radar and the like, and the camera module comprises a common camera, an infrared camera, a depth camera and the like; the sensor module is used for acquiring obstacle perception information around the target vehicle, wherein the obstacle perception information comprises radar data detected by the radar module and image data shot by the camera module; the vehicle and environment information module is used for collecting positioning data of the vehicle, vehicle self data (such as chassis speed, acceleration, steering and the like) and running environment data consisting of a high-precision map, wherein the running environment data comprises lane information, scene environment information, traffic elements and interface identifiers; the central processing calculation module comprises a sensing unit and a scene analysis and judgment unit, wherein the sensing unit is used for carrying out algorithm processing on barrier sensing information to obtain current barrier information, and the current barrier information comprises data such as barrier type, barrier code, barrier position, barrier speed, barrier length, width and height, appearance time stamp and the like; the scene analysis and judgment unit is used for determining a current scene type where the target vehicle is located and a scene identifier corresponding to the current scene type according to the driving environment data, wherein the scene type comprises at least one of a working scene and a traffic scene, the working scene comprises at least one of a closed scene and an open scene, the traffic scene comprises at least one of a straight road scene, a left/right turning scene, a T-shaped/cross-shaped/L-shaped intersection scene, a turning-around scene, a left/right overtaking scene, a zebra stripes scene, a dotted line scene and a solid line scene, and the current scene type comprises at least one of the working scene and the traffic scene.
Further, the central processing and calculating module further includes a data monitoring unit for monitoring upstream sensing data (i.e. data acquired by the sensor module), positioning data, driving environment information composed of a high-precision map, and the like, downstream planning data, and a data range of the associated scene analysis and determination unit, so as to improve controllability and reliability of an automatic driving system of the target vehicle. Specifically, during driving, the data input by the sensor module and the vehicle and environment information module needs to be checked for completeness and monitored, including but not limited to: the sensor module and the sensing unit are used for initializing the data state and the data range, the positioning data and the vehicle parameters, whether the map data is complete (such as whether the map data contains road data, scene environment data, traffic elements and interface identifiers), traffic scenes and operation scenes.
Step S102, determining current index values under different detection distances based on the current scene type and scene suitability of the track prediction models in a preset model library, wherein the preset model library comprises a plurality of track prediction models, and the scene suitability is used for representing the scene type adapted by the track prediction models.
It should be noted that the preset model library includes a plurality of track prediction models, including at least one single-target track prediction model and at least one multi-target track prediction model. Before the models are called, initialization parameters can be acquired to initialize each track prediction model, where the initialization parameters include model weight parameters and an update iteration identifier. Specifically, different scene types place different performance requirements on the track prediction model, and the performance of track prediction models applicable to the same scene type also differs. Therefore, index values under different detection distances can be determined in advance according to the scene type and the scene suitability of the track prediction models in the preset model library, so as to form a scene index data table as shown in Table 1. The scene index data table includes the index values corresponding to different scene types and the scene suitability of each track prediction model, specifically index values of different evaluation indexes, such as the average distance error ADE, the final distance error FDE, the elapsed time and the loss rate MR. In actual use, the track prediction model applicable under the current scene type and the index values corresponding to that model can be determined according to the current scene type and the scene suitability of the track prediction models, so as to obtain the current index values. For example, in an intersection scene (T-shaped, crossroad, L-shaped, etc.), the index values at a detection distance of 2 m, namely ADE: 0.7, FDE: 1.1, MR: 7.0% with generally small time consumption, may be selected as the current index values. In addition, in subsequent model update iterations, in order to prevent the track prediction capability of the track prediction model from becoming too poor, the index values corresponding to detection distances of 2 m, 5 m and 10 m, namely ADE: 0.7, 0.8, 1.0; FDE: 1.1, 1.3, 1.5; MR: 7.0%; with small time consumption, may be used as the current index values of the track prediction model.
TABLE 1
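The following minimal sketch illustrates how such a scene index data table and the lookup of current index values could be organized; Table 1 itself is not reproduced here, and all model names, scene names and data-structure choices are illustrative assumptions rather than part of the patent.

```python
# Illustrative sketch of a scene index data table keyed by scene type, model and
# detection distance; values echo the examples quoted in the text, everything else
# (names, structure) is assumed for illustration only.
from typing import Dict, List

# scene_index_table[scene_type][model_name][detection_distance] -> metric values
scene_index_table: Dict[str, Dict[str, Dict[float, Dict[str, float]]]] = {
    "left_right_turn": {
        "model_a": {2.0: {"ADE": 0.7, "FDE": 1.1, "MR": 0.070},
                    5.0: {"ADE": 0.8, "FDE": 1.3, "MR": 0.070},
                    10.0: {"ADE": 1.0, "FDE": 1.5, "MR": 0.070}},
        "model_e": {2.0: {"ADE": 0.6, "FDE": 0.9, "MR": 0.085}},
    },
}

def current_index_values(scene_type: str,
                         suitable_models: List[str]) -> Dict[str, Dict[float, Dict[str, float]]]:
    """Return the per-distance index values of the models suited to the given scene type."""
    table = scene_index_table.get(scene_type, {})
    return {m: table[m] for m in suitable_models if m in table}

print(current_index_values("left_right_turn", ["model_a", "model_e"]))
```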
Step S103, obtaining reference index values of each track prediction model under different detection distances, wherein the reference index values are obtained based on the evaluation of the target evaluation data set.
Specifically, assuming that the evaluation indexes of the track prediction model include the average distance error ADE, the final distance error FDE, the elapsed time and the loss rate MR, the target evaluation data set may be input into each track prediction model, and the reference index values of each track prediction model under different detection distances may be calculated according to the evaluation method of each evaluation index. For example, the indexes are calculated directly from the obstacle ground-truth points and the predicted points in the target evaluation data set: 1. ADE: the average L2 distance between each point of the predicted obstacle track and the corresponding ground-truth point; for multi-modal prediction, the minimum ADE over the K predicted trajectories is typically used. 2. FDE: the L2 distance between the final predicted position of the obstacle and the corresponding ground-truth (GT) position. 3. MR: based on the L2 distance of the final position, the proportion of predicted obstacle trajectories whose final point is not within 2.0 m of the ground truth; for example, if there are n predicted trajectories and the final coordinate of m of them is more than 2 m from the ground truth, then MR = m/n.
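A minimal sketch of how these evaluation indexes could be computed on a target evaluation data set is shown below; trajectories are assumed to be lists of (x, y) points, and the function names are illustrative.

```python
# Sketch of ADE, FDE, minADE and MR computation on predicted vs. ground-truth trajectories.
import math
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

def l2(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ade(pred: Sequence[Point], gt: Sequence[Point]) -> float:
    """Average L2 distance between predicted points and the corresponding ground-truth points."""
    return sum(l2(p, g) for p, g in zip(pred, gt)) / len(gt)

def fde(pred: Sequence[Point], gt: Sequence[Point]) -> float:
    """L2 distance between the final predicted point and the final ground-truth point."""
    return l2(pred[-1], gt[-1])

def min_ade(preds: List[Sequence[Point]], gt: Sequence[Point]) -> float:
    """For multi-modal prediction, the minimum ADE over the K predicted trajectories."""
    return min(ade(p, gt) for p in preds)

def miss_rate(preds: List[Sequence[Point]], gts: List[Sequence[Point]],
              threshold: float = 2.0) -> float:
    """Proportion of trajectories whose final point is farther than `threshold` from ground truth (MR = m / n)."""
    misses = sum(1 for p, g in zip(preds, gts) if fde(p, g) > threshold)
    return misses / len(gts)
```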
Step S104, determining a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value and the reference index value corresponding to each track prediction model.
For example, referring to Table 1 above, if the current scene type is a left/right turning scene and track prediction models a, c, e and f are all applicable to left/right turning scenes, whether to select a single-target or a multi-target track prediction model may be determined according to the current obstacle information, such as the obstacle position, the obstacle risk level and the number of obstacles. If a single-target track prediction model is selected, track prediction models c and f are excluded, and the 2 m, 5 m or 10 m detection distance may then be further selected according to the obstacle position, the number of obstacles and/or the obstacle risk level. If the 2 m detection distance is selected, it can be seen from Table 1 that track prediction model a has ADE 0.7, FDE 1.1 and MR 7% at the 2 m detection distance with small time consumption, while track prediction model e has ADE 0.6, FDE 0.9 and MR 8.5% at the 2 m detection distance with large time consumption. At this time, models whose indexes deviate greatly from the current index values corresponding to the left/right turning scene can be filtered out of track prediction models a and e; if both models float around the current index values, scale factors can be used as constraints to select the better track prediction model. If both are within the current index values, the curvature and length of the curve are considered. For example, if the curve is short and its curvature is ordinary, and the difference in loss rate is small, the FDE index is emphasized and track prediction model e may be selected; if the curve is relatively long, the ADE index and MR are considered comprehensively and track prediction model a may be selected; if both the curve length and the curvature are ordinary, ADE, FDE and MR are weighed comprehensively and track prediction model a may be selected. As for how exactly ADE, FDE and MR are weighed, the indexes are combined by scale factors, which can be selected according to the actual situation and are not limited here.
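As a simplified illustration of this selection step, the sketch below scores each candidate model by its weighted deviation from the current index values; the weighting ("scale factors") and the function names are assumptions, and the numeric inputs reuse the 2 m reference values quoted above.

```python
# Illustrative model-selection heuristic: pick the candidate whose reference index
# values deviate least (in a weighted sense) from the current index values.
from typing import Dict, Optional

def select_model(candidates: Dict[str, Dict[str, float]],
                 current_index: Dict[str, float],
                 weights: Optional[Dict[str, float]] = None) -> str:
    """Return the candidate name with the smallest weighted deviation from the current index values."""
    weights = weights or {"ADE": 1.0, "FDE": 1.0, "MR": 1.0}
    def score(ref: Dict[str, float]) -> float:
        return sum(weights[k] * abs(ref[k] - current_index[k]) for k in weights)
    return min(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "model_a": {"ADE": 0.7, "FDE": 1.1, "MR": 0.070},
    "model_e": {"ADE": 0.6, "FDE": 0.9, "MR": 0.085},
}
current = {"ADE": 0.7, "FDE": 1.1, "MR": 0.070}
print(select_model(candidates, current))  # -> "model_a" for these example numbers
```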
Further, the reference index values of each track prediction model under different detection distances are stored in an evaluation pool. The central processing and calculating module further comprises a track prediction self-adaptation and processing unit, which is used for comprehensively judging, based on the reference index values of each track prediction model in the evaluation pool under different detection distances, the current scene type and the current obstacle information, and switching to the selected target track prediction model.
Step S105, track prediction is carried out on the current obstacle corresponding to the target vehicle based on the target track prediction model.
Specifically, current obstacle information and running environment information can be input into a target track prediction model to predict the track of the current obstacle corresponding to the target vehicle, so as to obtain the track of the current obstacle. Of course, since the prediction modes of the track prediction models are different, the input data of each track prediction model can be selected according to actual situations.
According to the obstacle track prediction method provided by the embodiment, the current index values under different detection distances are determined according to the current scene type and the scene suitability of the track prediction model; acquiring reference index values of each track prediction model under different detection distances; then, according to the current obstacle information, the current index value and the reference index value under different detection distances, a track prediction model applicable to the current scene type and the current obstacle information is selected from a preset track model library. Therefore, the adaptive switching of the track prediction model can be realized to adapt to changeable driving scenes, so that the accuracy of track prediction is improved.
Fig. 5 is a flowchart of a second obstacle trajectory prediction method according to an embodiment of the invention, as shown in fig. 5, the flowchart including the steps of:
step S201, acquiring the current scene type and the current obstacle information corresponding to the target vehicle.
As one of the optional embodiments, the current obstacle information includes a risk level of the current obstacle, and the risk level of the current obstacle is obtained by: acquiring the distance and the speed of the current obstacle relative to the target vehicle; based on the distance and the speed, a risk level of the current obstacle is determined.
In the actual risk level division, the risk level of the current obstacle may be determined according to the distance and speed of the current obstacle relative to the target vehicle, or by further combining the position of the current obstacle and the like, and the criterion may be selected according to the actual situation.
For example, the risk level of the obstacle may be set according to preset risk determination conditions, such as: dangerous obstacle, larger-influence obstacle, generally-influencing obstacle, and negligible obstacle. The preset risk determination conditions may be: within the obstacle track prediction area, whether the current obstacle is located on the lane, whether the current obstacle is near a predetermined point of a specific area, and whether the current obstacle is within a preset distance (e.g., 100 m) in front of the host vehicle. Specifically, if the current obstacle is near the predetermined point of the specific area, its distance relative to the target vehicle is within 30 m and its speed relative to the target vehicle is greater than 5 m/s, the risk level of the current obstacle is set to dangerous obstacle; or, if the distance of the current obstacle relative to the target vehicle is within the preset distance range, the risk level of the current obstacle may be set to larger-influence obstacle.
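A minimal sketch of such a risk-level rule is shown below, using the thresholds quoted above (30 m, 5 m/s, a preset forward distance of 100 m); the fallback rules for the lower two levels and all names are assumptions for illustration.

```python
# Illustrative mapping from obstacle distance/speed (relative to the target vehicle)
# and position cues to a risk level; thresholds follow the example in the text,
# the lower-level fallbacks are assumed.
def risk_level(distance_m: float, relative_speed_mps: float,
               near_special_area: bool, in_preset_forward_range: bool) -> str:
    """Return the risk level of the current obstacle."""
    if near_special_area and distance_m <= 30.0 and relative_speed_mps > 5.0:
        return "dangerous obstacle"
    if in_preset_forward_range:
        return "larger-influence obstacle"
    if distance_m <= 100.0:
        return "generally-influencing obstacle"
    return "negligible obstacle"

print(risk_level(25.0, 6.0, near_special_area=True, in_preset_forward_range=True))
```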
Specifically, the step S201 includes:
in step S2011, an obstacle track prediction area corresponding to the target vehicle is determined based on the current position of the target vehicle and the preset range.
Specifically, referring to fig. 6, a prediction space of a preset range is established centering on the current position of the target vehicle to obtain the obstacle track prediction area. For example, when the preset range is about 50 m in front of and behind the vehicle and about 20 m to the left and right, an elliptical space of about 50 m front-to-back and about 20 m left-to-right around the current position is taken as the obstacle track prediction area.
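A minimal sketch of the elliptical prediction area check is shown below, assuming semi-axes of 50 m (longitudinal) and 20 m (lateral) in the vehicle's own coordinate frame, as in the example above; the function name and frame convention are assumptions.

```python
# Illustrative check of whether an obstacle offset from the ego position lies inside
# the elliptical obstacle track prediction area.
def in_prediction_area(dx_longitudinal: float, dy_lateral: float,
                       a: float = 50.0, b: float = 20.0) -> bool:
    """True if the obstacle offset (dx, dy) from the ego position lies inside the ellipse."""
    return (dx_longitudinal / a) ** 2 + (dy_lateral / b) ** 2 <= 1.0

print(in_prediction_area(30.0, 10.0))   # inside the area
print(in_prediction_area(60.0, 0.0))    # outside (more than 50 m ahead)
```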
In step S2012, in the obstacle trajectory prediction area, the driving environment information and the obstacle sensing information are acquired to acquire the current scene type and the current obstacle information.
Specifically, the driving environment information includes lane data and traffic elements in the obstacle track prediction area, the traffic elements include signal lamp data and traffic marks, the driving environment information can be acquired by a vehicle and environment information module, and then the scene analysis and judgment unit performs scene analysis on the driving environment information to obtain the current scene type. The obstacle sensing information can be obtained by collecting data of a sensing sensor by the sensor module, and then the sensing unit processes the obstacle sensing information to obtain current obstacle information. It should be noted that, the scene type and the acquisition of the obstacle data may refer to the descriptions of other related documents, and will not be described in detail herein.
Step S202, determining current index values under different detection distances based on the current scene type and scene suitability of the track prediction models in a preset model library, wherein the preset model library comprises a plurality of track prediction models, and the scene suitability is used for representing the scene type adapted by the track prediction models. Referring specifically to step S102, the details are not repeated here.
In step S203, reference index values of each trajectory prediction model under different detection distances are obtained, wherein the reference index values are obtained based on the evaluation of the target evaluation dataset. Referring specifically to step S103, the details are not repeated here.
Step S204, determining a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value and the reference index value corresponding to each track prediction model. Referring specifically to step S104, the details are not repeated here.
Step S205, track prediction is carried out on the current obstacle corresponding to the target vehicle based on the target track prediction model. Referring specifically to step S105, the details are not repeated here.
In step S206, a data set to be trained and a data set to be evaluated under the current scene type are obtained based on the scene information of the current scene type.
Specifically, the step S206 includes:
in step S2061, real-time scene information acquired by the target vehicle is acquired.
Specifically, the real-time scene information includes real-time driving environment information collected by the vehicle and environment information module and real-time obstacle information obtained by processing the obstacle sensing information by the sensing unit.
In step S2062, the real-time scene information is divided based on the scene type to obtain the target scene data under different scene types.
Specifically, dividing the real-time scene information based on scene types to obtain first scene information under different scene types; dividing the first scene information under different scene types into normal data and abnormal data based on a data dividing rule of the scene types so as to form second scene information under different scene types; acquiring the motion state of a current obstacle corresponding to scene information; and based on the time sequence and the motion state, assembling, dividing and screening the second scene information to form target scene data under different scene types. For example, first scene information of a straight-track scene may be determined from the real-time scene information, and the first scene information of the straight-track scene may be divided into normal data and abnormal data to constitute second scene information of the straight-track scene; then, according to the real-time motion state of the current obstacle, the scene information of the dynamic/static/unstable obstacle is segmented from the second scene information of the straight-path scene by taking 3s as 1 segment so as to form target scene data in the straight-path scene. It should be noted that, the above abnormal data may be understood as irregular data corresponding to a scene type, for example: the traveling road section is temporarily closed, the traffic facilities are temporarily changed, or the situation does not belong to the conventional scene.
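The sketch below illustrates, under stated assumptions, how such partitioning could be organized: frames are grouped by scene type, separated into normal and abnormal data, and cut into 3-second segments per obstacle motion state. The frame fields and helper names are assumptions, not the patent's data format.

```python
# Illustrative partitioning of real-time scene frames into 3 s segments per
# (scene type, normal/abnormal, motion state); field names are assumed.
from collections import defaultdict
from typing import Dict, List

SEGMENT_SECONDS = 3.0

def partition(frames: List[dict]) -> Dict[tuple, List[List[dict]]]:
    """Return segments keyed by (scene_type, is_normal, motion_state)."""
    buckets: Dict[tuple, List[dict]] = defaultdict(list)
    for f in frames:  # each frame carries scene_type, is_normal, motion_state, timestamp
        buckets[(f["scene_type"], f["is_normal"], f["motion_state"])].append(f)

    segments: Dict[tuple, List[List[dict]]] = defaultdict(list)
    for key, fs in buckets.items():
        fs.sort(key=lambda f: f["timestamp"])
        start, current = None, []
        for f in fs:
            if start is None or f["timestamp"] - start > SEGMENT_SECONDS:
                if current:
                    segments[key].append(current)   # close the previous 3 s segment
                start, current = f["timestamp"], []
            current.append(f)
        if current:
            segments[key].append(current)
    return segments
```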
In step S2063, the data set to be trained and the data set to be evaluated under different scene types are acquired.
Specifically, the autopilot system constructs scene library data files for different scene types in advance, sets file identifiers for the scene library data files of different scene types, and when the autopilot system is used, checks the scene library data files of all scene types to judge whether all the required scene types exist or not, if not, constructs the scene library data files for the missing scene types, and sets the file identifiers for the scene library data files. In the subsequent use process, the scene library data files under different scene types can be searched according to the file identifications corresponding to the different scene types, wherein the scene library data files comprise the data set to be trained and the data set to be evaluated under the corresponding scene types.
Step S2064, expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data and the preset data expansion conditions, so as to obtain the data set to be trained and the data set to be evaluated under the current scene type.
Specifically, the data expansion conditions can be set according to the ratio of the scene information in the scene library data file to the scene information in the total scene library data file under different scene types and the scene requirements of different scene types, or the data expansion conditions can be set according to the actual application, and the preset data expansion conditions of the various scene types can be the same or different, so that the data expansion conditions are not particularly limited.
Further, the step S2064 includes: and if the target scene data meets the preset data expansion condition, expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data so as to obtain the data set to be trained and the data set to be evaluated under the current scene type. In addition, if the target scene data does not meet the preset data expansion condition, the data set to be trained and the data set to be evaluated in different scenes are kept unchanged.
Specifically, if the target scene data meets the preset data expansion condition, dividing the target scene data into a training data set to be updated and an evaluation data set to be updated according to a preset dividing ratio; based on scene types, writing training data sets to be updated into corresponding data sets to be trained in the scene library data files so as to expand the training data sets to be trained in different scene types, and writing evaluation data sets to be updated into corresponding data sets to be evaluated in the scene library data files so as to expand the evaluation data sets to be evaluated in different scene types. When training and evaluating the target track prediction model later, searching a target scene library data file according to the file identification of the current scene type, and calling a data set to be trained and a data set to be evaluated under the current scene type from the target scene library data file.
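A minimal sketch of this expansion step is shown below; the 80/20 split ratio and the expansion condition (a minimum number of new segments) are assumptions for illustration, as are the names.

```python
# Illustrative expansion of a scene library: when the new target scene data meet the
# expansion condition, split them by a preset ratio and append them to the
# to-be-trained / to-be-evaluated sets of the corresponding scene type.
from typing import Dict, List, Tuple

def expand_scene_library(library: Dict[str, Dict[str, List]],
                         scene_type: str,
                         target_scene_data: List,
                         min_segments: int = 100,
                         train_ratio: float = 0.8) -> Tuple[List, List]:
    """Append new data to one scene type's training and evaluation sets."""
    entry = library.setdefault(scene_type, {"train": [], "eval": []})
    if len(target_scene_data) < min_segments:      # expansion condition not met
        return entry["train"], entry["eval"]       # keep the data sets unchanged
    split = int(len(target_scene_data) * train_ratio)
    entry["train"].extend(target_scene_data[:split])
    entry["eval"].extend(target_scene_data[split:])
    return entry["train"], entry["eval"]
```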
Step S207, training the target track prediction model based on the data set to be trained to obtain a trained track prediction model.
Specifically, the step S207 includes:
step S2071, obtaining the perceived motion state of the current obstacle.
It should be noted that, the current perceived motion state of the obstacle is obtained by processing the acquired obstacle perceived information by the perception unit.
Step S2072, the data set to be trained is input into the target track prediction model to obtain the predicted track and the predicted motion state of the current obstacle.
It should be noted that, the data set to be trained includes historical obstacle information and corresponding historical driving environment information, the historical obstacle information includes a historical position, a historical speed, a historical acceleration and a historical timestamp of the current obstacle, and the historical driving environment information includes historical lane information, historical signal lamp information and historical traffic identification. It should be noted that the history described herein is only with respect to the current training time.
Step S2073, updating the motion state of the current obstacle based on the perceived motion state, the predicted motion state, and the preset state update condition.
It should be noted that, since the perceived motion state perceived by the perceiving unit has a certain error, the present embodiment adds the obstacle motion state interaction module in the original track prediction model, and interacts in the local block independently. Specifically, the perceived motion state and the predicted motion state are compared, and the motion state of the current obstacle is updated according to the state comparison result and the preset state updating condition. Illustratively, the preset state update conditions are set according to the application: 1. sensing the motion state as dynamic, predicting the motion state as dynamic or static, and updating the motion state of the current obstacle as dynamic; 2. sensing the motion state as static, predicting the motion state as dynamic, and updating the motion state of the current obstacle as dynamic; 3. sensing the motion state as static, and updating the motion state of the current obstacle as static if the predicted motion state is static; 4. and sensing the motion state to be an unstable state, predicting the motion state to be dynamic or static, and updating the motion state of the current obstacle to be dynamic or static. Of course, if the actual need is more specific, the movement state of the obstacle may be divided into a micro-movement state and a strong movement state according to the speed of the obstacle.
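A minimal sketch of these state-update rules is shown below; the rule table mirrors conditions 1 to 4 above, the fallback when both states are unstable and the function name are assumptions.

```python
# Illustrative fusion of the perceived and predicted motion states of the current obstacle,
# following rules 1-4 in the text.
def update_motion_state(perceived: str, predicted: str) -> str:
    """Return the updated motion state of the current obstacle."""
    if perceived == "dynamic":                                     # rule 1
        return "dynamic"
    if perceived == "static":
        return "dynamic" if predicted == "dynamic" else "static"  # rules 2 and 3
    # perceived is "unstable": follow the predicted state (rule 4); static is an assumed fallback
    return predicted if predicted in ("dynamic", "static") else "static"

print(update_motion_state("static", "dynamic"))    # -> dynamic
print(update_motion_state("unstable", "static"))   # -> static
```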
And step S2074, updating parameters of the target track prediction model based on the predicted track and the actual motion state to obtain a trained track prediction model.
It can be understood that if the actual motion state is static but the target track prediction model outputs a predicted track, or if the actual motion state is dynamic but the target track prediction model does not output a predicted track, the track prediction of the target track prediction model deviates from the actual situation. Therefore, the parameters of the target track prediction model need to be adjusted according to the actual motion state and the predicted track, so that the target track prediction model does not perform track prediction on the current obstacle when the current obstacle is static and performs track prediction on the current obstacle when it is dynamic, thereby improving the accuracy of the target track prediction model.
Step S208, the trained track prediction model is evaluated based on the target evaluation data set, so as to obtain a target index value, and the target evaluation data set is updated to be an evaluation data set.
It can be appreciated that the objective evaluation data set is used to evaluate the trained trajectory prediction model to determine whether the model performance of the original objective trajectory prediction model and the trained trajectory prediction model is good or bad. After the evaluation is completed, the target evaluation data set is updated to be the data set to be evaluated, so as to enter the model iterative updating flow of the next round. In addition, in actual operation, the trained track prediction model is also evaluated based on the data set to be evaluated, so that a new reference index value is obtained, and preparation is made for the next round of model iterative updating.
Step S209, comparing the current index value with the reference index value and the target index value respectively to obtain a comparison result.
For example, if the detection distance is 2m and the evaluation index is FDE, the current index value of FDE at the detection distance of 2m may be compared with the reference index value of FDE at the detection distance of 2m and the target index value of FDE at the detection distance of 2m, respectively, to obtain a comparison result of the index values of FDE.
Step S210, if the comparison result meets the preset model updating condition, updating the target track prediction model into a trained track prediction model.
Further, if the comparison result does not meet the preset model updating condition, keeping the target track prediction model unchanged. It can be understood that if the comparison result does not meet the preset model update condition, it indicates that the model performance of the current target track prediction model is better than that of the trained track prediction model, and the model is more suitable for the current scene type, so that the parameters of the target track prediction model are kept unchanged.
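A minimal sketch of this update decision (steps S209 to S210) is shown below; treating lower ADE/FDE/MR as better and requiring the trained model to meet the current index values and beat the reference values is an assumed interpretation of the preset model updating condition.

```python
# Illustrative model-update decision: replace the target model only if the trained
# model's target index values satisfy the current index values and improve on the
# reference index values.
from typing import Dict

def should_update(current: Dict[str, float],
                  reference: Dict[str, float],
                  target: Dict[str, float]) -> bool:
    """True if the trained model meets the current index values and beats the reference values."""
    meets_current = all(target[k] <= current[k] for k in current)
    beats_reference = all(target[k] <= reference[k] for k in reference)
    return meets_current and beats_reference

current = {"ADE": 0.7, "FDE": 1.1, "MR": 0.070}     # example values at a 2 m detection distance
reference = {"ADE": 0.7, "FDE": 1.1, "MR": 0.070}
target = {"ADE": 0.65, "FDE": 1.0, "MR": 0.065}
print(should_update(current, reference, target))     # -> True: switch to the trained model
```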
As an example, referring to fig. 7, the overall flow of the obstacle trajectory prediction method provided by the present invention is as follows:
s301, importing a preset model database and initialization parameters, and inputting completeness inspection and data monitoring; the initialization parameters include model weight parameters of the track prediction model and update iteration identification, and input completeness check and data monitoring include but are not limited to: the method comprises the steps of initializing a data state and a data range of a sensor module and a sensing unit, determining whether positioning data and vehicle parameters are normal, determining whether map data are complete, initializing a traffic scene and initializing a working scene type.
S302, checking whether a vehicle end and a cloud end normally communicate through a data transmission state, checking and establishing a scene library data file based on a traffic scene and an operation scene, encoding the scene library data file, and initializing the duty ratio of each scene library data file; the duty ratio of the scene library data file is used for checking the data condition under each scene type.
S303, setting corresponding evaluation indexes (namely current index values) in a segmented mode based on scene types, environment information and scene suitability of the track prediction model, carrying out segmented evaluation on the track prediction model based on an initialization evaluation set, and storing segmented evaluation results (namely reference index values) in an evaluation pool.
S304, establishing a local obstacle track prediction area in the scene, comprehensively judging and switching to a corresponding target track prediction model based on a segmentation evaluation result, a scene type and obstacle information (such as an obstacle type and a danger level) in the evaluation pool.
S305, running the target track prediction model in real time, performing an effect test on the current obstacle information, and issuing the model test result and the model version state of the target track prediction model to the planning unit and the control unit, so that the model test result and the regulation feedback result (namely the distribution of accurate and inaccurate model predictions) are stored together in the evaluation pool.
S306, the vehicle end acquires, differentiates (into normal and abnormal data), segments and screens the real-time scene information in real time according to the data monitoring result and the consistency of vehicle-cloud instructions, and expands the data set to be trained and the data set to be evaluated under different scene types according to the preset data expansion conditions, so as to construct a new data set to be trained and a new data set to be evaluated.
S307, training the target track prediction model based on the new data set to be trained, and judging whether to replace the target track prediction model based on the preset model updating conditions.
S308, judging whether the iteration instruction has ended: if the target track prediction model does not reach the preset model updating condition within the preset number of iterations, or an instruction to stop iterating or updating is received, the iteration is ended and the best model parameters and version number of the target track prediction model are saved; otherwise, steps S305 to S307 are repeated.
It should be noted that related automatic driving methods based on neural-network trajectory prediction lack the ability to switch and supplement models according to conditions for different scenes and different situations, and it is difficult to adapt to all scenes with only one or two such models. The present invention considers the scene applicability of various track prediction models, such as single-target and multi-target models, and switches models adaptively according to conditions, so that all scenes can be handled. In addition, the related art does not provide a set of adaptive prediction methods with model update iteration for different scenes or for different tasks within the same scene. Even where an iteration scheme is mentioned, in practical application the training data injected into the model do not cover localized data of the application scene as fully as possible, iteration is not performed based on conditions, the data are not monitored for anomalies, and the inevitable deviation of the actual perception results is not taken into account; a model that evaluates well only on external, stable data and a small amount of data that does not cover all scenes is difficult to adapt to the scene. Therefore, in this embodiment, the training data injected into the track prediction model are localized data that fully cover the application scene and are augmented in real time, the upstream and downstream prediction data are monitored, and iteration is performed based on conditions, so that the track prediction model is better suited to the corresponding scene type. Compared with scene libraries in the related art that are divided based on prediction results and environment, this embodiment divides scene types based on map information and traffic elements, so that different tasks are handled through the divided scene data, an optimal track prediction model is found first, and the track prediction model is then iterated. The complexity and application safety of the application scene and the accuracy, efficiency, and intensity of the trajectory are fully considered; an improvement is provided from the angle of the current model, the track prediction model can be switched adaptively and iterated rapidly to enrich actual scene data, the flow of data acquisition and screening and the standardization of reporting and issuing are considered from the perspective of the whole automatic-driving application field, and the scenario adaptability and replicability of the system are realized.
For example, referring to fig. 8, the process of iterative updating of a trajectory prediction model provided by the present invention is as follows:
S401, a cloud end or a vehicle end establishes scene library data files based on traffic scenes and operation scenes, encodes the scene library data files, and counts the data duty ratio among the scene library data files of all scene types.
S402, setting segment evaluation indexes (namely current index values under different detection distances) according to the current scene type and scene suitability of the track prediction model, respectively carrying out segment evaluation on the track prediction model based on the target evaluation data set, and storing segment evaluation results.
S403, running the selected target track prediction model in the scene, performing an effect test on the current obstacle information, and transmitting the model test result and the model version state to the planning unit and the control unit, so that the segment evaluation result based on the segment evaluation indexes and the planning-and-control feedback result state are stored in the evaluation pool.
S404, the vehicle end collects real-time scene information, differentiates the collected data into routine and abnormal data based on scene types, assembles, segments, and screens the differentiated data based on time sequences and obstacle states, sets data expansion conditions according to scene data duty ratios and scene requirements, expands the scene data once the data expansion conditions are reached, and constructs a new data set to be trained and a new data set to be evaluated.
S405, training the target track prediction model based on the new data set to be trained. Vehicle-end offline training or remote-cloud online training can be selected in consideration of vehicle-end resources and running conditions.
S406, respectively carrying out segment evaluation on the trained track prediction model based on the target evaluation data set and the new data set to be trained, and storing the segment evaluation results in the evaluation pool.
S407, respectively comparing the target track prediction model and the trained track prediction model according to the segment evaluation index, the segment evaluation result before training and the segment evaluation result after training, and judging whether the target track prediction model needs to be updated to be the trained track prediction model or not based on a preset model updating condition.
S408, if the target track prediction model is updated, the updated track prediction model is issued to the preset model library, the vehicle end continues to run with the updated track prediction model, and when the number of iterations of the target track prediction model has not reached the preset number, the process returns to step S404 and continues.
S409, if the preset number of iterations of the target track prediction model does not reach the update condition and an instruction to stop iterating or updating is received, the model update is ended, and the system waits for the next cycle or a scene switch.
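For illustration, the segment evaluation of S402 and S406 might be sketched as follows; the distance bins and the displacement-error metric are assumptions introduced here, since the description only requires index values under different detection distances:

    # Illustrative segmented evaluation: average displacement error per detection-distance bin.
    def segment_evaluate(samples, bins=((0, 30), (30, 60), (60, 120))):
        results = {}
        for low, high in bins:
            errors = [
                s["displacement_error"]
                for s in samples
                if low <= s["detection_distance"] < high
            ]
            key = f"{low}-{high}m"
            results[key] = sum(errors) / len(errors) if errors else None
        return results

    samples = [
        {"detection_distance": 12.0, "displacement_error": 0.4},
        {"detection_distance": 45.0, "displacement_error": 1.1},
        {"detection_distance": 80.0, "displacement_error": 2.3},
    ]
    print(segment_evaluate(samples))  # {'0-30m': 0.4, '30-60m': 1.1, '60-120m': 2.3}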
It is worth noting that, compared with the related art, which trains and iterates the model by differentiating scenes into normal and abnormal data (long-tail scenes) and extracting the long-tail scene data as high-value data, the invention also expands the data set to be evaluated as the scene data are expanded, thereby improving the authenticity of model evaluation and updating. Meanwhile, a map generalization module and training options are added for generalization of the model; that is, the various map information interfaces or lane information interfaces arranged in different scenes, or under different tasks of the same scene, can be used in the prediction processing, and only data branching or remote cloud issuing needs to be performed in preprocessing. The method and system make the prediction data pipeline truly automatic: no labeling is required, division, screening, and expansion are all performed within the system, and verification of newly added data against the ground truth of the sensor data is added, thereby providing an efficient way to iterate the model adaptively.
For example, referring to fig. 9, a specific process of switching a track prediction model provided by the present invention is as follows:
S501, obtaining the track prediction models from the preset model library, and obtaining the corresponding segment evaluation results, model information, and version numbers from the evaluation pool.
S502, scene type: judging the operation scene or the traffic scene corresponding to the current scene type.
S503, obstacle information: judging the obstacle information in the established obstacle track prediction area, where the obstacle information includes the number of obstacles and the obstacle risk level. The obstacle risk level is assigned according to the relative distance and speed of the obstacle with respect to the vehicle; for example, if the distance from the vehicle is 30 m and the relative speed is greater than 5, the obstacle is assigned a risk level.
S504, comprehensive judgment: selecting the target track prediction model most suitable for the current scene type based on the segment evaluation results of S501, the scene type of S502, and the obstacle information of S503.
S505, model switching: calling and switching to the target track prediction model from the preset model library, so as to predict the tracks of the obstacles around the vehicle.
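A minimal sketch of the comprehensive judgment and switching of S504-S505 under assumed scoring rules; the scene-matching filter, the risk-based weighting, and the example model records are all hypothetical:

    # Hypothetical model-selection rule: prefer models whose scene suitability matches the
    # current scene type, then break ties with the stored segment evaluation results.
    def select_target_model(models, current_scene, obstacle_info):
        candidates = [m for m in models if current_scene in m["scene_suitability"]]
        if not candidates:
            candidates = models  # fall back to all models if none matches the scene
        # with many or high-risk obstacles, weight the near-range segment more heavily
        near_weight = 2.0 if obstacle_info["max_risk_level"] >= 2 else 1.0

        def score(m):
            seg = m["segment_eval"]  # lower error is better
            return near_weight * seg["0-30m"] + seg["30-60m"]

        return min(candidates, key=score)

    models = [
        {"name": "single_target", "scene_suitability": {"straight_road"},
         "segment_eval": {"0-30m": 0.5, "30-60m": 1.2}},
        {"name": "multi_target", "scene_suitability": {"intersection", "straight_road"},
         "segment_eval": {"0-30m": 0.4, "30-60m": 1.0}},
    ]
    print(select_target_model(models, "intersection", {"max_risk_level": 2})["name"])  # multi_target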
For example, referring to fig. 10, a specific flow of interaction between motion states of an obstacle in a trajectory prediction model provided by the present invention is as follows:
In step S601, the sensing unit acquires the historical obstacle information, the historical driving environment information, and the perceived motion state information of the obstacles, and acquires the scene type and the task mode over the time sequence.
Step S602, an obstacle motion state interaction module is added to the original track prediction model; like the history information, the obstacle motion state interaction module performs its interaction independently within a local block.
Step S603, injecting the scene data constructed in step S601 into the track prediction model for training, and performing an evaluation test.
Step S604, obtaining the predicted track and the predicted motion state of each obstacle through a track prediction model.
Step S605 compares the perceived motion state in step S601 with the predicted motion state in step S604, sets a state update condition, and updates the motion state of the obstacle when the state update condition is satisfied.
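For illustration, the state update of S605 might be sketched as follows, assuming the update condition is agreement between the perceived and predicted states over several consecutive frames; the frame threshold and state labels are hypothetical:

    # Hypothetical state-update rule for S605: overwrite the obstacle's motion state only when
    # the predicted state agrees with the perceived state for several consecutive frames.
    def update_motion_state(current_state, perceived_states, predicted_states, min_agreement=3):
        if len(perceived_states) < min_agreement:
            return current_state
        recent = list(zip(perceived_states, predicted_states))[-min_agreement:]
        if all(p == q for p, q in recent) and recent[-1][0] != current_state:
            return recent[-1][0]  # update condition met: adopt the agreed state
        return current_state

    perceived = ["static", "dynamic", "dynamic", "dynamic"]
    predicted = ["static", "dynamic", "dynamic", "dynamic"]
    print(update_motion_state("static", perceived, predicted))  # "dynamic"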
It should be noted that related automatic driving methods based on neural-network trajectory prediction usually model the future trajectory from historical obstacle information and map information. However, the static and dynamic states of obstacles affect the trajectory prediction model differently in different scenes, and most methods ignore the influence of the dynamic and static states of obstacles on the prediction model, so that the model results partially deviate from the actual situation: for example, the influence on the trajectory of a dynamic obstacle when most obstacles are static is one situation, and the influence on the trajectory prediction model when most obstacles are dynamic is another. The invention provides an improved prediction method in which the interaction of the dynamic and static states of obstacles is modeled more faithfully, further improving the accuracy and rationality of the application.
The present embodiment also provides an obstacle trajectory prediction device, which is used to implement the foregoing embodiments and preferred embodiments, and will not be described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides an obstacle trajectory prediction apparatus, as shown in fig. 11, including:
a driving data obtaining module 701, configured to obtain a current scene type and current obstacle information corresponding to a target vehicle;
the current index determining module 702 is configured to determine current index values under different detection distances based on a current scene type and scene suitability of a track prediction model in a preset model library, where the preset model library includes a plurality of track prediction models, and the scene suitability is used for characterizing the scene type adapted by the track prediction model;
a reference index obtaining module 703, configured to obtain reference index values of each track prediction model at different detection distances, where the reference index values are obtained based on the evaluation of the target evaluation dataset;
A prediction model selection module 704, configured to determine a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value, and the reference index value corresponding to each track prediction model;
the target track prediction module 705 is configured to predict a track of a current obstacle corresponding to the target vehicle based on the target track prediction model.
In some alternative embodiments, the current obstacle information acquired by the driving data acquiring module 701 includes a risk level of the current obstacle, and the driving data acquiring module 701 specifically acquires the risk level of the current obstacle by: acquiring the distance and the speed of the current obstacle relative to the target vehicle; based on the distance and the speed, a risk level of the current obstacle is determined.
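A minimal sketch consistent with the 30 m / relative-speed-greater-than-5 example given earlier; the number of levels and the second threshold band are assumptions introduced for illustration:

    # Illustrative risk-level assignment from relative distance and speed; only the first
    # threshold pair follows the example in the description, the rest is assumed.
    def risk_level(distance_m, relative_speed_mps):
        if distance_m <= 30.0 and relative_speed_mps > 5.0:
            return 2  # high risk: close and closing fast
        if distance_m <= 60.0 and relative_speed_mps > 2.0:
            return 1  # medium risk
        return 0      # low risk

    print(risk_level(25.0, 6.0))   # 2
    print(risk_level(50.0, 3.0))   # 1
    print(risk_level(100.0, 1.0))  # 0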
In some alternative embodiments, the travel data acquisition module 701 includes:
a prediction area determining unit, configured to determine an obstacle track prediction area corresponding to the target vehicle based on the current position of the target vehicle and a preset range;
and the area data acquisition unit is used for acquiring the driving environment information and the obstacle perception information in the obstacle track prediction area so as to acquire the current scene type and the current obstacle information.
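For illustration, the obstacle track prediction area determined by these units might be sketched as a circular region around the current position; the radius and the planar-coordinate treatment are assumptions:

    import math

    # Hypothetical prediction-area test: keep only obstacles inside a preset range
    # around the target vehicle's current position.
    def in_prediction_area(vehicle_xy, obstacle_xy, preset_range_m=80.0):
        dx = obstacle_xy[0] - vehicle_xy[0]
        dy = obstacle_xy[1] - vehicle_xy[1]
        return math.hypot(dx, dy) <= preset_range_m

    vehicle = (0.0, 0.0)
    obstacles = [(10.0, 5.0), (120.0, 40.0)]
    print([in_prediction_area(vehicle, o) for o in obstacles])  # [True, False]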
In some alternative embodiments, the trajectory prediction apparatus further includes:
the scene information acquisition module is used for acquiring a data set to be trained and a data set to be evaluated under the current scene type, wherein the data set to be trained and the data set to be evaluated are obtained based on real-time scene data of the current scene type;
the prediction model training module is used for training the target track prediction model based on the data set to be trained to obtain a trained track prediction model;
the prediction model evaluation module is used for evaluating the trained track prediction model based on the target evaluation data set to obtain a target index value, and updating the target evaluation data set into a data set to be evaluated;
the model index evaluation module is used for comparing the current index value with the reference index value and the target index value respectively to obtain a comparison result;
and the prediction model updating module is used for updating the target track prediction model into a trained track prediction model if the comparison result meets the preset model updating condition.
In some alternative embodiments, the scene information acquisition module includes:
the scene information acquisition unit is used for acquiring real-time scene information acquired by the target vehicle;
The scene information dividing unit is used for dividing the real-time scene information based on scene types to obtain target scene data under different scene types;
the original data acquisition unit is used for acquiring the data set to be trained and the data set to be evaluated under different scene types;
the scene information expansion unit is used for expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data and the preset data expansion conditions so as to obtain the data set to be trained and the data set to be evaluated under the current scene type.
In some alternative embodiments, the scene information extension unit is specifically configured to:
and if the target scene data meets the preset data expansion condition, expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data so as to obtain the data set to be trained and the data set to be evaluated under the current scene type.
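For illustration, the preset data expansion condition might be sketched as a minimum number of newly collected frames per scene type, after which the new frames are split between the two data sets; the threshold and split ratio are assumptions:

    import random

    # Hypothetical data-expansion rule: once enough new frames of a scene type are collected,
    # split them and append them to the to-be-trained / to-be-evaluated data sets.
    def expand_datasets(train_set, eval_set, new_scene_data, min_frames=500, eval_fraction=0.2):
        if len(new_scene_data) < min_frames:
            return train_set, eval_set  # expansion condition not met yet
        shuffled = list(new_scene_data)
        random.shuffle(shuffled)
        split = int(len(shuffled) * eval_fraction)
        return train_set + shuffled[split:], eval_set + shuffled[:split]

    train, evaluation = expand_datasets([], [], [{"frame": i} for i in range(600)])
    print(len(train), len(evaluation))  # 480 120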
In some alternative embodiments, the predictive model training module includes:
a perceived-state acquisition unit configured to acquire a perceived motion state of a current obstacle;
the training data processing unit is used for inputting the data set to be trained into the target track prediction model to obtain the predicted track and the predicted motion state of the current obstacle;
A motion state determining unit for updating the motion state of the current obstacle based on the perceived motion state, the predicted motion state, and the preset state updating condition;
and the model parameter updating unit is used for updating the parameters of the target track prediction model based on the predicted track and the actual motion state to obtain a trained track prediction model.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The obstacle trajectory prediction device in this embodiment is presented in the form of functional units, where a unit may be an ASIC (Application Specific Integrated Circuit), a processor and memory executing one or more software or firmware programs, and/or other devices that can provide the above-described functions.
The embodiment of the invention also provides a vehicle controller, which is provided with the obstacle track prediction device shown in fig. 11.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a vehicle controller according to an alternative embodiment of the present invention. As shown in fig. 12, the vehicle controller includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the vehicle controller, including instructions stored in or on the memory to display graphical data of the GUI on an external input/output device, such as a display apparatus coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple vehicle controllers may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 12.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform a method for implementing the embodiments described above.
The memory 20 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the vehicle controller, or the like. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the vehicle controller via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The vehicle controller further comprises an input device 30 and an output device 40. The processor 10, the memory 20, the input device 30, and the output device 40 may be connected by a bus or other means; connection by a bus is taken as an example in fig. 12.
The input device 30 may receive input numeric or character data and generate key signal inputs related to user settings and function control of the vehicle controller, such as a touch screen, keypad, mouse, trackpad, touchpad, pointing stick, one or more mouse buttons, trackball, joystick, and the like. The output device 40 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
There is also provided in this embodiment an autopilot system comprising: the system comprises a sensing sensor, a sensor module, a vehicle and environment information module and a central processing and calculating module; the sensing sensor comprises a radar module and a camera module, wherein the radar module comprises at least one of a laser radar, a millimeter wave radar and an ultrasonic radar, and the camera module comprises at least one of a common camera, an infrared camera and a depth camera; the sensor module is used for acquiring obstacle sensing information around the target vehicle detected by the sensing sensor so as to obtain current obstacle information; the vehicle and environment information module is used for acquiring running environment information consisting of a high-precision map so as to obtain the current scene type; in addition, the vehicle and environment information module is also used for collecting positioning data of the vehicle and data of the vehicle (such as chassis speed, acceleration, steering and the like). The central processing and calculating module is connected with the sensor module and the vehicle and environment information module and is used for executing the obstacle track prediction method.
Specifically, the central processing and calculating module comprises a sensing unit, a planning unit, a control unit, a scene analysis and judgment unit, a data monitoring unit, a track prediction self-adaptation and processing unit, and a computing platform or central controller. The sensing unit is used for performing algorithm processing on the obstacle sensing information to obtain the current obstacle information, which includes data such as obstacle type, obstacle code, obstacle position, obstacle speed, obstacle length and width, and appearance time stamp; the data monitoring unit is used for monitoring upstream sensing data (namely data acquired by the sensor module), positioning data, the running environment information formed by the high-precision map, downstream planning data, and the data range of the associated scene analysis and judgment unit, so as to improve the controllability and reliability of the automatic driving system of the target vehicle; the track prediction self-adaptation and processing unit is used for comprehensively judging, switching, and selecting the target track prediction model in the obstacle track prediction area based on the reference index values of each track prediction model in the evaluation pool under different detection distances, the current scene type, and the current obstacle information; the scene analysis and judgment unit is used for determining the current scene type of the target vehicle and the scene identifier corresponding to the current scene type according to the driving environment data; the planning unit and the control unit are used for planning and control of automatic driving; and the computing platform or central controller is used for executing the core algorithms related to automatic driving, integrating the fused data of the sensing sensors, and completing functions such as path planning and decision control.
Furthermore, the automatic driving system further comprises a remote cloud module including a remote cloud, where the remote cloud is used for expanding the collected data according to conditions, handling processing that is inconvenient at the vehicle end, such as model training, data processing, and issuing of model iterations, issuing instructions (such as data collection, data processing, model self-adaptive switching, and model iteration), and providing auxiliary guarantees for the efficient operation of the automatic driving vehicle.
There is also provided in this embodiment a vehicle, including: a vehicle body, and the vehicle controller described above arranged in the vehicle body. Specifically, the vehicle body includes a sensing sensor, a vehicle speed detection device, and a vehicle and environment information module; the sensing sensor comprises a radar module and a camera module and is used for sensing the obstacles around the vehicle to obtain obstacle sensing information; the vehicle speed detection device is used for detecting the running speed of the vehicle; and the vehicle and environment information module is used for collecting positioning data of the vehicle and vehicle data (such as chassis speed, acceleration, and steering) and obtaining high-precision map data so as to obtain driving environment data.
The embodiments of the present invention also provide a computer-readable storage medium. The methods according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as computer code that may be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium, downloaded through a network, and stored on a local storage medium, so that the methods described herein may be processed by such software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above types of memories. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (11)

1. A method of obstacle trajectory prediction, the method comprising:
acquiring current scene type and current obstacle information corresponding to a target vehicle;
determining current index values under different detection distances based on the current scene type and scene suitability of a track prediction model in a preset model library, wherein the preset model library comprises a plurality of track prediction models, and the scene suitability is used for representing the scene type adapted by the track prediction models;
acquiring reference index values of each track prediction model under different detection distances, wherein the reference index values are obtained based on the evaluation of a target evaluation data set;
determining a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value and the reference index value corresponding to each track prediction model;
performing track prediction on the current obstacle corresponding to the target vehicle based on the target track prediction model;
the method further comprises the steps of:
acquiring a to-be-trained data set and a to-be-evaluated data set under the current scene type, wherein the to-be-trained data set and the to-be-evaluated data set are obtained based on real-time scene data of the current scene type, and the to-be-trained data set and the to-be-evaluated data set are used for training the target track prediction model so as to optimize the target track prediction model;
the obtaining the data set to be trained and the data set to be evaluated under the current scene type includes:
acquiring real-time scene information acquired by the target vehicle;
dividing the real-time scene information based on scene types to obtain target scene data under different scene types;
acquiring a data set to be trained and a data set to be evaluated under different scene types;
and expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data and preset data expansion conditions so as to obtain the data set to be trained and the data set to be evaluated under the current scene type.
2. The obstacle trajectory prediction method according to claim 1, characterized in that the method further comprises:
training the target track prediction model based on the data set to be trained to obtain a trained track prediction model;
evaluating the trained track prediction model based on the target evaluation data set to obtain a target index value, and updating the target evaluation data set into the data set to be evaluated;
comparing the current index value with the reference index value and the target index value respectively to obtain a comparison result;
and if the comparison result meets a preset model updating condition, updating the target track prediction model into the trained track prediction model.
3. The obstacle trajectory prediction method according to claim 1, wherein the expanding the to-be-trained data set and the to-be-evaluated data set under the different scene types based on the target scene data and a preset data expansion condition to obtain the to-be-trained data set and the to-be-evaluated data set under the current scene type includes:
and if the target scene data meets the preset data expansion conditions, expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data and the preset data expansion conditions so as to obtain the data set to be trained and the data set to be evaluated under the current scene type.
4. The obstacle trajectory prediction method according to claim 2, wherein the training the target trajectory prediction model based on the to-be-trained data set to obtain a trained trajectory prediction model includes:
acquiring a perceived motion state of the current obstacle;
inputting the data set to be trained into the target track prediction model to obtain a predicted track and a predicted motion state of the current obstacle;
updating the motion state of the current obstacle based on the perceived motion state, the predicted motion state and a preset state updating condition;
and updating parameters of the target track prediction model based on the predicted track and the updated motion state to obtain a trained track prediction model.
5. The obstacle trajectory prediction method according to claim 1, wherein the current obstacle information includes a risk level of a current obstacle, the risk level of the current obstacle being obtained by:
acquiring the distance and the speed of the current obstacle relative to the target vehicle;
based on the distance and the speed, a risk level of the current obstacle is determined.
6. The obstacle trajectory prediction method according to claim 1, wherein the acquiring the current scene type and the current obstacle information includes:
determining an obstacle track prediction area corresponding to the target vehicle based on the current position of the target vehicle and a preset range;
and in the obstacle track prediction area, acquiring driving environment information and obstacle perception information so as to acquire the current scene type and the current obstacle information.
7. An obstacle trajectory prediction device, the device comprising:
the driving data acquisition module is used for acquiring the current scene type and the current obstacle information corresponding to the target vehicle;
the current index determining module is used for determining current index values under different detection distances based on the current scene type and scene suitability of the track prediction models in a preset model library, wherein the preset model library comprises a plurality of track prediction models, and the scene suitability is used for representing the scene type adapted by the track prediction models;
the reference index acquisition module is used for acquiring reference index values of each track prediction model under different detection distances, and the reference index values are obtained based on the evaluation of the target evaluation data set;
A prediction model selection module, configured to determine a target track prediction model from a plurality of track prediction models based on the current obstacle information, the current index value, and the reference index value corresponding to each track prediction model;
the target track prediction module is used for predicting the track of the current obstacle corresponding to the target vehicle based on the target track prediction model;
the apparatus further comprises:
the scene information acquisition module is used for acquiring a to-be-trained data set and a to-be-evaluated data set under the current scene type, wherein the to-be-trained data set and the to-be-evaluated data set are obtained based on real-time scene data of the current scene type, and the to-be-trained data set and the to-be-evaluated data set are used for training the target track prediction model so as to optimize the target track prediction model;
the scene information acquisition module comprises:
the scene information acquisition unit is used for acquiring real-time scene information acquired by the target vehicle;
the scene information dividing unit is used for dividing the real-time scene information based on scene types to obtain target scene data under different scene types;
the original data acquisition unit is used for acquiring the data set to be trained and the data set to be evaluated under different scene types;
The scene information expansion unit is used for expanding the data set to be trained and the data set to be evaluated under different scene types based on the target scene data and the preset data expansion conditions so as to obtain the data set to be trained and the data set to be evaluated under the current scene type.
8. A vehicle controller, characterized by comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the obstacle trajectory prediction method of any one of claims 1 to 6.
9. An autopilot system comprising:
the sensor module is used for acquiring obstacle perception information around the target vehicle so as to obtain current obstacle information;
the vehicle and environment information module is used for acquiring driving environment information so as to obtain the current scene type;
a central processing calculation module, connected to the sensor module and the vehicle and environment information module, for executing the obstacle trajectory prediction method of any one of claims 1 to 6.
10. A vehicle, characterized by comprising:
A vehicle body;
the vehicle controller of claim 8, the vehicle controller disposed within the vehicle body.
11. A computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the obstacle trajectory prediction method according to any one of claims 1 to 6.
CN202311415092.5A 2023-10-30 2023-10-30 Obstacle track prediction method and device, vehicle controller, system and vehicle Active CN117141474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311415092.5A CN117141474B (en) 2023-10-30 2023-10-30 Obstacle track prediction method and device, vehicle controller, system and vehicle

Publications (2)

Publication Number Publication Date
CN117141474A CN117141474A (en) 2023-12-01
CN117141474B true CN117141474B (en) 2024-01-30

Family

ID=88908456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311415092.5A Active CN117141474B (en) 2023-10-30 2023-10-30 Obstacle track prediction method and device, vehicle controller, system and vehicle

Country Status (1)

Country Link
CN (1) CN117141474B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113753077A (en) * 2021-08-17 2021-12-07 北京百度网讯科技有限公司 Method and device for predicting movement locus of obstacle and automatic driving vehicle
CN114426032A (en) * 2022-01-05 2022-05-03 重庆长安汽车股份有限公司 Automatic driving-based vehicle trajectory prediction method and system, vehicle and computer-readable storage medium
CN116001807A (en) * 2023-02-27 2023-04-25 安徽蔚来智驾科技有限公司 Multi-scene track prediction method, equipment, medium and vehicle
WO2023070258A1 (en) * 2021-10-25 2023-05-04 华为技术有限公司 Trajectory planning method and apparatus for vehicle, and vehicle
CN116187475A (en) * 2023-03-16 2023-05-30 北京京东乾石科技有限公司 Track prediction model generation method and device, and model training method and device

Also Published As

Publication number Publication date
CN117141474A (en) 2023-12-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant