CN118067142A - Prediction method, prediction device and mobile carrier - Google Patents

Prediction method, prediction device and mobile carrier


Publication number
CN118067142A
CN118067142A (application CN202211466684.5A)
Authority
CN
China
Prior art keywords
prediction
intention
prediction result
motion
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211466684.5A
Other languages
Chinese (zh)
Inventor
王天明
丁文超
王礼坤
余雷
张子健
李运佳
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211466684.5A
Priority to PCT/CN2023/112646 (published as WO2024109176A1)
Publication of CN118067142A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of this application provide a prediction method, a prediction device, and a mobile carrier. A first intention prediction model is determined from a plurality of intention prediction models according to a first type and a first motion scene, the plurality of intention prediction models being intention prediction models for different types of moving objects and different motion scenes. First feature information is input into the first intention prediction model to obtain a first intention prediction result, where the first feature information includes encoded feature information obtained from high-precision map information corresponding to the first motion scene and from motion information of the moving object. In this way, the accuracy of motion intention prediction for the moving object can be improved, which in turn improves the accuracy of trajectory prediction for the moving object.

Description

Prediction method, prediction device and mobile carrier
Technical Field
The embodiments of this application relate to the fields of intelligent driving and intelligent transportation, and more particularly to a prediction method, a prediction device, and a mobile carrier.
Background
With the development of intelligent driving and intelligent transportation, predicting the motion intention of a moving object allows an autonomous vehicle to estimate the object's future trajectory, make reasonable decisions, and plan safer and more efficient motion behaviors, which is crucial. Predicting motion intention also makes it possible to judge traffic conditions in advance and perform more reasonable traffic control, which is likewise of great importance.
However, the motion intentions of different types of moving objects differ greatly. For example, between a rider and a vehicle on the same road, the rider's motion intention and trajectory are far more uncertain. Even for the same moving object, the uncertainty of its motion intention differs across motion scenes; for example, a vehicle's motion intention at a traffic intersection differs greatly from its intention while driving on a straight road.
Therefore, how to improve the accuracy of intention prediction for different moving objects in different scenes is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of this application provide a prediction method, a prediction device, and a mobile carrier, which can effectively improve the accuracy of motion intention prediction for different moving objects, yielding more accurate predicted trajectories and thereby improving the safety of autonomous driving.
In a first aspect, a prediction method is provided. The method includes: obtaining a first type and a first motion scene, where the first type is the type of a moving object and the first motion scene is the motion scene in which the moving object is located; determining a first intention prediction model from a plurality of intention prediction models according to the first type and the first motion scene, the plurality of intention prediction models being intention prediction models for different types of moving objects and motion scenes; and inputting first feature information into the first intention prediction model to obtain a first intention prediction result, where the first feature information includes encoded feature information obtained from high-precision map information corresponding to the first motion scene and from motion information of the moving object.
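The model-selection step described above can be pictured as a lookup keyed by object type and motion scene. The following Python sketch is purely illustrative; the model names, scene labels, and outputs are hypothetical and not taken from the patent:

```python
# Illustrative sketch (not the patent's implementation): selecting an
# intention prediction model keyed by (moving-object type, motion scene).
from typing import Callable, Dict, List, Tuple

FeatureVec = List[float]  # encoded map + motion features

# Hypothetical registry of per-(type, scene) intention prediction models.
MODELS: Dict[Tuple[str, str], Callable[[FeatureVec], dict]] = {
    ("vehicle", "intersection"): lambda f: {"intents": ["left", "straight", "right"]},
    ("vehicle", "straight_road"): lambda f: {"intents": ["keep_lane", "change_lane"]},
    ("rider", "intersection"): lambda f: {"intents": ["cross", "turn"]},
}

def predict_intention(obj_type: str, scene: str, features: FeatureVec) -> dict:
    """Pick the model matching the object's type and scene, then run it."""
    model = MODELS[(obj_type, scene)]  # the "first intention prediction model"
    return model(features)            # the "first intention prediction result"

result = predict_intention("vehicle", "intersection", [0.0, 1.0])
```

In a real system the registry entries would be trained networks rather than lambdas; the point is only that the (type, scene) pair indexes the model.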
In this technical solution, the first intention prediction model can be accurately identified among the multi-scene intention prediction models from the type of the moving object and its motion scene. Using an intention prediction model matched to the object type and scene improves the accuracy of motion intention prediction, which in turn improves the accuracy of trajectory prediction for the moving object.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: inputting the first feature information into a first trajectory prediction model to obtain a first trajectory prediction result; and obtaining a second trajectory prediction result according to the first intention prediction result and the first trajectory prediction result.
In this technical solution, the first trajectory prediction result and the first intention prediction result are obtained independently and then fused to obtain the second trajectory prediction result. Compared with a scheme that derives the predicted trajectory from the intention result, this enables joint multi-task optimization and promotes multi-task self-consistency, thereby reducing uncertainty propagation. In other words, the accuracy of the predicted trajectory depends to some extent on the intention result, and uncertainty in the intention result has a knock-on effect on the predicted trajectory. This scheme reduces such propagation to a certain extent and improves the accuracy of trajectory prediction.
With reference to the first aspect, in certain implementations of the first aspect, the first intention prediction result includes a plurality of motion intentions and a prediction index corresponding to each motion intention, and the first trajectory prediction result includes a plurality of short-term predicted trajectories and a prediction index corresponding to each short-term predicted trajectory. Obtaining the second trajectory prediction result according to the first intention prediction result and the first trajectory prediction result includes: matching at least some of the plurality of motion intentions with at least some of the plurality of short-term predicted trajectories to obtain the second trajectory prediction result, where the prediction index of each of the at least some motion intentions is greater than or equal to a first index threshold, and the prediction index of each of the at least some short-term predicted trajectories is greater than or equal to a second index threshold.
In this technical solution, each short-term predicted trajectory in the first trajectory prediction result and each motion intention in the first intention prediction result carries a prediction index. This effectively captures the uncertainty of the moving object's future behavior and facilitates further screening of the predicted trajectories and intentions to obtain the second trajectory prediction result.
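The screening-and-matching step above (filter by index thresholds, then pair surviving intentions with surviving trajectories) can be sketched as follows. The thresholds, the intent labels attached to trajectories, and the product-based joint index are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of the threshold-then-match step: keep only motion
# intentions and short-term trajectories whose prediction index clears a
# threshold, then pair each surviving intention with its best trajectory.
def match(intents, trajs, intent_thr=0.2, traj_thr=0.2):
    """intents: list of (intent, index); trajs: list of (traj, index, intent_label)."""
    kept_intents = [(i, p) for i, p in intents if p >= intent_thr]
    kept_trajs = [(t, p, lbl) for t, p, lbl in trajs if p >= traj_thr]
    result = []
    for intent, ip in kept_intents:
        # among trajectories consistent with this intention, take the
        # highest-index one; combine the two indexes (assumed: product)
        candidates = [(t, tp) for t, tp, lbl in kept_trajs if lbl == intent]
        if candidates:
            best = max(candidates, key=lambda c: c[1])
            result.append((intent, best[0], ip * best[1]))
    return result
```

How intentions are associated with trajectories (here a simple label match) is a design choice the patent leaves to the implementation.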
With reference to the first aspect, in certain implementations of the first aspect, when the first type is a vehicle, the second trajectory prediction result includes at least one long-term predicted trajectory, and a predicted time length of the long-term predicted trajectory is greater than a predicted time length of the short-term predicted trajectory.
In this technical solution, the matched short-term predicted trajectory can be extended into a long-term predicted trajectory, thereby improving the accuracy of trajectory prediction.
With reference to the first aspect, in certain implementations of the first aspect, in a case where the first type is a vehicle and the first motion scene is an intersection scene, the first intention prediction model includes a first exit prediction model and a first lane prediction model. Inputting the first feature information into the first intention prediction model to obtain the first intention prediction result includes: inputting the first feature information and exit feature information of the vehicle into the first exit prediction model to obtain a first exit prediction result; inputting the first feature information and lane feature information of the vehicle into the first lane prediction model to obtain a first lane prediction result; and obtaining the first intention prediction result according to the first exit prediction result and the first lane prediction result.
In this technical solution, the first exit prediction result and the first lane prediction result are obtained independently and then fused to obtain the first intention prediction result. Compared with deriving the lane prediction from the intersection-exit intention result, this reduces uncertainty propagation and thus helps improve the accuracy of the first intention prediction result.
With reference to the first aspect, in certain implementations of the first aspect, the first exit prediction result includes a plurality of intention exits and a prediction index corresponding to each intention exit, and the first lane prediction result includes a plurality of intention lanes and a prediction index corresponding to each intention lane. Obtaining the first intention prediction result according to the first exit prediction result and the first lane prediction result includes: obtaining the first intention prediction result from at least some of the plurality of intention exits and at least some of the plurality of intention lanes, where the prediction index of each of the at least some intention exits is greater than or equal to a third index threshold, and the prediction index of each of the at least some intention lanes is greater than or equal to a fourth index threshold.
In this technical solution, the at least some intention exits and the at least some intention lanes are selected by their prediction indexes, which improves the efficiency of determining the first intention prediction result.
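One way to picture the exit/lane fusion for a vehicle at an intersection is as a thresholded cross-product of the two independent predictions. This Python sketch is a hypothetical illustration; the patent does not specify how the two results are combined, and the product-based joint index is an assumption:

```python
# Hypothetical sketch of fusing exit and lane predictions at an
# intersection: filter each by its own threshold, then combine every
# surviving (exit, lane) pair into a joint intention with a joint index.
def fuse_exit_lane(exits, lanes, exit_thr=0.3, lane_thr=0.3):
    """exits: list of (exit, index); lanes: list of (lane, index)."""
    kept_exits = [(e, p) for e, p in exits if p >= exit_thr]   # third threshold
    kept_lanes = [(l, p) for l, p in lanes if p >= lane_thr]   # fourth threshold
    # joint intention = (exit, lane) pair, ranked by combined index
    return sorted(
        ((e, l, pe * pl) for e, pe in kept_exits for l, pl in kept_lanes),
        key=lambda x: -x[2],
    )
```

In practice only exits and lanes that are topologically connected in the map would be paired; that check is omitted here for brevity.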
With reference to the first aspect, in certain implementations of the first aspect, when the first type is a rider and the first motion scene is an intersection scene, the method further includes: determining a red-light-running intention prediction model from the plurality of intention prediction models according to the rider and the intersection scene; and inputting the first feature information and feature information of the traffic signal into the red-light-running intention prediction model to obtain a red-light-running prediction result for the rider. Obtaining the second trajectory prediction result according to the first intention prediction result and the first trajectory prediction result includes: obtaining the second trajectory prediction result according to the first intention prediction result, the red-light-running prediction result, and the first trajectory prediction result.
In this technical solution, when the moving object is a rider in an intersection scene, the prediction of the rider's motion intention jointly considers the probability that the rider runs the red light, the rider's choice of intersection exit, and the trajectory prediction. This can significantly improve the accuracy of both intention prediction and trajectory prediction for riders in intersection scenes.
With reference to the first aspect, in some implementations of the first aspect, obtaining the second trajectory prediction result according to the first intention prediction result, the red-light-running prediction result, and the first trajectory prediction result includes: obtaining a target intention prediction result according to the first intention prediction result and the red-light-running prediction result, where the target intention prediction result includes a plurality of target motion intentions and a prediction index corresponding to each target motion intention; and matching at least some of the plurality of target motion intentions with at least some of the plurality of short-term predicted trajectories to obtain the second trajectory prediction result.
In this technical solution, the at least some motion intentions and the at least some short-term predicted trajectories are selected by their prediction indexes, which improves the efficiency of determining the second trajectory prediction result.
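The combination of the intersection intention result with the red-light-running result might, for illustration only, weight crossing-type intentions by the red-light-running probability. All intent labels and the weighting rule below are assumptions, not the patent's specification:

```python
# Hypothetical sketch: merge the intersection intention result with the
# red-light-running probability to form target motion intentions. The
# idea sketched here: during a red phase, an intention to cross now is
# only plausible if the rider would run the light.
def target_intents(intents, p_run_red):
    """intents: {intent: prediction index}; p_run_red: probability of running the red light."""
    out = {}
    for intent, p in intents.items():
        if intent.startswith("cross"):
            # crossing on red requires running the light
            out[intent] = p * p_run_red
        else:
            # waiting / turning intentions pair with obeying the light
            out[intent] = p * (1.0 - p_run_red)
    return out
```

The resulting target motion intentions, each with its combined prediction index, would then be matched against the short-term predicted trajectories as in the earlier step.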
In a second aspect, a prediction apparatus is provided. The prediction apparatus includes an acquisition module and a processing module. The acquisition module is configured to acquire a first type and a first motion scene, where the first type is the type of a moving object and the first motion scene is the motion scene in which the moving object is located. The processing module is configured to: determine a first intention prediction model from a plurality of intention prediction models according to the first type and the first motion scene, the plurality of intention prediction models being intention prediction models for different types of moving objects and motion scenes; and input first feature information into the first intention prediction model to obtain a first intention prediction result, where the first feature information includes encoded feature information obtained from high-precision map information corresponding to the first motion scene and from motion information of the moving object.
For the technical effects corresponding to the technical solutions of the second aspect, refer to the corresponding description of the first aspect.
With reference to the second aspect, in certain implementations of the second aspect, the processing module is further configured to: input the first feature information into a first trajectory prediction model to obtain a first trajectory prediction result; and obtain a second trajectory prediction result according to the first intention prediction result and the first trajectory prediction result.
With reference to the second aspect, in some implementations of the second aspect, the first intention prediction result includes a plurality of motion intentions and a prediction index corresponding to each motion intention, and the first trajectory prediction result includes a plurality of short-term predicted trajectories and a prediction index corresponding to each short-term predicted trajectory. The processing module is specifically configured to: match at least some of the plurality of motion intentions with at least some of the plurality of short-term predicted trajectories to obtain the second trajectory prediction result, where the prediction index of each of the at least some motion intentions is greater than or equal to a first index threshold, and the prediction index of each of the at least some short-term predicted trajectories is greater than or equal to a second index threshold.
With reference to the second aspect, in certain implementations of the second aspect, when the first type is a vehicle, the second trajectory prediction result includes at least one long-term predicted trajectory, and the predicted time length of the long-term predicted trajectory is greater than the predicted time length of the short-term predicted trajectory.
With reference to the second aspect, in certain implementations of the second aspect, in a case where the first type is a vehicle and the first motion scene is an intersection scene, the first intention prediction model includes a first exit prediction model and a first lane prediction model. The processing module is specifically configured to: input the first feature information and exit feature information of the vehicle into the first exit prediction model to obtain a first exit prediction result; input the first feature information and lane feature information of the vehicle into the first lane prediction model to obtain a first lane prediction result; and obtain the first intention prediction result according to the first exit prediction result and the first lane prediction result.
With reference to the second aspect, in certain implementations of the second aspect, the first exit prediction result includes a plurality of intention exits and a prediction index corresponding to each intention exit, and the first lane prediction result includes a plurality of intention lanes and a prediction index corresponding to each intention lane. The processing module is specifically configured to: obtain the first intention prediction result from at least some of the plurality of intention exits and at least some of the plurality of intention lanes, where the prediction index of each of the at least some intention exits is greater than or equal to a third index threshold, and the prediction index of each of the at least some intention lanes is greater than or equal to a fourth index threshold.
With reference to the second aspect, in some implementations of the second aspect, when the first type is a rider and the first motion scene is an intersection scene, the processing module is further configured to: determine a red-light-running intention prediction model from the plurality of intention prediction models according to the rider and the intersection scene; input the first feature information and feature information of the traffic signal into the red-light-running intention prediction model to obtain a red-light-running prediction result for the rider; and obtain the second trajectory prediction result according to the first intention prediction result, the red-light-running prediction result, and the first trajectory prediction result.
With reference to the second aspect, in certain implementations of the second aspect, the processing module is specifically configured to: obtain a target intention prediction result according to the first intention prediction result and the red-light-running prediction result, where the target intention prediction result includes a plurality of target motion intentions and a prediction index corresponding to each target motion intention; and match at least some of the plurality of target motion intentions with at least some of the plurality of short-term predicted trajectories to obtain the second trajectory prediction result.
In a third aspect, a prediction method is provided. The method includes: obtaining a first type of a moving object and a first motion scene in which the moving object is located, where the first type is a rider and the first motion scene is an intersection scene; determining a rider intersection intention prediction model and a rider red-light-running intention prediction model according to the rider and the intersection scene; inputting first feature information into the rider intersection intention prediction model to obtain a first intention prediction result; inputting the first feature information into the rider red-light-running prediction model to obtain a red-light-running intention prediction result; and obtaining a target intention prediction result according to the first intention prediction result and the red-light-running intention prediction result. The first feature information includes encoded feature information obtained from high-precision map information corresponding to the intersection scene and from the motion information of the rider.
In this technical solution, when the moving object is a rider in an intersection scene, the prediction of the rider's motion intention jointly considers the probability that the rider runs the red light, the rider's choice of intersection exit, and the trajectory prediction. This can significantly improve the accuracy of intention prediction for riders in intersection scenes, and in turn the accuracy of their trajectory prediction.
With reference to the third aspect, in some implementations of the third aspect, the target intention prediction result includes a plurality of target motion intentions and a prediction index corresponding to each target motion intention. The method further includes: inputting the first feature information into a first trajectory prediction model to obtain a first trajectory prediction result, where the first trajectory prediction result includes a plurality of short-term predicted trajectories and a prediction index corresponding to each short-term predicted trajectory; and matching at least some of the plurality of target motion intentions with at least some of the plurality of short-term predicted trajectories to obtain a second trajectory prediction result.
In this technical solution, the intersection intention prediction result and the trajectory prediction result for a rider in an intersection scene are obtained independently and then fused to obtain the second trajectory prediction result. This enables joint multi-task optimization for riders in intersection scenes, promotes multi-task self-consistency, reduces the uncertainty propagation of trajectory prediction in such scenes, and improves the accuracy of trajectory prediction.
With reference to the third aspect, in some implementations of the third aspect, inputting the first feature information into the rider intersection intention prediction model to obtain the first intention prediction result includes: inputting the first feature information and intersection feature information of the rider into the rider intersection intention prediction model to obtain the first intention prediction result.
It should be appreciated that the intersection characteristic information for a rider may include characteristic information associated with the rider and the intersection exit element, such as the relative distance and relative position of the rider and the intersection.
In this technical solution, the intersection feature information of the rider in the intersection scene is used as an additional input to the rider intersection intention prediction model, which can further improve the accuracy of intention prediction for riders in intersection scenes.
With reference to the third aspect, in some implementations of the third aspect, inputting the first feature information into the rider red-light-running prediction model to obtain the red-light-running intention prediction result includes: inputting the first feature information and feature information of the traffic signal into the rider red-light-running prediction model to obtain the red-light-running prediction result.
It should be appreciated that the characteristic information of the traffic signal may include at least one of a color status of the current traffic signal, traffic indication information, or countdown information of the traffic signal.
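As an illustration of how such traffic-signal feature information could be encoded for model input, consider the following sketch. The fields, labels, and encoding are hypothetical and not specified by the patent:

```python
# Hypothetical encoding of traffic-signal features as a numeric model
# input: one-hot colour state plus a countdown scalar.
from dataclasses import dataclass
from typing import List

@dataclass
class SignalFeatures:
    colour: str          # "red" | "yellow" | "green"
    indication: str      # e.g. "straight", "left_turn" (assumed labels)
    countdown_s: float   # remaining seconds; -1.0 if unknown

    def encode(self) -> List[float]:
        colours = ["red", "yellow", "green"]
        onehot = [1.0 if self.colour == c else 0.0 for c in colours]
        return onehot + [self.countdown_s]
```

The traffic-indication field would be encoded similarly (one-hot over the indication vocabulary); it is left as a plain string here for brevity.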
In this technical solution, the feature information of the traffic signal in the intersection scene is used as an additional input to the rider red-light-running prediction model, which can further improve the accuracy of red-light-running intention prediction for riders in intersection scenes.
In a fourth aspect, a prediction apparatus is provided. The prediction apparatus includes an acquisition module and a processing module. The acquisition module is configured to acquire a first type of a moving object and a first motion scene in which the moving object is located, where the first type is a rider and the first motion scene is an intersection scene. The processing module is configured to: determine a rider intersection intention prediction model and a rider red-light-running intention prediction model according to the rider and the intersection scene; input first feature information into the rider intersection intention prediction model to obtain a first intention prediction result; input the first feature information into the rider red-light-running prediction model to obtain a red-light-running intention prediction result; and obtain a target intention prediction result according to the first intention prediction result and the red-light-running intention prediction result. The first feature information includes encoded feature information obtained from high-precision map information corresponding to the intersection scene and from the motion information of the rider.
For the technical effects corresponding to the technical solutions of the fourth aspect, refer to the corresponding description of the third aspect.
With reference to the fourth aspect, in some implementations of the fourth aspect, the target intention prediction result includes a plurality of target motion intentions and a prediction index corresponding to each target motion intention. The processing module is further configured to: input the first feature information into a first trajectory prediction model to obtain a first trajectory prediction result, where the first trajectory prediction result includes a plurality of short-term predicted trajectories and a prediction index corresponding to each short-term predicted trajectory; and match at least some of the plurality of target motion intentions with at least some of the plurality of short-term predicted trajectories to obtain a second trajectory prediction result.
With reference to the fourth aspect, in some implementations of the fourth aspect, the processing module is specifically configured to: input the first feature information and the intersection feature information of the rider into the rider intersection intention prediction model to obtain the first intention prediction result.
It should be appreciated that the intersection characteristic information for a rider may include characteristic information associated with the rider and the intersection exit element, such as the relative distance and relative position of the rider and the intersection.
With reference to the fourth aspect, in some implementations of the fourth aspect, the processing module is specifically configured to: input the first feature information and the feature information of the traffic signal into the rider red-light-running prediction model to obtain the red-light-running prediction result.
It should be appreciated that the characteristic information of the traffic signal may include at least one of a color status of the current traffic signal, traffic indication information, or countdown information of the traffic signal.
In a fifth aspect, a prediction apparatus is provided, including a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs any one of the possible methods of the first aspect or any one of the possible methods of the third aspect.
Alternatively, the processing unit may be a processor, and the storage unit may be a memory, where the memory may be a storage unit (e.g., a register, a cache, etc.) in a chip, or may be a storage unit (e.g., a read only memory, a random access memory, etc.) in a smart device that is located outside the chip.
In a sixth aspect, a server is provided, the server comprising the apparatus according to any of the second or fourth aspects, the server further being configured to send the second trajectory prediction result to the mobile carrier.
In a seventh aspect, there is provided a mobile carrier comprising the apparatus of any one of the second or fourth aspects above.
With reference to the seventh aspect, in certain implementations of the seventh aspect, the mobile carrier is a vehicle.
In an eighth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform any one of the possible methods of the first aspect described above, or causes the computer to perform any one of the possible methods of the third aspect described above.
It should be noted that, the above computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor, and embodiments of the present application are not limited in this regard.
In a ninth aspect, there is provided a computer readable medium having stored thereon a program code which, when run on a computer, causes the computer to perform any one of the possible methods of the first aspect or causes the computer to perform any one of the possible methods of the third aspect.
In a tenth aspect, embodiments of the present application provide a chip system comprising a processor for invoking a computer program or computer instructions stored in a memory to cause the processor to perform any one of the possible methods of the first aspect or to cause a computer to perform any one of the possible methods of the third aspect.
With reference to the tenth aspect, in one possible implementation manner, the processor is coupled to the memory through an interface.
With reference to the tenth aspect, in one possible implementation manner, the chip system further includes a memory, where a computer program or computer instructions are stored.
Drawings
FIG. 1 is a functional block diagram illustration of a mobile carrier provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a sensor distribution applied to a moving carrier according to an embodiment of the present application;
FIG. 3 is a schematic view of a scene of a vehicle at an intersection according to an embodiment of the present application;
FIG. 4 is a schematic view of a scene of a vehicle at a non-intersection according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a prediction method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an intent prediction method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a track prediction method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another track prediction method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a method for predicting vehicle intersection intent and trajectory provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a method for predicting an intention of a vehicle intersection according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a method for predicting non-intersection intent and track of a vehicle according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a method for predicting intent and trajectory at a rider intersection provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of a method for predicting a rider trajectory according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a prediction apparatus according to an embodiment of the present application;
FIG. 15 is a schematic hardware structure diagram of a prediction apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone.
In the embodiments of the present application, prefix words such as "first" and "second" are used only to distinguish different described objects and impose no limitation on the position, sequence, priority, quantity, or content of those objects. The use of such ordinal prefix words does not limit the described objects; statements about a described object are to be read in the claims or in the context of the embodiments, and the use of such prefix words should not introduce unnecessary limitations. In addition, in the description of the embodiments, unless otherwise specified, "a plurality" means two or more.
In the embodiments of the present application, descriptions such as "when …", "in the case of …", and "if" all mean that the device performs the corresponding processing under some objective condition; they are not limited in time, do not require that the device perform a judging action during implementation, and do not imply any other limitation.
Fig. 1 is a functional block diagram of a mobile carrier 100 according to an embodiment of the present application. The mobile carrier 100 may include a perception system 120 and a computing platform 150, wherein the perception system 120 may include a variety of sensors that sense information regarding the environment surrounding the mobile carrier 100. For example, the perception system 120 may include a positioning system, which may be a global positioning system (global positioning system, GPS), a beidou system, or other positioning system. The perception system 120 may also include one or more of inertial measurement units (inertial measurement unit, IMU), lidar, millimeter wave radar, ultrasonic radar, and camera devices.
Some or all of the functions of the mobile carrier 100 may be controlled by the computing platform 150. Computing platform 150 may include processors 151 through 15n (n is a positive integer), each of which is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction fetch and execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU, which may be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, the processor may implement a function through the logical relationship of a hardware circuit that is fixed or reconfigurable, e.g., a hardware circuit implemented as an application-specific integrated circuit (ASIC) or a programmable logic device (PLD) such as a field programmable gate array (FPGA). In a reconfigurable hardware circuit, the process by which the processor loads a configuration document to configure the hardware circuit may be understood as the processor loading instructions to implement the functions of some or all of the above units. Furthermore, a hardware circuit designed for artificial intelligence may also be considered an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU). In addition, computing platform 150 may also include a memory for storing instructions; some or all of processors 151 through 15n may call and execute the instructions in the memory to implement corresponding functions.
The mobile carrier 100 may include an advanced driving assistance system (ADAS) that obtains information from around the vehicle using various on-board sensors (including, but not limited to, a laser radar, a millimeter wave radar, a camera device, an ultrasonic sensor, a global positioning system, and an inertial measurement unit) and analyzes and processes the obtained information to perform functions such as obstacle sensing, object recognition, vehicle positioning, path planning, and driver monitoring/reminding, thereby improving the safety, automation, and comfort of vehicle driving.
The mobile carrier of the present application may include a road vehicle, a watercraft, an aircraft, an industrial device, an agricultural device, an entertainment device, or the like. For example, the mobile carrier may be a vehicle in the broad sense: a motor vehicle (e.g., a commercial vehicle, a passenger vehicle, a motorcycle, a flying car, a train, etc.), an industrial vehicle (e.g., a forklift, a trailer, a tractor, etc.), an engineering vehicle (e.g., an excavator, a bulldozer, a crane, etc.), an agricultural device (e.g., a mower, a harvester, etc.), an amusement device, a toy vehicle, etc.; the type of the vehicle is not particularly limited in the embodiments of the present application. For another example, the mobile carrier may be an aircraft or a ship. The following description takes a vehicle as the example of the mobile carrier.
Fig. 2 is a schematic diagram of a sensor distribution applied to the mobile carrier 100 according to an embodiment of the present application. It should be understood that fig. 2 is only an exemplary schematic diagram of one sensor distribution; other distributions are possible and are not limited in the embodiments of the present application. As shown in fig. 2, the sensors distributed on the mobile carrier 100 include a millimeter wave radar 201, a camera device 202, and a laser radar 203, and may further include other sensors not shown in fig. 2, which are not limited in the embodiments of the present application. For example, the laser radar may have a maximum sensing distance of about 150 meters, the camera device about 200 meters, the long-range millimeter wave radar about 250 meters, and the mid/short-range millimeter wave radar about 120 meters.
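For illustration only, the example sensing ranges quoted above can be captured in a small lookup. The sensor names and the helper function below are assumptions made for this sketch, not part of any perception-system API described in the application.

```python
# Hypothetical sketch: the example maximum sensing ranges quoted above,
# and a helper that lists which sensors still cover a given distance.
SENSOR_RANGE_M = {
    "lidar": 150,
    "camera": 200,
    "long_range_mmw_radar": 250,
    "mid_short_range_mmw_radar": 120,
}

def sensors_covering(distance_m):
    """Return, sorted by name, the sensors whose quoted range reaches distance_m."""
    return sorted(s for s, r in SENSOR_RANGE_M.items() if r >= distance_m)

print(sensors_covering(180))  # only the camera and long-range radar reach 180 m
```

A fusion stack would typically weight detections from whichever subset of sensors covers the target's distance band.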
In the field of automatic driving, an automatic driving vehicle predicts the motion intention of a moving object and converts the prediction result into a predicted trajectory in the time and space dimensions, which helps the automatic driving vehicle make more reasonable driving decisions.
Currently, the mainstream methods for trajectory prediction of a moving object are based on deep learning, mainly in the following specific forms.
First, an end-to-end trajectory prediction model is built through a neural network, which can output a single predicted trajectory of a moving object; however, a single predicted trajectory can hardly capture the uncertainty of the moving object's future behavior.
Second, an end-to-end trajectory prediction model is built based on implicit variables that represent the uncertainty of the moving object's future behavior, where an implicit variable can be understood as a variable with no specific physical meaning. In this way, the impact of the moving object's uncertainty can be reduced, but the model is still an end-to-end trajectory prediction model with inherent unexplainability.
Third, the trajectory prediction problem of a moving object is decomposed into candidate trajectory prediction or candidate endpoint prediction. Although this approach can enhance the interpretability of the moving object's intention, the prediction is still inaccurate. For example, the dimensionality of candidate trajectory prediction is high, making it difficult to build an accurate prediction model. For candidate endpoint prediction, although endpoint selection is flexible, it depends on a map within a certain range, which makes it difficult to obtain long-term intention prediction of the moving object and thus also makes candidate endpoint prediction inaccurate.
In addition, existing prediction of the motion intention and trajectory of a moving object considers neither the differences between different moving objects nor the differences in the motion intention and motion trajectory of the same type of moving object in different motion scenes. This makes the prediction of the moving object's motion intention inaccurate, which in turn makes the trajectory prediction inaccurate.
Therefore, the embodiment of the present application proposes an intention prediction method, an intention prediction device and a mobile carrier, which will be described in detail below with reference to fig. 3 to 11.
First, the road junction scene and the non-road junction scene will be described in detail with reference to fig. 3 and 4.
Fig. 3 is a schematic view of a scene of a vehicle at an intersection according to an embodiment of the present application.
As shown in fig. 3, if vehicle A is the own vehicle, vehicle B is the target vehicle for vehicle A, and vehicle B has four intersection choices: for example, vehicle B may go straight through intersection 2 along the current road, turn right through intersection 1, turn left through intersection 3, or make a U-turn through intersection 4. Each intersection may include multiple lanes; for example, each intersection includes 2 lanes as shown in fig. 3.
It should be understood that the intersection shown in fig. 3 is a crossroads; the intersection in the embodiments of the present application may be of another type, for example a T-intersection, and the embodiments of the present application do not limit the intersection type. In addition, the number of intersections is not limited in the embodiments of the present application. The specific intention prediction method in the embodiments of the present application is described by taking the intersection scene shown in fig. 3 as an example.
Fig. 4 is a schematic view of a scene of a vehicle at a non-intersection according to an embodiment of the present application.
In non-intersection scenarios, the target vehicle may include two intentions of lane change or straight-ahead relative to the own vehicle. For example, as shown in fig. 4, the road has two lanes, and if the vehicle a is a host vehicle, the vehicle B is a target vehicle of the vehicle a, and the vehicle B may continue straight along the current lane 1 or may change lanes to lane 2 to run in a non-intersection scene.
It should be understood that the number of vehicles in the non-intersection scenario shown in fig. 4 is merely an example, and embodiments of the present application are not limited in this respect. In addition, the embodiment of the application does not limit the lane mark between the lanes, and the dotted line lane mark between the two lanes shown in fig. 4 can also be a solid line lane mark and the like.
Fig. 5 is a flowchart of an intent prediction method according to an embodiment of the present application. The method may be performed by the mobile carrier described above, or the method may be performed by the computing platform described above, or the method may be performed by a system on chip (SoC) in the computing platform, or the method may be performed by one or more processors in the computing platform.
S510, acquiring a first type and a first motion scene, wherein the first type is the type of a motion object, and the first motion scene is the motion scene where the motion object is located.
As one possible implementation, the first type and the first motion scene may be obtained from a tag of the feature information that is input to the prediction model. For example, in an embodiment of the present application, the first type and the first motion scene may be obtained from a tag of the first feature information.
As another possible implementation, the first type and the first motion scene may also be used as independent information for selecting, from a plurality of intention prediction models, an intention prediction model adapted to the first type and the first motion scene.
It should be understood that the moving object may be any object that moves in real time in a traffic scene, such as a vehicle, a rider, or a pedestrian; the embodiments of the present application do not limit the specific type of moving object. In the embodiments of the present application, vehicles and riders are mainly taken as example moving objects in the traffic scene for detailed description.
S520, determining a first intention prediction model from a plurality of intention prediction models according to the first type and the first motion scene, wherein the plurality of intention prediction models are intention prediction models for different types of moving objects and motion scenes.
Because the motion intentions and trajectories of different types of moving objects in different motion scenes have great differences, in the application, corresponding intent prediction models are determined according to the types of the moving objects and the types of the motion scenes, thereby being beneficial to improving the accuracy of the intent prediction of the moving objects in the scenes.
By way of example, the intention prediction model may predict the motion intention of a vehicle in an intersection scene, the motion intention of a vehicle in a non-intersection scene, the motion intention of a rider in an intersection scene, the intention of a rider to run a red light at an intersection, and so on. The riding vehicle in the present application can be understood as a non-motor vehicle such as a bicycle or an electric bicycle.
It should be appreciated that the above scenario covered by the intent prediction model is for exemplary purposes only, and that other types of moving objects and moving scenarios may also be covered by aspects of the present application.
S530, inputting first feature information into the first intention prediction model to obtain a first intention prediction result, wherein the first feature information comprises coding feature information obtained according to high-precision map information corresponding to the first motion scene and motion information of the moving object.
As a possible implementation, the high-precision map information and the motion information are input into a feature extraction model to obtain the coding feature information. That is, the first feature information may include high-dimensional feature information extracted from the road element information around the moving object in the high-precision map and from the motion information of the moving object relative to the own vehicle. In other words, the two kinds of information are encoded by the feature extraction model to obtain the coding feature information.
Optionally, the first characteristic information and scene characteristic information of the moving object are input into a first intention prediction model to obtain a first intention prediction result. The scene characteristic information of the moving object can be understood as characteristic information of the first type of moving object and the first moving scene. In the following, scene characteristic information of a moving object will be further exemplarily explained in connection with a specific intention prediction model.
In the above technical solution, the first intention prediction model among the multi-scene intention prediction models can be accurately identified through the type of the moving object and the motion scene, so that an intention prediction model matched with the moving object's type and motion scene improves the accuracy of motion intention prediction, which in turn improves the trajectory prediction accuracy for the moving object.
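The (type, scene) based selection described in S510–S530 can be sketched as a lookup over a registry of per-scene models. The model names, key strings, and the error-handling choice below are assumptions made for illustration, not the application's actual API.

```python
# Hedged sketch: select a "first intention prediction model" from a
# multi-scene model set keyed by (object type, motion scene).
INTENT_MODELS = {
    ("vehicle", "intersection"): "vehicle_intersection_intent_model",
    ("vehicle", "non_intersection"): "vehicle_non_intersection_intent_model",
    ("rider", "intersection"): "rider_intersection_intent_model",
    ("rider", "red_light"): "rider_red_light_model",
}

def select_intent_model(first_type, first_scene):
    """Pick the intention prediction model matching the moving object's
    type (first_type) and its current motion scene (first_scene)."""
    try:
        return INTENT_MODELS[(first_type, first_scene)]
    except KeyError:
        raise ValueError(f"no intent model for {first_type!r} in {first_scene!r}")

print(select_intent_model("vehicle", "intersection"))
```

The same keying scheme would apply to a multi-scene trajectory model set; only the registry contents change.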
The steps of fig. 5 will be described in detail with reference to fig. 6 to 12. For ease of understanding, the meaning of the two rectangles in fig. 5 to 12 is described herein, wherein "right-angled rectangle" represents input and output data and "rounded rectangle" represents a processing model of the input data, for example, processing the input data through a neural network.
It should be understood that the training data set and the test data set used in the embodiments of the present application come from real-vehicle acquisition data. Each processing model for the input data has been trained on the training data set, for example the feature extraction model, the vehicle intersection intention prediction model, the vehicle non-intersection intention prediction model, the rider red light running prediction model, the vehicle intersection trajectory prediction model, the vehicle non-intersection trajectory prediction model, the rider intersection trajectory prediction model, and the like.
Fig. 6 is a schematic diagram of an intent prediction method according to an embodiment of the present application.
As shown in fig. 6, a first intent prediction model is selected from a plurality of intent prediction models included in the multi-scene intent prediction model 610 according to a first type and a first motion scene. And then, inputting the first characteristic information into a first intention prediction model to obtain a first intention prediction result. Wherein the first type and the first motion scene may be obtained in the form of a tag of the first characteristic information.
Illustratively, the first intent prediction model may be any one of a vehicle intersection intent prediction model 611, a vehicle non-intersection intent prediction model 612, a rider intersection intent prediction model 613, and a rider red light running prediction model 614.
Fig. 7 is a schematic diagram of a track prediction method according to an embodiment of the present application.
As shown in fig. 7, the first characteristic information is input into the first intention prediction model to obtain a first intention prediction result. The first characteristic information may also be input into the trajectory prediction model 620 to obtain a first trajectory prediction result. And then, obtaining a second track prediction result according to the first track prediction result and the first intention prediction result.
As one possible implementation, the trajectory prediction model 620 may be a trajectory prediction model that does not distinguish between a moving object type and a moving scene.
As one possible implementation, the trajectory prediction model 620 may also be a multi-scene trajectory prediction model.
Specifically, a first trajectory prediction model is selected from a plurality of trajectory prediction models included in the multi-scene trajectory prediction model according to a first type and a first motion scene. And inputting the first characteristic information into a first track prediction model to obtain a first track prediction result.
As one possible implementation, the first trajectory prediction result may include a plurality of short-term predicted trajectories and a prediction index corresponding to each predicted trajectory, and the first intention prediction result may include a plurality of motion intentions and a prediction index corresponding to each motion intention. At least some of the plurality of short-term predicted trajectories are matched with at least some of the motion intentions to obtain the second trajectory prediction result. The prediction index of each of the at least some motion intentions is greater than or equal to a first index threshold, and the prediction index of each of the at least some short-term predicted trajectories is greater than or equal to a second index threshold.
The prediction index may be understood as a parameter for screening the motion intention or the short-term prediction trajectory, for example, the prediction index may be a prediction probability, a prediction score, or the like.
It should be understood that a short-term predicted trajectory can be understood as a predicted trajectory obtained from the current motion speed of the moving object; its duration may be short, for example, 3 to 5 seconds.
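Such a short-term trajectory obtained from the current speed can be sketched as a constant-velocity rollout. The 0.5 s step and 3 s horizon below are illustrative choices, not values stated in the application.

```python
def constant_velocity_rollout(x, y, vx, vy, horizon_s=3.0, dt=0.5):
    """Roll the moving object's current position forward at its current
    velocity, producing a short-term (e.g. 3 s) predicted trajectory."""
    steps = int(horizon_s / dt)
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

traj = constant_velocity_rollout(0.0, 0.0, 10.0, 0.0)  # 10 m/s heading +x
print(traj[-1])  # final point after 3 s: (30.0, 0.0)
```

A learned trajectory model would replace this kinematic baseline, but the output shape (a timed sequence of positions) is the same.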
In the above technical solution, each short-term predicted trajectory in the first trajectory prediction result and each motion intention in the first intention prediction result is assigned a prediction index, so that the uncertainty of the moving object's future behavior can be effectively captured; this facilitates further screening of the predicted trajectories and predicted intentions to obtain the second trajectory prediction result.
As a possible implementation manner, at least part of the short-term predicted trajectories may also be obtained by clustering and weighting a plurality of short-term predicted trajectories in the first trajectory prediction result.
Illustratively, at least part of the short-term predicted trajectories may be 1 to 2 short-term predicted trajectories obtained by clustering and weighting.
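The screening-and-matching step described above can be sketched as follows. The threshold values, the product rule for combining indexes, and the choice of pairing every surviving intention with the highest-index surviving trajectory are illustrative assumptions, not the application's actual fusion rule.

```python
# Hedged sketch of fusing the first trajectory result with the first
# intention result: keep only intentions/trajectories whose prediction
# index clears its threshold, then pair each kept intention with the
# best kept trajectory.
def fuse(intents, trajectories, intent_thr=0.2, traj_thr=0.2):
    """intents: list of (intent_name, index); trajectories: list of (traj, index).
    Returns the second trajectory prediction result as (intent, traj, score) triples."""
    kept_intents = [(n, s) for n, s in intents if s >= intent_thr]
    kept_trajs = sorted(
        ((t, s) for t, s in trajectories if s >= traj_thr),
        key=lambda ts: ts[1], reverse=True)
    best_traj, best_s = kept_trajs[0]
    # Combine the two prediction indexes by product (an assumption).
    return [(n, best_traj, s * best_s) for n, s in kept_intents]

result = fuse([("left_turn", 0.7), ("straight", 0.1)],
              [("trajA", 0.9), ("trajB", 0.15)])
print(result)  # the "straight" intent and "trajB" are screened out
```

Clustering-and-weighting of the kept trajectories, as mentioned above, could replace the simple "best trajectory" rule here.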
In the above technical solution, the first trajectory prediction result and the first intention prediction result are obtained independently, and the two prediction results are then fused to obtain the second trajectory prediction result. Compared with schemes that derive the predicted trajectory from the intention result, this scheme enables joint multi-task optimization and promotes multi-task self-consistency, thereby reducing uncertainty propagation. In other words, when the accuracy of the predicted trajectory depends on the intention result, the uncertainty of the intention result has a knock-on effect on the predicted trajectory. This scheme can reduce such uncertainty propagation to a certain extent and improve the accuracy of trajectory prediction.
Fig. 8 is a schematic diagram of another track prediction method according to an embodiment of the present application. The specific steps are shown in fig. 8.
In the first step, the high-precision map information corresponding to the first motion scene and the motion information of the moving object are input into the feature extraction model 630 to obtain the coding feature information 641, wherein the first feature information 640 may include the coding feature information 641 and may also include the kinematic feature information 642.
For example, the high-precision map information may include static road element information and dynamic road element information corresponding to the first moving scene. For example, the static road element information may be road element information that is constant for a short period of time on a road such as a lane center line, a road boundary, a crosswalk, an intersection, a road gap, and a static obstacle. The dynamic road element information may be road element information that changes in a short period of time, such as traffic lights. The road element information may be acquired from the high-precision map after the vehicle position is determined by the positioning system device in the perception system 120 shown in fig. 1.
The motion information may be, for example, relative information such as a position, a speed, and an orientation of the moving object with respect to the own vehicle. The moving object information may be obtained by a dynamic object sensing device (e.g., one or more of a laser radar, a millimeter wave radar, an ultrasonic radar, and a camera device) in the sensing system 120 shown in fig. 1.
The kinematic feature information 642 is illustratively absolute kinematic feature information of a moving object, and may be obtained by a dynamic object sensing device in the sensing system 120 described in fig. 1.
It should be appreciated that the feature extraction model 630 may be derived from first training data, which may include training data for road element information and training data for athletic information.
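The feature extraction step can be illustrated with a dependency-free stand-in: road element features and motion features are folded into one fixed-length coding vector (256 dimensions, matching the example given later for the vehicle intersection scene). A real system would use a trained neural encoder; the hashing scheme, function name, and inputs below are purely assumptions for illustration.

```python
# Hypothetical stand-in for the feature extraction model 630: fold
# map-element and motion features into a fixed-length coding vector.
def encode_features(road_elements, motion_info, dim=256):
    """road_elements: list of (name, value); motion_info: dict of floats.
    Returns a dim-length list: the 'coding feature information'."""
    vec = [0.0] * dim
    for name, value in road_elements:
        vec[hash(name) % dim] += value          # static/dynamic road elements
    for name, value in motion_info.items():
        vec[hash("motion:" + name) % dim] += value  # relative motion features
    return vec

feat = encode_features([("lane_center", 1.0), ("crosswalk", 0.5)],
                       {"rel_speed": 2.5, "rel_heading": 0.1})
print(len(feat))  # 256
```

Whatever encoder is used, the downstream intention and trajectory models only see this fixed-length vector, which is what makes the multi-scene model selection independent of raw map formats.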
In the second step, the first feature information 640 is input into the trajectory prediction model 620 and the multi-scene intent prediction model 610, respectively, to obtain a first trajectory prediction result and a first intent prediction result, respectively.
As one possible implementation, a first intent prediction model is selected from a plurality of intent prediction models included in the multi-scene intent prediction model 610 according to a first type and a first motion scene. A first trajectory prediction model is selected from a plurality of trajectory prediction models included in the multi-scene trajectory prediction model 620 according to the first type and the first motion scene. The first characteristic information comprises a first type and a first motion scene.
The first trajectory prediction model may be any one of a vehicle intersection trajectory prediction model, a vehicle non-intersection trajectory prediction model, and a rider intersection trajectory prediction model, for example.
For the relevant explanation of how the first intention prediction model is selected, reference may be made to FIG. 6.
As one possible implementation, scene feature information of a moving object is acquired, the scene feature information including a first type of the moving object and a first moving scene. A first intent prediction model is selected from a plurality of intent prediction models included in the multi-scene intent prediction model 610 based on the first type and the first motion scene. A first trajectory prediction model is selected from a plurality of trajectory prediction models included in the multi-scene trajectory prediction model 620 according to the first type and the first motion scene.
Specifically, the first feature information 640 and the scene information of the moving object are input into the first intention prediction model to obtain a first intention prediction result, wherein the scene information of the moving object may further include feature information associated with the first type of moving object and the first moving scene, and specific examples will be described in detail in connection with specific scenes in fig. 9 to 12.
Illustratively, the scene information of the moving object may be obtained by the positioning system device and the dynamic object sensing device in the perception system 120 shown in fig. 1. In particular, in a vehicle intersection scene, the scene information of the moving object may be the orientation, position, and distance of the vehicle (i.e., the target vehicle) relative to the intersection. The relative information such as the orientation, position, and distance of the target vehicle relative to the own vehicle is obtained through the dynamic object sensing device; the absolute position of the own vehicle and the absolute position of the intersection are obtained through the positioning system device. The orientation, position, and distance of the target vehicle relative to the intersection can then be obtained from the relative information, the absolute position of the own vehicle, and the absolute position of the intersection.
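The coordinate bookkeeping just described can be sketched for planar poses: rotate the ego-frame offset of the target into the world frame, then compare against the intersection's absolute position. The frame conventions, function name, and numeric example are illustrative assumptions.

```python
import math

def target_relative_to_intersection(ego_xy, ego_yaw, target_rel_xy,
                                    target_rel_yaw, intersection_xy):
    """Compose the target's ego-relative pose with the ego's absolute pose
    to get the target's offset, distance, and heading w.r.t. the intersection.
    Angles in radians; positions in metres (world frame for absolutes)."""
    ex, ey = ego_xy
    rx, ry = target_rel_xy
    # Rotate the ego-frame offset into the world frame, then translate.
    tx = ex + rx * math.cos(ego_yaw) - ry * math.sin(ego_yaw)
    ty = ey + rx * math.sin(ego_yaw) + ry * math.cos(ego_yaw)
    ix, iy = intersection_xy
    dist = math.hypot(tx - ix, ty - iy)
    heading = ego_yaw + target_rel_yaw
    return (tx - ix, ty - iy), dist, heading

offset, dist, heading = target_relative_to_intersection(
    ego_xy=(100.0, 50.0), ego_yaw=0.0,
    target_rel_xy=(20.0, 0.0), target_rel_yaw=0.0,
    intersection_xy=(150.0, 50.0))
print(dist)  # 30.0 m remaining to the intersection
```

Repeating this against each candidate intersection (or each lane centerline) yields the per-intersection features the intention model consumes.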
Thirdly, obtaining a second track prediction result according to the first intention prediction result and the first track prediction result.
The specific implementation manner of the third step may refer to fig. 7, and for brevity, will not be described in detail herein.
In the above technical solution, the first trajectory prediction model among the multi-scene trajectory prediction models can be accurately identified through the type of the moving object and the motion scene, so that a trajectory prediction model matched with the moving object's type and motion scene improves the accuracy of the moving object's short-term predicted trajectory, which further improves the long-term trajectory prediction accuracy.
For the own vehicle, in an intersection scene, the difference between a vehicle and a rider is mainly reflected in the large difference in their travel speeds, and in a real traffic environment the probability of a rider running a red light may be higher than that of a vehicle. Since the non-motor vehicle a rider normally drives cannot travel in a motor vehicle lane, in a non-intersection scene the own vehicle has a greater need to predict the motion intentions and motion trajectories of surrounding vehicles.
The intention prediction and trajectory prediction methods for moving objects according to the present application will be described in detail below for different types of moving objects and different motion scenes: a vehicle in an intersection scene, a vehicle in a non-intersection scene, and a rider in an intersection scene. These three scenarios are described in detail below in conjunction with fig. 9 to 13.
Fig. 9 is a schematic diagram of a method for predicting an intention and a track of a vehicle intersection according to an embodiment of the present application.
As shown in fig. 9, the overall architecture of intent prediction and trajectory prediction of a vehicle in an intersection scene is similar to that in fig. 8.
In the first step, map information and moving object information are input into the feature extraction model 630 to obtain encoded feature information 641.
It should be understood that the encoded feature information 641 is not shown in fig. 9. The first feature information 640 in fig. 9 may include the encoded feature information 641, and may further include the kinematic feature information 642 of the moving object. Reference may be made to the first step in fig. 8.
Illustratively, in a vehicle intersection scene, the encoded feature information 641 may be obtained by feature extraction from road element information and moving object information. A specific form of the encoded feature information 641 may be a 256-dimensional feature vector.
Second, the first intent prediction model determined from the first type and the first motion scene is a vehicle intersection intent prediction model 611. Wherein the first type and the first motion scene may be obtained from intersection characteristic information of the vehicle. The first feature information 640 is input to the trajectory prediction model 620 to obtain a first trajectory prediction result. The first characteristic information 640 and the intersection characteristic information of the vehicle are input into the vehicle intersection intention prediction model 611 to obtain a first intention prediction result.
In the vehicle intersection scene, the scene characteristic information of the moving object may be intersection characteristic information of the moving object, and specifically may include a position and an orientation of a target vehicle with respect to each intersection, and a position and an orientation of the target vehicle with respect to each lane in each intersection.
As one possible implementation, in the vehicle intersection scene, the vehicle intersection intention prediction model 611 may include a first exit prediction model 6111 and a first lane prediction model 6112, and the first intention prediction result is obtained by inputting the first feature information into the first exit prediction model and the first lane prediction model, respectively. The specific process is described in detail below with reference to fig. 10.
Fig. 10 is a schematic diagram of a method for predicting an intention of a vehicle intersection according to an embodiment of the present application.
As shown in fig. 10, first, the first characteristic information 640 is input to the first exit prediction model 6111 and the first lane prediction model 6112, respectively, to obtain a first exit prediction result and a first lane prediction result, respectively. And secondly, obtaining a first intention prediction result according to the first exit prediction result and the first lane prediction result.
Specifically, the first exit prediction result includes a prediction score for the target vehicle driving into each exit, and the first lane prediction result includes a prediction score for each lane of each exit. At least one exit whose prediction score is greater than or equal to the first score threshold is taken as an intended exit, and at least one lane in the intended exits whose prediction score is greater than or equal to the second score threshold is taken as an intended lane. The first intention prediction result includes the intended lane and the score corresponding to the intended lane.
For example, as specifically described with reference to fig. 3, the host vehicle is vehicle A, the moving object is vehicle B, and vehicle B has four exits it can drive into in the intersection scene, namely exit 1, exit 2, exit 3, and exit 4. The first exit prediction result may be the prediction scores of vehicle B driving into exit 1, exit 2, exit 3, and exit 4. Each exit has two lanes, and the first lane prediction result is a prediction score for each lane of each exit, i.e., a total of 8 lane prediction scores. If the prediction score for exit 1 is greater than or equal to the first score threshold, exit 1 is an intended exit. The lane in exit 1 whose prediction score is greater than or equal to the second score threshold is then taken as the intended lane.
It should be understood that the above scheme takes the prediction score as an example of the prediction index; the third index threshold may be understood here as the first score threshold, and the fourth index threshold as the second score threshold. The embodiment of the present application does not limit the type of the prediction index.
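The two-stage screening described above can be sketched as follows; this is an illustrative example, and the function name, the dictionary layout, and the threshold values of 0.5 are assumptions not taken from the source:

```python
def fuse_exit_and_lane_predictions(exit_scores, lane_scores,
                                   exit_threshold=0.5, lane_threshold=0.5):
    """Fuse independently predicted exit scores and per-lane scores into an
    intent prediction: keep exits at or above the first score threshold, then
    keep lanes of those exits at or above the second score threshold.

    exit_scores: {exit_id: score}
    lane_scores: {exit_id: {lane_id: score}}
    Returns {(exit_id, lane_id): score} for the intended lanes.
    """
    intended = {}
    for exit_id, e_score in exit_scores.items():
        if e_score < exit_threshold:          # first screening: intended exits
            continue
        for lane_id, l_score in lane_scores.get(exit_id, {}).items():
            if l_score >= lane_threshold:     # second screening: intended lanes
                intended[(exit_id, lane_id)] = l_score
    return intended
```

Note that a high lane score cannot revive an exit that failed the first threshold, which mirrors the two independent screenings described above.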
Thirdly, obtaining a second track prediction result according to the first track prediction result and the first intention prediction result.
In the vehicle intersection scene, the second track prediction result may include at least one long-term prediction track and a prediction index corresponding to each long-term prediction track. The long-term predicted trajectory may be understood as a predicted trajectory that is obtained with a current movement speed of the target vehicle and has a longer duration, for example, 5 seconds to 10 seconds.
As a possible implementation, at least some of the plurality of movement intentions and at least some of the plurality of short-term predicted tracks are matched, so as to obtain at least one target short-term predicted track. Each target short-term predicted track is then extended to obtain a long-term predicted track.
Specifically, taking the prediction index as an example of the prediction probability, the first index threshold is a first probability threshold, and the second index threshold is a second probability threshold. Firstly, matching a short-term predicted track with a predicted probability larger than or equal to a first predicted probability threshold value with a movement intention with a predicted probability larger than or equal to a second predicted probability threshold value to obtain at least one target short-term predicted track and a target intention lane corresponding to a target intention exit. And then, connecting each target short-term predicted track with the intention lane corresponding to the intention exit through spline interpolation to obtain a long-term predicted track.
In this technical solution, the matched target short-term predicted tracks can be extended and supplemented, thereby improving the accuracy of track prediction.
The above describes how the second track prediction result is obtained when the first track prediction result and the first intention prediction result match. When the first track prediction result and the first intention prediction result do not match, the first intention prediction result is discarded, and the plurality of short-term predicted tracks are directly extended to obtain the long-term predicted tracks.
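Both branches (a matched intent lane versus a discarded intent) can be sketched as follows. This is an illustrative simplification: a linear blend stands in for the spline interpolation named above, and the function name and array layout are assumptions:

```python
import numpy as np

def extend_trajectory(short_traj, lane_points=None, horizon_pts=10):
    """Extend a short-term trajectory (N x 2 array of waypoints).

    With a matched intent lane, blend from the last waypoint toward the lane
    centerline (a stand-in for spline interpolation); without one, extrapolate
    at the trajectory's last per-step velocity.
    """
    short_traj = np.asarray(short_traj, float)
    last = short_traj[-1]
    if lane_points is not None:
        # Matched intent: move toward the last point of the intended lane.
        target = np.asarray(lane_points, float)
        ts = np.linspace(0.0, 1.0, horizon_pts + 1)[1:, None]
        extension = last + ts * (target[-1] - last)
    else:
        # No matched intent: continue at the last observed velocity.
        v = short_traj[-1] - short_traj[-2]
        steps = np.arange(1, horizon_pts + 1)[:, None]
        extension = last + steps * v
    return np.vstack([short_traj, extension])
```

In practice a proper spline (e.g. a cubic fit through trajectory and lane points) would replace the linear blend to keep the extended track smooth.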
In the above technical solution, the first exit prediction result and the first lane prediction result are obtained independently, and then the two prediction results are fused to obtain the first intention prediction result. Compared with a lane prediction result obtained based on the intersection intention result, the method can reduce uncertainty propagation, and further is beneficial to improving accuracy of the first intention prediction result.
Fig. 11 is a schematic diagram of a method for predicting non-intersection intention and predicting track of a vehicle according to an embodiment of the present application.
The overall architecture of intent prediction and trajectory prediction for vehicles in non-intersection scenarios in fig. 11 is similar to that in fig. 6. Fig. 11 simplifies the overall architecture; the steps of determining the first intent prediction model and the first trajectory prediction model from the first type and the first motion scene are not shown. As shown in fig. 11, the specific prediction model of a vehicle in a non-intersection scenario is the vehicle non-intersection intent prediction model 612, which serves as the first intent prediction model. The embodiment of the present application does not limit the specific form of the intent prediction model; for example, the vehicle non-intersection intent prediction model 612 may be a neural network model formed by three fully connected layers.
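As a hedged sketch of such a three-fully-connected-layer model, the forward pass below uses NumPy with random placeholder weights; the layer widths, lane count, and class structure are illustrative assumptions, not details from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # He-style initialization; these weights are untrained placeholders.
    return rng.normal(0.0, np.sqrt(2.0 / in_dim), (in_dim, out_dim)), np.zeros(out_dim)

class NonIntersectionIntentMLP:
    """Three fully connected layers mapping the fused feature vector to
    per-lane intent probabilities (dimensions are illustrative)."""
    def __init__(self, feat_dim=256, hidden=128, num_lanes=6):
        self.layers = [linear(feat_dim, hidden),
                       linear(hidden, hidden),
                       linear(hidden, num_lanes)]

    def forward(self, x):
        for i, (w, b) in enumerate(self.layers):
            x = x @ w + b
            if i < len(self.layers) - 1:
                x = np.maximum(x, 0.0)    # ReLU on the two hidden layers
        e = np.exp(x - x.max())           # softmax over candidate lanes
        return e / e.sum()

probs = NonIntersectionIntentMLP().forward(rng.normal(size=256))
```

The 256-dimensional input matches the encoded feature vector size mentioned earlier for the intersection case; the output is one probability per candidate intended lane.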
In the first step, the first feature information 640 is input into the trajectory prediction model 620 to obtain a first trajectory prediction result.
The first characteristic information 640 may include, among other things, encoding characteristic information 641 and kinematic characteristic information 642.
In a non-intersection scenario, the kinematic feature information 642 may be the kinematic state features of a target vehicle around the own vehicle, and may specifically include information such as the position coordinates, type, size, speed, and orientation of the target vehicle. The moving object feature information may be obtained by a dynamic object sensing device (e.g., one or more of a laser radar, a millimeter wave radar, an ultrasonic radar, and a camera device) in the sensing system 120 shown in fig. 1.
In the second step, the first feature information 640 and the scene feature information of the moving object are input into the vehicle non-intersection intention prediction model 612 to obtain the first intention prediction result.
In the case where the vehicle is at a non-intersection, the scene feature information of the moving object may be the non-intersection scene feature information of the vehicle.
The non-intersection scene feature information of the vehicle may include waypoint feature information of the road on which the own vehicle is currently traveling, lane traffic indication information, distance information from the own vehicle to the intersection, and the like. The waypoint feature information may include the waypoint position and type. The traffic indication information indicates whether each lane of the road on which the own vehicle is currently traveling allows going straight, turning left, turning right, making a U-turn, changing lanes to the left, or changing lanes to the right, and so on. The traffic indication information is also used to indicate the traffic light state on the road on which the own vehicle is currently traveling.
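A possible (purely illustrative) data structure for this non-intersection scene feature information is sketched below; all class and field names are assumptions introduced for the example:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneIndication:
    """Per-lane traffic permissions (field names are illustrative)."""
    lane_id: int
    straight: bool = False
    left_turn: bool = False
    right_turn: bool = False
    u_turn: bool = False
    left_lane_change: bool = False
    right_lane_change: bool = False

@dataclass
class NonIntersectionSceneFeatures:
    """Scene feature information for the non-intersection case: waypoints of
    the current road, per-lane traffic indications, the traffic light state,
    and the distance from the own vehicle to the intersection."""
    waypoints: List[Tuple[float, float, str]] = field(default_factory=list)  # (x, y, type)
    lane_indications: List[LaneIndication] = field(default_factory=list)
    traffic_light_state: str = "unknown"
    distance_to_intersection_m: float = float("inf")
```

In a real system these fields would be populated from the high-precision map after localization, as described below.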
The non-intersection scene feature information may be obtained from a high-precision map after determining the vehicle position by a positioning system device in the perception system 120 shown in fig. 1.
It should be understood that the embodiment of the present application does not limit the execution order of the first step and the second step in fig. 11; the two steps produce independent results, that is, the result of the first step does not depend on the result of the second step.
Thirdly, obtaining a second track prediction result according to the first track prediction result and the first intention prediction result.
Taking the prediction score as an example of the prediction index, in a non-intersection scene the first intention prediction result of the target vehicle includes a plurality of intended lanes of the target vehicle and a prediction score for each intended lane. The first track prediction result of the target vehicle includes a plurality of short-term predicted tracks of the target vehicle and a prediction score for each short-term predicted track.
As one possible implementation, at least some of the plurality of intended lanes and at least some of the plurality of short-term predicted tracks are matched; when a match is found, at least one target short-term predicted track and the target lane intention are obtained. Each target short-term predicted track is connected with the target lane intention by spline interpolation to obtain a target long-term predicted track. The second track prediction result includes at least one target long-term predicted track and a prediction score for each target long-term predicted track.
Here, the at least some intended lanes are intended lanes whose prediction score is greater than or equal to a third score threshold, and the at least some short-term predicted tracks are short-term predicted tracks whose prediction score is greater than or equal to a fourth score threshold.
As one possible implementation, at least some of the plurality of intended lanes and at least some of the plurality of short-term predicted tracks are matched; when no match is found, the extended short-term predicted tracks are used as the second track prediction result.
Fig. 12 is a schematic diagram of a method for predicting the intention and the track of a rider at an intersection according to an embodiment of the present application. The first type is a rider, and the first motion scene is an intersection scene.
The overall architecture of the rider intersection intention prediction and track prediction method is similar to that of fig. 8. The difference is that the intention prediction model screened for the rider in the intersection scene is either a rider intersection intention prediction model alone, or a rider intersection intention prediction model together with a rider red-light-running prediction model. Fig. 12 illustrates the case where both the rider intersection intention prediction model and the rider red-light-running prediction model are screened as the rider's intention prediction models.
As one possible implementation, the rider intersection intent prediction model 613 is determined among a plurality of intent prediction models according to the rider and intersection scene.
As one possible implementation, as shown in fig. 12, a rider intersection intention prediction model 613 and a rider red light running prediction model 614 are determined among a plurality of intention prediction models according to the rider and intersection scene.
The rider intersection intention prediction and trajectory prediction method will be described in detail with reference to fig. 12.
First, the first feature information 640 is input into the trajectory prediction model 620 to obtain a first trajectory prediction result.
The first feature information 640 may include the encoded feature information 641 and the kinematic feature information of the rider. In the rider intersection scene, the encoded feature information 641 is obtained by feature extraction from the high-precision map information and the information of the rider relative to the own vehicle.
The first track prediction result may include a plurality of short-term prediction tracks of the rider, and a prediction probability of each short-term prediction track. The prediction index here is represented by a prediction probability.
Second, the first feature information 640 and the intersection feature information of the rider are input into the rider intersection intention prediction model 613 to obtain a first intention prediction result.
The intersection feature information of the rider may include feature information associating the rider with intersection exit elements, such as the relative distance and relative position between the rider and the intersection.
For example, the first intent prediction result may include a plurality of intent exits of the rider and a prediction probability of each intent exit.
Third, the first feature information 640 and the feature information of the traffic signal light are input into the rider red-light-running prediction model 614 to obtain a red-light-running prediction result.
The characteristic information of the traffic signal may include at least one of a color state of the current traffic signal, traffic indication information, or countdown information of the traffic signal.
The red-light-running prediction result includes a plurality of intended exits through which the rider may run a red light, and a prediction probability of running the red light for each intended exit. For example, the rider may run the red light going straight, or run the red light turning left, and so on.
It should be understood that the embodiment of the present application does not limit the execution order of the first, second, and third steps in fig. 12; the three steps produce independent results, that is, the results of the first, second, and third steps do not depend on one another.
Fourth, obtaining a second track prediction result according to the first track prediction result, the first intention prediction result and the red light running prediction result. In particular, this will be described in detail with reference to fig. 13.
Fig. 13 is a schematic diagram of a method for predicting a track of a rider according to an embodiment of the present application.
As shown in fig. 13, the red-light-running prediction result, the first intention prediction result, and the first track prediction result are each first screened once according to their respective judgment conditions, and are then matched to obtain the second track prediction result. The screening and matching of the three prediction results are described in detail below.
First, the first intended exit of the rider is screened from the red-light-running prediction result by a red-light-running index threshold. Specifically, the red-light-running index threshold may be a third probability threshold, and an intended exit whose prediction probability in the red-light-running prediction result is greater than or equal to the third probability threshold is taken as a first intended exit. There may be one or more first intended exits.
Second, the second intended exit of the rider is screened from the first intention prediction result by an intersection index threshold. Specifically, the intersection index threshold may be a fourth probability threshold, and an intended exit whose prediction probability in the first intention prediction result is greater than or equal to the fourth probability threshold is taken as a second intended exit. There may be one or more second intended exits.
Third, the target intended exits are obtained from the first intended exits and the second intended exits. There may be one or more target intended exits.
Specifically, the first intended exits are taken as a complement to the second intended exits, so as to obtain the target intended exits.
Fourth, the first track prediction result is processed according to a track index threshold to obtain at least one candidate short-term predicted track. Specifically, the track index threshold may be a fifth probability threshold; the short-term predicted tracks are clustered and weighted, and each predicted track whose prediction probability is greater than or equal to the fifth probability threshold is determined as a candidate short-term predicted track.
Fifth, the at least one target intended exit is matched with the at least one candidate short-term predicted track to obtain the second track prediction result. The duration corresponding to the predicted tracks in the second track prediction result is short; that is, when the moving object is a rider, since a rider's intention changes rapidly, there is no need in this scheme to extend the rider's candidate short-term predicted tracks.
Specifically, if any target intended exit matches any candidate short-term predicted track, the matched candidate short-term predicted track is taken as output of the second track prediction result. If a target intended exit does not match any candidate short-term predicted track, at least one new short-term predicted track is generated according to the position corresponding to that target intended exit and the current motion state of the rider (such as the current position, orientation, and speed), and the new short-term predicted tracks are taken as output of the second track prediction result.
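The five screening and matching steps above can be sketched end to end as follows. This is an illustrative example only: the function name, the default thresholds, the assumption that each candidate track is already associated with an exit, and the constant-velocity roll-out for unmatched exits are all simplifications not taken from the source:

```python
def predict_rider_trajectories(red_light_exits, intersection_exits,
                               candidate_trajs, current_pos, current_vel,
                               p3=0.5, p4=0.5, horizon_pts=10):
    """Screen exits from both predictions, take their union as target exits,
    then keep the matched candidate trajectory for each exit or generate a
    new constant-velocity trajectory toward an unmatched exit.

    red_light_exits / intersection_exits: {exit_id: probability}
    candidate_trajs: {exit_id: trajectory} for candidate short-term tracks
    already associated with an exit (the association itself is assumed given).
    """
    first = {e for e, p in red_light_exits.items() if p >= p3}      # third threshold
    second = {e for e, p in intersection_exits.items() if p >= p4}  # fourth threshold
    target_exits = first | second      # red-light exits complement the intersection exits
    result = {}
    for exit_id in target_exits:
        if exit_id in candidate_trajs:          # matched: keep candidate track
            result[exit_id] = candidate_trajs[exit_id]
        else:                                   # unmatched: roll out current motion state
            result[exit_id] = [
                (current_pos[0] + current_vel[0] * k,
                 current_pos[1] + current_vel[1] * k)
                for k in range(1, horizon_pts + 1)]
    return result
```

The clustering and weighting of candidate tracks (step four) is assumed to have already produced `candidate_trajs`; a full implementation would perform it before this matching step.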
In this technical solution, when the moving object is a rider in an intersection scene and the rider's movement intention is predicted, the probability of the rider running a red light, the rider's choice of intersection exit, and the track prediction are considered comprehensively, so that the accuracy of both intention prediction and track prediction for riders in intersection scenes can be significantly improved.
It should be understood that if the above technical solution is applied to a smart traffic scenario, the relative information of the moving object with respect to the own vehicle in the above solution may be absolute information of the moving object or relative information of the moving object with respect to the smart traffic sensing device.
The foregoing is a prediction method according to an embodiment of the present application, and a prediction apparatus will be described in detail with reference to fig. 14 and 15. It should be understood that the description of the apparatus embodiments and the description of the method embodiments correspond to each other. Therefore, reference may be made to the above method embodiments, which are not described in detail herein for brevity.
Fig. 14 is a schematic diagram of a prediction apparatus according to an embodiment of the present application. The prediction apparatus 1400 includes: an acquisition module 1401 and a processing module 1402.
The acquisition module 1401 is configured to acquire a first type and a first motion scene, where the first type is the type of a moving object, and the first motion scene is the motion scene in which the moving object is located.
The processing module 1402 is configured to determine a first intention prediction model from a plurality of intention prediction models according to the first type and the first motion scene, the plurality of intention prediction models being intention prediction models for different types of moving objects and motion scenes.
The processing module 1402 is further configured to input first feature information into the first intention prediction model to obtain a first intention prediction result, where the first feature information includes encoded feature information obtained according to high-precision map information corresponding to the first motion scene and motion information of the moving object.
It should be understood that the above is only an exemplary description, and that the prediction apparatus is for performing the methods or steps mentioned in the foregoing method embodiments, and thus the apparatus corresponds to the foregoing method embodiments. For details, reference may be made to the description of the foregoing method embodiments, which are not repeated here.
Fig. 15 is a schematic hardware structure of a prediction apparatus according to an embodiment of the present application. The prediction apparatus 1500 shown in fig. 15 may include: a memory 1510, a processor 1520, and a communication interface 1530. The memory 1510, the processor 1520, and the communication interface 1530 are connected by an internal connection path; the memory 1510 is used for storing instructions, and the processor 1520 is used for executing the instructions stored in the memory 1510 to control the communication interface 1530 to receive and send data. Alternatively, the memory 1510 may be coupled to the processor 1520 through an interface, or may be integrated with the processor 1520.
The communication interface 1530 may be used to implement communication between the prediction apparatus 1500 and other devices or communication networks, using, for example but not limited to, a transceiver. The communication interface 1530 may also include an input/output interface.
In implementation, the steps of the methods described above may be performed by integrated logic circuitry in hardware in the processor 1520, or by instructions in the form of software. The method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1510, and the processor 1520 reads the information in the memory 1510 and performs the steps of the above method in conjunction with its hardware. To avoid repetition, a detailed description is not provided here.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit CPU, but the processor may also be other general purpose processors, digital signal processors DSP, application specific integrated circuits ASIC, off-the-shelf programmable gate array FPGA or other programmable logic device, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It should also be appreciated that in embodiments of the present application, the memory may include read only memory and random access memory, and provide instructions and data to the processor. A portion of the processor may also include nonvolatile random access memory. The processor may also store information of the device type, for example.
It should be understood that the term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The embodiment of the application also provides a prediction device, which comprises a processing unit and a storage unit, wherein the storage unit is used for storing instructions, and the processing unit executes the instructions stored by the storage unit so as to enable the device to execute the prediction method.
The embodiment of the present application further provides a mobile carrier, which includes the prediction apparatus 1400 or the prediction apparatus 1500. The mobile carrier may be a vehicle.
The embodiment of the present application further provides a server, where the server may include the prediction apparatus 1400, and the server is further configured to send the second track prediction result to the mobile carrier.
Embodiments of the present application also provide a computer readable medium storing program code which, when run on a computer, causes the computer to perform any of the methods of figures 5 to 13 described above.
Embodiments of the present application also provide a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the above method.
The embodiment of the application also provides a chip, which comprises: at least one processor and a memory, the at least one processor being coupled to the memory for reading and executing instructions in the memory to perform any of the methods of fig. 5-13 described above.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between 2 or more computers. Furthermore, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with one another in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A method of prediction, comprising:
acquiring a first type and a first motion scene, wherein the first type is a type of a moving object, and the first motion scene is a motion scene in which the moving object is located;
determining a first intention prediction model from a plurality of intention prediction models according to the first type and the first motion scene, wherein the plurality of intention prediction models are intention prediction models for different types of moving objects and motion scenes; and
inputting first characteristic information into the first intention prediction model to obtain a first intention prediction result, wherein the first characteristic information comprises encoded characteristic information obtained according to high-precision map information corresponding to the first motion scene and motion information of the moving object.
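As an informal illustration of the method of claim 1 (not part of the claims), selecting an intention prediction model by object type and motion scene can be sketched as a lookup in a registry keyed on (type, scene) pairs. All names below (`ModelRegistry`, `predict_intention`, the toy lambda model) are hypothetical, and the assumption that each model maps a feature vector to (intention, prediction index) pairs is illustrative only.

```python
# Sketch of claim 1: pick the first intention prediction model from a
# plurality of models according to (object type, motion scene), then run it
# on the first characteristic information. Names are illustrative.
from typing import Callable, Dict, List, Tuple

# An intention model maps encoded features to (intention, prediction index) pairs.
IntentionModel = Callable[[List[float]], List[Tuple[str, float]]]

class ModelRegistry:
    """Holds one intention prediction model per (object type, scene) pair."""

    def __init__(self) -> None:
        self._models: Dict[Tuple[str, str], IntentionModel] = {}

    def register(self, obj_type: str, scene: str, model: IntentionModel) -> None:
        self._models[(obj_type, scene)] = model

    def select(self, obj_type: str, scene: str) -> IntentionModel:
        # The "first intention prediction model" is determined by type + scene.
        return self._models[(obj_type, scene)]

def predict_intention(registry: ModelRegistry, obj_type: str, scene: str,
                      features: List[float]) -> List[Tuple[str, float]]:
    model = registry.select(obj_type, scene)
    return model(features)

# Usage: a toy model for vehicles in an intersection scene.
registry = ModelRegistry()
registry.register("vehicle", "intersection",
                  lambda f: [("go_straight", 0.7), ("turn_left", 0.3)])
result = predict_intention(registry, "vehicle", "intersection", [0.1, 0.2])
print(result)  # [('go_straight', 0.7), ('turn_left', 0.3)]
```

Keying the registry on the (type, scene) pair keeps each specialized model small, which matches the claim's idea of maintaining separate intention models for different moving-object types and scenes.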
2. The method according to claim 1, wherein the method further comprises:
inputting the first characteristic information into a first track prediction model to obtain a first track prediction result;
and obtaining a second track prediction result according to the first intention prediction result and the first track prediction result.
3. The method of claim 2, wherein the first intention prediction result comprises a plurality of motion intentions and a prediction index corresponding to each of the motion intentions, and the first track prediction result comprises a plurality of short-term prediction tracks and a prediction index corresponding to each of the short-term prediction tracks;
wherein the obtaining a second track prediction result according to the first intention prediction result and the first track prediction result comprises:
matching at least part of the motion intentions of the plurality of motion intentions with at least part of the short-term prediction tracks of the plurality of short-term prediction tracks to obtain the second track prediction result, wherein the prediction index of each motion intention in the at least part of the motion intentions is greater than or equal to a first index threshold, and the prediction index of each short-term prediction track in the at least part of the short-term prediction tracks is greater than or equal to a second index threshold.
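The filter-then-match step of claim 3 can be sketched informally as follows. The pairing rule (full cross-pairing with a product score) is an assumption for illustration; the claim only requires that intentions and short-term prediction tracks passing their respective index thresholds be matched.

```python
# Sketch of claim 3: keep only motion intentions and short-term prediction
# tracks whose prediction indexes meet the first/second index thresholds,
# then pair them into the second track prediction result. The cartesian
# pairing and product score are illustrative assumptions.
from typing import Dict, List, Tuple

def match_intentions_to_tracks(
    intentions: List[Tuple[str, float]],
    tracks: List[Tuple[List[Tuple[float, float]], float]],
    intent_threshold: float,
    track_threshold: float,
) -> List[Dict]:
    """intentions: (label, index) pairs; tracks: (points, index) pairs."""
    kept_intents = [(l, s) for l, s in intentions if s >= intent_threshold]
    kept_tracks = [(p, s) for p, s in tracks if s >= track_threshold]
    result = []
    for label, s_i in kept_intents:
        for points, s_t in kept_tracks:
            result.append({"intention": label, "track": points,
                           "score": s_i * s_t})
    return result

# Usage: only the intention/track pairs above both thresholds survive.
intentions = [("turn_left", 0.6), ("go_straight", 0.1)]
tracks = [([(0.0, 0.0), (1.0, 1.0)], 0.8), ([(0.0, 0.0), (1.0, -1.0)], 0.2)]
matched = match_intentions_to_tracks(intentions, tracks, 0.5, 0.5)
print(len(matched))  # 1: only 'turn_left' paired with the first track
```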
4. The method according to claim 2 or 3, wherein when the first type is a vehicle, the second track prediction result comprises at least one long-term prediction track, a prediction duration of the long-term prediction track being greater than a prediction duration of the short-term prediction tracks.
5. The method of any one of claims 1 to 4, wherein, in the case where the first type is a vehicle and the first motion scene is an intersection scene, the first intention prediction model comprises a first exit prediction model and a first lane prediction model,
and the inputting the first characteristic information into the first intention prediction model to obtain a first intention prediction result comprises:
inputting the first characteristic information and exit characteristic information of the vehicle into the first exit prediction model to obtain a first exit prediction result;
inputting the first characteristic information and lane characteristic information of the vehicle into the first lane prediction model to obtain a first lane prediction result; and
obtaining the first intention prediction result according to the first exit prediction result and the first lane prediction result.
6. The method of claim 5, wherein the first exit prediction result comprises a plurality of intention exits and a prediction index corresponding to each of the intention exits, and the first lane prediction result comprises a plurality of intention lanes and a prediction index corresponding to each of the intention lanes;
wherein the obtaining the first intention prediction result according to the first exit prediction result and the first lane prediction result comprises:
obtaining the first intention prediction result by using at least part of the intention exits and at least part of the intention lanes;
wherein the prediction index of each of the at least part of the intention exits is greater than or equal to a third index threshold, and the prediction index of each of the at least part of the intention lanes is greater than or equal to a fourth index threshold.
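The exit/lane combination of claims 5 and 6 can be sketched informally as below. Combining the two results as (exit, lane) pairs with a product index is an illustrative assumption; the claims only require that exits and lanes above the third and fourth index thresholds be used to form the first intention prediction result.

```python
# Sketch of claim 6: filter intention exits and intention lanes by their
# thresholds, then combine the survivors into the first intention prediction
# result. The pairing and product index are illustrative assumptions.
from typing import Dict, List, Tuple

def combine_exit_and_lane(
    exits: List[Tuple[str, float]],
    lanes: List[Tuple[str, float]],
    exit_threshold: float,   # third index threshold
    lane_threshold: float,   # fourth index threshold
) -> List[Dict]:
    kept_exits = [(e, s) for e, s in exits if s >= exit_threshold]
    kept_lanes = [(l, s) for l, s in lanes if s >= lane_threshold]
    return [{"exit": e, "lane": l, "index": s_e * s_l}
            for e, s_e in kept_exits
            for l, s_l in kept_lanes]

# Usage: 'east_exit' falls below the exit threshold and is dropped.
exits = [("north_exit", 0.9), ("east_exit", 0.05)]
lanes = [("lane_1", 0.7), ("lane_2", 0.6)]
intents = combine_exit_and_lane(exits, lanes, 0.1, 0.5)
# two intentions survive: north_exit paired with each lane
```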
7. The method according to claim 2 or 3, wherein when the first type is a rider and the first motion scene is an intersection scene, the method further comprises:
determining a red light running intention prediction model among the plurality of intention prediction models according to the rider and the intersection scene; and
inputting the first characteristic information and characteristic information of a traffic signal light into the red light running intention prediction model to obtain a red light running prediction result of the rider;
wherein the obtaining a second track prediction result according to the first intention prediction result and the first track prediction result comprises:
obtaining the second track prediction result according to the first intention prediction result, the red light running prediction result, and the first track prediction result.
8. The method of claim 7, wherein the obtaining the second track prediction result according to the first intention prediction result, the red light running prediction result, and the first track prediction result comprises:
obtaining a target intention prediction result according to the first intention prediction result and the red light running prediction result, wherein the target intention prediction result comprises a plurality of target motion intentions and a prediction index corresponding to each target motion intention; and
matching at least part of the target motion intentions of the plurality of target motion intentions with at least part of the short-term prediction tracks of the plurality of short-term prediction tracks to obtain the second track prediction result.
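The fusion step of claims 7 and 8 for a rider at an intersection can be sketched informally as below. Scaling each "proceed" intention's index by the probability of running a red light is an illustrative assumption; the claims only require that the first intention prediction result and the red light running prediction result be combined into target motion intentions.

```python
# Sketch of claim 8: fuse the rider's intention prediction with the
# red-light-running prediction into target motion intentions. Rescaling
# non-'stop' intentions by the red-light-running probability when the
# signal is red is an illustrative assumption.
from typing import List, Tuple

def fuse_with_red_light(
    intentions: List[Tuple[str, float]],
    p_run_red_light: float,
    light_is_red: bool,
) -> List[Tuple[str, float]]:
    """Return target motion intentions with adjusted prediction indexes."""
    scale = p_run_red_light if light_is_red else 1.0
    target = []
    for label, idx in intentions:
        # Crossing against a red light is discounted; stopping is not.
        adjusted = idx * scale if label != "stop" else idx
        target.append((label, adjusted))
    return target

# Usage: with the light red and a 25% red-light-running estimate, the
# 'cross' intention's index drops from 0.8 to 0.2.
intentions = [("cross", 0.8), ("stop", 0.2)]
target = fuse_with_red_light(intentions, p_run_red_light=0.25,
                             light_is_red=True)
print(target)  # [('cross', 0.2), ('stop', 0.2)]
```

The fused target intentions would then be matched against the short-term prediction tracks as in claim 3 to produce the second track prediction result.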
9. A prediction apparatus, wherein the prediction apparatus includes an acquisition module and a processing module:
The acquisition module is used for acquiring a first type and a first motion scene, wherein the first type is a type of a motion object, and the first motion scene is a motion scene where the motion object is located;
The processing module is used for determining a first intention prediction model from a plurality of intention prediction models according to the first type and the first motion scene, wherein the plurality of intention prediction models are intention prediction models for different types of moving objects and motion scenes;
The processing module is further used for inputting first characteristic information into the first intention prediction model to obtain a first intention prediction result, wherein the first characteristic information comprises encoded characteristic information obtained according to high-precision map information corresponding to the first motion scene and motion information of the moving object.
10. The prediction device of claim 9, wherein the processing module is further configured to:
input the first characteristic information into a first track prediction model to obtain a first track prediction result; and
obtain a second track prediction result according to the first intention prediction result and the first track prediction result.
11. The prediction apparatus according to claim 10, wherein the first intention prediction result includes a plurality of motion intentions and a prediction index corresponding to each of the motion intentions, and the first track prediction result includes a plurality of short-term prediction tracks and a prediction index corresponding to each of the short-term prediction tracks;
the processing module is specifically configured to:
match at least part of the motion intentions of the plurality of motion intentions with at least part of the short-term prediction tracks of the plurality of short-term prediction tracks to obtain the second track prediction result, wherein the prediction index of each motion intention in the at least part of the motion intentions is greater than or equal to a first index threshold, and the prediction index of each short-term prediction track in the at least part of the short-term prediction tracks is greater than or equal to a second index threshold.
12. The prediction apparatus according to claim 10 or 11, wherein when the first type is a vehicle, the second track prediction result includes at least one long-term prediction track, and a prediction duration of the long-term prediction track is longer than a prediction duration of the short-term prediction tracks.
13. The prediction apparatus according to any one of claims 9 to 12, wherein, in the case where the first type is a vehicle and the first motion scene is an intersection scene, the first intention prediction model includes a first exit prediction model and a first lane prediction model,
The processing module is specifically configured to:
input the first characteristic information and exit characteristic information of the vehicle into the first exit prediction model to obtain a first exit prediction result;
input the first characteristic information and lane characteristic information of the vehicle into the first lane prediction model to obtain a first lane prediction result; and
obtain the first intention prediction result according to the first exit prediction result and the first lane prediction result.
14. The prediction apparatus according to claim 13, wherein the first exit prediction result includes a plurality of intention exits and a prediction index corresponding to each of the intention exits, and the first lane prediction result includes a plurality of intention lanes and a prediction index corresponding to each of the intention lanes;
the processing module is specifically configured to:
obtain the first intention prediction result by using at least part of the intention exits and at least part of the intention lanes;
wherein the prediction index of each of the at least part of the intention exits is greater than or equal to a third index threshold, and the prediction index of each of the at least part of the intention lanes is greater than or equal to a fourth index threshold.
15. The prediction apparatus according to claim 10 or 11, wherein when the first type is a rider and the first motion scene is an intersection scene, the processing module is further configured to:
determine a red light running intention prediction model among the plurality of intention prediction models according to the rider and the intersection scene;
input the first characteristic information and characteristic information of a traffic signal light into the red light running intention prediction model to obtain a red light running prediction result of the rider; and
obtain the second track prediction result according to the first intention prediction result, the red light running prediction result, and the first track prediction result.
16. The prediction device according to claim 15, wherein the processing module is specifically configured to:
obtain a target intention prediction result according to the first intention prediction result and the red light running prediction result, wherein the target intention prediction result comprises a plurality of target motion intentions and a prediction index corresponding to each target motion intention; and
match at least part of the target motion intentions of the plurality of target motion intentions with at least part of the short-term prediction tracks of the plurality of short-term prediction tracks to obtain the second track prediction result.
17. A prediction apparatus, comprising a processor and a memory, wherein the memory is configured to store program instructions, and the processor is configured to invoke the program instructions to perform the method of any one of claims 1 to 8.
18. A mobile carrier comprising a prediction device according to any one of claims 9 to 17.
19. A computer-readable storage medium, having stored thereon a computer program which, when executed by a computer, causes the method of any one of claims 1 to 8 to be implemented.
20. A chip comprising a processor and a data interface through which the processor reads instructions stored on a memory to perform the method of any one of claims 1 to 8.
CN202211466684.5A 2022-11-22 2022-11-22 Prediction method, prediction device and mobile carrier Pending CN118067142A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211466684.5A CN118067142A (en) 2022-11-22 2022-11-22 Prediction method, prediction device and mobile carrier
PCT/CN2023/112646 WO2024109176A1 (en) 2022-11-22 2023-08-11 Prediction method and apparatus, and mobile carrier


Publications (1)

Publication Number Publication Date
CN118067142A 2024-05-24

Family

ID=91107952


Country Status (2)

Country Link
CN (1) CN118067142A (en)
WO (1) WO2024109176A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112519765A (en) * 2019-09-03 2021-03-19 北京百度网讯科技有限公司 Vehicle control method, apparatus, device, and medium
CN111046919B (en) * 2019-11-21 2023-05-12 南京航空航天大学 Surrounding dynamic vehicle track prediction system and method integrating behavior intention
CN113261035B (en) * 2019-12-30 2022-09-16 华为技术有限公司 Trajectory prediction method and related equipment
US11708093B2 (en) * 2020-05-08 2023-07-25 Zoox, Inc. Trajectories with intent
JP7338582B2 (en) * 2020-07-30 2023-09-05 株式会社デンソー Trajectory generation device, trajectory generation method, and trajectory generation program
CN115062202A (en) * 2022-06-30 2022-09-16 重庆长安汽车股份有限公司 Method, device, equipment and storage medium for predicting driving behavior intention and track
CN115158331A (en) * 2022-07-12 2022-10-11 东风柳州汽车有限公司 Method, device, equipment and storage medium for preventing passengers from dizziness

Also Published As

Publication number Publication date
WO2024109176A1 (en) 2024-05-30


Legal Events

Date Code Title Description
PB01 Publication