CN116152782A - Obstacle track prediction method, device, equipment and storage medium - Google Patents
Obstacle track prediction method, device, equipment and storage medium
- Publication number
- CN116152782A CN116152782A CN202310412478.4A CN202310412478A CN116152782A CN 116152782 A CN116152782 A CN 116152782A CN 202310412478 A CN202310412478 A CN 202310412478A CN 116152782 A CN116152782 A CN 116152782A
- Authority
- CN
- China
- Prior art keywords
- obstacle
- target
- information
- target obstacle
- track
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 52
- 230000002452 interceptive effect Effects 0.000 claims abstract description 72
- 230000004927 fusion Effects 0.000 claims abstract description 58
- 238000013528 artificial neural network Methods 0.000 claims abstract description 28
- 238000012545 processing Methods 0.000 claims abstract description 23
- 238000003062 neural network model Methods 0.000 claims abstract description 14
- 230000001133 acceleration Effects 0.000 claims description 20
- 230000000007 visual effect Effects 0.000 claims description 17
- 230000000306 recurrent effect Effects 0.000 claims description 8
- 238000011156 evaluation Methods 0.000 claims description 5
- 230000003993 interaction Effects 0.000 claims description 5
- 230000033001 locomotion Effects 0.000 abstract description 9
- 238000010586 diagram Methods 0.000 description 19
- 238000004891 communication Methods 0.000 description 16
- 230000008569 process Effects 0.000 description 14
- 238000004590 computer program Methods 0.000 description 11
- 238000013527 convolutional neural network Methods 0.000 description 11
- 238000013500 data storage Methods 0.000 description 7
- 230000004888 barrier function Effects 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 4
- 238000007726 management method Methods 0.000 description 3
- 206010039203 Road traffic accident Diseases 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Traffic Control Systems (AREA)
Abstract
The application relates to an obstacle trajectory prediction method, device, equipment and storage medium, and in particular to the technical field of intelligent driving. The method comprises the following steps: determining a target obstacle within a specified range of a target vehicle; determining interactive map sequence information and historical state information of the target obstacle; determining a cross-splice fusion feature sequence of the target obstacle according to the interactive map sequence information and the historical state information; and processing the cross-splice fusion feature sequence of the target obstacle through a recurrent neural network model to determine the target track of the target obstacle. This scheme considers both the motion information of the target obstacle and the position and speed relationships between the target obstacle and other obstacles, fuses these data, and then processes them with the recurrent neural network, thereby improving the accuracy of the target track prediction result.
Description
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a method, a device, equipment and a storage medium for predicting obstacle trajectories.
Background
With the development of intelligent technologies such as urban road infrastructure, communication base stations, GPS positioning and the Internet of Things, the autonomous driving industry has emerged, and more and more automobiles are equipped with advanced driver-assistance systems and even fully autonomous driving systems.
At present, autonomous driving technology is not yet mature and its safety remains challenging: traffic accidents caused by autonomous vehicles occur from time to time and sometimes even cause casualties. To improve the safety of autonomous vehicles, the future driving tracks of surrounding obstacles can be predicted while the vehicle is driving, and the interactions among obstacles and between obstacles and the host vehicle can be taken into account to assist the decision-making and planning of the host vehicle, making the vehicle more stable, safe and comfortable during autonomous driving. Currently, when predicting an obstacle's track, its movement track is usually judged from its pose information and historical track.
However, the information used in such obstacle track prediction is relatively limited, so the accuracy of the track prediction is relatively poor.
Disclosure of Invention
The application provides a method, a device, computer equipment and a storage medium for predicting an obstacle track, which improve the accuracy of obstacle track prediction.
In one aspect, there is provided a method of obstacle trajectory prediction, the method comprising:
determining a target obstacle within a specified range of the target vehicle;
determining interactive map sequence information and historical state information of the target obstacle; the interactive map sequence information is used for indicating the relative speed and the relative position of the target obstacle and other obstacles at each moment; the history state information is used for indicating the position information and the speed information of the target obstacle at each moment;
determining a cross-splice fusion feature sequence of the target obstacle according to the interactive map sequence information and the historical state information of the target obstacle;
and processing the cross-splice fusion feature sequence of the target obstacle through a recurrent neural network model to determine the target track of the target obstacle.
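The interactive map sequence of the second step can be illustrated with a small sketch; the array layout and the function name below are illustrative assumptions, not part of the claims:

```python
import numpy as np

def interactive_map_sequence(target_states, other_states):
    """Relative position and velocity of the target obstacle with respect
    to every other obstacle at each time step (a sketch of the claimed
    interactive map sequence; the shapes are an assumed convention).

    target_states: (T, 4) array of [x, y, vx, vy] per time step
    other_states:  (N, T, 4) array, one row of states per other obstacle
    Returns:       (T, N, 4) array of [dx, dy, dvx, dvy]
    """
    return other_states.transpose(1, 0, 2) - target_states[:, None, :]
```

A real implementation would also attach azimuth angles and the other obstacles' absolute states, as described in the implementations below.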
In yet another aspect, there is provided an obstacle trajectory prediction apparatus, the apparatus including:
an obstacle determination module configured to determine a target obstacle within a specified range of a target vehicle;
the obstacle information determining module is used for determining interactive map sequence information and historical state information of the target obstacle; the interactive map sequence information is used for indicating the relative speed and the relative position of the target obstacle and other obstacles at each moment; the history state information is used for indicating the position information and the speed information of the target obstacle at each moment;
the fusion feature acquisition module is used for determining a cross-splice fusion feature sequence of the target obstacle according to the interactive map sequence information and the historical state information of the target obstacle;
and the track determining module is used for processing the cross-splice fusion feature sequence of the target obstacle through a recurrent neural network model to determine the target track of the target obstacle.
In a possible implementation manner, the obstacle determining module is further configured to obtain image information collected by an image sensor of the target vehicle, and point cloud information collected by a radar sensor of the target vehicle;
performing feature recognition on the image information to obtain target image features;
performing feature recognition on the point cloud information to obtain target point cloud features;
aligning the target image features and the target point cloud features to obtain a point cloud visual mixed feature;
determining, based on the point cloud visual mixed feature, each obstacle within the specified range of the target vehicle together with the position information and category information of each obstacle; the target obstacle is one of these obstacles.
In one possible implementation, the interactive map sequence information of the target obstacle includes at least one of: the relative distance, azimuth angle, relative velocity and relative acceleration between the target obstacle and other obstacles, the absolute velocity and absolute acceleration of the other obstacles, and the size information of the other obstacles.
In one possible implementation, the historical state information includes at least one of the position information, velocity information, acceleration information and point cloud visual mixed features of the target obstacle at each moment.
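As a concrete illustration, the per-time-step historical state features might be assembled as below; the stacking order and the use of finite differences for acceleration are assumptions, not specified by the patent:

```python
import numpy as np

def historical_state_vector(positions, velocities, dt):
    """Stack position, velocity and finite-difference acceleration into a
    per-time-step historical state feature matrix (a sketch; the patent
    also allows appending the point cloud visual mixed feature).

    positions, velocities: (T, 2) arrays sampled every dt seconds
    Returns: (T, 6) array of [x, y, vx, vy, ax, ay] per time step
    """
    acc = np.gradient(velocities, dt, axis=0)  # numeric acceleration
    return np.concatenate([positions, velocities, acc], axis=1)
```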
In a possible implementation manner, the fusion feature acquisition module is used for inputting the interactive map sequence information into an interactive map feature neural network to obtain interactive map features;
inputting the historical state information into a multi-input multi-layer perceptron, which outputs historical state features;
and inputting the interactive map features and the historical state features into a cross-splice fusion network to obtain cross-splice fusion features.
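The fusion network itself is learned; as a minimal stand-in, the two per-time-step feature streams can simply be concatenated along the feature axis (the function name and plain concatenation are assumptions, not the patent's actual network):

```python
import numpy as np

def cross_splice_fuse(map_feats, state_feats):
    """Fuse interactive-map features with historical-state features per
    time step. Plain concatenation is a stand-in for the learned
    cross-splice fusion network described in the patent.

    map_feats: (T, D1), state_feats: (T, D2)
    Returns:   (T, D1 + D2) fused feature sequence
    """
    return np.concatenate([map_feats, state_feats], axis=1)
```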
In a possible implementation manner, the track determining module is further configured to input the cross-splice fusion features into a self-attention recurrent neural network for processing, so as to obtain at least two candidate running tracks of the target obstacle;
and to carry out risk coefficient evaluation on the at least two running tracks and take the running track with the minimum risk coefficient as the target track of the target obstacle.
In one possible implementation manner, the track determining module is further configured to, for each predicted track, traverse each track point of the predicted track and calculate the distances between the track point and other obstacles respectively;
and to determine the risk coefficient of the predicted track according to the distances between each track point and the other obstacles.
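A minimal sketch of such an evaluation is given below, assuming an inverse-minimum-distance rule; the patent only requires the risk coefficient to be derived from the point-to-obstacle distances, so the exact formula and the names here are illustrative:

```python
import numpy as np

def risk_coefficient(trajectory, obstacle_points, eps=1e-6):
    """Risk of one predicted track: closer approaches to other obstacles
    score higher. Inverse minimum distance is an assumed concrete rule.

    trajectory: (T, 2) predicted track points
    obstacle_points: (N, 2) positions of the other obstacles
    """
    # pairwise distances between every track point and every obstacle
    d = np.linalg.norm(trajectory[:, None, :] - obstacle_points[None, :, :],
                       axis=-1)
    return float(1.0 / (d.min() + eps))

def pick_target_track(candidate_tracks, obstacle_points):
    """Select the candidate track with the minimum risk coefficient."""
    return min(candidate_tracks,
               key=lambda tr: risk_coefficient(tr, obstacle_points))
```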
In yet another aspect, a computer device is provided that includes a processor and a memory having at least one instruction stored therein that is loaded and executed by the processor to implement the obstacle trajectory prediction method described above.
In yet another aspect, a computer readable storage medium having stored therein at least one instruction loaded and executed by a processor to implement the obstacle trajectory prediction method described above is provided.
In yet another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions so that the computer device performs the obstacle trajectory prediction method described above.
The technical solution provided by the present application can include the following beneficial effects:
When track prediction is required for a target obstacle, the obstacles within the specified range of the target vehicle are first acquired, and the relative speed and relative position of the target obstacle with respect to the other obstacles at each moment, as well as the position information and speed information of the target obstacle at each moment, are determined. The computer device then splices and fuses this information to obtain a cross-splice fusion feature sequence, and processes the cross-splice fusion feature sequence through a recurrent neural network model to finally obtain the prediction result of the target track of the target obstacle. When predicting the target track of the target obstacle, this scheme considers both the motion information of the target obstacle and the position and speed relationships between the target obstacle and other obstacles, fuses these data, and then processes them with the recurrent neural network, thereby improving the accuracy of the target track prediction result.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram showing a structure of an obstacle recognition system according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a method of obstacle trajectory prediction according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a method of obstacle trajectory prediction according to an exemplary embodiment.
Fig. 4 shows a schematic diagram of the positional relationship of a center type obstacle and a distributed obstacle.
Fig. 5 shows a schematic diagram of interactive map sequence information according to an embodiment of the present application.
Fig. 6 shows a schematic view of layer 0 of a stereoscopic mesh according to an embodiment of the present application.
Fig. 7 shows a schematic structural diagram of an interactive map feature neural network according to an embodiment of the present application.
Fig. 8 shows a schematic diagram of a multi-input multi-layer sensor according to an embodiment of the present application.
Fig. 9 shows a schematic structural diagram of a cross-spliced fusion network according to an embodiment of the present application.
Fig. 10 shows a schematic structural diagram of a recurrent neural network according to an embodiment of the present application.
Fig. 11 is a block diagram showing a structure of an obstacle trajectory prediction device according to an exemplary embodiment.
Fig. 12 shows a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by a person of ordinary skill in the art without inventive effort based on the present disclosure fall within the scope of the present disclosure.
It should be understood that, in the embodiments of the present application, the "indication" may be a direct indication, an indirect indication, or an indication having an association relationship. For example, a indicates B, which may mean that a indicates B directly, e.g., B may be obtained by a; it may also indicate that a indicates B indirectly, e.g. a indicates C, B may be obtained by C; it may also be indicated that there is an association between a and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate a direct or indirect correspondence between two items, an association between them, or a relationship such as indicating and being indicated, or configuring and being configured.
In the embodiments of the present application, "predefining" may be implemented by pre-storing, in devices (including, for example, terminal devices and network devices), corresponding codes, tables or other means that can indicate the relevant information; the present application does not limit the specific implementation.
Fig. 1 is a schematic diagram showing the structure of an obstacle recognition system according to an exemplary embodiment. The obstacle recognition system includes a server 110 and a target vehicle 120. The target vehicle 120 may include a data processing device, an image capturing device, a data storage module, and the like.
Optionally, the target vehicle 120 includes an image capturing device and a data storage module, where the image capturing device may capture an image of an environment surrounding the target vehicle during operation of the target vehicle, and store the captured image in the data storage module in the target vehicle.
Optionally, the target vehicle 120 is communicatively connected to the server 110 through a transmission network (such as a wireless communication network). The target vehicle 120 may upload the data stored in the data storage module (such as collected images) to the server 110 through the wireless communication network, so that the server 110 can process the collected images and use them to train convolutional neural network models applied to intelligent driving and similar tasks.
Optionally, the target vehicle 120 further includes a state acquisition device (not shown in Fig. 1), which may acquire the running state of the target vehicle 120 in real time while driving and store it as time-series data in the data storage device of the target vehicle 120. The target vehicle may likewise upload the running state in the data storage module to the server 110 through the wireless communication network, so that the server 110 can train convolutional neural network models for intelligent driving from both the images acquired by the target vehicle and its running state.
Optionally, the target vehicle further includes a data processing device. When the image acquisition device of the target vehicle 120 acquires an image, the data processing device may identify the image, obtain the obstacle information in it, and predict the driving track of the obstacle over a future period of time according to the obstacle information.
Optionally, the server 110 may also establish a communication connection with each vehicle (including, for example, the target vehicle 120) through a wireless communication network and send corresponding algorithm information to each vehicle. For example, the model parameters of a trained convolutional neural network model may be sent from the server 110 to the target vehicle 120 through the wireless communication network; the intelligent driving application in the target vehicle 120 can then load the trained convolutional neural network model, process images acquired in real time, and predict the running tracks of obstacles (for example, other vehicles) in those images.
Optionally, the server may be a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data and artificial-intelligence platforms.
Optionally, the system may further include a management device, where the management device is configured to manage the system (e.g., manage a connection state between each module and the server, etc.), where the management device is connected to the server through a communication network. Optionally, the communication network is a wired network or a wireless network.
Alternatively, the wireless or wired network described above uses standard communication technologies and/or protocols. The network is typically the Internet, but may be any other network, including but not limited to a local area network, metropolitan area network, wide area network, mobile, wired or wireless network, private network, or any combination of virtual private networks. In some embodiments, technologies and/or formats including the hypertext markup language, the extensible markup language and the like are used to represent data exchanged over the network. All or some of the links may also be encrypted using conventional encryption techniques such as secure sockets layer, transport layer security, virtual private networks, Internet protocol security, etc. In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
Fig. 2 is a flowchart illustrating a method of obstacle trajectory prediction according to an exemplary embodiment. The method is performed by a computer device, which may be one of a terminal device and a server as shown in fig. 1. As shown in fig. 2, the obstacle trajectory prediction method may include the steps of:
In the intelligent driving process, road conditions are complex: while the host vehicle is driving, other vehicles around it generally travel on the road together with it, and those vehicles may accelerate, decelerate, turn, change lanes and so on. As obstacles in the driving process of the host vehicle, their driving tracks therefore have many possibilities, and the various possible driving tracks of the obstacles need to be considered in advance during intelligent driving, so that the running state of the host vehicle can be controlled in time and traffic accidents avoided.
While the target vehicle is running, the computer device can acquire each obstacle (such as each surrounding vehicle) within a specified range through the image acquisition device on the target vehicle. When the computer device determines the movement states of the surrounding vehicles, one of the vehicles can be selected as the target obstacle and its running state calculated.
The interactive map sequence information is used for indicating the relative speed and relative position of the target obstacle and other obstacles at each moment; the historical state information is used for indicating the position information and speed information of the target obstacle at each moment.
When the computer equipment selects one obstacle as a target obstacle, the relative speed and the relative position of the target obstacle and other obstacles at each moment can be determined, and the position information and the speed information of the target obstacle at each moment can be determined.
In step 203, a cross-splice fusion feature sequence of the target obstacle is determined according to the interactive map sequence information and the historical state information of the target obstacle.
After obtaining the interactive map sequence information and the historical state information of the target obstacle, the computer device can splice and fuse the information, for example splicing and fusing the relative speed and relative position in the interactive map sequence information with the position information and speed information at each moment in the historical state information in a preset manner, to obtain the cross-splice fusion feature sequence.
Optionally, the computer device may also use a trained convolutional neural network to perform feature extraction on the information, and finally fuse the extracted features to obtain the cross-splice fusion feature sequence.
The cross-splice fusion feature sequence then contains both the historical motion information of the target obstacle and the relative position and relative speed information between the target obstacle and other obstacles over its motion history, so the motion characteristics of the target obstacle can be represented by the cross-splice fusion feature sequence.
In step 204, the cross-splice fusion feature sequence of the target obstacle is processed through a recurrent neural network model to determine the target track of the target obstacle.
After the cross-splice fusion feature sequence of the target obstacle is obtained, it can be input into the recurrent neural network model for processing, so that the model outputs the future target track of the target obstacle.
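The recurrent model consumes the fused feature sequence step by step. The toy GRU cell below (random untrained weights, illustrative names) only sketches that consumption; it stands in for, and is not, the patent's actual trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGRUCell:
    """Minimal GRU cell in NumPy, standing in for the patent's recurrent
    neural network model; the weights here are random, not trained."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.Wz = rng.normal(0, 0.1, (hid_dim, in_dim + hid_dim))
        self.Wr = rng.normal(0, 0.1, (hid_dim, in_dim + hid_dim))
        self.Wh = rng.normal(0, 0.1, (hid_dim, in_dim + hid_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)               # update gate
        r = sigmoid(self.Wr @ xh)               # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

def encode_sequence(cell, seq, hid_dim):
    """Consume the fused feature sequence and return the final hidden
    state, from which future way-points would be regressed."""
    h = np.zeros(hid_dim)
    for x in seq:
        h = cell.step(x, h)
    return h
```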
In summary, when track prediction is required for a target obstacle, the obstacles within the specified range of the target vehicle are first acquired; the relative speed and relative position of the target obstacle with respect to the other obstacles at each moment, and the position information and speed information of the target obstacle at each moment, are determined. The computer device then splices and fuses this information to obtain a cross-splice fusion feature sequence, and processes it through a recurrent neural network model to finally obtain the prediction result of the target track of the target obstacle. When predicting the target track of the target obstacle, this scheme considers both the motion information of the target obstacle and the position and speed relationships between the target obstacle and other obstacles, fuses these data, and then processes them with the recurrent neural network, thereby improving the accuracy of the target track prediction result.
Fig. 3 is a flowchart illustrating a method of obstacle trajectory prediction according to an exemplary embodiment. The method is performed by a computer device, which may be one of a terminal device and a server as shown in fig. 1. As shown in fig. 3, the obstacle trajectory prediction method may include the steps of:
in step 301, a target obstacle within a specified range of the target vehicle is determined.
Optionally, the computer device acquires the image information collected by an image sensor of the target vehicle and the point cloud information collected by a radar sensor of the target vehicle; performs feature recognition on the image information to obtain target image features; performs feature recognition on the point cloud information to obtain target point cloud features; aligns the target image features and the target point cloud features to obtain a point cloud visual mixed feature; and determines, based on the point cloud visual mixed feature, each obstacle within the specified range of the target vehicle together with the position information and category information of each obstacle. The target obstacle is one of these obstacles.
The sensing devices mounted on the body of the target vehicle include a lidar sensor and a camera sensor, so the sensing information they collect comprises two types: point cloud information collected by the lidar sensor and image information collected by the camera sensor (i.e., the image sensor). The two types of information are input, respectively, into the multi-layer perceptron network layer and the convolutional network layer of a perception network model: the multi-layer perceptron network layer processes the point cloud information to obtain point cloud features, and the convolutional network layer processes the visual information to obtain visual features. The point cloud features and the visual features are then input into a feature alignment network layer of the perception network model, and the point cloud visual mixed feature (Points View Mix Feature, PVMF) output by the feature alignment network layer is input into a regression network layer of the perception network model.
The regression network layer of the perception network model outputs the position information and category information of each obstacle, and the position information and category information of each obstacle are stored in an information container (Information Container, IC).
According to the position information of each obstacle, the point cloud visual mixed feature F of that obstacle is extracted from the point cloud visual mixed features output by the feature alignment network layer and stored in the information container IC.
Once the position information and category information of each obstacle have been acquired, the obstacles can further be divided into two types: a central obstacle and distributed obstacles. The central obstacle is the obstacle whose future trajectory is to be predicted and can be designated arbitrarily; any obstacle may serve as the central obstacle. When an obstacle is designated as the central obstacle, it is the target obstacle whose future trajectory needs to be predicted, and the obstacles surrounding the central obstacle are regarded as distributed obstacles. In other words, whenever an obstacle requires future trajectory prediction, it is automatically promoted to the central obstacle.
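The promotion rule above can be sketched in a few lines of Python. The obstacle records and the `id` field are illustrative assumptions for this sketch, not data structures defined by the patent:

```python
def split_obstacles(obstacles, target_id):
    """Designate one obstacle as the central obstacle (the one whose
    future trajectory is to be predicted); all other obstacles in the
    specified range become distributed obstacles surrounding it."""
    central = None
    distributed = []
    for ob in obstacles:
        if ob["id"] == target_id:
            central = ob          # promoted to central obstacle
        else:
            distributed.append(ob)
    if central is None:
        raise ValueError("target obstacle not found in specified range")
    return central, distributed
```

Because any obstacle may be designated, calling the function again with a different `target_id` simply re-partitions the same obstacle set around a new central obstacle.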
Optionally, the interactive map sequence information of the target obstacle includes at least one of: the relative distance, azimuth angle, relative speed, and relative acceleration between the target obstacle and other obstacles; the absolute speed and absolute acceleration of the other obstacles; and the size information of the other obstacles.
Optionally, the historical state information includes at least one of position information, speed information, acceleration information, and point cloud visual mixing characteristics of the target obstacle at various moments.
Alternatively, the interactive map sequence information of the target obstacle may be acquired by:
referring to fig. 4, a schematic diagram of the positional relationship between a central obstacle and a distributed obstacle is shown. The central obstacle Car1 is the target obstacle, and the distributed obstacles Car0, car2, car3 and Ego Car are also included in fig. 4.
Referring to fig. 5, a schematic diagram of interactive map sequence information according to an embodiment of the present application is shown. As shown in fig. 5, at time tn one of the obstacles is arbitrarily designated as the central obstacle (the target obstacle at this time), and a three-dimensional grid is constructed with the central obstacle at its center. The three-dimensional grid is divided into 10 layers, and each layer stores one form of interaction information between the central obstacle and the distributed obstacles. After every layer of interaction information has been stored in the three-dimensional grid according to the rules, the interaction information carried by the three-dimensional grid constitutes the interactive map information of the central obstacle.
The different forms of interaction information include: the relative distance (rd) and direction angle (da) from each distributed obstacle to the central obstacle; the relative velocity (rv) and relative acceleration (ra) of each distributed obstacle with respect to the central obstacle; the absolute velocity (av) and absolute acceleration (aa) of each distributed obstacle; and the length, width, height, and class of each distributed obstacle.
As shown in fig. 4, the relative distance between the distributed obstacle Car0 and the central obstacle Car1 is rd0, that between Car2 and Car1 is rd2, that between Car3 and Car1 is rd3, and that between the Ego Car and Car1 is rde. The distances rd0, rd2, rd3, and rde are saved to layer 0 of the three-dimensional grid shown in fig. 5; fig. 6 is a schematic diagram of layer 0 of the three-dimensional grid according to an embodiment of the present application.
The direction angle da, relative velocity rv, relative acceleration ra, absolute velocity av, absolute acceleration aa, length, width, height, and class are stored, following the same spatial distribution as the relative distance rd, in layers 1, 2, 3, 4, 5, 6, 7, 8, and 9 of the three-dimensional grid shown in fig. 5, respectively.
Over a past period of time T, the interactive map information at each moment within T is combined into the interactive map sequence information IMSI of the obstacle. Preferably, T = 2 seconds with a 100 ms interval between moments, the interactive map information at each moment has dimension 10×224×224, and the interactive map sequence information IMSI has dimension 20×10×224×224.
The interactive map sequence information IMSI of the obstacle is stored in the information container IC.
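The grid construction and sequence stacking described above can be sketched as follows. The grid resolution (8×8 instead of 224×224), the cell size, and the restriction to layers 0 and 1 are simplifications for illustration only; the patent's grid carries all ten interaction quantities at 224×224 resolution:

```python
import math

LAYERS = 10          # one layer per interaction quantity (rd, da, rv, ra, av, aa, l, w, h, class)
GRID = 8             # illustrative resolution; the patent uses 224x224
CELL = 2.0           # metres per grid cell (an assumption)

def empty_map():
    return [[[0.0] * GRID for _ in range(GRID)] for _ in range(LAYERS)]

def build_interactive_map(central, distributed):
    """Centre a LAYERS x GRID x GRID grid on the central obstacle and write
    each distributed obstacle's interaction quantities into the cell given
    by its relative position. Only layer 0 (relative distance rd) and
    layer 1 (direction angle da) are filled here for brevity."""
    grid = empty_map()
    cx, cy = central["pos"]
    for ob in distributed:
        dx, dy = ob["pos"][0] - cx, ob["pos"][1] - cy
        col = int(dx / CELL) + GRID // 2
        row = int(dy / CELL) + GRID // 2
        if 0 <= row < GRID and 0 <= col < GRID:
            grid[0][row][col] = math.hypot(dx, dy)   # rd
            grid[1][row][col] = math.atan2(dy, dx)   # da
    return grid

def build_sequence(frames):
    """Stack the per-moment maps of the past period T into the interactive
    map sequence information IMSI; with T = 2 s at 100 ms intervals this
    yields 20 x LAYERS x GRID x GRID."""
    return [build_interactive_map(c, d) for c, d in frames]
```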
Alternatively, the historical state information of the target obstacle may be obtained by:
The historical state information HSI of the central obstacle covers a past period of time T and includes, for each moment within T, the position information P of the obstacle, the speed information V, the acceleration information A, and the characteristic information F of the obstacle itself.
The characteristic information of the obstacle is obtained from the point cloud visual mixed features output by the feature alignment network layer and stored in the information container IC; the point cloud visual mixed feature extracted for the obstacle is its characteristic information F.
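A minimal sketch of assembling the historical state information from per-moment state records; the `pos`, `vel`, and `acc` field names are illustrative assumptions, not the patent's actual data layout:

```python
def build_history_state(track, features):
    """Collect the historical state information (HSI) of the central
    obstacle: position P, velocity V, and acceleration A at each past
    moment, plus the obstacle's own point-cloud/visual mixed feature F
    pulled from the information container."""
    return {
        "P": [s["pos"] for s in track],
        "V": [s["vel"] for s in track],
        "A": [s["acc"] for s in track],
        "F": features,   # extracted according to the obstacle's position
    }
```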
And step 303, inputting the interactive map sequence information into an interactive map feature neural network to obtain interactive map features.
After the interactive map sequence information at time tn is obtained, the interactive map information at each moment can be input into the constructed interactive map feature neural network (Interactive Map Feature Neural Network, IMFNN), which outputs the interactive map feature at that moment.
Referring to fig. 7, a schematic structural diagram of an interactive map feature neural network according to an embodiment of the present application is shown. As shown in fig. 7, the interactive map feature neural network includes a module 1 and a module 2, the module 1 is a convolutional neural network (Convolutional Neural Network, CNN), and the module 2 is a multi-layer perceptron (Multilayer Perceptron, MLP).
Because the interactive map information is a two-dimensional, multi-channel tensor, module 1 of the interactive map feature neural network is designed as a convolutional neural network (CNN). The interactive map information is input into module 1, which outputs an interactive convolutional feature (Interactive Convolutional Feature, ICF); after pooling (pool) and a fully connected (full connected) layer, the result is input into module 2 of the interactive map feature neural network, which outputs the interactive map feature.
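A small shape-tracing sketch of the module 1, pooling/fully-connected, module 2 pipeline just described. The kernel sizes, strides, channel counts, and the 128-dimensional output are assumptions for illustration; the patent does not specify these hyperparameters:

```python
def conv_out(size, kernel, stride, padding=0):
    """Standard convolution/pooling output-size formula."""
    return (size + 2 * padding - kernel) // stride + 1

def imfnn_shapes(channels=10, size=224):
    """Trace tensor shapes through an IMFNN-like pipeline:
    module 1 (CNN) -> interactive convolutional feature (ICF)
    -> pooling + fully connected -> module 2 (MLP) -> map feature."""
    shapes = [("input", channels, size)]
    size = conv_out(size, kernel=7, stride=2, padding=3)   # conv layer
    shapes.append(("conv1", 32, size))
    size = conv_out(size, kernel=3, stride=2, padding=1)   # pooling
    shapes.append(("pool", 32, size))
    flat = 32 * size * size                                # flatten for FC
    shapes.append(("fc_in", flat, 1))
    shapes.append(("mlp_out", 128, 1))                     # module 2 output
    return shapes
```

Running this traces 10×224×224 input down to a 128-dimensional interactive map feature, consistent with the 1×128 cross-splice fusion features mentioned later.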
The historical state information at time tn is divided into two parts: the first part is the position, speed, and acceleration information of the central obstacle at time tn, and the second part is the characteristic information of the central obstacle at time tn.
The position, speed, and acceleration information of the central obstacle at time tn in the first part are arranged in sequence to form trajectory information, which is input into the multi-input multi-layer perceptron (namely MLP1 and MLP2) and then spliced to obtain first feature information. Meanwhile, the characteristic information of the central obstacle at time tn in the second part is input into MLP3 of the multi-input multi-layer perceptron to obtain second feature information. The multi-input multi-layer perceptron superimposes the first feature information and the second feature information and processes the result through MLP4, finally outputting the historical state feature. Fig. 8 is a schematic diagram of a multi-input multi-layer perceptron according to an embodiment of the present application.
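The branch-and-superimpose wiring of the multi-input multi-layer perceptron can be sketched with toy fixed weights (identity matrices). The feature width and the exact routing of P, V, A through MLP1/MLP2 are assumptions, since the text leaves them open; a real network would use learned weights:

```python
def linear(x, w):
    """y = W x (no bias, for brevity)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def multi_input_mlp(p, v, a, f, dims=4):
    """Sketch of the multi-input multi-layer perceptron: the trajectory
    branch runs [P; V; A] through MLP1 -> MLP2, the feature branch runs F
    through MLP3, the two results are superimposed (element-wise sum), and
    MLP4 produces the historical state feature."""
    eye = [[1.0 if i == j else 0.0 for j in range(dims)] for i in range(dims)]
    traj = p + v + a                              # arrange P, V, A in sequence
    traj = (traj + [0.0] * dims)[:dims]           # fit the toy width
    h1 = linear(linear(traj, eye), eye)           # MLP1 -> MLP2 (first features)
    h2 = linear((f + [0.0] * dims)[:dims], eye)   # MLP3 (second features)
    fused = [x + y for x, y in zip(h1, h2)]       # superimpose the two branches
    return linear(fused, eye)                     # MLP4 -> historical state feature
```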
In step 305, the interactive map features and the historical state features are input into a cross-splice fusion network to obtain cross-splice fusion features.
Referring to fig. 9, a schematic structural diagram of a cross-splice fusion network according to an embodiment of the present application is shown. As shown in fig. 9, the interactive map feature and the historical state feature at time tn have the same feature dimensions. After the interactive map feature is input into the cross fusion network, it absorbs the features at the corresponding positions of the historical state feature with an absorption factor α = 0.6, yielding the interactive map mix feature (Interactive Map Mix Feature, IMMF); after the historical state feature is input into the cross fusion network, it absorbs the features at the corresponding positions of the interactive map feature with an absorption factor β = 0.4, yielding the history status mix feature (History Status Mix Feature, HSMF). The interactive map mix feature and the history status mix feature are then spliced (concat) to obtain the cross-splice fusion feature.
The interactive map information and the historical state information at each moment are acquired from the information container IC and processed in sequence according to steps C1, C2, and C3 to obtain the cross-splice fusion feature sequence CSFFS over a past period of time T. Preferably, T = 2 seconds with a 100 ms interval between moments, each cross-splice fusion feature has dimension 1×128, and the cross-splice fusion feature sequence CSFFS has dimension 20×1×128.
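A numeric sketch of the cross-splice fusion step. The text does not give the exact "absorption" formula, so a weighted element-wise mix is assumed here; also note the concatenated vector has twice the input dimension, so the stated 1×128 output presumably comes from a projection inside the network that is omitted in this sketch:

```python
ALPHA = 0.6   # absorption factor for the interactive-map branch
BETA = 0.4    # absorption factor for the historical-state branch

def cross_splice_fuse(imf, hsf, alpha=ALPHA, beta=BETA):
    """Assumed reading of the cross-splice fusion network:
        IMMF = (1 - alpha) * IMF + alpha * HSF   (IMF absorbs HSF)
        HSMF = (1 - beta)  * HSF + beta  * IMF   (HSF absorbs IMF)
    followed by concatenation (concat) of the two mixed features."""
    assert len(imf) == len(hsf)           # same feature dimension required
    immf = [(1 - alpha) * x + alpha * y for x, y in zip(imf, hsf)]
    hsmf = [(1 - beta) * y + beta * x for x, y in zip(imf, hsf)]
    return immf + hsmf                    # cross-splice fusion feature
```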
In step 306, the cross-splice fusion features are input into a self-attention recurrent neural network for processing to obtain at least two running tracks of the target obstacle.
Optionally, the computer device inputs the cross-splice fusion feature sequence CSFFS of the obstacle into the encoder set of the self-attention recurrent neural network, and the encoder set outputs encoding features; the number of encoders in the encoder set is the same as the length of the fusion feature sequence. The computer device then inputs the encoding features into the self-attention module of the self-attention recurrent neural network, and the self-attention module outputs self-attention features. Finally, the self-attention features are input into the decoder set of the self-attention recurrent neural network, and the decoder set outputs multiple predicted trajectories of the obstacle over a future period of time T, with m = 1, 2, ..., 6, as shown in fig. 10, which is a schematic structural diagram of the recurrent neural network according to an embodiment of the present application. At most 6 predicted trajectories are output, each consisting of the trajectory points predicted by the self-attention module at the moments within T.
Preferably, T = 2 seconds, with each time interval being 100 milliseconds.
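A minimal, untrained dot-product self-attention over an encoded sequence, showing how each time step weighs every other step before decoding. A real self-attention module would apply learned query/key/value projections, which are omitted here (Q = K = V = the encoder outputs):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(seq):
    """Scaled dot-product self-attention over the encoded cross-splice
    fusion feature sequence: each time step attends to every step, so the
    decoder set sees context from the whole past period T."""
    out = []
    d = len(seq[0])
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        w = softmax(scores)                     # attention weights for this step
        out.append([sum(wi * v[j] for wi, v in zip(w, seq))
                    for j in range(d)])         # weighted sum of values
    return out
```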
In step 307, risk coefficient evaluation is performed on the at least two running tracks, and the track with the smallest risk coefficient is taken as the target track of the target obstacle.
Optionally, for each predicted track, traversing each track point of the predicted track, and respectively calculating the distance between the track point and other obstacles; and determining the risk coefficient of the predicted track according to the distance between each track point and other obstacles.
That is, the computer device may select a predicted trajectory, traverse each trajectory point of that trajectory, and calculate the distance between the trajectory point and each surrounding distributed obstacle. Each distance is normalized to obtain a normalized distance, which is then mapped through an exponential function to obtain the risk coefficient of the predicted trajectory, as in equation (1).
E2: the predicted trajectory with the smallest risk coefficient is selected as the optimal future predicted trajectory of the obstacle, as shown in equation (2), where the selected index corresponds to the predicted trajectory with the smallest risk coefficient among the m predicted trajectories.
When a predicted track with the smallest risk coefficient is obtained, the predicted track can be used as a target track of a target obstacle.
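Steps E1 and E2 can be sketched as follows. Since equations (1) and (2) are not reproduced in the translated text, the normalization range `d_max` and the mapping `exp(-d_norm)` are illustrative choices that preserve the stated behaviour: closer obstacles contribute more risk, and the trajectory with the smallest risk coefficient is selected:

```python
import math

def risk_coefficient(trajectory, obstacles, d_max=50.0):
    """Sketch of equation (1): for every track point, compute the distance
    to every surrounding distributed obstacle, normalize it by an assumed
    maximum range d_max, and map it through an exponential so that nearby
    obstacles raise the risk."""
    risk = 0.0
    for (px, py) in trajectory:
        for (ox, oy) in obstacles:
            d = math.hypot(px - ox, py - oy)
            d_norm = min(d / d_max, 1.0)
            risk += math.exp(-d_norm)
    return risk

def pick_target_trajectory(trajectories, obstacles):
    """Sketch of equation (2): the trajectory with the smallest risk
    coefficient becomes the target trajectory of the target obstacle."""
    risks = [risk_coefficient(t, obstacles) for t in trajectories]
    return risks.index(min(risks))
```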
Fig. 11 is a block diagram showing a structure of an obstacle trajectory prediction device according to an exemplary embodiment. The device comprises:
an obstacle determination module 1101 for determining a target obstacle within a specified range of the target vehicle;
An obstacle information determining module 1102, configured to determine interactive map sequence information and historical state information of the target obstacle; the interactive map sequence information is used for indicating the relative speed and the relative position of the target obstacle and other obstacles at each moment; the history state information is used for indicating the position information and the speed information of the target obstacle at each moment;
the fusion feature acquisition module 1103 is configured to determine a cross-splice fusion feature sequence of the target obstacle according to the interactive map sequence information and the historical state information of the target obstacle;
the track determining module 1104 is configured to determine the target track of the target obstacle by processing the cross-splice fusion feature sequence of the target obstacle through a recurrent neural network model.
In a possible implementation manner, the obstacle determining module is further configured to obtain image information collected by an image sensor of the target vehicle, and point cloud information collected by a radar sensor of the target vehicle;
performing feature recognition on the image information to obtain target image features;
performing feature recognition on the point cloud information to obtain target point cloud features;
Aligning the target image features and the target point cloud features to obtain a point cloud vision hybrid feature;
determining each obstacle in a designated range of a target vehicle, and position information and category information of each obstacle based on the point cloud visual mixing characteristics; the target obstacle is one of the individual obstacles.
In one possible implementation, the interactive map sequence information of the target obstacle includes at least one of a relative distance, an azimuth angle, a relative velocity, a relative acceleration, an absolute velocity of other obstacles, an absolute acceleration of other obstacles, and size information of other obstacles of the target obstacle.
In one possible implementation, the historical state information includes at least one of position information, velocity information, acceleration information, and point cloud visual mixing characteristics of the target obstacle at various times.
In a possible implementation manner, the fusion feature acquisition module is used for inputting the interactive map sequence information into an interactive map feature neural network to obtain interactive map features;
Inputting the historical state information into a multi-input multi-layer perceptron; the multi-input multi-layer perceptron outputs historical state features;
and inputting the interactive map features and the historical state features into a cross-splicing fusion network to obtain cross-splicing fusion features.
In a possible implementation manner, the track determining module is further configured to input the cross-splice fusion features into a self-attention recurrent neural network for processing to obtain at least two running tracks of the target obstacle;
and carrying out risk coefficient evaluation on the at least two running tracks, and taking the track with the minimum risk coefficient as the target track of the target obstacle.
In one possible implementation manner, the track determining module is further configured to traverse each track point of the predicted track for each predicted track, and calculate distances between the track point and other obstacles respectively;
and determining the risk coefficient of the predicted track according to the distance between each track point and other obstacles.
Fig. 12 shows a block diagram of a computer device 1200 shown in an exemplary embodiment of the present application. The computer device may be implemented as a server in the above-described aspects of the present application. The computer apparatus 1200 includes a central processing unit (Central Processing Unit, CPU) 1201, a system Memory 1204 including a random access Memory (Random Access Memory, RAM) 1202 and a Read-Only Memory (ROM) 1203, and a system bus 1205 connecting the system Memory 1204 and the central processing unit 1201. The computer device 1200 also includes a mass storage device 1206 for storing an operating system 1209, application programs 1210, and other program modules 1211.
The mass storage device 1206 is connected to the central processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205. The mass storage device 1206 and its associated computer-readable media provide non-volatile storage for the computer device 1200. That is, the mass storage device 1206 may include a computer readable medium (not shown) such as a hard disk or a compact disk-Only (CD-ROM) drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid state memory technology, CD-ROM, digital versatile discs (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1204 and mass storage device 1206 described above may be collectively referred to as memory.
According to various embodiments of the disclosure, the computer device 1200 may also operate through a network, such as the Internet, to a remote computer on the network. I.e., the computer device 1200 may be connected to the network 1208 via a network interface unit 1207 coupled to the system bus 1205, or alternatively, the network interface unit 1207 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further stores at least one computer program, and the central processing unit 1201 implements all or part of the steps of the methods shown in the above embodiments by executing the at least one computer program.
In an exemplary embodiment, a computer readable storage medium is also provided for storing at least one computer program that is loaded and executed by a processor to implement all or part of the steps of the above method. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium and executes the computer instructions to cause the computer device to perform all or part of the steps of the methods shown in the embodiments illustrated in fig. 2 or 3 described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (10)
1. A method of obstacle trajectory prediction, the method comprising:
determining a target obstacle within a specified range of the target vehicle;
determining interactive map sequence information and historical state information of the target obstacle; the interactive map sequence information is used for indicating the relative speed and the relative position of the target obstacle and other obstacles at each moment; the history state information is used for indicating the position information and the speed information of the target obstacle at each moment;
determining a cross-splice fusion feature sequence of the target obstacle according to the interactive map sequence information and the historical state information of the target obstacle;
and processing the cross-splice fusion feature sequence of the target obstacle through a recurrent neural network model to determine the target track of the target obstacle.
2. The method of claim 1, wherein the determining a target obstacle within a specified range of the target vehicle comprises:
acquiring image information acquired by an image sensor of the target vehicle and point cloud information acquired by a radar sensor of the target vehicle;
performing feature recognition on the image information to obtain target image features;
performing feature recognition on the point cloud information to obtain target point cloud features;
aligning the target image features and the target point cloud features to obtain a point cloud vision hybrid feature;
determining each obstacle in a designated range of a target vehicle, and position information and category information of each obstacle based on the point cloud visual mixing characteristics; the target obstacle is one of the individual obstacles.
3. The method of claim 1, wherein the interactive map sequence information of the target obstacle comprises at least one of a relative distance of the target obstacle to other obstacles, an azimuth angle, a relative velocity, a relative acceleration, an absolute velocity of other obstacles, an absolute acceleration of other obstacles, and size information of other obstacles.
4. The method of claim 2, wherein the historical state information includes at least one of position information, velocity information, acceleration information, point cloud visual mixing characteristics of the target obstacle at various times.
5. The method of claim 1, wherein the determining the cross-splice fusion feature sequence of the target obstacle based on the target obstacle interaction map sequence information and historical state information comprises:
inputting the interactive map sequence information into an interactive map feature neural network to obtain interactive map features;
inputting the historical state information into a multi-input multi-layer perceptron; the multi-input multi-layer perceptron outputs historical state features;
and inputting the interactive map features and the historical state features into a cross-splicing fusion network to obtain cross-splicing fusion features.
6. The method according to any one of claims 1 to 5, wherein the determining the target trajectory of the target obstacle by processing through a recurrent neural network model based on the cross-stitching fusion feature sequence of the target obstacle comprises:
inputting the cross-splice fusion features into a self-attention recurrent neural network for processing to obtain at least two running tracks of the target obstacle;
and carrying out risk coefficient evaluation on the at least two running tracks, and taking the track with the minimum risk coefficient as the target track of the target obstacle.
7. The method of claim 6, wherein the performing risk factor evaluation on at least two of the trajectories comprises:
traversing each track point of the predicted track aiming at each predicted track, and respectively calculating the distance between the track point and other obstacles;
and determining the risk coefficient of the predicted track according to the distance between each track point and other obstacles.
8. An obstacle trajectory prediction device, the device comprising:
an obstacle determination module configured to determine a target obstacle within a specified range of a target vehicle;
the obstacle information determining module is used for determining interactive map sequence information and historical state information of the target obstacle; the interactive map sequence information is used for indicating the relative speed and the relative position of the target obstacle and other obstacles at each moment; the history state information is used for indicating the position information and the speed information of the target obstacle at each moment;
The fusion characteristic acquisition module is used for determining a cross splicing fusion characteristic sequence of the target obstacle according to the interactive map sequence information and the historical state information of the target obstacle;
and the track determining module is used for determining the target track of the target obstacle by processing the cross-splice fusion feature sequence of the target obstacle through a recurrent neural network model.
9. A computer device comprising a processor and a memory having stored therein at least one instruction that is loaded and executed by the processor to implement the obstacle trajectory prediction method of any one of claims 1 to 7.
10. A computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the obstacle trajectory prediction method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310412478.4A CN116152782A (en) | 2023-04-18 | 2023-04-18 | Obstacle track prediction method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116152782A true CN116152782A (en) | 2023-05-23 |
Family
ID=86350947
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111857134A (en) * | 2020-06-29 | 2020-10-30 | 江苏大学 | Target obstacle vehicle track prediction method based on Bayesian network |
CN112015847A (en) * | 2020-10-19 | 2020-12-01 | 北京三快在线科技有限公司 | Obstacle trajectory prediction method and device, storage medium and electronic equipment |
CN113740837A (en) * | 2021-09-01 | 2021-12-03 | 广州文远知行科技有限公司 | Obstacle tracking method, device, equipment and storage medium |
CN114399743A (en) * | 2021-12-10 | 2022-04-26 | 浙江零跑科技股份有限公司 | Method for generating future track of obstacle |
CN115909749A (en) * | 2023-01-09 | 2023-04-04 | 广州通达汽车电气股份有限公司 | Vehicle operation road risk early warning method, device, equipment and storage medium |
2023-04-18: CN patent application CN202310412478.4A (CN116152782A), status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113168708B (en) | Lane line tracking method and device | |
US20210286359A1 (en) | Event data recordation to identify and resolve anomalies associated with control of driverless vehicles | |
US10369974B2 (en) | Control and coordination of driverless fuel replenishment for autonomous vehicles | |
CN112286206B (en) | Automatic driving simulation method, system, equipment, readable storage medium and platform | |
CN109747659A (en) | The control method and device of vehicle drive | |
KR20200123474A (en) | Framework of navigation information for autonomous navigation | |
CN112639793A (en) | Test method and device for automatically driving vehicle | |
DE102019102944A1 (en) | Systems and methods for a vehicle control strategy with low pilot control level | |
DE102020121258A1 (en) | VEHICLE PARK CONTROL | |
US20200401149A1 (en) | Corner case detection and collection for a path planning system | |
CN113954858A (en) | Method for planning vehicle driving route and intelligent automobile | |
CN114792149A (en) | Track prediction method and device and map | |
Wang et al. | Trajectory prediction for turning vehicles at intersections by fusing vehicle dynamics and driver’s future input estimation | |
CN113859265B (en) | Reminding method and device in driving process | |
CN114212108A (en) | Automatic driving method, device, vehicle, storage medium and product | |
CN113741384B (en) | Method and device for detecting automatic driving system | |
CN116152782A (en) | Obstacle track prediction method, device, equipment and storage medium | |
DE102023114042A1 (en) | Image-based pedestrian speed estimation | |
CN116135654A (en) | Vehicle running speed generation method and related equipment | |
CN115951677A (en) | Method and device for planning driving track of automatic driving vehicle | |
CN114764980B (en) | Vehicle turning route planning method and device | |
CN115205311A (en) | Image processing method, image processing apparatus, vehicle, medium, and chip | |
CN115346288A (en) | Simulation driving record acquisition method and system, electronic equipment and storage medium | |
CN113401132A (en) | Driving model updating method and device and electronic equipment | |
CN114511834A (en) | Method and device for determining prompt information, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2023-05-23 |