CN112541449A - Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle - Google Patents

Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle

Info

Publication number
CN112541449A
Authority
CN
China
Prior art keywords
pedestrian
track
interaction
prediction
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011505987.4A
Other languages
Chinese (zh)
Inventor
刘昱
王天保
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202011505987.4A priority Critical patent/CN112541449A/en
Publication of CN112541449A publication Critical patent/CN112541449A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a pedestrian trajectory prediction method based on an unmanned aerial vehicle (UAV) aerial photography view angle, comprising the following steps. Step 1: pedestrian trajectory preprocessing, in which pedestrian positions are obtained by a target detection algorithm and the pedestrian position sequence over a period of time is quickly obtained by a target tracking algorithm. Step 2: trajectory encoding, in which a long short-term memory network encodes the trajectory sequence over a period of time to obtain trajectory motion features. Step 3: graph convolution network interaction construction, in which each pedestrian coordinate is taken as a vertex of the graph convolution network and the network builds the interaction relations among pedestrians to obtain trajectory interaction features. Step 4: maximum mutual information optimization. Step 5: the trajectory motion features and trajectory interaction features are decoded by the long short-term memory network to obtain a prediction sequence of a certain duration, completing the trajectory prediction. Compared with the prior art, the method constructs the interaction pattern among pedestrians, performs trajectory prediction, and shows good robustness.

Description

Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle
Technical Field
The invention relates to the field of intelligent robots and unmanned platforms, in particular to a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle.
Background
In dense pedestrian scenes such as urban streets, moving agents such as autonomous vehicles and robots need to plan their own paths according to the positions of other pedestrians; predicting the positions of targets allows a safe distance to be kept and risk factors to be eliminated, so the accuracy of predicting pedestrians' future positions is very important for the decision-making system of a moving agent. Pedestrian trajectory prediction is a complex task: the motion habits of individual pedestrians naturally differ, and in a group environment there is human-human interaction, so an individual's motion pattern is implicitly influenced by the surrounding pedestrians. People also adjust their own routes according to common social conventions, and a moving agent must anticipate the actions and social behaviors of others. Constructing pedestrian interaction patterns with high interpretability and generalization capability is therefore the key to the trajectory prediction problem.
Dense pedestrian scenes viewed from road level suffer from heavy occlusion, and an ordinary monocular camera has a very limited ability to judge distance. A UAV, by contrast, can flexibly obtain the horizontal position information of pedestrians, so the UAV aerial photography view angle makes it possible to obtain pedestrian positions and carry out trajectory prediction efficiently.
Among existing computer vision methods, graph neural networks apply deep learning to non-Euclidean structures, representing objects and their relations with vertices and edges; they show good robustness and interpretability, and a graph topological structure is an effective way to model the interaction pattern among pedestrians.
Disclosure of Invention
Considering the advantages and open problems of graph convolution networks in building interaction models, the invention provides a pedestrian trajectory prediction method based on a UAV aerial photography view angle and realizes a new graph convolution neural network trajectory prediction model under this view angle, thereby constructing the interaction pattern among pedestrians and carrying out trajectory prediction.
The invention discloses a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle, which is characterized by comprising the following steps:
step 1: carry out pedestrian's orbit preliminary treatment in the pedestrian video of unmanned aerial vehicle aerial photography, including fixing a position the pedestrian fast, the central point who gets the target frame promptly is pedestrian's position, establishes all orbit coordinates X of surveing the pedestrian and is X ═ X1,X2,…,Xn
Step 2: perform pedestrian trajectory encoding: the relative position change of a single pedestrian trajectory between consecutive frames is expressed as Δx_t^i = x_t^i − x_{t−1}^i; the displacements are encoded into fixed-length motion vectors e_t^i using a long short-term memory (LSTM) network, and the motion vectors are then encoded by the LSTM to obtain the trajectory motion feature m^i;
Step 3: construct the graph convolution network interaction: a graph structure G_t = (V_t, E_t) is used to build the interaction model among pedestrians at time t, with the pedestrians as the set V_t of vertices in the graph structure and the interaction relations among pedestrians as the set E_t of edges; at each time point the connection relation E_t of the vertices V_t is expressed as an adjacency matrix A_t, and each edge a_t^{ij} of A_t is assigned a weight w_t^{ij} according to the inter-pedestrian distance, e.g. the inverse Euclidean distance w_t^{ij} = 1/‖X_t^i − X_t^j‖_2 for i ≠ j and w_t^{ii} = 0; the trajectory motion feature m^i is taken as the input feature v_t^i of vertex i in the graph convolution network; two graph convolution layers are stacked, and the two-layer GCN structure yields the output feature o^i of the i-th trajectory; the GCN output features o^i are encoded by the long short-term memory network to obtain the trajectory interaction feature g^i;
Step 4: maximize the mutual information between the local features and the global features of the trajectory interaction features. The specific process is as follows: first, negative samples ṽ_t^i of the graph convolution network input are constructed and passed through the graph convolution network to obtain outputs Z̃, while the global feature s is extracted at the same time; the discriminator D is then trained to distinguish the negative-sample outputs Z̃ from the positive-sample outputs Z, using the discriminator loss function L_inf, expressed as L_inf = −(1/N) Σ_i [log D(z_i, s) + log(1 − D(z̃_i, s))]; through this training process, the extraction result of the graph convolution network is optimized;
and 5: and (3) carrying out track prediction: using long and short term memory network to characterize trajectory motion
Figure BDA0002844966000000035
And trajectory interaction features
Figure BDA0002844966000000036
Decoding is carried outOutputting a frame of two-dimensional pedestrian trajectory prediction, and determining whether the total output length reaches the prediction sequence length? If not, adding a new output frame into the input sequence, discarding the input of the first frame, if so, outputting the prediction sequence, thereby obtaining the prediction sequence with a certain time length and completing the track prediction.
Compared with the prior art, the invention achieves the technical effect of constructing the interaction pattern among pedestrians and carrying out trajectory prediction, and the prediction results are robust.
Drawings
FIG. 1 is an overall flow chart of a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle according to the present invention;
FIG. 2 is a schematic diagram of a model framework structure of an embodiment of a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle;
FIG. 3 is a schematic diagram of trajectory prediction in real scenes, in which the solid lines are observed historical trajectories, the dark dotted lines are actual future trajectories, and the light dotted lines are predicted future trajectories. In panel (a), two pedestrians on the right walk from right to left and one pedestrian on the left walks from left to right; in panel (b), three pedestrians walk from right to left. The predicted (light dotted) trajectories can be seen to basically coincide with the actual future trajectories, indicating good prediction performance.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings.
The overall idea of the invention is to predict pedestrian trajectories from the top-view pedestrian video obtained by UAV aerial photography.
As shown in fig. 1, the method mainly comprises the following steps:
step 1: carrying out pedestrian track preprocessing in the pedestrian video aerial photographed by the unmanned aerial vehicle: the pedestrian video that unmanned aerial vehicle was taken photo by plane contains the pedestrian of a plurality of overlooking visual angles, uses existing target detection and target tracking method to fix a position the pedestrian fast, promptly: taking the central position of the target frame as the position of the pedestrian, and setting the track coordinates X of all observed pedestrians as X1,X2,…,XnExtracting two-dimensional position sequence of pedestrian, setting input sequence and prediction sequence length;
Step 2: perform pedestrian trajectory encoding: the relative position change of a single pedestrian trajectory between consecutive frames is expressed as Δx_t^i = x_t^i − x_{t−1}^i; the displacements are encoded into fixed-length motion vectors e_t^i using a long short-term memory (LSTM) network, and the motion vectors are then encoded by the LSTM to obtain the trajectory motion feature m^i;
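The displacement computation and LSTM encoding of this step can be sketched as follows. This is a minimal numpy forward pass with random, untrained weights, purely to show the shapes involved; the symbols and dimensions are illustrative, not taken from the patent:

```python
import numpy as np

def displacements(track):
    """Relative position change of a trajectory between consecutive frames."""
    track = np.asarray(track, dtype=float)
    return track[1:] - track[:-1]                 # (T-1, 2)

def lstm_encode(seq, W, U, b):
    """Minimal LSTM forward pass; the final hidden state serves as the
    fixed-length motion feature. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in seq:
        z = W @ x + U @ h + b
        i, f, o = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g                         # cell state update
        h = o * np.tanh(c)                        # hidden state update
    return h

rng = np.random.default_rng(0)
D, H = 2, 8                                       # input dim, hidden size
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
track = [(0, 0), (1, 0), (2, 1), (3, 1)]
d = displacements(track)                          # frame-to-frame changes
m = lstm_encode(d, W, U, b)                       # motion feature, shape (8,)
```

A real implementation would use a trained recurrent network (e.g. a deep-learning framework's LSTM module) rather than random weights; the point here is only the sequence-to-fixed-vector shape of the computation.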
Step 3: construct the graph convolution network interaction model: a graph structure G_t = (V_t, E_t) is used to build the interaction model among pedestrians at time t, with the pedestrians as the set V_t of vertices in the graph structure and the interaction relations among pedestrians as the set E_t of edges; at each time point the connection relation E_t of the vertices V_t is expressed as an adjacency matrix A_t, and each edge a_t^{ij} of A_t is assigned a weight w_t^{ij} according to the inter-pedestrian distance, e.g. the inverse Euclidean distance w_t^{ij} = 1/‖X_t^i − X_t^j‖_2 for i ≠ j and w_t^{ii} = 0; the trajectory motion feature m^i is taken as the input feature v_t^i of vertex i in the graph convolution network; two graph convolution layers are stacked, and the two-layer GCN structure yields the output feature o^i of the i-th trajectory; the GCN output features o^i are encoded by the long short-term memory network to obtain the trajectory interaction feature g^i;
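A minimal sketch of this step, assuming inverse-distance edge weights and symmetrically normalized graph convolutions; both are common choices rather than the patent's exact formulas, and the weights here are random and untrained:

```python
import numpy as np

def distance_adjacency(positions, eps=1e-6):
    """Adjacency A_t whose edge weights shrink with inter-pedestrian
    distance (inverse Euclidean distance here, one plausible choice)."""
    P = np.asarray(positions, dtype=float)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    A = 1.0 / (d + eps)
    np.fill_diagonal(A, 0.0)                      # no self-edges
    return A

def gcn_layer(A, X, W):
    """One graph convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    d_inv = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv[:, None] * d_inv[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(1)
pos = [(0, 0), (1, 0), (5, 5)]                    # pedestrian coordinates at time t
X = rng.normal(size=(3, 8))                       # per-vertex motion features
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
A = distance_adjacency(pos)
Z = gcn_layer(A, gcn_layer(A, X, W1), W2)         # two stacked GCN layers
```

Inverse-distance weighting makes nearby pedestrians influence each other more strongly than distant ones, which matches the intent of weighting edges "according to different distances".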
Step 4: in order to enable the graph convolution network to construct a good pedestrian trajectory interaction relationship, the maximum-mutual-information method is used to maximize the mutual information between the local features and the global features of the trajectory interaction features, i.e. maximum mutual information optimization. The specific process is as follows: first, negative samples ṽ_t^i of the graph convolution network input are constructed and passed through the graph convolution network to obtain outputs Z̃, while the global feature s is extracted at the same time; the discriminator D is then trained to distinguish the negative-sample outputs Z̃ from the positive-sample outputs Z, using the discriminator loss function L_inf, expressed as L_inf = −(1/N) Σ_i [log D(z_i, s) + log(1 − D(z̃_i, s))]; through this training process, the extraction result of the graph convolution network is optimized;
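This step resembles the Deep Graph Infomax objective; the sketch below works under that assumption, with a bilinear discriminator and a mean readout for the global feature (all illustrative choices, not confirmed details of the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infomax_loss(Z_pos, Z_neg, Wd):
    """Binary cross-entropy discriminator loss in the Deep-Graph-Infomax
    style: the global feature g is a mean readout of the positive vertex
    outputs, and D(z, g) = sigmoid(z^T Wd g) should score positive local
    features high and negative (corrupted-input) ones low."""
    g = Z_pos.mean(axis=0)                        # global feature readout
    s_pos = sigmoid(Z_pos @ Wd @ g)               # discriminator on positives
    s_neg = sigmoid(Z_neg @ Wd @ g)               # discriminator on negatives
    return -(np.log(s_pos + 1e-9).mean() + np.log(1.0 - s_neg + 1e-9).mean())

rng = np.random.default_rng(2)
Z_pos = rng.normal(size=(3, 8))                   # GCN outputs on true inputs
Z_neg = rng.normal(size=(3, 8))                   # GCN outputs on corrupted inputs
Wd = rng.normal(size=(8, 8))                      # bilinear discriminator weights
L_inf = infomax_loss(Z_pos, Z_neg, Wd)            # scalar discriminator loss
```

Minimizing this loss with respect to both the discriminator and the GCN pushes the local vertex features to share information with the global summary, which is the stated goal of the maximum-mutual-information optimization.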
Step 5: perform trajectory prediction: the trajectory motion feature m^i and the trajectory interaction feature g^i are decoded using the long short-term memory network, and one frame of two-dimensional pedestrian trajectory prediction is output at a time; it is then determined whether the total output length has reached the prediction-sequence length: if not, the new output frame is added to the input sequence and the first frame of the input is discarded; if so, the prediction sequence is output. A prediction sequence of a certain duration is thereby obtained and the trajectory prediction is completed.
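The sliding-window decoding loop of this step can be sketched independently of any particular decoder; here a stand-in constant-velocity step replaces the LSTM decoder purely to exercise the loop logic:

```python
import numpy as np

def rollout(step_fn, obs_seq, pred_len):
    """Autoregressive decoding: emit one predicted frame at a time,
    append it to the input window and drop the oldest frame, until the
    output reaches the prediction-sequence length."""
    window = list(obs_seq)
    out = []
    while len(out) < pred_len:
        nxt = step_fn(np.asarray(window))         # one frame of 2-D prediction
        out.append(nxt)
        window = window[1:] + [nxt]               # slide the input window
    return np.asarray(out)

# Stand-in decoder: continue with the last observed displacement.
def constant_velocity_step(window):
    return window[-1] + (window[-1] - window[-2])

obs = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])]
pred = rollout(constant_velocity_step, obs, pred_len=4)   # shape (4, 2)
```

In the patent's model, `step_fn` would be the trained LSTM decoder conditioned on the motion and interaction features; the window-slide-and-append control flow is the same.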

Claims (1)

1. A pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle is characterized by specifically comprising the following steps:
step 1: carry out pedestrian's orbit preliminary treatment in the pedestrian video of unmanned aerial vehicle aerial photography, including fixing a position the pedestrian fast, the central point who gets the target frame promptly is pedestrian's position, establishes all orbit coordinates X of surveing the pedestrian and is X ═ X1,X2,…,Xn
Step 2: perform pedestrian trajectory encoding: the relative position change of a single pedestrian trajectory between consecutive frames is expressed as Δx_t^i = x_t^i − x_{t−1}^i; the displacements are encoded into fixed-length motion vectors e_t^i using a long short-term memory (LSTM) network, and the motion vectors are then encoded by the LSTM to obtain the trajectory motion feature m^i;
Step 3: construct the graph convolution network interaction: a graph structure G_t = (V_t, E_t) is used to build the interaction model among pedestrians at time t, with the pedestrians as the set V_t of vertices in the graph structure and the interaction relations among pedestrians as the set E_t of edges; at each time point the connection relation E_t of the vertices V_t is expressed as an adjacency matrix A_t, and each edge a_t^{ij} of A_t is assigned a weight w_t^{ij} according to the inter-pedestrian distance, e.g. the inverse Euclidean distance w_t^{ij} = 1/‖X_t^i − X_t^j‖_2 for i ≠ j and w_t^{ii} = 0; the trajectory motion feature m^i is taken as the input feature v_t^i of vertex i in the graph convolution network; two graph convolution layers are stacked, and the two-layer GCN structure yields the output feature o^i of the i-th trajectory; the GCN output features o^i are encoded by the long short-term memory network to obtain the trajectory interaction feature g^i;
Step 4: maximize the mutual information between the local features and the global features of the trajectory interaction features. The specific process is as follows: first, negative samples ṽ_t^i of the graph convolution network input are constructed and passed through the graph convolution network to obtain outputs Z̃, while the global feature s is extracted at the same time; the discriminator D is then trained to distinguish the negative-sample outputs Z̃ from the positive-sample outputs Z, using the discriminator loss function L_inf, expressed as L_inf = −(1/N) Σ_i [log D(z_i, s) + log(1 − D(z̃_i, s))]; through this training process, the extraction result of the graph convolution network is optimized;
Step 5: perform trajectory prediction: the trajectory motion feature m^i and the trajectory interaction feature g^i are decoded using the long short-term memory network, and one frame of two-dimensional pedestrian trajectory prediction is output at a time; it is then determined whether the total output length has reached the prediction-sequence length: if not, the new output frame is added to the input sequence and the first frame of the input is discarded; if so, the prediction sequence is output. A prediction sequence of a certain duration is thereby obtained and the trajectory prediction is completed.
CN202011505987.4A 2020-12-18 2020-12-18 Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle Pending CN112541449A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011505987.4A CN112541449A (en) 2020-12-18 2020-12-18 Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011505987.4A CN112541449A (en) 2020-12-18 2020-12-18 Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle

Publications (1)

Publication Number Publication Date
CN112541449A true CN112541449A (en) 2021-03-23

Family

ID=75019153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011505987.4A Pending CN112541449A (en) 2020-12-18 2020-12-18 Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle

Country Status (1)

Country Link
CN (1) CN112541449A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269054A (en) * 2021-04-30 2021-08-17 重庆邮电大学 Aerial video analysis method based on space-time 2D convolutional neural network
CN113362367A (en) * 2021-07-26 2021-09-07 北京邮电大学 Crowd trajectory prediction method based on multi-precision interaction
CN113435356A (en) * 2021-06-30 2021-09-24 吉林大学 Track prediction method for overcoming observation noise and perception uncertainty
CN114827750A (en) * 2022-05-31 2022-07-29 脸萌有限公司 Method, device and equipment for predicting visual angle and storage medium
CN114861554A (en) * 2022-06-02 2022-08-05 广东工业大学 Unmanned ship target track prediction method based on collective filtering
CN116612493A (en) * 2023-04-28 2023-08-18 深圳先进技术研究院 Pedestrian geographic track extraction method and device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564118A (en) * 2018-03-30 2018-09-21 陕西师范大学 Crowd scene pedestrian track prediction technique based on social affinity shot and long term memory network model
CN110660082A (en) * 2019-09-25 2020-01-07 西南交通大学 Target tracking method based on graph convolution and trajectory convolution network learning
CN111161322A (en) * 2019-12-31 2020-05-15 大连理工大学 LSTM neural network pedestrian trajectory prediction method based on human-vehicle interaction
CN111339867A (en) * 2020-02-18 2020-06-26 广东工业大学 Pedestrian trajectory prediction method based on generation of countermeasure network
CN111339449A (en) * 2020-03-24 2020-06-26 青岛大学 User motion trajectory prediction method, device, equipment and storage medium
CN111401233A (en) * 2020-03-13 2020-07-10 商汤集团有限公司 Trajectory prediction method, apparatus, electronic device, and medium
CN111428763A (en) * 2020-03-17 2020-07-17 陕西师范大学 Pedestrian trajectory prediction method based on scene constraint GAN
CN111488815A (en) * 2020-04-07 2020-08-04 中山大学 Basketball game goal event prediction method based on graph convolution network and long-time and short-time memory network
CN111612206A (en) * 2020-03-30 2020-09-01 清华大学 Street pedestrian flow prediction method and system based on space-time graph convolutional neural network
CN111626198A (en) * 2020-05-27 2020-09-04 多伦科技股份有限公司 Pedestrian motion detection method based on Body Pix in automatic driving scene
CN111931905A (en) * 2020-07-13 2020-11-13 江苏大学 Graph convolution neural network model and vehicle track prediction method using same

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564118A (en) * 2018-03-30 2018-09-21 陕西师范大学 Crowd scene pedestrian track prediction technique based on social affinity shot and long term memory network model
CN110660082A (en) * 2019-09-25 2020-01-07 西南交通大学 Target tracking method based on graph convolution and trajectory convolution network learning
CN111161322A (en) * 2019-12-31 2020-05-15 大连理工大学 LSTM neural network pedestrian trajectory prediction method based on human-vehicle interaction
CN111339867A (en) * 2020-02-18 2020-06-26 广东工业大学 Pedestrian trajectory prediction method based on generation of countermeasure network
CN111401233A (en) * 2020-03-13 2020-07-10 商汤集团有限公司 Trajectory prediction method, apparatus, electronic device, and medium
CN111428763A (en) * 2020-03-17 2020-07-17 陕西师范大学 Pedestrian trajectory prediction method based on scene constraint GAN
CN111339449A (en) * 2020-03-24 2020-06-26 青岛大学 User motion trajectory prediction method, device, equipment and storage medium
CN111612206A (en) * 2020-03-30 2020-09-01 清华大学 Street pedestrian flow prediction method and system based on space-time graph convolutional neural network
CN111488815A (en) * 2020-04-07 2020-08-04 中山大学 Basketball game goal event prediction method based on graph convolution network and long-time and short-time memory network
CN111626198A (en) * 2020-05-27 2020-09-04 多伦科技股份有限公司 Pedestrian motion detection method based on Body Pix in automatic driving scene
CN111931905A (en) * 2020-07-13 2020-11-13 江苏大学 Graph convolution neural network model and vehicle track prediction method using same

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
DAOGUANG LIU等: "A Method For Short-Term Traffic Flow Forecasting Based On GCN-LSTM", 《2020 INTERNATIONAL CONFERENCE ON COMPUTER VISION, IMAGE AND DEEP LEARNING (CVIDL)》 *
FRANCO SCARSELLI等: "The Graph Neural Network Model", 《IEEE TRANSACTIONS ON NEURAL NETWORKS》 *
HAO XUE等: "A Location-Velocity-Temporal Attention LSTM Model for Pedestrian Trajectory Prediction", 《IEEEACCESS》 *
IAN J. GOODFELLOW等: "Generative Adversarial Nets", 《ARXIV:1406.2661V1 [STAT.ML]》 *
LEGOLAS~: "Understanding the loss function of generative adversarial networks" (in Chinese), 《CSDN》 *
YINGFAN HUANG等: "STGAT: Modeling Spatial-Temporal Interactions for Human Trajectory Prediction", 《2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION》 *
ZHISHUAI LI等: "A Hybrid Deep Learning Approach with GCN and LSTM for Traffic Flow Prediction", 《2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC)》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269054A (en) * 2021-04-30 2021-08-17 重庆邮电大学 Aerial video analysis method based on space-time 2D convolutional neural network
CN113269054B (en) * 2021-04-30 2022-06-10 重庆邮电大学 Aerial video analysis method based on space-time 2D convolutional neural network
CN113435356A (en) * 2021-06-30 2021-09-24 吉林大学 Track prediction method for overcoming observation noise and perception uncertainty
CN113362367A (en) * 2021-07-26 2021-09-07 北京邮电大学 Crowd trajectory prediction method based on multi-precision interaction
CN113362367B (en) * 2021-07-26 2021-12-14 北京邮电大学 Crowd trajectory prediction method based on multi-precision interaction
CN114827750A (en) * 2022-05-31 2022-07-29 脸萌有限公司 Method, device and equipment for predicting visual angle and storage medium
CN114827750B (en) * 2022-05-31 2023-12-22 脸萌有限公司 Viewing angle prediction method, device, equipment and storage medium
CN114861554A (en) * 2022-06-02 2022-08-05 广东工业大学 Unmanned ship target track prediction method based on collective filtering
CN116612493A (en) * 2023-04-28 2023-08-18 深圳先进技术研究院 Pedestrian geographic track extraction method and device

Similar Documents

Publication Publication Date Title
CN112541449A (en) Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle
CN110956651B (en) Terrain semantic perception method based on fusion of vision and vibrotactile sense
US11017550B2 (en) End-to-end tracking of objects
US11860629B2 (en) Sparse convolutional neural networks
Bhattacharyya et al. Long-term on-board prediction of people in traffic scenes under uncertainty
Ridel et al. Scene compliant trajectory forecast with agent-centric spatio-temporal grids
Yudin et al. Object detection with deep neural networks for reinforcement learning in the task of autonomous vehicles path planning at the intersection
US11731663B2 (en) Systems and methods for actor motion forecasting within a surrounding environment of an autonomous vehicle
Sales et al. Adaptive finite state machine based visual autonomous navigation system
CN110986945B (en) Local navigation method and system based on semantic altitude map
JP2020123346A (en) Method and device for performing seamless parameter switching by using location based algorithm selection to achieve optimized autonomous driving in each of regions
CN115861383A (en) Pedestrian trajectory prediction device and method based on multi-information fusion in crowded space
Yang et al. PTPGC: Pedestrian trajectory prediction by graph attention network with ConvLSTM
US20230267615A1 (en) Systems and methods for generating a road surface semantic segmentation map from a sequence of point clouds
CN115272712A (en) Pedestrian trajectory prediction method fusing moving target analysis
Zhong et al. Behavior prediction for unmanned driving based on dual fusions of feature and decision
Karpyshev et al. Mucaslam: Cnn-based frame quality assessment for mobile robot with omnidirectional visual slam
Roth et al. Viplanner: Visual semantic imperative learning for local navigation
Xu et al. Trajectory prediction for autonomous driving with topometric map
Fu et al. Decision making for autonomous driving via multimodal transformer and deep reinforcement learning
Khalil et al. Integration of motion prediction with end-to-end latent RL for self-driving vehicles
CN114723782A (en) Traffic scene moving object perception method based on different-pattern image learning
Postnikov et al. Conditioned human trajectory prediction using iterative attention blocks
Wang et al. Enhancing mapless trajectory prediction through knowledge distillation
CN115018883A (en) Transmission line unmanned aerial vehicle infrared autonomous inspection method based on optical flow and Kalman filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210323