CN112465273A - Unmanned vehicle track prediction method based on local attention mechanism - Google Patents

Unmanned vehicle track prediction method based on local attention mechanism

Info

Publication number
CN112465273A
CN112465273A
Authority
CN
China
Prior art keywords
vehicle
unmanned vehicle
track
vehicles
unmanned
Prior art date
Legal status
Granted
Application number
CN202011560297.9A
Other languages
Chinese (zh)
Other versions
CN112465273B (en)
Inventor
杨正才
石川
周奎
姚胜华
张友兵
尹长城
冯樱
刘成武
Current Assignee
Hubei University of Automotive Technology
Original Assignee
Hubei University of Automotive Technology
Priority date
Filing date
Publication date
Application filed by Hubei University of Automotive Technology filed Critical Hubei University of Automotive Technology
Priority to CN202011560297.9A priority Critical patent/CN112465273B/en
Publication of CN112465273A publication Critical patent/CN112465273A/en
Application granted granted Critical
Publication of CN112465273B publication Critical patent/CN112465273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Development Economics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Game Theory and Decision Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an unmanned vehicle track prediction method based on a local attention mechanism. The method takes the historical tracks of the vehicles around an unmanned vehicle as input and fully considers the influence of the interaction between the unmanned vehicle and adjacent vehicles on the future track of the unmanned vehicle. Spatial interaction between the unmanned vehicle and adjacent vehicles is constructed from the geometric structure of the road and the geometry of the vehicles; the subset of vehicles most strongly correlated with the future track of the unmanned vehicle is estimated through a local attention mechanism, the correlation between those vehicles and the unmanned vehicle is calculated, and temporal interaction is constructed by a weighted summation of the correlations. The temporal and spatial interactions between the unmanned vehicle and the surrounding vehicles at the current moment are combined, and the combined interaction features are input into a decoder followed by a fully connected layer to obtain the track distribution and track coordinates of the unmanned vehicle over a future period. During training, the loss is calculated with a negative log-likelihood loss function and the parameters are updated by back-propagating the loss; the trained model predicts the track of the unmanned vehicle over a future period to assist subsequent decision-making and planning.

Description

Unmanned vehicle track prediction method based on local attention mechanism
Technical Field
The invention belongs to the field of intelligent driving, and particularly relates to an unmanned vehicle track prediction method based on a local attention mechanism.
Background
In recent years, with the rise of intelligent driving, artificial intelligence technology has been used increasingly in automobiles, especially in vehicles developed for fully unmanned driving. Track prediction, which forecasts the position of a vehicle at the next instant, is the basis of unmanned driving: subsequent actions can be carried out safely only if the future position of the vehicle is predicted correctly. For example, if it is predicted that the vehicle is about to leave its current lane, whether this action will cause danger can be judged in advance, and if so, an intervention can be made in advance.
Current vehicle trajectory prediction technologies can be divided into model-based methods and neural-network-based methods. Model-based methods include those based on a dynamic model, Kalman filtering, and the like; these methods have been shown to achieve high prediction accuracy only over short horizons, and once the prediction time span increases, their accuracy drops sharply. Neural-network-based methods such as RNN and LSTM address this loss of accuracy over long horizons: by fully mining the nonlinear relations in the historical information, they can still maintain satisfactory prediction accuracy over a long time span.
However, current neural-network-based prediction algorithms only roughly consider the historical tracks at all moments and use them to generate a single hidden state vector at the last moment. This vector can hardly be guaranteed to contain the important information of all historical moments, so important content in the historical information is inevitably lost, and the finally predicted track often deviates greatly from the actual track of the vehicle. Therefore, the historical information with the largest influence on the current track prediction must be extracted, and historical information with smaller influence ignored. When a vehicle changes lanes, a human driver mainly observes the vehicles ahead of and behind the target lane to decide when the lane can be changed; this amounts to giving less weight to the vehicles in the current lane rather than equal weight to all vehicles. By computing correlation scores between the historical information of other vehicles and that of the unmanned vehicle, the vehicle track information most relevant to the unmanned vehicle's motion at each moment is extracted and input into the neural model for track prediction, which saves computation and improves the accuracy of track prediction.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to solve the problem that, in the current field of track prediction, most methods only roughly consider the historical track information of surrounding vehicles at all moments, the interaction between surrounding vehicles and the unmanned vehicle is insufficiently mined, and the track prediction accuracy is therefore low.
In order to solve the problems, the invention adopts the technical scheme that the unmanned vehicle track prediction method based on the local attention mechanism comprises the following steps:
Step 1: acquiring historical motion track information of the unmanned vehicle and surrounding vehicles, and preprocessing the track information;
Step 2: constructing an environment tensor of the potential circle of the unmanned vehicle, and extracting the spatial interaction between vehicles;
Step 3: extracting the temporal interaction between vehicles by using a local attention mechanism model;
Step 4: predicting the future track of the unmanned vehicle by training an LSTM model with an Encoder-Decoder structure.
further, in the step 1, historical movement track information of the vehicle and surrounding vehicles is collected, and the track information is preprocessed, and the specific method includes:
the method comprises the steps of intercepting a video of a recording camera into pictures, calibrating each picture, detecting a vehicle in each picture by using a target detection algorithm, recording the geometric center position of the corresponding vehicle as the position coordinate of the current moment, giving ID numbers corresponding to the vehicle, a lane where the vehicle is located and the current frame to obtain historical track information of the vehicle, taking the frame number in the track information as a timestamp index, filtering and smoothing the coordinates, arranging processed data according to the ascending order of the timestamp, and dividing the data into a training set, a verification set and a test set according to the ratio of 7:1:2, thereby obtaining a data set for model training and verification.
Further, in step 2, the potential circle of the unmanned vehicle is constructed, and an empty tensor grid corresponding to the potential circle is built by jointly considering the vehicle length and the road width. For every surrounding vehicle at the current moment, it is judged whether the vehicle lies within the potential-circle range of the unmanned vehicle; if so, the hidden state vector obtained at the last moment of its historical observation is filled into the tensor grid cell corresponding to its latest position. Once filled, this grid is the environment tensor of the potential circle of the unmanned vehicle, from which the spatial interaction information between vehicles at the current moment is extracted.
Further, in step 3, the local attention mechanism model is used to extract the temporal interaction vector between the surrounding vehicles and the unmanned vehicle, calculated as follows:
The hidden state vector of the unmanned vehicle at the current moment is used to compute the centre position p_t of a small window; the window range [p_t - D, p_t + D] is determined from this centre, and within it the correlation between the hidden state vector of each surrounding vehicle at the current moment and the hidden state vector of the unmanned vehicle is calculated; the correlations and the corresponding hidden state vectors of the surrounding vehicles are then weighted and averaged to obtain the temporal interaction information c_t between the vehicles at the current moment.
Further, in step 4, the specific steps of predicting the future track of the unmanned vehicle through an LSTM model with an Encoder-Decoder structure are as follows:
(1) According to the historical track of the vehicle obtained in step 1, the track coordinates over the whole observation period are input into a fully connected layer to obtain the word embedding vectors e of the track coordinates at all moments;
(2) The word embedding vector e_t of the track coordinate at the current moment and the encoder hidden state vector h_(t-1) of the previous moment are input into the LSTM encoder to obtain the encoder hidden state vector h_t at the current moment; in the same way, the encoder hidden state vectors of the track coordinates at all moments can be obtained;
(3) The encoder hidden state vectors of all vehicles at the current moment are substituted into the corresponding positions of the empty tensor grid of the potential circle constructed in step 2, giving the environment tensor of the potential circle of the unmanned vehicle;
(4) According to step 3, the hidden state vector of the unmanned vehicle at the current moment is used to compute the window centre position, the range of surrounding vehicles most correlated with the unmanned vehicle is determined, the correlations with the hidden state vector of the unmanned vehicle are calculated one by one, and the temporal interaction vector is obtained by weighted averaging;
(5) The spatial and temporal interaction vectors obtained in steps 2 and 3 are concatenated into the combined interaction feature z_t; z_t and the decoder hidden state vector s_(t-1) of the previous moment are input into the LSTM decoder to obtain the decoder hidden state vector s_t of the unmanned vehicle at the current moment;
(6) The decoder hidden state vector s_t of the unmanned vehicle at the current moment is passed through a fully connected layer to obtain the probability distribution parameters of the predicted track at t+1, from which the predicted track coordinate of the unmanned vehicle at t+1 is obtained.
further, in step 4, training an LSTM model of an Encoder-Decoder structure, training the model to minimize a negative log-likelihood loss function as a target, performing back propagation according to an error of the loss function with respect to a process weight parameter, updating the process weight parameter by using a gradient descent algorithm, and storing the model weight parameter when the generalization capability of the trajectory prediction model is the best, thereby completing model training.
The invention provides an unmanned vehicle track prediction method based on a local attention mechanism that jointly considers the interaction of the unmanned vehicle and surrounding vehicles in spatial position and the dependency on the time series of the surrounding vehicles, and predicts the future track of the unmanned vehicle with an LSTM network of Encoder-Decoder structure, starting from improving the anticipation capability of the vehicle. The historical track information of the vehicles is acquired by a target recognition algorithm and comprises the vehicle ID, the ID of the lane in which the vehicle is located, the acquisition time ID, and the geometric-centre coordinates of the vehicle in the picture frame. A vehicle spatial interaction tensor corresponding to the potential circle of the unmanned vehicle is constructed from the road structure and the vehicle geometry; the historical track information is encoded, and the encoded hidden state vectors of the vehicles inside the potential circle are filled into the corresponding positions of the tensor, fully accounting for the road structure and the positional interaction of the vehicles. Based on the local attention mechanism, the subset of surrounding vehicles whose tracks are most correlated with the track of the unmanned vehicle is found, the correlations between these vehicles and the unmanned vehicle are calculated, and their weighted sum gives the temporal interaction information between vehicles. The spatial and temporal interactions are combined, the combined interaction information is input into the decoder, and a fully connected layer then outputs the track distribution and track coordinates of the unmanned vehicle over a future period.
The loss error of each training iteration is calculated through the negative log-likelihood loss function; the error is back-propagated for derivation, and the parameters are updated by gradient descent to accelerate the convergence of model training. The finally trained model generalizes well and maintains good prediction accuracy on different data sets, providing a basis for the subsequent decision-making of the intelligent driving automobile so that it can run more reliably in complex traffic scenes.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a diagram of a preprocessed data format;
FIG. 3 is a schematic diagram of the construction of the potential circle tensor of the unmanned vehicle;
FIG. 4 is a schematic diagram of a local attention mechanism extraction vehicle time interaction.
Detailed Description
The technical solutions of the present invention are further described below with reference to the accompanying drawings and specific embodiments, which are used only for facilitating the detailed understanding of the present invention by those skilled in the art, and are not intended to limit the scope of the present invention, and various modifications of equivalent forms of the present invention by those skilled in the art are included in the scope of the present invention defined by the appended claims.
A method for predicting the track of an unmanned vehicle based on a local attention mechanism: during the running of the intelligent driving vehicle, the running track of the unmanned vehicle over a future period is predicted from the historical running tracks of the unmanned vehicle and the surrounding vehicles, providing sufficient information for the subsequent planning and decision-making of the vehicle and effectively avoiding traffic accidents caused by lane deviation. As shown in FIG. 1, the vehicle track prediction method comprises: preprocessing of the vehicle track information, encoding of the track information, construction of the spatial interaction vector between the unmanned vehicle and the surrounding vehicles, construction of the temporal interaction vector between the unmanned vehicle and the surrounding vehicles through a local attention mechanism, track prediction output, and derivation and optimization of the model process parameters.
The method comprises the following specific implementation processes:
A. Preprocessing the acquired data
A1. The information of the test vehicles on a section of road is recorded with a camera; the initial data format is a video file containing the vehicle information. The video file is cut into pictures at a sampling frequency of 10 Hz; after each picture is calibrated, the vehicles in the image are detected with a target detection algorithm and their geometric centre positions are determined, extracting the track information of each vehicle, namely its local coordinates (x, y) at each moment, together with the Frame_ID of the current acquisition moment, the corresponding vehicle number Vehicle_ID, and the lane number Lane_ID in which the vehicle is located;
A2. The data at this point are in ".csv" format; they are loaded into DataFrame format with the pandas library and smoothed with a Savitzky-Golay filter;
A3. The DataFrame is rearranged into 5 columns: the first column is the vehicle number Vehicle_ID, the second the Frame_ID of the current acquisition moment, the third and fourth the abscissa x and ordinate y of the vehicle's local coordinate system, and the fifth the lane number Lane_ID in which the vehicle is located. Finally, with Frame_ID as the timestamp, the processed data are arranged in ascending order; the final data format is shown in FIG. 2;
B. Encoding the input data
B1. The historical track of vehicle i at moment t is given as

X^i = { (x_τ^i, y_τ^i) | τ = t - T_obs + 1, …, t },

where (x_τ^i, y_τ^i) is the position of the i-th surrounding vehicle at moment τ, the observation length of the historical track of the vehicle is T_obs, and the historical observation range is the driving track from moment t - T_obs + 1 to the current moment t;
B2. All position coordinates of the i-th vehicle within the historical observation length at the current moment t are mapped to the corresponding word embedding vectors through a fully connected layer, namely

e_t^i = φ( x_t^i, y_t^i ; W_emb ),

where φ is the fully connected function and W_emb is the weight of the fully connected layer. Similarly, the word embedding vectors corresponding to all position coordinates of the i-th vehicle over the whole historical observation length can be obtained:

E^i = { e_(t-T_obs+1)^i, …, e_t^i }.
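The fully connected embedding of B2 can be sketched in a few lines of numpy. The embedding width and the tanh activation are assumptions (the patent fixes neither), and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 32   # assumed embedding width

W_emb = rng.normal(0.0, 0.1, size=(EMB_DIM, 2))   # fully connected weight W_emb
b_emb = np.zeros(EMB_DIM)

def embed(coords):
    # coords: (T_obs, 2) array of (x, y) positions over the observation window.
    # Returns the word embedding vectors e_tau = phi(x, y; W_emb) for all tau.
    return np.tanh(np.asarray(coords) @ W_emb.T + b_emb)  # tanh assumed for phi

E_i = embed(rng.normal(size=(16, 2)))   # embeddings for a 16-step history
```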
b3 word embedding vector of ith vehicle at current time t
Figure DEST_PATH_IMAGE028
And the last moment
Figure 32350DEST_PATH_IMAGE029
Temporal encoder implicit state vector
Figure DEST_PATH_IMAGE030
The coded implicit state vector of the ith vehicle around the unmanned vehicle at the current time t is obtained through the coding of an LSTM coder
Figure 635369DEST_PATH_IMAGE031
Namely:
Figure DEST_PATH_IMAGE032
wherein the content of the first and second substances,
Figure 341157DEST_PATH_IMAGE033
the LSTM encoder is responsible for encoding the track information of each vehicle into an implicit state vector,
Figure DEST_PATH_IMAGE034
is the weight of the encoder;
B4. The same word embedding and encoding operations are performed on the position coordinates of every vehicle at all moments to obtain the hidden state vectors of all vehicles, where h_t^i and h_t^ego denote the encoded hidden state vectors of the i-th surrounding vehicle and of the unmanned vehicle at the current moment t, respectively. Similarly, the encoder state vectors corresponding to all position coordinates of the i-th vehicle and of the unmanned vehicle over the whole historical observation length can be obtained:

{ h_(t-T_obs+1)^i, …, h_t^i }.
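The encoding of B3-B4 can be sketched as a plain LSTM cell unrolled over the embedding sequence; this is a generic LSTM in numpy, not the patent's trained weights, and the sizes are assumed for illustration.

```python
import numpy as np

def lstm_step(e_t, h_prev, c_prev, W, U, b):
    # One LSTM encoder step: input/forget/output gates and candidate cell are
    # computed from the current embedding e_t and the previous hidden state.
    H = h_prev.shape[0]
    z = W @ e_t + U @ h_prev + b
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o, g = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H]), np.tanh(z[3*H:])
    c_t = f * c_prev + i * g
    h_t = o * np.tanh(c_t)
    return h_t, c_t

def encode(E, W, U, b, H):
    # Runs the encoder over the whole embedding sequence E (T_obs, emb_dim)
    # and returns every hidden state h_tau, shape (T_obs, H).
    h, c = np.zeros(H), np.zeros(H)
    states = []
    for e_t in E:
        h, c = lstm_step(e_t, h, c, W, U, b)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(1)
H, D_EMB, T = 8, 4, 16   # assumed sizes
W = rng.normal(0, 0.1, (4 * H, D_EMB))
U = rng.normal(0, 0.1, (4 * H, H))
hs = encode(rng.normal(size=(T, D_EMB)), W, U, np.zeros(4 * H), H)
```

Keeping every h_tau (rather than only the last one) is exactly what the local attention mechanism in step 3 needs to score against.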
C. Extracting the interaction information between vehicles
C1. A convolution layer is used to extract the spatial interaction between the unmanned vehicle and the surrounding vehicles.
The historical tracks of all vehicles around the unmanned vehicle are input into the encoder of the LSTM model, and from the coordinate positions of all surrounding vehicles at moment t the encoded hidden state vectors h_t^i of all surrounding vehicles at the current moment t are obtained. The hidden state vectors of all surrounding vehicles at the current moment t are linearly transformed and a tensor is constructed:

H(m, n) = Σ_{i ∈ N_t} 1_{mn}[ x_t^i, y_t^i ] · λ( h_t^i ; W_λ ),

where λ is a linear transformation whose weight matrix W_λ is learned by back-propagation, and 1_{mn}[ x_t^i, y_t^i ] is an indicator function that determines whether the position coordinate (x_t^i, y_t^i) of the i-th surrounding vehicle at moment t falls within grid cell (m, n) of the potential circle of the unmanned vehicle at moment t, as shown in FIG. 3: the indicator function is 1 if and only if the vehicle i ∈ N_t, the set of vehicles around the unmanned vehicle at moment t, lies within the potential-circle range, and 0 otherwise.
The range of the potential circle of the unmanned vehicle is defined as a rectangular area with the coordinate of the unmanned vehicle at moment t as the central origin, whose lateral coordinate falls within the interval [-4.5 m, 4.5 m] and whose longitudinal coordinate falls within the interval [-20 m, 30 m]. Since the length of a vehicle is about 5.5 m and the width of a lane about 3 m, a tensor H of grid dimension [9, 3] is constructed; the tensor H built in this way well preserves the spatial position information among the vehicles on the road and the road structure information.
A convolution operation is performed on the tensor H centred on the unmanned vehicle's position; with the convolution filter acting on H, the interaction vector of the spatial position information between the unmanned vehicle and the surrounding vehicles at the current moment is obtained:

S_t = Conv( H ; W_conv ),

where W_conv is the filter of the current convolution operation;
C2. The local attention mechanism model is used to extract the temporal interaction between the unmanned vehicle and the surrounding vehicles.
A local window is set; the window is intended to contain, among the encoded hidden state vectors of the historical tracks of the surrounding vehicles at the current moment t, those vehicles with the highest correlation with the unmanned vehicle.
From the encoded hidden state vector h_t^ego of the unmanned vehicle at moment t, the centre position p_t of the local window is found:

p_t = S · sigmoid( v_p^T tanh( W_p h_t^ego ) ),

where S is the number of hidden state vectors of the surrounding vehicles at moment t, and v_p and W_p are parameter matrices learned by training.
As shown in FIG. 4, from the determined centre position p_t of the local window, the window range can be determined as [p_t - D, p_t + D]; that is, only the hidden state vectors h̄_s of the surrounding vehicles within the window at moment t are considered, where D is an integer whose value is determined according to the actual situation.
The hidden state vector h̄_s of each surrounding vehicle within the window at moment t is correlated one by one with the hidden state vector h_t^ego of the unmanned vehicle at moment t, scoring the correlation between each surrounding-vehicle hidden state vector in the window and the hidden state vector of the unmanned vehicle at moment t. The specific calculation formula is

α_t(s) = align( h_t^ego, h̄_s ) · exp( -(s - p_t)² / (2σ²) ),

where α_t(s) is the correlation score between the hidden state vector of the s-th surrounding vehicle within the window at moment t and the hidden state vector of the unmanned vehicle. To make the correlation score shrink as the distance from the centre position p_t grows, a Gaussian product factor with mean p_t and standard deviation σ = D/2 is appended after the sigmoid operation. Here the correlation evaluation function is

align( h_t^ego, h̄_s ) = softmax( (h_t^ego)^T W_a h̄_s ),

where W_a is an intermediate transition matrix that reconciles h_t^ego and h̄_s so that the matrix operation is well-formed.
All encoder hidden state vectors h̄_s of the surrounding vehicles within the window at moment t are scored one by one against the encoder hidden state vector h_t^ego of the unmanned vehicle at that moment; each correlation score is multiplied by the corresponding encoder hidden state vector and the weighted average is taken to obtain the temporal interaction vector between the unmanned vehicle and the surrounding vehicles at moment t, namely

c_t = Σ_s α_t(s) · h̄_s.
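The local attention computation — predicted window centre, Gaussian-weighted alignment scores, weighted average — can be sketched as below. This follows the Luong-style local attention the description outlines; the rounding of p_t to index the window and all parameter names are assumptions.

```python
import numpy as np

def local_attention(h_ego, h_bar, W_p, v_p, W_a, D=2):
    # h_ego: (H,) ego hidden state; h_bar: (S, H) surrounding-vehicle hidden
    # states. Returns the temporal interaction vector c_t.
    S = h_bar.shape[0]
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    p_t = S * sig(v_p @ np.tanh(W_p @ h_ego))        # predicted window centre
    lo = max(0, int(round(p_t)) - D)                 # window [p_t - D, p_t + D]
    hi = min(S, int(round(p_t)) + D + 1)
    idx = np.arange(lo, hi)
    sigma = D / 2.0
    raw = h_bar[idx] @ W_a @ h_ego                   # general alignment scores
    align = np.exp(raw - raw.max())
    align /= align.sum()                             # softmax over the window
    weights = align * np.exp(-((idx - p_t) ** 2) / (2 * sigma ** 2))  # Gaussian
    weights /= weights.sum()                         # weighted average
    return weights @ h_bar[idx]                      # c_t

rng = np.random.default_rng(2)
H, S = 8, 10
c_t = local_attention(rng.normal(size=H), rng.normal(size=(S, H)),
                      rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, H),
                      rng.normal(0, 0.1, (H, H)))
```

Restricting the sum to the window keeps the cost independent of the number of surrounding vehicles, which is the computational saving the background section argues for.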
C3. The spatial interaction features and the temporal interaction features of the surrounding vehicles extracted at moment t are concatenated to form the combined interaction feature of the unmanned vehicle and the surrounding vehicles at moment t:

z_t = concat( S_t, c_t ).
D. Decoder output
D1. The combined interaction feature z_t of the unmanned vehicle and the surrounding vehicles at moment t and the output s_(t-1) of the unmanned vehicle decoder at the previous moment are taken as the input of the LSTM decoder to obtain the hidden state vector s_t of the unmanned vehicle decoder at moment t, namely

s_t = LSTM( s_(t-1), z_t ; W_dec ),

where W_dec is the LSTM decoder weight obtained by network back-propagation;
E. Predicted trajectory output

E1, from the decoder hidden state vector $h_t^{dec}$ of the unmanned vehicle at time $t$, the probability distribution parameters $\Theta_{t+1}$ of the predicted track $Y$ at time $t+1$ can be obtained through a fully connected layer, i.e.

$\Theta_{t+1} = f_{fc}(h_t^{dec};\, W_{fc})$

where $f_{fc}$ is the fully-connected-layer function and $W_{fc}$ is the fully-connected-layer weight obtained through network training;

$\Theta_{t+1} = (\mu_{t+1},\, \sigma_{t+1},\, \rho_{t+1})$

where $\mu_{t+1} = (\mu_x, \mu_y)$ is the mean of the coordinate distribution of the track coordinate at time $t+1$ predicted by the unmanned vehicle at time $t$, $\sigma_{t+1} = (\sigma_x, \sigma_y)$ is the standard deviation of that coordinate distribution, and $\rho_{t+1}$ is its correlation coefficient. The track coordinate at time $t+1$ predicted by the unmanned vehicle at time $t$ is therefore the distribution mean:

$(x_{t+1},\, y_{t+1}) = (\mu_x,\, \mu_y)$
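Step E1 maps the decoder state to the five bivariate-Gaussian parameters through one fully connected layer. The sketch below assumes the common convention of an `exp` activation for the standard deviations and `tanh` for the correlation coefficient; the patent does not specify the activations, and the names (`gaussian_param_head`, `W_fc`, `b_fc`) are ours.

```python
import numpy as np

def gaussian_param_head(h_dec, W_fc, b_fc):
    """Map the decoder hidden state to the five parameters of the
    bivariate Gaussian over the t+1 position, returning both the
    parameters and the point prediction (the distribution mean).

    h_dec : (d,)   decoder hidden state at time t
    W_fc  : (5, d) fully-connected layer weight
    b_fc  : (5,)   fully-connected layer bias
    """
    raw = W_fc @ h_dec + b_fc
    mu_x, mu_y = raw[0], raw[1]
    sigma_x = np.exp(raw[2])              # std must be positive
    sigma_y = np.exp(raw[3])
    rho = np.tanh(raw[4])                 # correlation in (-1, 1)
    params = (mu_x, mu_y, sigma_x, sigma_y, rho)
    return params, np.array([mu_x, mu_y])
```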
F. Model process parameter derivation and optimization

F1, the data set is divided into a training set, a verification set and a test set in the ratio 7:1:2, and the trained model is continuously checked against the verification set to ensure that its performance is consistent across the training and verification sets;

F2, the negative log-likelihood loss function is minimized during training, and the model process weights and biases at its minimum are obtained by updating the parameters through back-propagation, the loss function being:

$\mathcal{L} = -\sum_{t} \log P\big(Y_{t+1} \mid \Theta_{t+1}\big)$
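The negative log-likelihood minimised in F2 has the standard bivariate-Gaussian form; the sketch below evaluates it for a single predicted step, with our own symbol names and a small `eps` added for numerical safety.

```python
import numpy as np

def bivariate_nll(x, y, mu_x, mu_y, sx, sy, rho, eps=1e-12):
    """Negative log-likelihood of the observed point (x, y) under the
    predicted bivariate Gaussian (mu_x, mu_y, sx, sy, rho)."""
    zx = (x - mu_x) / sx
    zy = (y - mu_y) / sy
    # Mahalanobis-style quadratic form of the correlated Gaussian
    z = zx ** 2 - 2 * rho * zx * zy + zy ** 2
    denom = 2 * np.pi * sx * sy * np.sqrt(1 - rho ** 2)
    log_p = -z / (2 * (1 - rho ** 2)) - np.log(denom + eps)
    return -log_p
```

In training, this per-step loss would be summed over the prediction horizon and averaged over the batch before back-propagation.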

Claims (6)

1. An unmanned vehicle track prediction method based on a local attention mechanism, characterized by comprising the following steps:
step 1: acquiring historical motion track information of the unmanned vehicle and surrounding vehicles, and preprocessing the track information;
step 2: constructing an environment tensor of the potential circle of the unmanned vehicle, and extracting the spatial interaction between vehicles;
step 3: extracting the temporal interaction between vehicles by using a Local Attention model;
step 4: predicting the future track of the unmanned vehicle by training an LSTM model with an Encoder-Decoder structure.
2. The unmanned vehicle track prediction method based on the local attention mechanism as claimed in claim 1, wherein in the step 1, the historical motion track information of the vehicle and the surrounding vehicles is collected and preprocessed as follows:
the recorded camera video is captured into pictures and each picture is calibrated; a target detection algorithm detects the vehicles in each picture, the geometric centre position of each vehicle is recorded as its position coordinate at the current moment, and ID numbers are assigned to the vehicle, the lane where it is located and the current frame, giving the historical track information of the vehicle; the frame number in the track information is taken as the timestamp index, the coordinates are filtered and smoothed, the processed data are arranged in ascending order of timestamp, and the data are divided into a training set, a verification set and a test set in the ratio 7:1:2, thereby obtaining the data set for model training and verification.
3. The unmanned vehicle trajectory prediction method based on the local attention mechanism as claimed in claim 1, wherein in the step 2, a potential circle of the unmanned vehicle is constructed, and an empty tensor grid corresponding to the potential circle is built by comprehensively considering the vehicle length and the road structure width; by judging whether each surrounding vehicle at the current moment lies within the potential circle range defined by the unmanned vehicle, the hidden state vector obtained at the last moment of the historical observation of each in-circle vehicle is filled into the tensor grid position corresponding to that vehicle's location; the environment tensor of the unmanned-vehicle potential circle is thereby constructed, and the spatial interaction information among the vehicles at the current moment is extracted through a convolutional layer, giving the spatial interaction vector $s_t$.
4. The unmanned vehicle trajectory prediction method based on the local attention mechanism as claimed in claim 1, wherein in the step 3, a Local Attention model is used to extract the temporal interaction vector between the surrounding vehicles and the unmanned vehicle, calculated as follows:
the hidden state vector of the unmanned vehicle at the current moment is used to solve the small-window centre position $p_t$; the window range is determined from this centre position, and within it the correlation $\alpha_{t,i}$ between the hidden state vector of each surrounding vehicle and that of the unmanned vehicle at the current moment is calculated; the calculated correlations and the corresponding surrounding-vehicle hidden state vectors are combined in a weighted average to obtain the temporal interaction vector $c_t$ between the vehicles at the current moment.
5. The unmanned vehicle trajectory prediction method based on the local attention mechanism as claimed in claim 1, wherein in the step 4, the specific steps of predicting the future trajectory of the unmanned vehicle through an LSTM model of an Encoder-Decoder structure are as follows:
(1) according to the vehicle historical tracks obtained in step 1, the track coordinates over the whole observation time are input into a fully connected layer to obtain the word embedding vectors $\phi_t$ of the track coordinates at all moments;
(2) the word embedding vector $\phi_t$ of the track coordinate at the current moment and the encoder hidden state vector $h_{t-1}$ of the previous moment are input into the LSTM encoder to obtain the encoder hidden state vector $h_t$ at the current moment; in the same way, the encoder hidden state vectors of the track coordinates at all moments can be obtained;
(3) the obtained encoder hidden state vectors $h_t^i$ of all vehicles at the current moment are substituted into the corresponding positions in the empty tensor grid corresponding to the potential circle constructed in step 2, giving the environment tensor of the unmanned-vehicle potential circle;
(4) according to step 3, the hidden state vector of the unmanned vehicle at the current moment is used to solve the small-window centre position, determining the range of surrounding vehicles most correlated with the unmanned vehicle; the correlations with the unmanned-vehicle hidden state vector are calculated one by one, and the temporal interaction vector is obtained by weighted average;
(5) the spatial and temporal interaction vectors obtained in steps 2 and 3 are concatenated to give the comprehensive interaction feature $e_t$; $e_t$ and the decoder hidden state vector $h_{t-1}^{dec}$ of the previous moment are input into the LSTM decoder to obtain the current decoder hidden state vector $h_t^{dec}$ of the unmanned vehicle, i.e. $h_t^{dec} = \mathrm{LSTM}(e_t,\, h_{t-1}^{dec};\, W_{dec})$;
(6) after the decoder hidden state vector $h_t^{dec}$ of the unmanned vehicle at the current moment passes through the fully connected layer, the probability distribution parameters $\Theta_{t+1}$ of the predicted track at $t+1$ can be obtained, of which the mean $(\mu_x, \mu_y)$ is the predicted track coordinate of the unmanned vehicle at $t+1$.
6. The unmanned vehicle trajectory prediction method based on the local attention mechanism as claimed in claim 5, wherein in the step 4, the LSTM model of the Encoder-Decoder structure is trained with minimization of the negative log-likelihood loss function as the objective; back-propagation is performed according to the error of the loss function with respect to the process weight parameters, the process weight parameters are updated by a gradient descent algorithm, and the model weight parameters are saved when the generalization ability of the trajectory prediction model is best, thereby completing the model training.
CN202011560297.9A 2020-12-25 2020-12-25 Unmanned vehicle track prediction method based on local attention mechanism Active CN112465273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011560297.9A CN112465273B (en) 2020-12-25 2020-12-25 Unmanned vehicle track prediction method based on local attention mechanism

Publications (2)

Publication Number Publication Date
CN112465273A true CN112465273A (en) 2021-03-09
CN112465273B CN112465273B (en) 2022-05-31

Family

ID=74803884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011560297.9A Active CN112465273B (en) 2020-12-25 2020-12-25 Unmanned vehicle track prediction method based on local attention mechanism

Country Status (1)

Country Link
CN (1) CN112465273B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086869A (en) * 2018-07-16 2018-12-25 北京理工大学 A kind of human action prediction technique based on attention mechanism
CN110276439A (en) * 2019-05-08 2019-09-24 平安科技(深圳)有限公司 Time Series Forecasting Methods, device and storage medium based on attention mechanism
US20200324794A1 (en) * 2020-06-25 2020-10-15 Intel Corporation Technology to apply driving norms for automated vehicle behavior prediction


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAOUTHER MESSAOUD et al.: "Attention Based Vehicle Trajectory Prediction", IEEE Transactions on Intelligent Vehicles, 30 April 2020 (2020-04-30), pages 1 - 11 *
LIU Chuang et al.: "Vehicle motion trajectory prediction based on attention mechanism", Journal of Zhejiang University (Engineering Science), no. 06, 30 June 2020 (2020-06-30), pages 1156 - 1163 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949597A (en) * 2021-04-06 2021-06-11 吉林大学 Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN112949597B (en) * 2021-04-06 2022-11-04 吉林大学 Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN113313320A (en) * 2021-06-17 2021-08-27 湖北汽车工业学院 Vehicle track prediction method based on residual attention mechanism
CN113313320B (en) * 2021-06-17 2022-05-31 湖北汽车工业学院 Vehicle track prediction method based on residual attention mechanism
CN113362367A (en) * 2021-07-26 2021-09-07 北京邮电大学 Crowd trajectory prediction method based on multi-precision interaction
CN113362367B (en) * 2021-07-26 2021-12-14 北京邮电大学 Crowd trajectory prediction method based on multi-precision interaction
CN113570595A (en) * 2021-08-12 2021-10-29 上汽大众汽车有限公司 Vehicle track prediction method and optimization method of vehicle track prediction model
CN114372116A (en) * 2021-12-30 2022-04-19 华南理工大学 Vehicle track prediction method based on LSTM and space-time attention mechanism
CN114372116B (en) * 2021-12-30 2023-03-21 华南理工大学 Vehicle track prediction method based on LSTM and space-time attention mechanism

Also Published As

Publication number Publication date
CN112465273B (en) 2022-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant