CN112257850A - Vehicle track prediction method based on generation countermeasure network - Google Patents

Vehicle track prediction method based on generation countermeasure network

Info

Publication number
CN112257850A
Authority
CN
China
Prior art keywords
network
track
discrimination
generation
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011157093.0A
Other languages
Chinese (zh)
Other versions
CN112257850B (en)
Inventor
周毅
周丹阳
胡姝婷
李伟
张延宇
杜晓玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University
Priority to CN202011157093.0A
Publication of CN112257850A
Application granted
Publication of CN112257850B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention first extracts the historical track data of a target vehicle and the historical track data of the vehicles surrounding the target vehicle, and constructs a generative adversarial network model. The extracted track data are input into the generator network in time order to obtain predicted track values, and the predicted track values and the real track values are alternately input into the discrimination network, which discriminates the difference between them to obtain a discrimination probability. The discrimination probability is substituted into the loss functions of the generator network and the discrimination network to obtain the loss values of the two networks, and the parameters of the discrimination network and the generator network are updated by backpropagation until the discrimination probability output by the discrimination network approaches 1, at which point the generative adversarial network model is fully trained. An attention mechanism is added to the generator network model, so that at each decoding moment the decoder considers the hidden state information of the encoder and computes its correlation with the hidden state at the current prediction moment, obtaining the input code most relevant to the hidden layer state at the current prediction moment and improving the accuracy of the predicted track.

Description

Vehicle track prediction method based on generation countermeasure network
Technical Field
The invention belongs to the technical field of predicting the tracks of vehicles surrounding an unmanned vehicle, and in particular relates to a vehicle track prediction method based on a generative adversarial network.
Background
An unmanned automobile is a comprehensive intelligent system that integrates environmental perception, planning and decision-making, multi-level driving assistance, and other functions. A fully unmanned automobile senses the surrounding environment through sensors, and a decision system takes the place of the human brain in analyzing the current situation and making reasonable decisions, which are passed to the corresponding execution units of the vehicle. With the development of computer technology, unmanned vehicles are being applied in more and more fields; they sense over a wide range, do not suffer fatigue, can greatly reduce the occurrence of traffic accidents, and improve the traffic efficiency of cities.
Although unmanned vehicles have achieved great success in many areas, safety remains a key research issue. Since an unmanned vehicle inevitably interacts with surrounding traffic participants while driving, an unmanned vehicle without prediction capability must drive over-cautiously on a highway, and many of the traffic accidents involving unmanned vehicles in recent years were caused by a wrong understanding of the surrounding environment. Accurately predicting the tracks of surrounding vehicles is an important prerequisite for an unmanned vehicle to drive more safely and to make high-quality decisions and plans. By using previous observations of the surrounding vehicles to predict their future travel tracks, the predicted tracks can be used to plan the motion of the unmanned vehicle so as to avoid collisions with the surrounding vehicles.
Traditional track prediction models include hidden Markov models, Bayesian models and Kalman filters, but these models carry many constraint conditions and parameters, cannot make full use of the historical track information, and fit the data poorly. Track prediction models that use plain neural networks are limited by their network structure and cannot predict longer sequences, and a predicted position at a single moment is of little use for the subsequent decision-making and motion planning of the unmanned vehicle. To remedy these defects of the traditional methods and of the existing neural-network approaches to vehicle track prediction, it is necessary to consider both the time series of the target vehicle and the interaction behavior of the surrounding vehicles toward the target vehicle.
Disclosure of Invention
The invention aims to provide a vehicle track prediction method based on a generative adversarial network that improves the accuracy of vehicle track prediction.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a vehicle track prediction method based on a generative adversarial network, comprising the following steps:
S1: preprocessing the data in the NGSIM data set;
S2: adding an attention mechanism on the basis of an LSTM encoder-decoder and taking the whole as the generator network;
S3: constructing a discrimination network based on an MLP neural network, and inputting the predicted tracks and the real tracks to obtain the discrimination probability;
S4: constructing the generative adversarial network model from the generator network and the discrimination network, and training the generative adversarial network model;
S5: saving the trained model, selecting a test data set from the preprocessed data set, inputting the test data into the trained generative adversarial network model, and predicting the future track coordinates of the vehicle.
The step S1 specifically includes:
S1.1: processing the NGSIM data set through a smoothing filter to eliminate abnormal data;
S1.2: selecting the track data on lanes 2, 3 and 4, and taking the lateral position, the longitudinal position and the speed in the vehicle data as the track features;
S1.3: extracting the track sequence of the target vehicle over the time window t1 ~ t1+n as X = {x_{t1}, x_{t1+1}, …, x_{t1+n}}, wherein x_{t1} is the set of track features of the target vehicle and of the vehicles surrounding the target vehicle at the current time t1, i.e. x_{t1} = (p_x^{t1}, p_y^{t1}, v^{t1}, Δp_x^{t1}, Δp_y^{t1}, Δv^{t1}); p_x^{t1} denotes the lateral position of the target vehicle at time t1, p_y^{t1} the longitudinal position of the target vehicle at time t1, v^{t1} the speed of the target vehicle at time t1, Δp_x^{t1} the lateral distance difference between the target vehicle and a surrounding vehicle at time t1, Δp_y^{t1} the longitudinal distance difference between the target vehicle and a surrounding vehicle at time t1, and Δv^{t1} the speed of the surrounding vehicle relative to the target vehicle at time t1.
The step S2 specifically includes:
S2.1: inputting the track sequence extracted in S1.3 into the fully connected layer to obtain the feature space sequence received by the network, L = {L_{t1}, L_{t1+1}, …, L_{t1+n}};
S2.2: inputting the feature space sequence L into the LSTM encoder and encoding it to obtain the historical hidden state h_{t1} corresponding to each moment, and extracting the historical hidden states obtained by the encoder as the historical hidden state vector set H = {h_{t1}, h_{t1+1}, …, h_{t1+n}};
S2.3: adding the attention mechanism before the decoder decodes; given the hidden state h'_{t2} of the decoder at the previous moment (it is important to point out that in the present invention the subscript t2 of h'_{t2} and the subscript t1 of h_{t1} denote different concepts: t1 indexes the moments in the encoder and t2 indexes the moments in the decoder), the similarity between the hidden state of the decoder at the previous moment and each historical hidden state vector is obtained as e_{t'} = score(h'_{t2}, h_{t'});
S2.4: normalizing the obtained e_{t'}, i.e. s'_{t'} = exp(e_{t'}) / Σ_k exp(e_k);
S2.5: taking the weighted sum of the normalized s'_{t'} values and the historical hidden states to obtain the input code of the decoder at time t2+1, c_{t2+1} = Σ_{t'} s'_{t'}·h_{t'};
S2.6: passing the vectors c_{t2+1} and h'_{t2} through the decoder to obtain the value at the predicted time t2+1, i.e. h'_{t2+1} = LSTM(c_{t2+1}, h'_{t2}; w), wherein w is the weight of the decoder; h'_{t2+1} is the hidden state of the generator network at time t2+1, and the hidden layer state h'_{t2+1} of the decoder is mapped to the track ŷ_{t2+1} of the current prediction time.
The step S3 specifically includes:
Alternately inputting the J predicted tracks and the real tracks into the discrimination network, wherein the discrimination network consists of a two-layer MLP; the label of a real track is recorded as 1 and the label of a predicted track as 0, and the discrimination probability is obtained.
The step S4 specifically includes:
S4.1: the loss function constructed for the generator network is:

Loss_G = (1/J)·Σ_{j=1..J} log(1 − D(Ŷ_j)) + λ·(1/m)·Σ_{k=1..m} ||ŷ_k − y_k||₂

wherein J denotes the number of input predicted tracks, D(Ŷ_j) denotes the discrimination probability of the j-th predicted track in the discrimination network, ||ŷ_k − y_k||₂ denotes the Euclidean distance between the predicted track values and the real track values, m denotes the number of track points, and λ is the weight of the loss term.
The loss function constructed for the discrimination network is:

Loss_D = −(1/J)·Σ_{j=1..J} [log D(Y_j) + log(1 − D(Ŷ_j))]

wherein D(Y_j) denotes the discrimination probability of the j-th real track in the discrimination network.
S4.2: fixing the parameters of the generator network and training the discrimination network: alternately input the real tracks and the predicted tracks into the discrimination network to obtain the discrimination probability, substitute the discrimination probability into the loss functions of the discrimination network and the generator network to calculate the loss values, and update the parameters of the discrimination network by using the Adam algorithm.
S4.3: fixing the parameters of the discrimination network and training the generator network: alternately input the real tracks and the predicted tracks into the discrimination network to obtain the discrimination probability, substitute it into the loss functions of the discrimination network and the generator network to calculate the loss value, and adjust the parameters of the generator network by using the Adam algorithm according to the loss value.
S4.4: when the discrimination probability computed by the discrimination network for the predicted tracks approaches 1, the discrimination network can no longer distinguish the predicted tracks from the real tracks, and the training of the generator network and the discrimination network is finished.
The invention has the following beneficial effects: the invention first extracts the historical track data of the target vehicle (lateral position, longitudinal position, speed) and the historical track data of the vehicles surrounding the target vehicle (lateral distance relative to the target vehicle, longitudinal distance relative to the target vehicle, speed relative to the target vehicle), the surrounding vehicles being the front, left and right vehicles of the target vehicle. A generative adversarial network model is then constructed; the extracted track data are input into the generator network in time order to obtain predicted track values, the predicted track values and the real track values are alternately input into the discrimination network, the discrimination network discriminates the difference between the predicted track values and the real track values to obtain a discrimination probability, the discrimination probability is substituted into the loss functions of the generator network and the discrimination network to obtain the loss values of the two networks, and the parameters of the discrimination network and the generator network are updated by backpropagation until the discrimination probability output by the discrimination network approaches 1, which indicates that the generative adversarial network model is fully trained. The invention adds an attention mechanism to the generator network model, which solves the loss of important information in long-sequence prediction caused by a traditional decoder that uses only a fixed intermediate variable for prediction. The decoder considers the hidden state information of the encoder at each decoding moment and computes its correlation with the hidden state at the current prediction moment, obtaining the input code most relevant to the hidden layer state at the current prediction moment and improving the accuracy of the predicted track.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a model architecture diagram of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention comprises the steps of:
S1: preprocessing the data in the NGSIM data set;
the step S1 specifically includes:
S1.1: processing the NGSIM data set through a smoothing filter to eliminate abnormal data;
S1.2: selecting the track data on lanes 2, 3 and 4, and taking the lateral position, the longitudinal position and the speed in the vehicle data as the track features.
In step S1.2, it may be assumed that the unmanned vehicle travels in lane 3, the target vehicle is the vehicle ahead of and closest to the unmanned vehicle, and the surrounding vehicles of the target vehicle are the front vehicle of the target vehicle in the same lane and the left and right vehicles in the adjacent lanes: the left vehicle is the vehicle in lane 2 closest to the target vehicle, and the right vehicle is the vehicle in lane 4 closest to the target vehicle.
S1.3: extracting the target vehicle at t1~t1Track sequence in + n time is
Figure BDA0002743110690000051
Wherein the content of the first and second substances,
Figure BDA0002743110690000052
target vehicle and surrounding vehicles of the target vehicle at current t1Sets of trajectory characteristics of time of day, i.e.
Figure BDA0002743110690000053
Figure BDA0002743110690000054
Indicates that the target vehicle is at t1The lateral position of the moment of time,
Figure BDA0002743110690000055
indicates that the target vehicle is at t1The longitudinal position of the moment of time,
Figure BDA0002743110690000056
indicates that the target vehicle is at t1The speed of the moment in time is,
Figure BDA0002743110690000057
indicates that the target vehicle and the surrounding vehicles are at t1The difference in the lateral distance at the time of day,
Figure BDA0002743110690000058
for the target vehicle and the surrounding vehicles at t1The difference in the longitudinal distance at the time of day,
Figure BDA0002743110690000059
for surrounding vehicles at t relative to the target vehicle1The speed of the moment.
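As a non-limiting illustration, steps S1.1-S1.3 might be sketched in Python as follows. The column names (Vehicle_ID, Frame_ID, Local_X, Local_Y, v_Vel, Lane_ID) are the standard NGSIM field names, but the choice of a Savitzky-Golay smoothing filter, the window length, and the helper functions are assumptions, since the patent only specifies "a smoothing filter".

import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # S1.1/S1.2: smooth the raw tracks and keep lanes 2-4 only.
    # The patent does not name the filter; Savitzky-Golay is one choice.
    df = df[df["Lane_ID"].isin([2, 3, 4])].copy()
    for col in ["Local_X", "Local_Y", "v_Vel"]:
        df[col] = df.groupby("Vehicle_ID")[col].transform(
            lambda s: savgol_filter(s, window_length=11, polyorder=3)
            if len(s) >= 11 else s)
    return df

def build_features(target: pd.DataFrame, neighbor: pd.DataFrame) -> np.ndarray:
    # S1.3: per-frame feature vector (p_x, p_y, v, dp_x, dp_y, dv)
    # for one target/surrounding-vehicle pair, aligned on Frame_ID.
    m = target.merge(neighbor, on="Frame_ID", suffixes=("", "_nb"))
    feats = np.stack([
        m["Local_X"],                    # lateral position p_x
        m["Local_Y"],                    # longitudinal position p_y
        m["v_Vel"],                      # speed v
        m["Local_X_nb"] - m["Local_X"],  # lateral distance difference
        m["Local_Y_nb"] - m["Local_Y"],  # longitudinal distance difference
        m["v_Vel_nb"] - m["v_Vel"],      # relative speed
    ], axis=1)
    return feats.astype(np.float32)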
S2: adding an attention mechanism on the basis of an LSTM encoder-decoder, and taking the whole as a generator network;
the step S2 specifically includes:
S2.1: inputting the track sequence extracted in S1.3 into the fully connected layer to obtain the feature space sequence received by the network, L = {L_{t1}, L_{t1+1}, …, L_{t1+n}}.
In step S2.1, the generator network comprises a fully connected layer, an LSTM encoder and decoder, and an attention mechanism. The track sequence X is first input into the fully connected layer, and the output of the fully connected layer is the fixed-length feature space sequence L = {L_{t1}, L_{t1+1}, …, L_{t1+n}} received by the network.
S2.2: inputting the feature space sequence L into the LSTM encoder and encoding it to obtain the historical hidden state h_{t1} corresponding to each moment, and extracting the historical hidden states obtained by the encoder as the historical hidden state vector set H = {h_{t1}, h_{t1+1}, …, h_{t1+n}}.
In step S2.2, specifically, the feature space sequence L is input into the encoder of the generator network for encoding. The initial hidden state and the context vector of the encoder are initialized, and each LSTM unit receives the track sequence point L_t of the corresponding step output by the fully connected layer. Each LSTM network module comprises a forget gate, an input gate and an output gate. The first part of the module corresponds to the forget gate, and the update formula of the forget gate for each track point is:

f_t = σ(w11·L_t + w12·h_{t-1} + b_f)

where σ is the sigmoid function σ(x) = 1/(1 + e^(-x)), f_t is the output of the forget gate, w11 and w12 are the weight vectors of the forget gate, L_t is the input value at the current moment, h_{t-1} is the hidden state at the previous moment, and b_f is the bias of the forget gate.
The information passing through the forget gate is mapped to a value in (0, 1). The middle part of the module is the input gate, whose update formula is:

i_t = σ(w21·L_t + w22·h_{t-1} + b_i)

where w21 and w22 are the weight vectors of the input gate and b_i is the bias of the input gate. The cell state is updated as:

c̃_t = tanh(w31·L_t + w32·h_{t-1} + b_c)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t

where tanh is the activation function of the candidate state, w31 and w32 are the weight vectors of the tanh layer, and b_c is the bias.
The last part of the module is the output gate, whose update formula is:

o_t = σ(w41·L_t + w42·h_{t-1} + b_o)

where w41 and w42 are the weight vectors of the output gate and b_o is the bias of the output gate. The hidden state update formula is:

h_t = o_t ⊙ tanh(c_t)

The hidden state h_t and the cell state c_t output by the LSTM unit of each layer are passed to the next LSTM unit, and all the historical hidden state vectors in the encoder are extracted as the set H = {h_{t1}, h_{t1+1}, …, h_{t1+n}}.
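For illustration, a minimal PyTorch sketch of the fully connected embedding and the LSTM encoder of steps S2.1-S2.2 might look as follows; the layer sizes are illustrative assumptions, not values fixed by the patent.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # S2.1-S2.2: fully connected embedding + LSTM encoder.
    def __init__(self, feat_dim: int = 6, embed_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)  # produces the sequence L
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)

    def forward(self, x: torch.Tensor):
        # x: (batch, n+1, 6) track features from S1.3
        L = torch.relu(self.embed(x))
        H, (h_n, c_n) = self.lstm(L)  # H: all historical hidden states
        return H, (h_n, c_n)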
S2.3: adding the attention mechanism before the decoder decodes; given the hidden state of the decoder at the previous moment
h'_{t2} (it is important to point out that in the present invention the subscript t2 of h'_{t2} and the subscript t1 of h_{t1} denote different concepts: t1 indexes the moments in the encoder and t2 indexes the moments in the decoder), the similarity between the hidden state of the decoder at the previous moment and each historical hidden state vector is obtained as e_{t'} = score(h'_{t2}, h_{t'}).
S2.4: normalizing the obtained e_{t'}, i.e. s'_{t'} = exp(e_{t'}) / Σ_k exp(e_k).
S2.5: taking the weighted sum of the normalized s'_{t'} values and the historical hidden states to obtain the input code of the decoder at time t2+1, c_{t2+1} = Σ_{t'} s'_{t'}·h_{t'}.
S2.6: passing the vectors c_{t2+1} and h'_{t2} through the decoder to obtain the value at the predicted time t2+1, i.e. h'_{t2+1} = LSTM(c_{t2+1}, h'_{t2}; w), where w is the weight of the decoder; h'_{t2+1} is the hidden state of the generator network at time t2+1, and the hidden layer state h'_{t2+1} of the decoder is mapped to the track ŷ_{t2+1} of the current prediction time.
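Continuing the PyTorch sketch above, the decoding step of S2.3-S2.6 could be written as follows; the dot-product form of the score function is an assumption, since the patent does not name a specific similarity measure.

class AttnDecoder(nn.Module):
    # S2.3-S2.6: attention over the encoder states + LSTMCell decoder.
    def __init__(self, hidden: int = 64, out_dim: int = 2):
        super().__init__()
        self.cell = nn.LSTMCell(hidden, hidden)
        self.proj = nn.Linear(hidden, out_dim)  # maps h'_{t2+1} to a track point

    def step(self, H, h_prev, c_prev):
        # H: (batch, n+1, hidden) encoder states; h_prev: (batch, hidden)
        e = torch.bmm(H, h_prev.unsqueeze(2)).squeeze(2)  # S2.3 similarity scores
        s = torch.softmax(e, dim=1)                       # S2.4 normalization
        ctx = torch.bmm(s.unsqueeze(1), H).squeeze(1)     # S2.5 input code c_{t2+1}
        h, c = self.cell(ctx, (h_prev, c_prev))           # S2.6 decoding
        return self.proj(h), h, c                         # predicted point, new state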
S3: constructing a discrimination network based on an MLP neural network, and inputting a predicted track and a real track to obtain a discrimination probability;
the step S3 specifically includes:
Alternately inputting the J predicted tracks and the real tracks into the discrimination network, wherein the discrimination network consists of a two-layer MLP; the label of a real track is recorded as 1 and the label of a predicted track as 0, and the discrimination probability is obtained.
The detailed process of step S3 is as follows: the J predicted tracks and the real tracks are alternately input into the discrimination network, which consists of a two-layer MLP network; the MLP maps the predicted tracks and the real tracks from multiple dimensions to one dimension. The label of a real track is recorded as 1 and the label of a predicted track as 0, and the real tracks and the predicted tracks are alternately input into the discrimination network to obtain the discrimination probability. The first layer is constructed as:

m_i = σ(w_m1·Y_i + b_m1)

where w_m1 is the weight of the first-layer MLP and b_m1 is the bias of that layer. The obtained m_i is input into the second-layer MLP to obtain the final discrimination probability of the track, i.e.

D_i = σ(w_m2·m_i + b_m2)

where w_m2 is the weight of the second-layer MLP, b_m2 is the bias of that layer, D_i denotes the discrimination probability obtained for the track, and i denotes the label of the track.
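A sketch of this two-layer MLP discrimination network, continuing the code above, is given below; the hidden width of 64 is an assumed value.

class Discriminator(nn.Module):
    # S3: two-layer MLP mapping a flattened track to a probability in (0, 1).
    def __init__(self, traj_dim: int, hidden: int = 64):
        super().__init__()
        self.l1 = nn.Linear(traj_dim, hidden)  # w_m1, b_m1
        self.l2 = nn.Linear(hidden, 1)         # w_m2, b_m2

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, traj_dim) flattened predicted or real track
        m = torch.sigmoid(self.l1(traj))
        return torch.sigmoid(self.l2(m))       # discrimination probability D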
S4: constructing and generating a confrontation network model through a generator network and a discriminator network, and training to generate the confrontation network model;
the step S4 specifically includes:
S4.1: the loss function constructed for the generator network is:

Loss_G = (1/J)·Σ_{j=1..J} log(1 − D(Ŷ_j)) + λ·(1/m)·Σ_{k=1..m} ||ŷ_k − y_k||₂

where J denotes the number of input predicted tracks, D(Ŷ_j) denotes the discrimination probability of the j-th predicted track in the discrimination network, ||ŷ_k − y_k||₂ denotes the Euclidean distance between the predicted track values and the real track values, m denotes the number of track points, and λ is the weight of the loss term.
The loss function constructed for the discrimination network is:

Loss_D = −(1/J)·Σ_{j=1..J} [log D(Y_j) + log(1 − D(Ŷ_j))]

where D(Y_j) denotes the discrimination probability of the j-th real track in the discrimination network.
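Written out in code, the two losses of S4.1 might take the following form; averaging the Euclidean term over the m track points and the value λ = 0.5 are assumptions.

def generator_loss(d_fake, pred, real, lam=0.5):
    # S4.1: adversarial term plus lambda-weighted Euclidean distance
    # between the predicted and real track points.
    adv = torch.log(1.0 - d_fake + 1e-8).mean()
    dist = torch.norm(pred - real, dim=-1).mean()  # Euclidean over track points
    return adv + lam * dist

def discriminator_loss(d_real, d_fake):
    # S4.1: standard GAN discrimination-network loss.
    return -(torch.log(d_real + 1e-8) + torch.log(1.0 - d_fake + 1e-8)).mean()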
S4.2: fixing parameters of a generated network, training a discrimination network, alternately inputting a real track and a predicted track into the discrimination network to obtain a discrimination probability, inputting the discrimination probability into the discrimination network and the generated network to calculate a loss value, and updating the parameters of the discrimination network by using an Adam algorithm.
S4.3: fixing the parameters of the discrimination network, training the generation network, alternately inputting the real track and the predicted track into the discrimination network to obtain the discrimination probability, inputting the discrimination probability into the discrimination network and the generation network to calculate to obtain a loss value, and adjusting the parameters of the generation network by using an Adam algorithm according to the loss value.
S4.4: when the judgment probability calculation of the judgment network on the predicted track is close to 1, the judgment network cannot distinguish the predicted track from the real track, namely the generation of the network and the training of the judgment network are finished.
S5: and storing the trained model, selecting a test data set from the preprocessed data set, inputting the test data into the trained confrontation network generation model, and predicting to obtain the future track coordinates of the vehicle.
The present invention first extracts the historical track data of the target vehicle (lateral position, longitudinal position, speed) and the historical track data of the vehicles surrounding the target vehicle (lateral distance relative to the target vehicle, longitudinal distance relative to the target vehicle, speed relative to the target vehicle), the surrounding vehicles being the front, left and right vehicles of the target vehicle. A generative adversarial network model is then constructed; the extracted track data are input into the generator network in time order to obtain predicted track values, the predicted track values and the real track values are alternately input into the discrimination network, the discrimination network discriminates the difference between the predicted track values and the real track values to obtain a discrimination probability, the discrimination probability is substituted into the loss functions of the generator network and the discrimination network to obtain the loss values of the two networks, and the parameters of the discrimination network and the generator network are updated by backpropagation until the discrimination probability output by the discrimination network approaches 1, which indicates that the generative adversarial network model is fully trained. The invention adds an attention mechanism to the generator network model, which solves the loss of important information in long-sequence prediction caused by a traditional decoder that uses only a fixed intermediate variable for prediction. The decoder considers the hidden state information of the encoder at each decoding moment and computes its correlation with the hidden state at the current prediction moment, obtaining the input code most relevant to the hidden layer state at the current prediction moment and improving the prediction accuracy.
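The alternating updates of S4.2-S4.3 then reduce to a standard adversarial training loop, sketched below under the assumption that gen wraps the Encoder and AttnDecoder defined above, loader yields (history, future) batches from the preprocessed NGSIM data, and the Adam learning rate of 1e-3 is an illustrative value.

import torch.optim as optim

opt_d = optim.Adam(disc.parameters(), lr=1e-3)  # discrimination network
opt_g = optim.Adam(gen.parameters(), lr=1e-3)   # generator network

for hist, future in loader:
    pred = gen(hist)                            # predicted track

    # S4.2: fix the generator, update the discrimination network.
    opt_d.zero_grad()
    loss_d = discriminator_loss(disc(future.flatten(1)),
                                disc(pred.detach().flatten(1)))
    loss_d.backward()
    opt_d.step()

    # S4.3: fix the discrimination network, update the generator.
    opt_g.zero_grad()
    loss_g = generator_loss(disc(pred.flatten(1)), pred, future)
    loss_g.backward()
    opt_g.step()
    # S4.4: training stops once the discrimination probability for
    # the predicted tracks approaches 1.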

Claims (5)

1. A vehicle track prediction method based on a generative adversarial network, characterized by comprising the following steps:
S1: preprocessing the data in the NGSIM data set;
S2: adding an attention mechanism on the basis of an LSTM encoder-decoder and taking the whole as the generator network;
S3: constructing a discrimination network based on an MLP neural network, and inputting the predicted tracks and the real tracks to obtain the discrimination probability;
S4: constructing the generative adversarial network model from the generator network and the discrimination network, and training the generative adversarial network model;
S5: saving the trained model, selecting a test data set from the preprocessed data set, inputting the test data into the trained generative adversarial network model, and predicting the future track coordinates of the vehicle.
2. The vehicle track prediction method based on a generative adversarial network according to claim 1, characterized in that the step S1 specifically comprises:
S1.1: processing the NGSIM data set through a smoothing filter to eliminate abnormal data;
S1.2: selecting the track data on lanes 2, 3 and 4, and taking the lateral position, the longitudinal position and the speed in the vehicle data as the track features;
S1.3: extracting the track sequence of the target vehicle over the time window t1 ~ t1+n as X = {x_{t1}, x_{t1+1}, …, x_{t1+n}}, wherein x_{t1} is the set of track features of the target vehicle and of the vehicles surrounding the target vehicle at the current time t1, i.e. x_{t1} = (p_x^{t1}, p_y^{t1}, v^{t1}, Δp_x^{t1}, Δp_y^{t1}, Δv^{t1}); p_x^{t1} denotes the lateral position of the target vehicle at time t1, p_y^{t1} the longitudinal position of the target vehicle at time t1, v^{t1} the speed of the target vehicle at time t1, Δp_x^{t1} the lateral distance difference between the target vehicle and a surrounding vehicle at time t1, Δp_y^{t1} the longitudinal distance difference between the target vehicle and a surrounding vehicle at time t1, and Δv^{t1} the speed of the surrounding vehicle relative to the target vehicle at time t1.
3. The vehicle track prediction method based on a generative adversarial network according to claim 2, characterized in that the step S2 specifically comprises:
S2.1: inputting the track sequence extracted in S1.3 into the fully connected layer to obtain the feature space sequence received by the network, L = {L_{t1}, L_{t1+1}, …, L_{t1+n}};
S2.2: inputting the feature space sequence L into the LSTM encoder and encoding it to obtain the historical hidden state h_{t1} corresponding to each moment, and extracting the historical hidden states obtained by the encoder as the historical hidden state vector set H = {h_{t1}, h_{t1+1}, …, h_{t1+n}};
S2.3: adding the attention mechanism before the decoder decodes; given the hidden state h'_{t2} of the decoder at the previous moment, the similarity between the hidden state at the previous moment and each historical hidden state vector is obtained as e_{t'} = score(h'_{t2}, h_{t'});
S2.4: normalizing the obtained e_{t'}, i.e. s'_{t'} = exp(e_{t'}) / Σ_k exp(e_k);
S2.5: taking the weighted sum of the normalized s'_{t'} values and the historical hidden states obtained by the encoder to obtain the input code of the decoder at time t2+1, c_{t2+1} = Σ_{t'} s'_{t'}·h_{t'};
S2.6: passing the vectors c_{t2+1} and h'_{t2} through the decoder to obtain the hidden state value at the predicted time t2+1, i.e. h'_{t2+1} = LSTM(c_{t2+1}, h'_{t2}; w), wherein w is the weight of the decoder; h'_{t2+1} is the hidden state of the generator network at time t2+1, and the hidden layer state h'_{t2+1} of the decoder is mapped to the track ŷ_{t2+1} of the current prediction time.
4. The vehicle track prediction method based on a generative adversarial network according to claim 3, characterized in that the step S3 specifically comprises:
alternately inputting the J predicted tracks and the real tracks into the discrimination network, wherein the discrimination network consists of a two-layer MLP; the label of a real track is recorded as 1 and the label of a predicted track as 0, and the discrimination probability is obtained.
5. The vehicle track prediction method based on a generative adversarial network according to claim 4, characterized in that the step S4 specifically comprises:
S4.1: the loss function constructed for the generator network is:

Loss_G = (1/J)·Σ_{j=1..J} log(1 − D(Ŷ_j)) + λ·(1/m)·Σ_{k=1..m} ||ŷ_k − y_k||₂

wherein J denotes the number of input predicted tracks, D(Ŷ_j) denotes the discrimination probability of the j-th predicted track in the discrimination network, ||ŷ_k − y_k||₂ denotes the Euclidean distance between the predicted track values and the real track values, m denotes the number of track points, and λ is the weight of the loss term;
the loss function constructed for the discrimination network is:

Loss_D = −(1/J)·Σ_{j=1..J} [log D(Y_j) + log(1 − D(Ŷ_j))]

wherein D(Y_j) denotes the discrimination probability of the j-th real track in the discrimination network;
S4.2: fixing the parameters of the generator network and training the discrimination network: alternately inputting the real tracks and the predicted tracks into the discrimination network to obtain the discrimination probability, substituting the discrimination probability into the loss functions of the discrimination network and the generator network to calculate the loss values, and updating the parameters of the discrimination network by using the Adam algorithm;
S4.3: fixing the parameters of the discrimination network and training the generator network: alternately inputting the real tracks and the predicted tracks into the discrimination network to obtain the discrimination probability, substituting it into the loss functions of the discrimination network and the generator network to calculate the loss value, and adjusting the parameters of the generator network by using the Adam algorithm according to the loss value;
S4.4: when the discrimination probability computed by the discrimination network for the predicted tracks approaches 1, the discrimination network can no longer distinguish the predicted tracks from the real tracks, and the training of the generator network and the discrimination network is finished.
CN202011157093.0A 2020-10-26 2020-10-26 Vehicle track prediction method based on generation countermeasure network Active CN112257850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011157093.0A CN112257850B (en) 2020-10-26 2020-10-26 Vehicle track prediction method based on generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011157093.0A CN112257850B (en) 2020-10-26 2020-10-26 Vehicle track prediction method based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN112257850A true CN112257850A (en) 2021-01-22
CN112257850B CN112257850B (en) 2022-10-28

Family

ID=74261556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011157093.0A Active CN112257850B (en) 2020-10-26 2020-10-26 Vehicle track prediction method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN112257850B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949597A (en) * 2021-04-06 2021-06-11 吉林大学 Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN113050640A (en) * 2021-03-18 2021-06-29 北京航空航天大学 Industrial robot path planning method and system based on generation of countermeasure network
CN113076599A (en) * 2021-04-15 2021-07-06 河南大学 Multimode vehicle trajectory prediction method based on long-time and short-time memory network
CN113313941A (en) * 2021-05-25 2021-08-27 北京航空航天大学 Vehicle track prediction method based on memory network and encoder-decoder model
CN113435356A (en) * 2021-06-30 2021-09-24 吉林大学 Track prediction method for overcoming observation noise and perception uncertainty
CN113779892A (en) * 2021-09-27 2021-12-10 中国人民解放军国防科技大学 Wind speed and wind direction prediction method
CN113989326A (en) * 2021-10-25 2022-01-28 电子科技大学 Target track prediction method based on attention mechanism
CN114279061A (en) * 2021-11-26 2022-04-05 国网北京市电力公司 Method and device for controlling air conditioner and electronic equipment
CN114348019A (en) * 2021-12-20 2022-04-15 清华大学 Vehicle trajectory prediction method, vehicle trajectory prediction device, computer equipment and storage medium
CN114549930A (en) * 2022-02-21 2022-05-27 合肥工业大学 Rapid road short-time vehicle head interval prediction method based on trajectory data
CN114815904A (en) * 2022-06-29 2022-07-29 中国科学院自动化研究所 Attention network-based unmanned cluster countermeasure method and device and unmanned equipment
CN115170607A (en) * 2022-06-17 2022-10-11 中国科学院自动化研究所 Travel track generation method and device, electronic equipment and storage medium
CN115759383A (en) * 2022-11-11 2023-03-07 桂林电子科技大学 Destination prediction method and system with branch network and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781838A (en) * 2019-10-28 2020-02-11 大连海事大学 Multi-modal trajectory prediction method for pedestrian in complex scene
EP3705367A1 (en) * 2019-03-05 2020-09-09 Bayerische Motoren Werke Aktiengesellschaft Training a generator unit and a discriminator unit for collision-aware trajectory prediction
WO2020205629A1 (en) * 2019-03-29 2020-10-08 Intel Corporation Autonomous vehicle system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3705367A1 (en) * 2019-03-05 2020-09-09 Bayerische Motoren Werke Aktiengesellschaft Training a generator unit and a discriminator unit for collision-aware trajectory prediction
WO2020205629A1 (en) * 2019-03-29 2020-10-08 Intel Corporation Autonomous vehicle system
CN110781838A (en) * 2019-10-28 2020-02-11 大连海事大学 Multi-modal trajectory prediction method for pedestrian in complex scene

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AGRIM GUPTA ET AL.: "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
AMIR SADEGHIAN ET AL.: "SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints", 《PROCEEDINGS OF THE IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION(CVPR)》 *
LIU CHUANG ET AL.: "Vehicle motion trajectory prediction based on an attention mechanism", JOURNAL OF ZHEJIANG UNIVERSITY (ENGINEERING SCIENCE) *
OUYANG JUN ET AL.: "Research on pedestrian trajectory prediction based on GAN and attention mechanism", LASER & OPTOELECTRONICS PROGRESS *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050640A (en) * 2021-03-18 2021-06-29 北京航空航天大学 Industrial robot path planning method and system based on generation of countermeasure network
CN112949597B (en) * 2021-04-06 2022-11-04 吉林大学 Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN112949597A (en) * 2021-04-06 2021-06-11 吉林大学 Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN113076599A (en) * 2021-04-15 2021-07-06 河南大学 Multimode vehicle trajectory prediction method based on long-time and short-time memory network
CN113313941A (en) * 2021-05-25 2021-08-27 北京航空航天大学 Vehicle track prediction method based on memory network and encoder-decoder model
CN113313941B (en) * 2021-05-25 2022-06-24 北京航空航天大学 Vehicle track prediction method based on memory network and encoder-decoder model
CN113435356A (en) * 2021-06-30 2021-09-24 吉林大学 Track prediction method for overcoming observation noise and perception uncertainty
CN113435356B (en) * 2021-06-30 2023-02-28 吉林大学 Track prediction method for overcoming observation noise and perception uncertainty
CN113779892A (en) * 2021-09-27 2021-12-10 中国人民解放军国防科技大学 Wind speed and wind direction prediction method
CN113989326A (en) * 2021-10-25 2022-01-28 电子科技大学 Target track prediction method based on attention mechanism
CN113989326B (en) * 2021-10-25 2023-08-25 电子科技大学 Attention mechanism-based target track prediction method
CN114279061A (en) * 2021-11-26 2022-04-05 国网北京市电力公司 Method and device for controlling air conditioner and electronic equipment
CN114348019A (en) * 2021-12-20 2022-04-15 清华大学 Vehicle trajectory prediction method, vehicle trajectory prediction device, computer equipment and storage medium
CN114348019B (en) * 2021-12-20 2023-11-07 清华大学 Vehicle track prediction method, device, computer equipment and storage medium
CN114549930A (en) * 2022-02-21 2022-05-27 合肥工业大学 Rapid road short-time vehicle head interval prediction method based on trajectory data
CN115170607A (en) * 2022-06-17 2022-10-11 中国科学院自动化研究所 Travel track generation method and device, electronic equipment and storage medium
CN114815904A (en) * 2022-06-29 2022-07-29 中国科学院自动化研究所 Attention network-based unmanned cluster countermeasure method and device and unmanned equipment
CN115759383A (en) * 2022-11-11 2023-03-07 桂林电子科技大学 Destination prediction method and system with branch network and electronic equipment
CN115759383B (en) * 2022-11-11 2023-09-15 桂林电子科技大学 Destination prediction method and system with branch network and electronic equipment

Also Published As

Publication number Publication date
CN112257850B (en) 2022-10-28

Similar Documents

Publication Publication Date Title
CN112257850B (en) Vehicle track prediction method based on generation countermeasure network
CN112347567B (en) Vehicle intention and track prediction method
CN112215337B (en) Vehicle track prediction method based on environment attention neural network model
CN112965499B (en) Unmanned vehicle driving decision-making method based on attention model and deep reinforcement learning
CN113954864B (en) Intelligent automobile track prediction system and method integrating peripheral automobile interaction information
CN103605362B (en) Based on motor pattern study and the method for detecting abnormality of track of vehicle multiple features
CN111930110A (en) Intent track prediction method for generating confrontation network by combining society
CN111079590A (en) Peripheral vehicle behavior pre-judging method of unmanned vehicle
CN112949597B (en) Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN114372116B (en) Vehicle track prediction method based on LSTM and space-time attention mechanism
CN114202120A (en) Urban traffic travel time prediction method aiming at multi-source heterogeneous data
CN109727490A (en) A kind of nearby vehicle behavior adaptive corrective prediction technique based on driving prediction field
CN113554060B (en) LSTM neural network track prediction method integrating DTW
CN115848398B (en) Lane departure early warning system assessment method based on learning and considering driver behavior characteristics
CN115158364A (en) Method for joint prediction of driving intention and track of surrounding vehicle by automatic driving vehicle
Zhu et al. Transfollower: Long-sequence car-following trajectory prediction through transformer
CN114368387B (en) Attention mechanism-based driver intention recognition and vehicle track prediction method
CN116595871A (en) Vehicle track prediction modeling method and device based on dynamic space-time interaction diagram
CN117141518A (en) Vehicle track prediction method based on intention perception spatiotemporal attention network
CN115376103A (en) Pedestrian trajectory prediction method based on space-time diagram attention network
Sharma et al. Kernelized convolutional transformer network based driver behavior estimation for conflict resolution at unsignalized roundabout
CN112927507B (en) Traffic flow prediction method based on LSTM-Attention
CN116386020A (en) Method and system for predicting exit flow of highway toll station by multi-source data fusion
CN111443701A (en) Unmanned vehicle/robot behavior planning method based on heterogeneous deep learning
CN110489671B (en) Road charging pile recommendation method based on encoder-decoder model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant