CN114283575A - Signal lamp intersection queuing length estimation method based on video monitoring data - Google Patents


Info

Publication number
CN114283575A
CN114283575A
Authority
CN
China
Prior art keywords
vehicle
queue
video
track
time
Prior art date
Legal status
Granted
Application number
CN202011030216.4A
Other languages
Chinese (zh)
Other versions
CN114283575B (en)
Inventor
盛子豪
薛拾贝
徐云雯
李德伟
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202011030216.4A priority Critical patent/CN114283575B/en
Publication of CN114283575A publication Critical patent/CN114283575A/en
Application granted granted Critical
Publication of CN114283575B publication Critical patent/CN114283575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

A signal lamp intersection queuing length estimation method based on video monitoring data, in which the monitoring video comes from a surveillance camera at the intersection and is processed to return information on the vehicles within the monitoring range. The method learns the parameters of a designed car-following model from the vehicle information collected within a signal cycle, reconstructs the trajectories of vehicles outside the video range with the learned model, and, by analyzing the trajectories of all vehicles on the surveyed road section within the signal cycle, estimates the evolution of the queue length over time even when it exceeds the video monitoring range. The invention effectively reduces the propagation of accumulated error across signal cycles and improves the accuracy of queue length estimation.

Description

Signal lamp intersection queuing length estimation method based on video monitoring data
Technical Field
The invention relates to the field of traffic information estimation, in particular to a signal lamp intersection queuing length estimation method based on video monitoring data.
Background
Traffic congestion is an increasingly serious problem in urban road networks, and the queue length at signalized intersections matters both as a traffic performance index and for signal optimization. Accurate estimation of the queue length is therefore significant for relieving congestion and improving the traffic efficiency of intersections.
As detection means have improved in recent years, queue length estimation, an important component of intelligent transportation systems, has developed rapidly. Estimation based on different traffic sensors, such as loop detectors and mobile sensors, has been studied. A loop detector typically provides aggregate traffic information (e.g., flow and density), which is combined with traffic flow theory to estimate the queue length; however, loop detectors have limited coverage and cannot capture vehicle-level information (e.g., travel time and trajectory). Compared with loop detectors, on-board sensors provide vehicle-level information over relatively large coverage, but their penetration rate is low, which limits practical application. Both kinds of sensors also incur high additional costs for installation and maintenance.
Surveillance cameras are inexpensive and easy to install and are widely used in intelligent transportation systems. Existing methods for estimating the queue length with surveillance cameras can be classified into image-based and video-based methods. Image-based methods infer travel times by extracting cross-section data such as license plate numbers and timestamps and then estimate the queue length; such methods, however, merely use the camera as a substitute for other traffic sensors and do not fully exploit its unique capability. Compared with images, video records the motion of multiple vehicles simultaneously, so the interaction of adjacent vehicles can be extracted from video and learned through a car-following model. Current video-based methods typically separate vehicles into moving and stationary ones by detection and tracking, and estimate the queue length within the video by analyzing the positions of stationary vehicles. However, owing to the limited installation height and shooting range of the camera, queued vehicles beyond its monitoring range cannot be detected, so the queue length at such times cannot be estimated.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a signal lamp intersection queuing length estimation method based on video monitoring data.
The invention is realized by the following technical scheme.
A signal lamp intersection queuing length estimation method based on video monitoring data is characterized by comprising the following steps:
S1, collecting the vehicle type, license plate number, and trajectory of each vehicle within the video monitoring range, and establishing a database;
S2, designing a three-layer neural network model, training it with the relevant features of each signal cycle as input, and using the trained model to judge whether a residual queue of the previous cycle is retained in the current cycle;
S3, numbering the vehicles with the departure time of the downstream vehicles as reference, establishing a Gaussian process model from the upstream arrival times and serial numbers of vehicles whose upstream and downstream license plates can be matched, and feeding the serial numbers of unmatched downstream vehicles into the model to obtain each vehicle's most likely upstream arrival time;
S4, dividing the trajectory of each vehicle within the current cycle's video monitoring range into the trajectory from entering the monitoring range to stopping and joining the queue and the trajectory from starting to move in the queue to exiting the monitoring range, and learning the parameters of the designed car-following model separately from the divided trajectories of all vehicles, so as to reconstruct vehicle trajectories outside the video range;
S5, fusing the trajectories reconstructed by the car-following model with the corresponding trajectories in the video using a cubic Hermite interpolation algorithm to reduce the accumulated error, and analyzing the trajectories to obtain when each vehicle joins and leaves the queue, thereby obtaining the estimate of the queue length evolution.
Preferably, in S2, the three-layer neural network model is designed as follows:
the first layer is the input layer with three neurons: $\bar{o}_{d,k}$, the average time occupancy of the k-th lane at the downstream intersection $d$ in the previous cycle, which reflects how the vehicle departure speed affects the probability of queue residue; $j_{d,k}\sum_{k'=1}^{K} f_{u,k'}$, the traffic flow of all lanes at the upstream intersection $u$ in the previous signal cycle weighted by $j_{d,k}$, the downstream flow proportion of the k-th lane, which accounts for lane-changing behavior, with $K$ the total number of lanes; and $\bar{o}_u$, the average time occupancy of all upstream lanes in the previous cycle. The second layer is a hidden layer comprising a plurality of neurons. The third layer is the output layer and represents the probability that the residual queue of the k-th lane in the previous cycle stays into the current cycle.
Preferably, in S3, the Gaussian process model is established as follows:
the serial numbers and upstream arrival times of the matched vehicles are $y = [y_1, y_2, \dots, y_n]$ and $t = [t_1, t_2, \dots, t_n]$, where $n$ is the total number of vehicles whose upstream and downstream license plates match. The Gaussian process models the vehicle serial numbers over the arrival times as a joint normal distribution, i.e. $p(y) = N(\mu(t), K(t,t))$, where $\mu(t)$ is the mean of the probability distribution at the different arrival times and $K(t,t)$ is the covariance between the different times. Let $y^*, t^*$ denote the serial number and upstream arrival time of an unmatched vehicle; the Gaussian process then gives the marginal distribution $p(y^*) = N(m^*(t^*), C^*)$, whose posterior mean $m^*(t^*) = \mu(t^*) + K(t^*,t)K^{-1}(t,t)\left[y(t)-\mu(t)\right]$ is taken as giving the most likely upstream arrival time of the vehicle.
Preferably, in S4, the vehicle trajectory is divided as follows:
the trajectory of the i-th vehicle is expressed as $\{[x_i(t), v_i(t)] \mid t = 0, \Delta t, 2\Delta t, \dots, T_i\}$, where $x_i(t)$ and $v_i(t)$ are the position and speed of the i-th vehicle at time $t$, $\Delta t$ is the time step, and $T_i$ is the duration of the i-th vehicle's trajectory. The moment of stopping and joining the queue after entering the monitoring range is the first time at which both the position change and the speed fall below their thresholds:

$$t_{join} = \min\left\{t : \left|x_i(t+\Delta t) - x_i(t)\right| < x_{thred},\ v_i(t) < v_{thred}\right\}$$

where $x_{thred}$ and $v_{thred}$ are the position-change and speed thresholds. When the position change of a vehicle first falls below the threshold after entering the monitoring range and its speed is also below the threshold, whether the position change is still below the threshold 3 seconds later is checked to guard against jitter; if so, the vehicle is considered to have joined the queue at that moment, yielding the trajectory from entering the monitoring range to stopping and joining the queue. Likewise, after joining the queue, when the position change and the speed both exceed their thresholds at a moment and 3 seconds later, the vehicle is considered to leave the queue at that moment, yielding the trajectory from starting to move in the queue to exiting the monitoring range.
Preferably, in S4, the car-following model is designed as follows:
because the interaction between a vehicle and its neighbors differs greatly between joining and leaving the queue, the car-following model is designed, on the basis of the full velocity difference model, to learn two sets of parameters describing vehicle behavior when joining and when leaving the queue, respectively.
Preferably, in S4, the trajectory is reconstructed as follows:
given the headway and speed difference between the i-th and (i+1)-th vehicles, the acceleration of the (i+1)-th vehicle is obtained from the car-following model, and the trajectory of the (i+1)-th vehicle can then be reconstructed. Using the two sets of car-following parameters, the trajectory of a vehicle from entering the surveyed road section to joining the queue and the trajectory from leaving the queue to exiting the surveyed road section are reconstructed separately, thereby reconstructing the trajectory outside the video range.
Preferably, in S4, the car-following model parameters are learned as follows:
the parameter learning process is modeled as an optimization problem; since it is the same for the models describing vehicles joining and leaving the queue, both are described with the same notation:

$$\min_{\Theta} \sum_{i=1}^{I}\sum_{t}\left[\hat{x}_i(t) - x_i(t)\right]^2$$

subject to the car-following dynamics and the initial condition $\hat{x}_i(t_i^0) = x_i(t_i^0)$, $i = 1, \dots, I$,

where $\Theta$ is the set of parameters to be learned, $I$ is the number of vehicles in the signal cycle used for learning, $\hat{x}_i(t)$ is the reconstructed trajectory of the i-th vehicle, and $t_i^0$ is the moment the i-th vehicle enters the video monitoring range. After the optimization problem is modeled, it is solved with a genetic algorithm to guarantee both speed and accuracy.
Preferably, in S5, the cubic Hermite interpolation algorithm proceeds as follows:
the mean square error between the vehicle trajectory in the video and the corresponding trajectory reconstructed by the car-following model is computed as the backtracking distance $D_B$, and the position of the fusion point on the reconstructed trajectory is obtained from

$$\hat{x}(t_{fp}) = x_s - l_s - D_B$$

where $t_{fp}$ is the time corresponding to the fusion point, $x_s$ is the position of the stop line, and $l_s$ is the farthest monitoring distance of the video. The fusion point and the point at which the vehicle enters the video monitoring range are taken as interpolation boundary points, and the result computed by cubic Hermite interpolation is used as the fused trajectory.
Preferably, in S5, the trajectory is analyzed to estimate the queuing length as follows:
when the state of the i-th vehicle satisfies $x_i(t) \ne x_i(t-1)$ and $x_i(t) = x_i(t+1)$, the i-th vehicle is considered to join the queue at time $t$; similarly, if the position of the vehicle remains unchanged up to time $t$ and changes at the next time $t+1$, the vehicle leaves the queue at time $t$. The queue length is obtained from $q(t) = x_s - x_{ls}(t)$, where $x_{ls}(t)$ is the position of the last vehicle in the queue at time $t$.
Compared with the prior art, the invention has the following beneficial effects:
the signal lamp intersection queuing length estimation method based on the video monitoring data judges whether a residual queue stays in the current period or not in the previous signal period by utilizing a neural network model. The neural network model takes the average time occupancy rate and the traffic flow of the upstream and downstream of the previous signal period as input, and outputs the probability that the residual queue of the previous signal period stays in the current signal period, thereby effectively reducing the propagation of the accumulated error in the signal period and improving the accuracy of the queue length estimation.
The signal lamp intersection queuing length estimation method based on the video monitoring data provided by the invention calculates the upstream arrival time of unmatched vehicles through the Gaussian process, and obtains the running time interval of each vehicle on the investigation road section. And learning the parameters of the designed following model by using the vehicle track extracted from the video. And respectively reconstructing a track of the vehicle from the entrance to the investigation road section to the joining queue and a track from the leaving queue to the exit from the investigation road section through the designed following model, so that the calculated amount of track reconstruction outside the video range is reduced, and the running speed is increased. The triple Hermite interpolation algorithm fuses the reconstructed track and the track in the video, and further reduces the accumulated error of track reconstruction, thereby improving the accuracy of queue length estimation.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flowchart of the signal lamp intersection queuing length estimation method based on video monitoring data according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the trajectories provided by the surveillance video;
FIG. 3 is a schematic diagram of a cubic Hermite interpolation algorithm for fusion;
FIG. 4 is a graph comparing an estimated value and an actual value of a queue length.
Detailed Description
The following examples illustrate the invention in detail. The embodiments are implemented on the premise of the technical scheme of the invention, and detailed implementation modes and specific operating procedures are given. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.
Referring to FIG. 1, the invention provides a signal lamp intersection queuing length estimation method based on video monitoring data, explained with the section of Lankershim Boulevard, a north-south arterial in California, between the Hollywood Freeway and Universal Hollywood Drive as a specific case. The method comprises the following steps:
S1, establishing a database from the vehicle type, license plate number, and trajectory of each vehicle on Lankershim Boulevard, specifically as follows:
S1.1, obtaining the vehicle type, license plate number, and trajectory of each vehicle from the surveillance video;
S1.2, processing abnormal points in the trajectory data, as follows:
S1.2.1, traversing the time-dimension data and checking whether it increases by the time step;
S1.2.2, traversing the position-dimension data and checking whether the position change between adjacent moments exceeds a threshold;
S1.2.3, if the time of a trajectory point does not increase by the time step, correcting it;
S1.2.4, if the position change exceeds the threshold, deleting the position at that moment and replacing it with the average of the positions at the adjacent moments before and after;
S1.3, sorting the vehicle information by time and indexing the vehicles by license plate number;
S1.4, storing the vehicle information by signal cycle and establishing the database;
s2, establishing a three-layer neural network model, wherein the first layer of neural network is an input layer and comprises three neurons which are respectively
$\bar{o}_{d,k}$, the average time occupancy of the k-th lane at the downstream intersection $d$ in the previous cycle, which reflects how the vehicle departure speed affects the probability of queue residue; $j_{d,k}\sum_{k'=1}^{K} f_{u,k'}$, the traffic flow of all lanes at the upstream intersection $u$ in the previous signal cycle weighted by $j_{d,k}$, the downstream flow proportion of the k-th lane, which accounts for lane-changing behavior, with $K$ the total number of lanes; and $\bar{o}_u$, the average time occupancy of all upstream lanes in the previous cycle. The second layer is a hidden layer comprising ten neurons; the input of the i-th hidden neuron is $z_i = w_i^{\top}x + b_i$, where $b_i$ and $w_i$ are trainable parameters, each neuron having its own parameters, and passing this input through the activation function yields the hidden-layer output $h_i = \sigma(z_i)$. The third layer is the output layer and represents the probability that the residual queue of the k-th lane from the previous cycle stays into the current cycle. When this probability is greater than 0.5, the k-th lane is considered to retain the residual queue of the previous cycle; when it is less than 0.5, no residual queue is considered to stay into the current cycle.
The surveyed road section is simulated with the traffic simulation software SUMO to obtain the occurrence of residual queues under different traffic conditions; the relevant features of each signal cycle obtained from simulation are used as inputs to train the neural network model, and the trained model judges whether a residual queue of the previous cycle is retained in the current cycle;
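A minimal sketch of the 3-10-1 network and its 0.5 decision rule; the sigmoid activation and the random initialization are assumptions (the patent does not name the activation function), and training on the SUMO features would replace the random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from S2: three inputs (downstream occupancy, weighted
# upstream flow, upstream occupancy), ten hidden neurons, one output.
W1, b1 = rng.normal(size=(10, 3)), np.zeros(10)
W2, b2 = rng.normal(size=(1, 10)), np.zeros(1)

def residual_queue_probability(features):
    """Forward pass: probability that the previous cycle's residual
    queue of the k-th lane stays into the current cycle."""
    hidden = sigmoid(W1 @ features + b1)   # each neuron has its own w, b
    return float(sigmoid(W2 @ hidden + b2))

cycle_features = np.array([0.42, 310.0, 0.35])  # hypothetical values
has_residual_queue = residual_queue_probability(cycle_features) > 0.5
```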
s3, the same vehicle up and down cannot be matched completely because the identified license plate number may have false or failed results. Numbering vehicles by taking the departure time of a downstream vehicle as a reference, establishing a Gaussian process model by utilizing the upstream arrival time and the serial number of the vehicles which can be matched with the upstream and downstream license plate numbers, wherein the serial number and the upstream arrival time of the matched vehicles are respectively y ═ y [, y [ ]1,y2,...,yn],t=[t1,t2,...,tn]Where N is the total number of vehicles of the upstream and downstream license plate matches, the gaussian process models the vehicle arrival times as a joint normal distribution, i.e., p (y) N (μ (t), K (t, t)), where μ (t) represents the mean of the probability distribution for different vehicle arrival times and K (t, t) represents the covariance between the different times. Let y*,t*Representing respectively the serial number and the upstream arrival time of the unmatched vehicles, then according to the gaussian process there is a joint gaussian distribution as follows:
Figure BDA0002703407700000071
determining y by Gaussian process*Has an edge distribution of p (y)*)=N(m*(t*),C*) Wherein the posterior mean value m*(t*)=μ(t*)+K(t*,t)K-1(t,t)[y(t)-μ(t)]. Taking the serial number of the unmatched vehicle as the input of the Gaussian process, and obtaining the most probable upstream arrival time of the vehicle by a posterior mean formula;
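The posterior mean can be evaluated directly with numpy; the squared-exponential covariance, its hyperparameters, and the zero prior mean below are assumptions, since the patent does not specify the kernel:

```python
import numpy as np

def sq_exp_kernel(a, b, length_scale=30.0, variance=25.0):
    """Assumed covariance K(a, b) between arrival times a and b."""
    d = np.subtract.outer(a, b)
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def posterior_mean(t_matched, y_matched, t_star, noise=1e-2):
    """m*(t*) = mu(t*) + K(t*, t) K(t, t)^-1 [y(t) - mu(t)], with the
    prior mean mu taken as zero."""
    K = sq_exp_kernel(t_matched, t_matched) + noise * np.eye(len(t_matched))
    k_star = sq_exp_kernel(np.atleast_1d(t_star), t_matched)
    return k_star @ np.linalg.solve(K, y_matched)
```

Searching this posterior curve for the arrival time whose predicted serial number matches an unmatched downstream vehicle then yields that vehicle's most likely upstream arrival time.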
s4, dividing the vehicle track of each vehicle in the current period video monitoring range into a track from the entering monitoring range to the parking queue and a track from the queue to the exiting monitoring range, taking the ith vehicle as an example, the track is expressed as { [ x ]i(t),vi(t)]|t=0,Δt,2Δt,...,TiIn which xi(t) and vi(T) respectively represents the position and speed of the ith vehicle at time T, Δ T is the time step, TiIs the time of the trajectory of the ith vehicle. The time from entering the monitoring range to parking and joining the queue is obtained by the following formula:
Figure BDA0002703407700000072
wherein xthredAnd vthredAnd when the change of the position of the vehicle from the driving monitoring range to the first time is smaller than the threshold value and the speed is also smaller than the threshold value, checking whether the change of the position of the vehicle is still smaller than the threshold value after 3 seconds in order to prevent shaking, and if the change of the position of the vehicle is smaller than the threshold value, considering that the vehicle is added into the queue at the moment, thereby obtaining the track of the vehicle from the driving monitoring range to the parking and adding into the queue. Likewise, after joining the queue, when the change in position is greater than the threshold value and the speed is also greater than the threshold value at that time and after 3 seconds, the vehicle is considered to be leaving the queue at that time, resulting in a trajectory for the vehicle to begin traveling from the queue to the exit monitoring range.
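A sketch of the join-time search with the 3-second jitter check (the array layout and the symmetric leave-time test are assumptions):

```python
import numpy as np

def find_join_index(x, v, dt, x_thred, v_thred, hold=3.0):
    """First sample at which the position change and the speed fall
    below their thresholds and the position change is still below the
    threshold `hold` seconds later (the anti-jitter check)."""
    k = int(round(hold / dt))
    for t in range(len(x) - k - 1):
        if (abs(x[t + 1] - x[t]) < x_thred and v[t] < v_thred
                and abs(x[t + k + 1] - x[t + k]) < x_thred):
            return t        # vehicle joins the queue at sample t
    return None             # vehicle never stopped within the video

# The leave time is found symmetrically: the first post-join sample at
# which the position change and speed exceed the thresholds both at
# that moment and `hold` seconds later.
```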
Because the interaction between a vehicle and its neighbors differs greatly between joining and leaving the queue, the car-following model is designed, on the basis of the full velocity difference model, to learn two sets of parameters describing vehicle behavior when joining and when leaving the queue, respectively.
The car-following model describes the behavior of the (i+1)-th vehicle following the i-th vehicle:

$$a_{i+1}(t) = \kappa\left[V(s) - v_{i+1}(t)\right] + \lambda(s)\left(v_i(t) - v_{i+1}(t)\right)$$

where $a_{i+1}(t)$ and $v_{i+1}(t)$ are the acceleration and speed of the (i+1)-th vehicle at time $t$, and $\kappa$ is a sensitivity coefficient. The optimal speed $V(s) = V_1 + V_2\tanh[C_1(s - l_i - C_2)]$ is a function of the headway $s$, where $V_1, V_2, C_1, C_2$ are parameters to be learned and $l_i$ is the length of the i-th vehicle, obtained from its vehicle type. The coefficient $\lambda(s)$ is piecewise: when the headway $s$ is less than or equal to the threshold $s_c$, the speed difference between the leading and following vehicles influences the following vehicle, $\lambda(s) = b$; when $s$ exceeds $s_c$, the speed difference has little effect on the following vehicle, $\lambda(s) = 0$.
The trajectory reconstruction process is as follows:
given the headway and speed difference between the i-th and (i+1)-th vehicles, the acceleration of the (i+1)-th vehicle is obtained from the car-following model, and the trajectory of the (i+1)-th vehicle is reconstructed by stepping forward in time:

$$v_{i+1}(t+\Delta t) = v_{i+1}(t) + a_{i+1}(t)\Delta t, \qquad x_{i+1}(t+\Delta t) = x_{i+1}(t) + v_{i+1}(t)\Delta t + \tfrac{1}{2}a_{i+1}(t)\Delta t^2$$

Using the two sets of car-following parameters, the trajectory of a vehicle from entering the surveyed road section to joining the queue and the trajectory from leaving the queue to exiting the surveyed road section are reconstructed separately, thereby reconstructing the trajectory outside the video range.
The parameters of the two car-following models are learned separately from the divided trajectories of all vehicles. The parameter learning process is as follows:
the learning process is modeled as an optimization problem; since it is the same for the models describing joining and leaving the queue, both are described with the same notation:

$$\min_{\Theta} \sum_{i=1}^{I}\sum_{t}\left[\hat{x}_i(t) - x_i(t)\right]^2$$

subject to the car-following dynamics and the initial condition $\hat{x}_i(t_i^0) = x_i(t_i^0)$, $i = 1, \dots, I$,

where $\Theta = \{\kappa, b, s_c, V_1, V_2, C_1, C_2\}$ is the set of parameters to be learned, $I$ is the number of vehicles in the signal cycle used for learning, $\hat{x}_i(t)$ is the reconstructed trajectory of the i-th vehicle, and $t_i^0$ is the moment the i-th vehicle enters the video monitoring range. After the optimization problem is modeled, it is solved with a genetic algorithm to guarantee both speed and accuracy.
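The patent does not detail the genetic algorithm; the following generic sketch (population size, operators, and parameter bounds are assumptions) minimizes the S4 objective through a user-supplied `reconstruct(theta, i)` that rolls out the i-th vehicle's trajectory:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed bounds for Theta = (kappa, b, sc, V1, V2, C1, C2).
LO = np.array([0.1, 0.0, 5.0, 0.0, 0.0, 0.01, 0.0])
HI = np.array([2.0, 2.0, 50.0, 15.0, 15.0, 1.0, 10.0])

def sse(theta, observed, reconstruct):
    """S4 objective: squared error between reconstructed and observed
    positions, summed over the I vehicles used for learning."""
    return sum(np.sum((reconstruct(theta, i) - x_obs) ** 2)
               for i, x_obs in enumerate(observed))

def genetic_algorithm(observed, reconstruct, pop=40, gens=100, pm=0.2):
    P = rng.uniform(LO, HI, size=(pop, LO.size))
    for _ in range(gens):
        cost = np.array([sse(th, observed, reconstruct) for th in P])
        elite = P[np.argsort(cost)[: pop // 2]]          # selection
        kids = elite[rng.permutation(len(elite))].copy()
        cut = rng.integers(1, LO.size)                   # crossover
        kids[0::2, cut:], kids[1::2, cut:] = (
            kids[1::2, cut:].copy(), kids[0::2, cut:].copy())
        mask = rng.random(kids.shape) < pm               # mutation
        kids[mask] = rng.uniform(np.broadcast_to(LO, kids.shape)[mask],
                                 np.broadcast_to(HI, kids.shape)[mask])
        P = np.vstack([elite, kids])
    return P[int(np.argmin([sse(th, observed, reconstruct) for th in P]))]
```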
With the two sets of car-following parameters learned from the divided trajectories of all vehicles, the trajectories of vehicles outside the video range are thereby reconstructed;
s5, fusing the trajectory reconstructed by the following model and the corresponding trajectory in the video by utilizing a cubic Hermite interpolation algorithm to reduce the accumulated error, and calculating the mean square error of the vehicle trajectory in the video and the reconstructed trajectory of the corresponding following model as a backtracking distance D as shown in FIG. 3BAnd obtaining the position of the fusion point on the reconstructed track by using the following formula:
Figure BDA0002703407700000088
wherein t isfpIs the time, x, of the fusion point correspondencesIs the position of the stop line,/sIs the maximum monitoring distance of the video. And taking two track points, namely the fusion point and the starting point of the vehicle entering the video monitoring range, as interpolation boundary points, and calculating an interpolation result by utilizing cubic Hermite interpolation to serve as a fusion track. The expression of cubic hermite is y (t) ═ c0+c1·t+c2·t2+c3·t3Wherein the parameter c0,c1,c2,c3And solving by substituting the positions and the speeds of the two track points.
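Solving for the coefficients $c_0, \dots, c_3$ from the two boundary points is a 4x4 linear system; a sketch (function name assumed):

```python
import numpy as np

def cubic_hermite(t0, x0, v0, t1, x1, v1):
    """Fit y(t) = c0 + c1 t + c2 t^2 + c3 t^3 to the positions and
    speeds at the entry point (t0) and the fusion point (t1)."""
    A = np.array([[1.0, t0, t0**2,   t0**3],
                  [0.0, 1.0, 2 * t0, 3 * t0**2],
                  [1.0, t1, t1**2,   t1**3],
                  [0.0, 1.0, 2 * t1, 3 * t1**2]])
    c = np.linalg.solve(A, np.array([x0, v0, x1, v1], dtype=float))
    return lambda t: c[0] + c[1] * t + c[2] * t**2 + c[3] * t**3
```

Evaluating the returned polynomial between the entry time and $t_{fp}$ gives the fused trajectory segment.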
When the state of the i-th vehicle satisfies $x_i(t) \ne x_i(t-1)$ and $x_i(t) = x_i(t+1)$, the i-th vehicle is considered to join the queue at time $t$; similarly, if the position of a vehicle remains unchanged up to time $t$ and changes at the next time $t+1$, the vehicle leaves the queue at time $t$. The queue length is obtained from $q(t) = x_s - x_{ls}(t)$, where $x_{ls}(t)$ is the position of the last vehicle in the queue at time $t$, yielding the estimate of the queue length shown in FIG. 4. For a quantitative analysis, the estimated and actual maximum queue lengths of each signal cycle are compared: the mean absolute errors of the estimates for the three lanes are 7.178 ft, 5.056 ft, and 8.508 ft, and the mean absolute percentage errors are 4.26%, 2.25%, and 3.94%, respectively.
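A sketch of computing q(t) from the fused trajectories of one signal cycle; the data layout, the numerical tolerance `eps`, and the convention that positions increase toward the stop line are assumptions:

```python
import numpy as np

def queue_length_profile(trajectories, x_s, eps=0.1):
    """q(t) = x_s - x_ls(t), where x_ls(t) is the position of the last
    queued vehicle at time t. A vehicle counts as queued at t while its
    position is (numerically) unchanged at the next sample."""
    horizon = max(len(x) for x in trajectories.values())
    q = np.zeros(horizon)
    for t in range(horizon - 1):
        stopped = [x[t] for x in trajectories.values()
                   if len(x) > t + 1 and abs(x[t + 1] - x[t]) < eps]
        if stopped:
            q[t] = x_s - min(stopped)   # last vehicle = farthest upstream
    return q
```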
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (7)

1. A signal lamp intersection queuing length estimation method based on video monitoring data is characterized by comprising the following steps:
S1, collecting the vehicle type, license plate number, and trajectory of each vehicle within the video monitoring range, and establishing a database;
S2, designing a three-layer neural network model, training it with the relevant features of each signal cycle as input, and using the trained model to judge whether a residual queue of the previous cycle is retained in the current cycle;
S3, numbering all vehicles with the departure time of the downstream vehicles as reference, establishing a Gaussian process model, and taking the serial numbers of unmatched vehicles as its input to obtain each vehicle's most likely upstream arrival time;
S4, dividing the trajectory of each vehicle within the video monitoring range of the current signal cycle into the trajectory from entering the monitoring range to joining the queue and the trajectory from starting to move in the queue to exiting the monitoring range, and learning the parameters of the designed car-following model from all the divided trajectories of the current cycle, so as to reconstruct vehicle trajectories outside the video range;
S5, fusing the trajectories reconstructed by the car-following model with the corresponding trajectories in the video using a cubic Hermite interpolation algorithm, and analyzing the trajectories to obtain when each vehicle joins and leaves the queue, thereby obtaining the estimation result even when the queuing length exceeds the video monitoring range.
2. The signal lamp intersection queuing length estimation method based on video monitoring data as claimed in claim 1, wherein in S2 the three-layer neural network model is designed as follows:
the first layer is an input layer comprising three neurons: $\bar{o}_{d,k}$, the average time occupancy of the k-th downstream lane in the previous cycle, $d$ denoting the downstream intersection; $j_{d,k}\sum_{k'=1}^{K} f_{u,k'}$, the traffic flow of all upstream lanes in the previous cycle weighted by the downstream flow proportion $j_{d,k}$ of the k-th lane, $u$ denoting the upstream intersection and $K$ the total number of lanes; and $\bar{o}_u$, the average time occupancy of all upstream lanes in the previous cycle;
the second layer is a hidden layer comprising a plurality of neurons;
the third layer is an output layer estimating the probability that a residual queue of the k-th lane in the previous cycle is retained.
3. The signal lamp intersection queuing length estimation method based on video monitoring data as claimed in claim 1, wherein in S3 the Gaussian process model is established as follows:
the serial numbers and upstream arrival times of the matched vehicles are $y = [y_1, \dots, y_n]$ and $t = [t_1, \dots, t_n]$, where $n$ is the total number of vehicles whose upstream and downstream license plates match; the Gaussian process models the vehicle serial numbers over the arrival times as a joint normal distribution, i.e. $p(y) = N(\mu(t), K(t,t))$, where $\mu(t)$ is the mean of the probability distribution at the different arrival times and $K(t,t)$ is the covariance between the different times; letting $y^*, t^*$ denote the serial number and upstream arrival time of an unmatched vehicle, the Gaussian process gives $y^*$ the marginal distribution $p(y^*) = N(m^*(t^*), C^*)$, whose posterior mean is taken as giving the most likely upstream arrival time of the vehicle.
4. The signal lamp intersection queuing length estimation method based on video monitoring data as claimed in claim 1, wherein in S4 the vehicle trajectory is divided as follows:
the trajectory of the i-th vehicle is represented as $\{[x_i(t), v_i(t)] \mid t = 0, \Delta t, 2\Delta t, \dots, T_i\}$, where $x_i(t)$ and $v_i(t)$ denote the position and speed at time $t$, $\Delta t$ is the time step, and $T_i$ is the duration of the i-th vehicle trajectory;
when the position change of the vehicle first falls below the threshold after entering the monitoring range and the speed is also below the threshold, whether the position change is still below the threshold 3 seconds later is checked; if so, the vehicle is considered to join the queue at that moment, yielding the trajectory from entering the monitoring range to stopping and joining the queue; likewise, after joining the queue, when the position change and the speed both exceed their thresholds at a moment and 3 seconds later, the trajectory from starting to move in the queue to exiting the monitoring range is obtained.
5. The signal lamp intersection queuing length estimation method based on video monitoring data as claimed in claim 1, wherein in S4 the car-following model parameters are learned as follows:

$$\min_{\Theta} \sum_{i=1}^{I}\sum_{t}\left[\hat{x}_i(t) - x_i(t)\right]^2$$

subject to the car-following dynamics and the initial condition $\hat{x}_i(t_i^0) = x_i(t_i^0)$, $i = 1, \dots, I$,

where $\Theta$ is the set of learning parameters to be solved, $I$ is the number of vehicles used for learning within a signal cycle, $\hat{x}_i(t)$ is the reconstructed trajectory of the i-th vehicle, and $t_i^0$ is the moment the i-th vehicle enters the video monitoring range; after the optimization problem is modeled, it is solved with a genetic algorithm to guarantee speed and accuracy.
6. The signal lamp intersection queuing length estimation method based on video monitoring data as claimed in claim 1, wherein in S5 the cubic Hermite interpolation algorithm proceeds as follows:
the mean square error between the vehicle trajectory in the video and the corresponding trajectory reconstructed by the car-following model is computed as the backtracking distance $D_B$, and the position of the fusion point on the reconstructed trajectory is obtained from $\hat{x}(t_{fp}) = x_s - l_s - D_B$, where $t_{fp}$ is the time corresponding to the fusion point, $x_s$ is the position of the stop line, and $l_s$ is the farthest monitoring distance of the video; the fusion point and the point at which the vehicle enters the video monitoring range are taken as interpolation boundary points, and the result computed by cubic Hermite interpolation is used as the fused trajectory.
7. The signal lamp intersection queuing length estimation method based on video monitoring data as claimed in claim 1, wherein in S5 the trajectory is analyzed to estimate the queue length as follows:
when the state of the i-th vehicle satisfies $x_i(t) \ne x_i(t-1)$ and $x_i(t) = x_i(t+1)$, the i-th vehicle is considered to join the queue at time $t$; if the position of the vehicle remains unchanged before time $t$ and changes at the next moment, the vehicle leaves the queue at time $t$; the queue length at each moment is obtained from $q(t) = x_s - x_{ls}(t)$, where $x_{ls}(t)$ is the position of the last vehicle in the queue at time $t$.
CN202011030216.4A 2020-09-27 2020-09-27 Signal lamp intersection queuing length estimation method based on video monitoring data Active CN114283575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011030216.4A CN114283575B (en) 2020-09-27 2020-09-27 Signal lamp intersection queuing length estimation method based on video monitoring data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011030216.4A CN114283575B (en) 2020-09-27 2020-09-27 Signal lamp intersection queuing length estimation method based on video monitoring data

Publications (2)

Publication Number Publication Date
CN114283575A (en) 2022-04-05
CN114283575B CN114283575B (en) 2023-02-07

Family

ID=80867529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011030216.4A Active CN114283575B (en) 2020-09-27 2020-09-27 Signal lamp intersection queuing length estimation method based on video monitoring data

Country Status (1)

Country Link
CN (1) CN114283575B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012133760A (en) * 2010-12-01 2012-07-12 Sumitomo Electric Ind Ltd Traffic signal control device and traffic signal control method
CN105261215A (en) * 2015-10-09 2016-01-20 南京慧尔视智能科技有限公司 Intelligent traffic behavior perception method and intelligent traffic behavior perception system based on microwaves
CN107134156A (en) * 2017-06-16 2017-09-05 上海集成电路研发中心有限公司 A kind of method of intelligent traffic light system and its control traffic lights based on deep learning
CN108492562A (en) * 2018-04-12 2018-09-04 连云港杰瑞电子有限公司 Intersection vehicles trajectory reconstruction method based on fixed point detection with the alert data fusion of electricity
CN110009906A (en) * 2019-03-25 2019-07-12 上海交通大学 Dynamic path planning method based on traffic forecast
CN110164128A (en) * 2019-04-23 2019-08-23 银江股份有限公司 A kind of City-level intelligent transportation analogue system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Cong et al.: "Simulation research on vehicle queue length calculation in traffic accidents based on BP neural network", 《信息与电脑(理论版)》 (Information & Computer (Theory Edition)) *

Also Published As

Publication number Publication date
CN114283575B (en) 2023-02-07

Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN107066953B (en) A kind of vehicle cab recognition towards monitor video, tracking and antidote and device
CN112700470B (en) Target detection and track extraction method based on traffic video stream
Ki et al. A traffic accident recording and reporting model at intersections
Althoff et al. Comparison of Markov chain abstraction and Monte Carlo simulation for the safety assessment of autonomous cars
Lin et al. A Real‐Time Vehicle Counting, Speed Estimation, and Classification System Based on Virtual Detection Zone and YOLO
CN110077398B (en) Risk handling method for intelligent driving
CN108319909B (en) Driving behavior analysis method and system
CN105513354A (en) Video-based urban road traffic jam detecting system
CN100466010C (en) Different species traffic information real time integrating method
CN109345832B (en) Urban road overtaking prediction method based on deep recurrent neural network
CN113762473B (en) Complex scene driving risk prediction method based on multi-time space diagram
CN107310550A (en) Road vehicles travel control method and device
CN111243338A (en) Vehicle acceleration-based collision risk evaluation method
Xue et al. A context-aware framework for risky driving behavior evaluation based on trajectory data
CN117372969B (en) Monitoring scene-oriented abnormal event detection method
CN114446046A (en) LSTM model-based weak traffic participant track prediction method
Eggert et al. The foresighted driver: Future ADAS based on generalized predictive risk estimation
Shin et al. Image-based learning to measure the stopped delay in an approach of a signalized intersection
Hao et al. Aggressive lane-change analysis closing to intersection based on UAV video and deep learning
CN114283575B (en) Signal lamp intersection queuing length estimation method based on video monitoring data
CN115204755B (en) Service area access rate measuring method and device, electronic equipment and readable storage medium
CN116080681A (en) Zhou Chehang identification and track prediction method based on cyclic convolutional neural network
Yu et al. Vehicle forward collision warning based upon low frequency video data: A hybrid deep learning modeling approach
Wang et al. Improved Time-to-collision Considering Vehicle Speed Adaptation based on Trajectory Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant