CN107730881A - Traffic congestion vision detection system based on deep convolutional neural networks - Google Patents


Info

Publication number
CN107730881A
Authority
CN
China
Prior art keywords
road
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710440987.2A
Other languages
Chinese (zh)
Inventor
汤平
汤一平
王辉
钱小鸿
陈才君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Enjoyor Co Ltd
Original Assignee
Enjoyor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Enjoyor Co Ltd
Priority to CN201710440987.2A
Publication of CN107730881A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0108 - Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G 1/0116 - Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G08G 1/0125 - Traffic data processing
    • G08G 1/0137 - Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G 1/0141 - Measuring and analyzing of parameters relative to traffic conditions for specific applications for traffic information dissemination

Landscapes

  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A traffic congestion vision detection system based on deep convolutional neural networks comprises cameras mounted on urban roads, a traffic cloud server, and a road traffic congestion detection system. The cameras, arranged above the roads, capture the video data of each road in the city and transmit the road video image data to the traffic cloud server over a network. The traffic cloud server receives the road video data obtained from the cameras and passes it to the road traffic congestion detection system for detection and recognition; finally the detection results are stored in the cloud server and published by means of WebGIS to realize traffic control, traffic guidance, and rapid response by traffic police on site. The road traffic congestion detection system comprises a road and traffic-direction customization module, a road congestion detection module, and a road congestion status release module. The invention offers higher detection accuracy, better real-time performance, and simple, clear detection results.

Description

Traffic congestion vision detection system based on deep convolutional neural networks
Technical field
The present invention relates to the application of artificial intelligence, convolutional neural networks and computer vision to traffic congestion detection, and belongs to the field of intelligent transportation.
Background art
Traffic problems have become a global "urban disease", and traffic congestion is its main symptom. The causes of urban traffic congestion are many, and congestion directly affects people's quality of travel, especially for those who travel by road vehicle. Crowded roads, frequent traffic accidents, a deteriorating traffic environment, energy shortages and ever-worsening pollution run completely counter to the basic ideals of modern transportation: accessibility, order, safety, comfort, low energy consumption and low pollution.
The evaluation criteria of a modern transportation system are safety, smooth flow and energy saving. To grasp the service level of urban road traffic operation, a scientific and objective evaluation method must therefore be established. At present, however, there is a lack of both a relatively scientific and effective system for evaluating road traffic service levels and an effective means of detecting road traffic state, especially in the area of road traffic congestion detection.
Traffic information acquisition is considered the most critical technology in intelligent transportation. Currently used acquisition technologies include inductive loops, magnetic sensors, ultrasonic sensors, microwave, GPS and vision sensors. Because inductive loop, magnetic, ultrasonic and microwave detectors must be embedded in or under the road surface, the original pavement must be destroyed during installation and maintenance, which affects road traffic; moreover, because pavement in China is often damaged by causes such as vehicle overloading, sensors embedded under the road must be maintained frequently. In addition, these detection means can only sense vehicles passing over a certain point or line of the road, so congestion can only be inferred indirectly from the speed of vehicles passing the sensor's location. Such detection means therefore suffer from inconvenient installation and maintenance, high investment cost, poor anti-interference capability and limited sensing range. They are also hardly able to detect stationary vehicles on the road.
Vision sensors, by contrast, are a contactless means of traffic flow detection. Simulating the principles of human vision and fusing computer technology with image processing technology, they detect traffic flow and road congestion state from the video signal, and have gradually developed in recent years into a new kind of road traffic detection system. However, current video detection of road traffic state generally analyzes and tracks the vehicles on the road and applies statistical methods; this detection approach consumes large computing resources, and it is difficult to simultaneously obtain a holographic traffic state reflecting the various basic traffic parameters and the road service level.
Chinese invention patent application No. 201110108851.4 discloses a traffic congestion detection method based on video analysis technology: on the basis of video segmentation and key-frame extraction, three congestion features are obtained, namely the average dissimilarity, the number of key frames of a video shot, and the average optical-flow field energy, and traffic congestion detection is realized with a multi-class SVM method. This technique still belongs to the vision detection technology of the pre-deep-learning era and suffers from low detection accuracy.
Chinese invention patent application No. 201510969912.4 discloses a video-based urban road traffic congestion detection system, which mainly reads and preprocesses video to obtain video frames; performs background modeling on the obtained frames to obtain a background frame sequence and a foreground frame sequence; detects and extracts moving targets where the foreground frame sequence and background frame sequence coincide frame by frame, and saves single-frame foreground images after smoothing and filtering the foreground frame sequence; plays back the single-frame foreground images of the foreground frame sequence in order and filters and tracks the moving targets to obtain the vehicle conditions on the road; and computes traffic parameters from the vehicle conditions and/or the foreground frame sequence of moving targets. This technique likewise belongs to the pre-deep-learning era of vision detection and suffers from low detection accuracy. Furthermore, it judges traffic congestion by computing traffic parameters; such a judgment requires the support of multiple traffic parameters, and the accuracy of the congestion judgment depends on the accuracy with which the relevant traffic flow parameters are obtained.
The core of road congestion vision detection is the detection of stationary vehicles on the road. Given the special circumstances on China's urban roads, where people and vehicles, motor vehicles and non-motor vehicles mix, and given the wide variety of vehicle types in operation, accurately detecting all vehicles on a road by visual means is no easy matter; it is also necessary to judge whether these vehicles are moving or stationary, which makes detection even harder.
In recent years, deep learning has developed rapidly in the field of computer vision. Deep learning can exploit large numbers of training samples and learn the abstract information of images layer by layer in hidden layers, obtaining image features more comprehensively and directly. A digital image is described by a matrix, and convolutional neural networks describe the overall structure of an image well from local information blocks; hence in computer vision, deep learning methods mostly solve problems with convolutional neural networks. To continually improve detection accuracy and detection time, deep convolutional neural network technology has progressed from R-CNN and Fast R-CNN to Faster R-CNN, embodied in ever better precision, speed, end-to-end operation and practicality, covering almost every field from classification to detection, segmentation and localization. Applying deep learning technology to road congestion vision detection is a research field of great practical application value.
When the human vision system perceives a moving target, the target forms a continuously varying image stream on the imaging plane of the vision system, called optical flow. Optical flow expresses the rate at which image pixels change over time; it is the apparent motion of the image brightness pattern in an image sequence, the instantaneous velocity field of the observed pixels on the surface of objects moving in space. The advantage of the optical flow method is that it provides rich information such as the relative speed, motion attitude and surface texture structure of the moving target, and it can detect moving targets without knowing any information about the scene, even in complex scenes. Therefore, after the vehicles on a road have been detected, the optical flow method can distinguish moving vehicles from stationary ones.
The key to achieving detection with high accuracy, good real-time performance and simple, clear results is to obtain directly, by a straightforward, easily understood, computationally simple and intuitive road traffic detection means, which of the following six states a road is in, i.e. whether the road traffic state is at service level A: free flow; service level B: basically free flow; service level C: initial congestion; service level D: congestion; service level E: severe congestion; or service level F: localized road and large-area paralysis.
China's urban transportation will remain in a mixed-traffic state for a very long time. Service-level index data under mixed-traffic conditions have the following characteristics: (1) diversity of data collection objects: both road-section traffic data and intersection-interior traffic data must be collected, and a single observation generally requires observing multiple behaviors of traffic units and their parameters simultaneously; (2) strong spanning of the data in space and time: to obtain index data for different service-level grades under different traffic conditions, detection must collect data over a certain span of time and space, and the data need to be online.
The key to easy implementation is to adopt a road-friendly, contactless, large-area road traffic state detection means that neither destroys the road surface nor involves pavement construction, while making the most of existing equipment and investment. The service state of a road is the combined embodiment of many factors such as pavement condition, operating condition, transportation facilities and traffic safety. Although the service-level state information of a road can be obtained by detecting all these condition data and computing statistics, it is preferable to obtain the road's service state information and the various basic traffic data directly, simply, conveniently, economically and in real time.
Summary of the invention
To overcome the relatively low detection accuracy and poor real-time performance of existing traffic congestion detection methods, the present invention provides a traffic congestion vision detection system based on deep convolutional neural networks whose detection accuracy is higher, whose real-time performance is better, and whose detection results are simple and clear.
The technical solution adopted by the present invention to solve its technical problem is as follows:
A traffic congestion vision detection system based on deep convolutional neural networks comprises cameras mounted on urban roads, a traffic cloud server and a road traffic congestion detection system.
The cameras are used to obtain the video data on each road of the city; they are arranged above the roads and transmit the road video image data to the traffic cloud server over a network.
The traffic cloud server is used to receive the road video data obtained from the cameras and pass it to the road traffic congestion detection system for detection and recognition; finally the detection results are stored in the cloud server and published by means of WebGIS to realize traffic control, traffic guidance and rapid response by traffic police on site.
The road traffic congestion detection system comprises a road and traffic-direction customization module, a road congestion detection module and a road congestion status release module.
The road and traffic-direction customization module is used to customize the lanes of the road in the camera's field of view. Specifically, virtual lanes are drawn in the video image according to the lane markings and traffic direction on the real road; the virtual lanes are arranged from left to right, the leftmost lane being named lane 1, the lane adjacent to its right lane 2, ..., up to lane N for the rightmost lane, N being the number of lanes. A sketch of this convention follows.
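As a concrete illustration of the lane-customization convention above, the following is a minimal Python sketch of one possible lane data structure; the names VirtualLane and define_lanes and all fields are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch only: each virtual lane is a polygon drawn over the
# camera image and numbered 1..N from left to right, as described above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VirtualLane:
    index: int                       # 1 = leftmost lane, N = rightmost lane
    polygon: List[Tuple[int, int]]   # image-plane vertices of the lane region
    direction: str                   # traffic direction label (assumed field)

def define_lanes(polygons: List[List[Tuple[int, int]]], direction: str) -> List[VirtualLane]:
    """Number lane polygons 1..N left to right by their mean x coordinate."""
    ordered = sorted(polygons, key=lambda poly: sum(x for x, _ in poly) / len(poly))
    return [VirtualLane(i + 1, poly, direction) for i, poly in enumerate(ordered)]
```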
The road congestion detection module comprises a Fast R-CNN based vehicle detection unit, an optical-flow stationary-vehicle detection unit, a lane-by-lane stationary-vehicle counting unit and a road congestion computing unit.
Further, the Fast R-CNN based vehicle detection unit is used to detect all vehicles in the video image. Specifically, a deep convolutional neural network rapidly segments the motor vehicles on the road and gives the spatial position each vehicle occupies on the road.
The motor-vehicle segmentation and localization used consists of two models: one model is the selective search network that generates RoIs; the other is the Fast R-CNN motor-vehicle detection network.
The selective search network, i.e. the RPN, takes an image of any scale as input and outputs a set of rectangular target proposal boxes, each containing 4 position coordinates and a score. To generate region proposals, a small network slides over the convolutional feature map output by the last shared convolutional layer; this network is fully connected to an n × n spatial window of the input feature map. Each sliding window is mapped to a low-dimensional vector, one vector per sliding-window position of the feature map, and this vector is fed into two sibling fully connected layers.
At each sliding-window position, k region proposals are predicted simultaneously, so the regression layer has 4k outputs, the encoded coordinates of k bounding boxes. The classification layer outputs 2k scores, the estimated probability of target/non-target for each proposal box; it is implemented as a two-class softmax layer (k scores could also be produced with logistic regression). The k proposal boxes are parameterized by k corresponding reference boxes called anchors. Each anchor is centered at the center of the current sliding window and is associated with a scale and an aspect ratio; with 3 scales and 3 aspect ratios there are k = 9 anchors at each sliding position. For example, for a convolutional feature map of size w × h there are w × h × k anchors in total. The RPN structure is shown in Fig. 2; an anchor-generation sketch follows.
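The anchor layout just described (3 scales times 3 aspect ratios, k = 9 anchors per position, w × h × k in total) can be made concrete with a short sketch; the stride value is an assumption that depends on the backbone network.

```python
# Minimal sketch of RPN anchor generation: 9 anchors per feature-map cell.
import numpy as np

def generate_anchors(feat_w, feat_h, stride=16,
                     areas=(128**2, 256**2, 512**2),
                     ratios=(1.0, 0.5, 2.0)):
    """Return a (feat_h*feat_w*9, 4) array of (x1, y1, x2, y2) anchors."""
    base = []
    for area in areas:
        for ratio in ratios:              # ratio = height / width
            w = np.sqrt(area / ratio)
            h = w * ratio
            base.append([-w / 2, -h / 2, w / 2, h / 2])
    base = np.array(base)                 # (9, 4) anchors centered at the origin
    xs = (np.arange(feat_w) + 0.5) * stride
    ys = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)
    centers = np.stack([cx, cy, cx, cy], axis=-1).reshape(-1, 1, 4)
    return (centers + base).reshape(-1, 4)
```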
To train the RPN, each anchor is assigned a binary label marking whether it is a target. A positive label is assigned to two kinds of anchors: (I) the anchor(s) having the highest IoU (Intersection-over-Union) overlap with some real target bounding box (Ground Truth, GT); (II) anchors having an IoU overlap greater than 0.7 with any GT bounding box. Note that one GT bounding box may assign positive labels to multiple anchors. A negative label is assigned to anchors whose IoU ratio with all GT bounding boxes is below 0.3. Anchors that are neither positive nor negative have no effect on the training objective and are discarded. A labeling sketch follows.
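A hedged sketch of this labeling rule, assuming boxes are given in (x1, y1, x2, y2) form:

```python
# Label 1 for the highest-IoU anchor per GT box and for anchors with
# IoU > 0.7; label 0 for anchors below 0.3 against every GT box;
# label -1 (ignored during training) otherwise.
import numpy as np

def iou_matrix(anchors, gt_boxes):
    """IoU between each (x1,y1,x2,y2) anchor and each GT box."""
    x1 = np.maximum(anchors[:, None, 0], gt_boxes[None, :, 0])
    y1 = np.maximum(anchors[:, None, 1], gt_boxes[None, :, 1])
    x2 = np.minimum(anchors[:, None, 2], gt_boxes[None, :, 2])
    y2 = np.minimum(anchors[:, None, 3], gt_boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    area_g = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def label_anchors(anchors, gt_boxes, hi=0.7, lo=0.3):
    iou = iou_matrix(anchors, gt_boxes)
    labels = np.full(len(anchors), -1)    # -1 = neither positive nor negative
    labels[iou.max(axis=1) < lo] = 0      # negative: IoU < 0.3 vs every GT box
    labels[iou.max(axis=1) > hi] = 1      # positive: IoU > 0.7 vs some GT box
    labels[iou.argmax(axis=0)] = 1        # positive: best anchor per GT box
    return labels
```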
With these definitions, the multi-task loss in Fast R-CNN [17] is followed to minimize the objective function. The loss function for one image is defined as:

$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$    (1)

Here, i is the index of an anchor and p_i is the predicted probability that anchor i is a target. If the anchor is positive, the GT label p_i^* is 1; if it is negative, p_i^* is 0. t_i is a vector representing the 4 parameterized coordinates of the predicted bounding box, and t_i^* is the coordinate vector of the GT bounding box corresponding to a positive anchor. λ is a balancing weight, here λ = 10; the normalization N_cls of the cls term is the mini-batch size, here N_cls = 256; the normalization N_reg of the reg term is the number of anchor positions, N_reg = 2,400. The classification loss L_cls is the log loss over the two classes, motor-vehicle target and non-motor-vehicle target:

$L_{cls}(p_i, p_i^*) = -\log[p_i^* p_i + (1 - p_i^*)(1 - p_i)]$    (2)

The regression loss L_reg is defined by the following function:

$L_{reg}(t_i, t_i^*) = R(t_i - t_i^*)$    (3)

where L_reg is the regression loss function and R is the robust loss function, smooth L1, computed with formula (4):

$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$    (4)

where smooth_{L1} is the smooth L1 loss function and x is its variable.
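A minimal NumPy sketch of formulas (1) through (4); the values λ = 10, N_cls = 256 and N_reg = 2400 follow the text above:

```python
# Smooth L1 loss and the two-term RPN loss of formula (1).
import numpy as np

def smooth_l1(x):
    """Formula (4): 0.5*x^2 if |x| < 1, else |x| - 0.5, elementwise."""
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x**2, ax - 0.5)

def rpn_loss(p, p_star, t, t_star, lambda_=10, n_cls=256, n_reg=2400):
    """Formula (1): p, p_star have shape (A,); t, t_star have shape (A, 4)."""
    l_cls = -np.log(p_star * p + (1 - p_star) * (1 - p))   # formula (2)
    l_reg = smooth_l1(t - t_star).sum(axis=1)              # formulas (3) and (4)
    return l_cls.sum() / n_cls + lambda_ * (p_star * l_reg).sum() / n_reg
```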
The Fast R-CNN network structure is shown in Fig. 3: an input image passed through the deep convolutional neural network yields a feature map; from the feature map and the RPN the corresponding RoIs are obtained, which finally pass through the RoI pooling layer. This layer has only one level of the spatial "pyramid" pooling process. Its input is N feature maps and R RoIs; the N feature maps come from the last convolutional layer, and each feature map has size w × h × c. Each RoI is a tuple (n, r, c, h, w), where n is the index of the feature map, n ∈ {0, 1, 2, ..., N−1}, (r, c) is the top-left coordinate, and h, w are the height and width respectively. The output is the feature map obtained by max pooling. The layer serves two main purposes: it maps each RoI in the original image to its block in the feature map, and it downsamples the feature map to a fixed size before passing it to the fully connected layers. A pooling sketch follows.
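A simplified, illustrative sketch of the RoI max-pooling step; real implementations use fractional bins, and this integer version assumes the RoI is at least as large as the output grid:

```python
# Crop the RoI's block from the feature map and max-pool it to out_size.
import numpy as np

def roi_max_pool(feature_map, roi, out_size=(7, 7)):
    """feature_map: (H, W, C); roi: (r, c, h, w) in feature-map coordinates.
    Assumes h >= out_size[0] and w >= out_size[1]."""
    r, c, h, w = roi
    block = feature_map[r:r + h, c:c + w, :]
    oh, ow = out_size
    rows = np.array_split(np.arange(block.shape[0]), oh)
    cols = np.array_split(np.arange(block.shape[1]), ow)
    out = np.empty((oh, ow, feature_map.shape[2]), feature_map.dtype)
    for i, rs in enumerate(rows):
        for j, cs in enumerate(cols):
            out[i, j] = block[rs][:, cs].max(axis=(0, 1))   # max over each bin
    return out
```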
Further, the selective search network shares weights with the detection network. The selective search network and Fast R-CNN are trained independently and would modify their convolutional layers in different ways; a technique is therefore needed that allows the two networks to share convolutional layers, rather than learning two separate networks. The invention uses a practical 4-step training algorithm that learns shared features through alternating optimization. In the first step, the RPN is trained as described above, initialized with an ImageNet-pretrained model and fine-tuned end-to-end for the region proposal task. In the second step, a separate detection network is trained by Fast R-CNN using the proposal boxes generated by the step-1 RPN; this detection network is likewise initialized with the ImageNet-pretrained model, and at this point the two networks do not yet share convolutional layers. In the third step, RPN training is initialized with the detection network, but the shared convolutional layers are fixed and only the layers unique to the RPN are fine-tuned; the two networks now share convolutional layers. In the fourth step, keeping the shared convolutional layers fixed, the fc (fully connected) layers of Fast R-CNN are fine-tuned. Thus the two networks share the same convolutional layers and form a unified network.
Considering the multi-scale problem of objects, three simple scales are used for each feature point on the feature map, bounding-box areas of 128 × 128, 256 × 256 and 512 × 512, together with three aspect ratios, 1:1, 1:2 and 2:1. With this design, multi-scale features or multi-scale sliding windows are no longer needed to predict large regions, which saves a great deal of running time.
Through the processing of the above two networks, the motor vehicles in a video frame are detected and their size and spatial position delimited; that is, the size and spatial location of each vehicle are obtained, where (r, c) is the top-left coordinate of the vehicle in the image and h, w are the projected height and width of the vehicle in the image plane respectively. It then remains to judge whether these motor vehicles are stationary.
Further, the optical-flow stationary-vehicle detection unit is used to judge whether vehicles on the road are stationary. When the vehicles in the road scene move relative to the two-dimensional image plane, their projections onto the image plane form motion; the flow of this motion expressed as image-plane brightness patterns is called optical flow. The optical flow method is an important method for analyzing motion image sequences, and the optical flow contains the motion information of the vehicle object targets in the image.
The present invention uses a sparse iterative Lucas-Kanade optical flow method based on a pyramid model. The pyramid representation of an image is introduced first. Suppose image I has size n_x × n_y. Define I^0 as the 0th layer image, the highest-resolution image, i.e. the original image, whose width and height are n_x^0 = n_x and n_y^0 = n_y. The pyramid representation is then described recursively: I^L (L = 1, 2, ...) is computed from I^{L−1}, where I^{L−1} is the image of pyramid layer L−1 and I^L the image of pyramid layer L. Suppose image I^{L−1} has width n_x^{L−1} and height n_y^{L−1}; then image I^L can be expressed as

$I^L(x,y) = \frac{1}{4} I^{L-1}(2x,2y) + \frac{1}{8}\left[I^{L-1}(2x-1,2y) + I^{L-1}(2x+1,2y) + I^{L-1}(2x,2y-1) + I^{L-1}(2x,2y+1)\right] + \frac{1}{16}\left[I^{L-1}(2x-1,2y-1) + I^{L-1}(2x+1,2y+1) + I^{L-1}(2x-1,2y+1) + I^{L-1}(2x+1,2y-1)\right]$    (5)

To simplify the formula, the boundary values of image I^{L−1} are defined by replicating the nearest border pixel, e.g. $I^{L-1}(-1,y) \doteq I^{L-1}(0,y)$ and $I^{L-1}(n_x^{L-1},y) \doteq I^{L-1}(n_x^{L-1}-1,y)$, and similarly in y.
The points defined by formula (5) must satisfy the condition 0 ≤ 2x ≤ n_x^{L−1} − 1 and 0 ≤ 2y ≤ n_y^{L−1} − 1; therefore the width n_x^L and height n_y^L of image I^L need to satisfy formula (6):

$n_x^L \leq \frac{n_x^{L-1}+1}{2}, \qquad n_y^L \leq \frac{n_y^{L-1}+1}{2}$    (6)

The pyramid model {I^L}, L = 0, ..., L_m of image I is built by formulas (5) and (6). L_m is the height of the pyramid model and typically takes the value 2, 3 or 4; for ordinary images, L_m > 4 is meaningless. Taking a 640 × 480 image as an example, layers 1, 2, 3 and 4 of its pyramid model have sizes 320 × 240, 160 × 120, 80 × 60 and 40 × 30 respectively. A construction sketch follows.
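A minimal sketch of building the pyramid {I^L} with OpenCV; cv2.pyrDown smooths with a 5 × 5 Gaussian kernel and halves each dimension, which closely approximates the 1/4, 1/8, 1/16 weighting of formula (5):

```python
import cv2

def build_pyramid(image, levels=3):
    """Return [I^0, I^1, ..., I^levels]; I^0 is the original image."""
    pyramid = [image]
    for _ in range(levels):
        # e.g. 640x480 -> 320x240 -> 160x120 -> 80x60, as in the example above
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid
```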
In the pyramid-based LK optical flow computation, the match of a feature point is first searched in the top layer of the image pyramid model; the result computed at layer k then serves as the initial estimate for searching the match point at layer k−1, and so on, iterating down to layer 0 of the image pyramid model, whereby the optical flow of the feature point is computed.
The detection target of the optical flow method is: in two successive frames I and J, for some pixel u of image I, find its match point v = u + d in image J, i.e. find its offset vector d, computed with formula (7):

$v = u + d = [u_x + d_x, \; u_y + d_y]^T$    (7)

where u is a pixel in image I, v is the matched pixel in image J, and d is the offset vector between the two.
First, the pyramid models {I^L}, L = 0, ..., L_m and {J^L}, L = 0, ..., L_m of images I and J are built; then the position u^L of pixel u in each pyramid layer of image I is computed for L = 0, ..., L_m; then, with a search window, the match point v^{L_m} of u^{L_m} is computed in the top layer image J^{L_m} of image J's pyramid model, and the offset vector d^{L_m} is computed.
Next, the pyramid-based optical flow method is described iteratively. Suppose the offset vector d^{L+1} of pyramid layer L+1 is known; then 2d^{L+1} is taken as the initial value at layer L, near which the match point v^L of layer L is searched, which in turn yields the offset vector d^L of layer L.
After iteratively computing the offset vector d^L of each layer (L = 0, ..., L_m), the final optical flow of the pixel is

$d = \sum_{L=0}^{L_m} 2^L d^L$    (8)

where d is the optical flow value of a pixel and d^L its optical flow value at layer L. A usage sketch follows.
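As a usage sketch, OpenCV's calcOpticalFlowPyrLK implements this pyramidal Lucas-Kanade scheme; the window size, feature-detector settings and pyramid height (maxLevel, i.e. L_m = 3) are illustrative choices, not values fixed by the patent:

```python
import cv2
import numpy as np

def sparse_flow(prev_gray, next_gray):
    """Track Shi-Tomasi corners from frame I to frame J; return (points, flow)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.ravel() == 1                       # keep successfully tracked points
    return pts[ok].reshape(-1, 2), (nxt[ok] - pts[ok]).reshape(-1, 2)
```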
After the optical flow vector of each feature pixel in the image has been obtained, according to the motor vehicles and their occupied spatial positions detected by the vehicle detection unit, i.e. the box obtained for each vehicle in the two-dimensional image plane, each box expressed by four data, the top-left position (r, c) and the height and width h, w, the average of the optical flow vectors of all feature points within each box is computed with formula (9):

$\bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i$    (9)

where $\bar{d}$ is the average of the optical flow vectors within a vehicle box, d_i is the optical flow vector of a feature pixel within the box, and n is the number of feature pixels within the box.
After the average $\bar{d}$ of the optical flow vectors within a vehicle box is computed, if the value is smaller than a threshold T, the box is treated as a suspected stationary-vehicle box, and timing then starts for this suspected stationary-vehicle box. The invention expresses a suspected stationary-vehicle box with five data: the top-left position (r, c), the height and width h, w, and the stationary time t_d. During the program loop, if a suspected stationary-vehicle box appears at the same position in two successive frames, its stationary time is accumulated, i.e. t_d ← t_d + t.
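A minimal sketch of this stationary-vehicle decision; the threshold T, the frame period t and the exact-position matching of boxes across frames are simplifying assumptions:

```python
# Average the optical flow inside each detected vehicle box (formula (9));
# boxes whose mean flow magnitude stays below T accumulate stationary time t_d.
import numpy as np

def mean_flow_in_box(points, flows, box):
    """box = (r, c, h, w) tuple, (r, c) the top-left corner in image coordinates."""
    r, c, h, w = box
    x, y = points[:, 0], points[:, 1]
    inside = (x >= c) & (x < c + w) & (y >= r) & (y < r + h)
    if not inside.any():
        return np.inf                     # no feature points: treat as not stationary
    return np.linalg.norm(flows[inside].mean(axis=0))

def update_stationary(boxes, points, flows, timers, T=0.5, frame_dt=0.04):
    """boxes are tuples; timers maps box -> accumulated t_d in seconds."""
    new_timers = {}
    for box in boxes:
        if mean_flow_in_box(points, flows, box) < T:
            new_timers[box] = timers.get(box, 0.0) + frame_dt   # t_d <- t_d + t
    return new_timers
```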
The lane-by-lane stationary-vehicle counting unit is used to count the stationary vehicles on each lane; its processing flow is shown in Fig. 5. The lanes have already been customized in the two-dimensional image by the road and traffic-direction customization module, so the stationary vehicles are counted lane by lane, from lane 1 to lane N. First, on lane 1, moving from near to far, each suspected stationary-vehicle box is checked by examining its stationary time t_d: if t_d ≥ T_s, it is confirmed as a stationary-vehicle box and marked. Then the next lane is checked for stationary vehicles, and so on until the rightmost lane has been examined, thus obtaining the stationary vehicles on all lanes of the whole road; a sketch of this loop follows.
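A hedged sketch of the per-lane counting loop of Fig. 5; the point-in-polygon test via cv2.pointPolygonTest and the threshold T_s are assumptions:

```python
import cv2
import numpy as np

def box_center_in_lane(box, lane_polygon):
    """True if the center of box (r, c, h, w) lies inside the lane polygon."""
    r, c, h, w = box
    center = (c + w / 2.0, r + h / 2.0)
    contour = np.array(lane_polygon, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.pointPolygonTest(contour, center, False) >= 0

def count_stationary_by_lane(lane_polygons, timers, T_s=10.0):
    """lane_polygons ordered left to right (index 0 = lane 1);
    timers maps box -> stationary time t_d. Returns {lane: [confirmed boxes]}."""
    confirmed = {i + 1: [] for i in range(len(lane_polygons))}
    for i, poly in enumerate(lane_polygons):          # lane 1 through lane N
        for box, t_d in timers.items():
            if t_d >= T_s and box_center_in_lane(box, poly):
                confirmed[i + 1].append(box)          # mark as stationary vehicle
    return confirmed
```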
The road congestion computing unit is used to assess the congestion state of the road as a whole; the invention is mainly concerned with the severe-congestion state. Specifically, following the key indicator of severe congestion, localized road and large-area paralysis, in other words the overwhelming majority of vehicles in all lanes of the road being stationary, the invention computes the ratio of the area A_s occupied by stationary vehicles on the road to the sum A_d of the areas of all lanes:

ζ = A_s / A_d    (10)

where A_s is the area occupied by all stationary vehicles on the road, A_d is the sum of the areas of the lanes on the road, and ζ is the duty cycle of stationary vehicles on the road.
When ζ ≥ V_s, i.e. when ζ is greater than or equal to the threshold V_s, the system automatically judges the road to be in a severe-congestion state; here V_s is set to 0.5.
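A minimal sketch of formula (10) and the severe-congestion decision; areas are measured in image-plane pixels, and V_s = 0.5 follows the text above:

```python
import cv2
import numpy as np

def congestion_ratio(stationary_boxes, lane_polygons):
    """Formula (10): zeta = A_s / A_d, both areas in image-plane pixels."""
    a_s = sum(h * w for (r, c, h, w) in stationary_boxes)            # A_s
    a_d = sum(cv2.contourArea(np.array(p, dtype=np.float32).reshape(-1, 1, 2))
              for p in lane_polygons)                                # A_d
    return a_s / a_d if a_d > 0 else 0.0

def is_severe_congestion(stationary_boxes, lane_polygons, V_s=0.5):
    return congestion_ratio(stationary_boxes, lane_polygons) >= V_s  # zeta >= V_s
```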
The beneficial effects of the present invention are mainly: higher detection accuracy, better real-time performance, and simple, clear detection results.
Brief description of the drawings
Fig. 1 is the Faster R-CNN structure diagram;
Fig. 2 is the selective search network (RPN);
Fig. 3 is the Fast R-CNN structure diagram;
Fig. 4 is the road congestion vision detection flow chart;
Fig. 5 is the flow chart of counting stationary vehicles lane by lane.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1 to 5, a traffic congestion vision detection system based on deep convolutional neural networks comprises cameras mounted on urban roads, a traffic cloud server and a road traffic congestion detection system.
The cameras are used to obtain the video data on each road of the city; they are arranged above the roads and transmit the road video image data to the traffic cloud server over a network.
The traffic cloud server is used to receive the road video data obtained from the cameras and pass it to the road traffic congestion detection system for detection and recognition; finally the detection results are stored in the cloud server and published by means of WebGIS to realize traffic control, traffic guidance and rapid response by traffic police on site.
The road traffic congestion detection system comprises a road and traffic-direction customization module, a road congestion detection module and a road congestion status release module.
The road and traffic-direction customization module is used to customize the lanes of the road in the camera's field of view. Specifically, virtual lanes are drawn in the video image according to the lane markings and traffic direction on the real road; the virtual lanes are arranged from left to right, the leftmost lane being named lane 1, the lane adjacent to its right lane 2, ..., up to lane N for the rightmost lane, N being the number of lanes.
The road congestion detection module comprises a Fast R-CNN based vehicle detection unit, an optical-flow stationary-vehicle detection unit, a lane-by-lane stationary-vehicle counting unit and a road congestion computing unit.
The Fast R-CNN based vehicle detection unit is used to detect all vehicles in the video image. Specifically, a deep convolutional neural network rapidly segments the motor vehicles on the road and gives the spatial position each vehicle occupies on the road.
The motor-vehicle segmentation and localization used here consists of two models: one model is the selective search network that generates RoIs; the other is the Fast R-CNN motor-vehicle detection network. The structure of the detection unit is shown in Fig. 1.
The selective search network, i.e. the RPN, takes an image of any scale as input and outputs a set of rectangular target proposal boxes, each containing 4 position coordinates and a score. To generate region proposals, a small network slides over the convolutional feature map output by the last shared convolutional layer; this network is fully connected to an n × n spatial window of the input feature map. Each sliding window is mapped to a low-dimensional vector, one vector per sliding-window position of the feature map, and this vector is fed into two sibling fully connected layers.
At each sliding-window position, k region proposals are predicted simultaneously, so the regression layer has 4k outputs, the encoded coordinates of k bounding boxes. The classification layer outputs 2k scores, the estimated probability of target/non-target for each proposal box; it is implemented as a two-class softmax layer (k scores could also be produced with logistic regression). The k proposal boxes are parameterized by k corresponding reference boxes called anchors. Each anchor is centered at the center of the current sliding window and is associated with a scale and an aspect ratio; with 3 scales and 3 aspect ratios there are k = 9 anchors at each sliding position. For example, for a convolutional feature map of size w × h there are w × h × k anchors in total. The RPN structure is shown in Fig. 2.
To train the RPN, each anchor is assigned a binary label marking whether it is a target. A positive label is assigned to two kinds of anchors: (I) the anchor(s) having the highest IoU (Intersection-over-Union) overlap with some real target bounding box (Ground Truth, GT); (II) anchors having an IoU overlap greater than 0.7 with any GT bounding box. Note that one GT bounding box may assign positive labels to multiple anchors. A negative label is assigned to anchors whose IoU ratio with all GT bounding boxes is below 0.3. Anchors that are neither positive nor negative have no effect on the training objective and are discarded.
With these definitions, the multi-task loss in Fast R-CNN is followed to minimize the objective function. The loss function for one image is defined as:

$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$    (1)

Here, i is the index of an anchor and p_i is the predicted probability that anchor i is a target. If the anchor is positive, the GT label p_i^* is 1; if it is negative, p_i^* is 0. t_i is a vector representing the 4 parameterized coordinates of the predicted bounding box, and t_i^* is the coordinate vector of the GT bounding box corresponding to a positive anchor. λ is a balancing weight, here λ = 10; the normalization N_cls of the cls term is the mini-batch size, here N_cls = 256; the normalization N_reg of the reg term is the number of anchor positions, N_reg = 2,400. The classification loss L_cls is the log loss over the two classes, motor-vehicle target and non-motor-vehicle target:

$L_{cls}(p_i, p_i^*) = -\log[p_i^* p_i + (1 - p_i^*)(1 - p_i)]$    (2)

The regression loss L_reg is defined by the following function:

$L_{reg}(t_i, t_i^*) = R(t_i - t_i^*)$    (3)

where L_reg is the regression loss function and R is the robust loss function, smooth L1, computed with formula (4):

$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$    (4)

where smooth_{L1} is the smooth L1 loss function and x is its variable.
The Fast R-CNN network structure is shown in Fig. 3: an input image passed through the deep convolutional neural network yields a feature map; from the feature map and the RPN the corresponding RoIs are obtained, which finally pass through the RoI pooling layer. This layer has only one level of the spatial "pyramid" pooling process. Its input is N feature maps and R RoIs; the N feature maps come from the last convolutional layer, and each feature map has size w × h × c. Each RoI is a tuple (n, r, c, h, w), where n is the index of the feature map, n ∈ {0, 1, 2, ..., N−1}, (r, c) is the top-left coordinate, and h, w are the height and width respectively. The output is the feature map obtained by max pooling. The layer serves two main purposes: it maps each RoI in the original image to its block in the feature map, and it downsamples the feature map to a fixed size before passing it to the fully connected layers.
The weights of the selective search network and the detection network are shared. The selective search network and Fast R-CNN are trained independently and would modify their convolutional layers in different ways; a technique is therefore needed that allows the two networks to share convolutional layers, rather than learning two separate networks. The invention uses a practical 4-step training algorithm that learns shared features through alternating optimization. In the first step, the RPN is trained as described above, initialized with an ImageNet-pretrained model and fine-tuned end-to-end for the region proposal task. In the second step, a separate detection network is trained by Fast R-CNN using the proposal boxes generated by the step-1 RPN; this detection network is likewise initialized with the ImageNet-pretrained model, and at this point the two networks do not yet share convolutional layers. In the third step, RPN training is initialized with the detection network, but the shared convolutional layers are fixed and only the layers unique to the RPN are fine-tuned; the two networks now share convolutional layers. In the fourth step, keeping the shared convolutional layers fixed, the fc (fully connected) layers of Fast R-CNN are fine-tuned. Thus the two networks share the same convolutional layers and form a unified network.
Considering the multi-scale problem of objects, three simple scales are used for each feature point (anchor) on the feature map, bounding-box areas of 128 × 128, 256 × 256 and 512 × 512, together with three aspect ratios, 1:1, 1:2 and 2:1. With this design, multi-scale features or multi-scale sliding windows are no longer needed in the scheme to predict large regions, which saves a great deal of running time.
Through the processing of the above two networks, the motor vehicles in a video frame are detected and their size and spatial position delimited; that is, the size and spatial location of each vehicle are obtained, where (r, c) is the top-left coordinate of the vehicle in the image and h, w are the projected height and width of the vehicle in the image plane respectively. It then remains to judge whether these motor vehicles are stationary.
The optical-flow stationary-vehicle detection unit is used to judge whether vehicles on the road are stationary. When the vehicles in the road scene move relative to the two-dimensional image plane, their projections onto the image plane form motion; the flow of this motion expressed as image-plane brightness patterns is called optical flow. The optical flow method is an important method for analyzing motion image sequences, and the optical flow contains the motion information of the vehicle object targets in the image.
The present invention uses a sparse iterative Lucas-Kanade optical flow method based on a pyramid model. Suppose image I has size n_x × n_y. Define I^0 as the 0th layer image, the highest-resolution image, i.e. the original image, whose width and height are n_x^0 = n_x and n_y^0 = n_y. The pyramid representation is then described recursively: I^L (L = 1, 2, ...) is computed from I^{L−1}, where I^{L−1} is the image of pyramid layer L−1 and I^L the image of pyramid layer L. Suppose image I^{L−1} has width n_x^{L−1} and height n_y^{L−1}; then image I^L can be expressed as in formula (5). To simplify the formula, the boundary values of image I^{L−1} are defined by replicating the nearest border pixel, as above.
The points defined by formula (5) must satisfy the condition 0 ≤ 2x ≤ n_x^{L−1} − 1 and 0 ≤ 2y ≤ n_y^{L−1} − 1; therefore the width n_x^L and height n_y^L of image I^L need to satisfy formula (6).
The pyramid model {I^L}, L = 0, ..., L_m of image I is built by formulas (5) and (6). L_m is the height of the pyramid model and takes the value 2, 3 or 4; for ordinary images, L_m > 4 is meaningless. Taking a 640 × 480 image as an example, layers 1, 2, 3 and 4 of its pyramid model have sizes 320 × 240, 160 × 120, 80 × 60 and 40 × 30 respectively.
In the pyramid-based LK optical flow computation, the match of a feature point is first searched in the top layer of the image pyramid model; the result computed at layer k then serves as the initial estimate for searching the match point at layer k−1, and so on, iterating down to layer 0 of the image pyramid model, whereby the optical flow of the feature point is computed.
The detection target of the optical flow method is: in two successive frames I and J, for some pixel u of image I, find its match point v = u + d in image J, i.e. find its offset vector d, computed with formula (7):

$v = u + d = [u_x + d_x, \; u_y + d_y]^T$    (7)

where u is a pixel in image I, v is the matched pixel in image J, and d is the offset vector between the two.
First, the pyramid models {I^L}, L = 0, ..., L_m and {J^L}, L = 0, ..., L_m of images I and J are built; then the position u^L of pixel u in each pyramid layer of image I is computed for L = 0, ..., L_m; then, with a search window, the match point v^{L_m} of u^{L_m} is computed in the top layer image J^{L_m} of image J's pyramid model, and the offset vector d^{L_m} is computed.
Next, the pyramid-based optical flow method is described iteratively. Suppose the offset vector d^{L+1} of pyramid layer L+1 is known; then 2d^{L+1} is taken as the initial value at layer L, near which the match point v^L of layer L is searched, which in turn yields the offset vector d^L of layer L.
After iteratively computing the offset vector d^L of each layer (L = 0, ..., L_m), the final optical flow of the pixel is given by formula (8), where d is the optical flow value of a pixel and d^L its optical flow value at layer L.
After the optical flow vector of each feature pixel in the image has been obtained, according to the motor vehicles and their occupied spatial positions detected by the vehicle detection unit, i.e. the box obtained for each vehicle in the two-dimensional image plane, each box expressed by four data, the top-left position (r, c) and the height and width h, w, the average of the optical flow vectors of all feature points within each box is computed with formula (9), where $\bar{d}$ is the average of the optical flow vectors within a vehicle box, d_i is the optical flow vector of a feature pixel within the box, and n is the number of feature pixels within the box.
After the average $\bar{d}$ of the optical flow vectors within a vehicle box is computed, if the value is smaller than a threshold T, the box is treated as a suspected stationary-vehicle box, and timing then starts for this suspected stationary-vehicle box. The invention expresses a suspected stationary-vehicle box with five data: the top-left position (r, c), the height and width h, w, and the stationary time t_d. During the program loop, if a suspected stationary-vehicle box appears at the same position in two successive frames, its stationary time is accumulated, i.e. t_d ← t_d + t.
The lane-by-lane stationary-vehicle counting unit is used to count the stationary vehicles on each lane; its processing flow is shown in Fig. 5. The lanes have already been customized in the two-dimensional image by the road and traffic-direction customization module, so the stationary vehicles are counted lane by lane, from lane 1 to lane N. First, on lane 1, moving from near to far, each suspected stationary-vehicle box is checked by examining its stationary time t_d: if t_d ≥ T_s, it is confirmed as a stationary-vehicle box and marked. Then the next lane is checked for stationary vehicles, and so on until the rightmost lane has been examined, thus obtaining the stationary vehicles on all lanes of the whole road.
The road congestion computing unit is used to assess the congestion state of the road as a whole; the invention is mainly concerned with the severe-congestion state. Specifically, following the key indicator of severe congestion, localized road and large-area paralysis, in other words the overwhelming majority of vehicles in all lanes of the road being stationary, the ratio of the area A_s occupied by stationary vehicles on the road to the sum A_d of the areas of all lanes is computed:

ζ = A_s / A_d    (10)

where A_s is the area occupied by all stationary vehicles on the road, A_d is the sum of the areas of the lanes on the road, and ζ is the duty cycle of stationary vehicles on the road.
When ζ ≥ V_s, i.e. when ζ is greater than or equal to the threshold V_s, the system automatically judges the road to be in a severe-congestion state; here V_s is set to 0.5.
The road congestion status release module is used to publish on WebGIS the sections of urban road in a severe-congestion state, so that the traffic police can quickly take traffic-signal control measures to relieve the congestion and traffic participants can avoid congested sections according to the published information, realizing traffic guidance.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the invention shall be included within the scope of protection of the invention.

Claims (6)

  1. A traffic congestion vision detection system based on deep convolutional neural networks, characterized in that it comprises cameras mounted on urban roads, a traffic cloud server and a road traffic congestion detection system;
    the cameras are used to obtain the video data on each road of the city; they are arranged above the roads and transmit the road video image data to the traffic cloud server over a network;
    the traffic cloud server is used to receive the road video data obtained from the cameras and pass it to the road traffic congestion detection system for detection and recognition; finally the detection results are stored in the cloud server and published by means of WebGIS to realize traffic control, traffic guidance and rapid response by traffic police on site;
    the road traffic congestion detection system comprises a road and traffic-direction customization module, a road congestion detection module and a road congestion status release module;
    the road and traffic-direction customization module is used to customize the lanes of the road in the camera's field of view: virtual lanes are drawn in the video image according to the lane markings and traffic direction on the real road and are arranged from left to right, the leftmost lane being named lane 1, the lane adjacent to its right lane 2, ..., up to lane N for the rightmost lane, N being the number of lanes;
    the road congestion detection module comprises a Fast R-CNN based vehicle detection unit, an optical-flow stationary-vehicle detection unit, a lane-by-lane stationary-vehicle counting unit and a road congestion computing unit.
  2. The traffic congestion vision detection system based on deep convolutional neural networks as claimed in claim 1, characterized in that: the Fast R-CNN based vehicle detection unit is used to detect all vehicles in the video image; specifically, a deep convolutional neural network rapidly segments the motor vehicles on the road and gives the spatial position each vehicle occupies on the road;
    the motor-vehicle segmentation and localization used consists of two models: one model is the selective search network that generates RoIs; the other model is the Fast R-CNN motor-vehicle detection network;
    the selective search network, i.e. the RPN, takes an image of any scale as input and outputs a set of rectangular target proposal boxes, each box containing 4 position coordinates and a score; the targets of the target proposal boxes are motor-vehicle objects;
    the estimated probability of target/non-target for each proposal box is produced by a classification layer implemented as a two-class softmax layer; the k proposal boxes are parameterized by k corresponding reference boxes called anchors;
    each anchor is centered at the center of the current sliding window and is associated with a scale and an aspect ratio; with 3 scales and 3 aspect ratios there are k = 9 anchors at each sliding position;
    to train the RPN, each anchor is assigned a binary label marking whether it is a target; a positive label is then assigned to two kinds of anchors: (I) the anchor(s) having the highest IoU (Intersection-over-Union) overlap with some real target bounding box (Ground Truth, GT); (II) anchors having an IoU overlap greater than 0.7 with any GT bounding box; note that one GT bounding box may assign positive labels to multiple anchors; a negative label is assigned to anchors whose IoU ratio with all GT bounding boxes is below 0.3; anchors that are neither positive nor negative have no effect on the training objective and are discarded;
    with these definitions, the multi-task loss in Fast R-CNN is followed to minimize the objective function; the loss function for one image is defined as:
    $L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$    (1)
    here, i is the index of an anchor and p_i is the predicted probability that anchor i is a target; if the anchor is positive, the GT label p_i^* is 1, and if it is negative, p_i^* is 0; t_i is a vector representing the 4 parameterized coordinates of the predicted bounding box, and t_i^* is the coordinate vector of the GT bounding box corresponding to a positive anchor; λ is a balancing weight; the normalization N_cls of the cls term is the mini-batch size, and the normalization N_reg of the reg term is the number of anchor positions; the classification loss L_cls is the log loss over the two classes, motor-vehicle target and non-motor-vehicle target:
    $L_{cls}(p_i, p_i^*) = -\log[p_i^* p_i + (1 - p_i^*)(1 - p_i)]$    (2)
    where L_cls is the classification loss function, p_i is the predicted probability that anchor i is a target, and p_i^* is the GT label of the corresponding real target bounding box;
    the regression loss L_reg is defined by the following function:
    $L_{reg}(t_i, t_i^*) = R(t_i - t_i^*)$    (3)
    where L_reg is the regression loss function and R is the robust loss function, smooth L1, computed with formula (4):
    $\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$    (4)
    In formula, smoothL1For smooth L1Loss function, x are variable;
    In the Fast R-CNN network, the input image yields a feature map after passing through the deep convolutional neural network; from this feature map and the RPN network the corresponding RoIs are obtained, and these finally pass through the RoI pooling layer. An RoI, i.e. a region of interest, here refers to a region containing a motor vehicle;
    For the Fast R-CNN network, the input is N feature maps and R RoIs; the N feature maps come from the last convolutional layer, and each feature map has size w × h × c;
    Each RoI is a tuple (n, r, c, h, w), where n is the index of the feature map, n ∈ {0, 1, 2, ..., N-1}, r, c are the top-left coordinates, and h, w are respectively the height and width;
    The output is the feature map obtained by max pooling; the RoI in the original image is mapped to a block in the feature map, the feature map is down-sampled to a fixed size, and the result is then passed to the fully connected layers (see the pooling sketch below).
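    A minimal sketch of the RoI max pooling just described, assuming the RoI (r, c, h, w) has already been mapped into feature-map coordinates and assuming a fixed 7 × 7 output grid (the claim fixes neither value):

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_h=7, out_w=7):
    """Max-pool one RoI of a (H, W, C) feature map down to a fixed
    (out_h, out_w, C) grid, as input to the fully connected layers."""
    r, c, h, w = roi
    region = feature_map[r:r + h, c:c + w, :]
    out = np.zeros((out_h, out_w, region.shape[2]))
    # bin edges splitting the region into an out_h x out_w grid
    ys = np.linspace(0, region.shape[0], out_h + 1).astype(int)
    xs = np.linspace(0, region.shape[1], out_w + 1).astype(int)
    for i in range(out_h):
        for j in range(out_w):
            y0, y1 = ys[i], max(ys[i + 1], ys[i] + 1)  # keep bins non-empty
            x0, x1 = xs[j], max(xs[j + 1], xs[j] + 1)
            out[i, j] = region[y0:y1, x0:x1, :].max(axis=(0, 1))
    return out
```

The fixed output grid is what lets RoIs of arbitrary size feed fully connected layers of fixed input dimension.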
  3. The traffic congestion vision detection system based on depth convolutional neural networks as claimed in claim 2, characterized in that: the region proposal network and Fast R-CNN are trained independently, and a 4-step training algorithm learns shared features by alternating optimization. In the first step, the RPN is trained as described above, initialized from an ImageNet pre-trained model and fine-tuned end-to-end for the region proposal task. In the second step, a separate detection network is trained by Fast R-CNN using the proposal boxes generated by the RPN of the first step; this detection network is likewise initialized from an ImageNet pre-trained model, and at this point the two networks do not yet share convolutional layers. In the third step, RPN training is initialized from the detection network, but the shared convolutional layers are fixed and only the layers exclusive to the RPN are fine-tuned; the two networks now share convolutional layers. In the fourth step, with the shared convolutional layers kept fixed, the fc, i.e. fully connected, layers of Fast R-CNN are fine-tuned. In this way the two networks share the same convolutional layers and form a unified network (see the schedule sketch below).
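    The following schematic sketch shows only how the weights flow through the 4-step schedule; the Net class and the train_* helpers are hypothetical stand-ins for the actual training routines, which are not given in the claim:

```python
class Net:
    """Hypothetical container: shared conv layers plus a task head."""
    def __init__(self, conv, head):
        self.conv, self.head = conv, head

def train_rpn(conv_init, freeze_shared=False):
    return Net(conv_init, "rpn-head")        # real fine-tuning omitted

def train_fast_rcnn(conv_init, proposals, tune_only_fc=False):
    return Net(conv_init, "detector-head")   # real fine-tuning omitted

imagenet_conv = "ImageNet-pretrained-conv-weights"

rpn = train_rpn(imagenet_conv)                               # step 1
det = train_fast_rcnn(imagenet_conv, proposals="rpn-boxes")  # step 2: no shared layers yet
rpn = train_rpn(det.conv, freeze_shared=True)                # step 3: adopt detector conv,
                                                             #         tune RPN-only layers
det = train_fast_rcnn(rpn.conv, proposals="rpn-boxes",
                      tune_only_fc=True)                     # step 4: fine-tune fc layers only
assert rpn.conv is det.conv  # one unified network sharing the conv layers
```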
    Through the processing of the above two networks, the motor vehicles in a video frame are detected, and their sizes and spatial positions are boxed; that is, the size and spatial position of each vehicle are obtained, where r, c are the top-left coordinates of the vehicle in the image and h, w are respectively the height and width of the vehicle's projection onto the image plane. It then remains to judge whether these motor vehicles are stationary.
  4. The traffic congestion vision detection system based on depth convolutional neural networks as claimed in one of claims 1 to 3, characterized in that: the optical-flow stationary-vehicle detection unit is used to judge whether vehicles on the road are stationary. When the vehicles in the road scene move relative to the two-dimensional image plane, their projections onto the image plane also move; the flow that this motion produces in the image-plane brightness pattern is called optical flow, and the optical flow carries the motion information of the vehicle targets in the image;
    The sparse iterative Lucas-Kanade optical-flow method based on a pyramid model is used. Suppose image I has size $n_x \times n_y$, and define $I^0$ to be layer 0, the highest-resolution layer, i.e. the original image, whose width and height are $n_x^0 = n_x$ and $n_y^0 = n_y$. The pyramid representation is then described recursively: $I^L$ is computed from $I^{L-1}$ for L = 1, 2, ..., where $I^{L-1}$ denotes the image at layer L-1 and $I^L$ the image at layer L. Supposing image $I^{L-1}$ has width $n_x^{L-1}$ and height $n_y^{L-1}$, image $I^L$ is expressed as
    $I^L(x, y) = \frac{1}{4} I^{L-1}(2x, 2y) + \frac{1}{8}\big(I^{L-1}(2x-1, 2y) + I^{L-1}(2x+1, 2y) + I^{L-1}(2x, 2y-1) + I^{L-1}(2x, 2y+1)\big) + \frac{1}{16}\big(I^{L-1}(2x-1, 2y-1) + I^{L-1}(2x+1, 2y+1) + I^{L-1}(2x-1, 2y+1) + I^{L-1}(2x+1, 2y-1)\big)$    (5)
    The values of image $I^{L-1}$ at its boundary points are defined as follows:
    $I^{L-1}(-1, y) \doteq I^{L-1}(0, y)$
    $I^{L-1}(x, -1) \doteq I^{L-1}(x, 0)$
    $I^{L-1}(n_x^{L-1}, y) \doteq I^{L-1}(n_x^{L-1} - 1, y)$
    $I^{L-1}(x, n_y^{L-1}) \doteq I^{L-1}(x, n_y^{L-1} - 1)$
    $I^{L-1}(n_x^{L-1}, n_y^{L-1}) \doteq I^{L-1}(n_x^{L-1} - 1, n_y^{L-1} - 1)$
    The points referenced by formula (5) must lie within image $I^{L-1}$; therefore the width $n_x^L$ and height $n_y^L$ of image $I^L$ need to satisfy formula (6):
    $n_x^L \le \dfrac{n_x^{L-1} + 1}{2}, \qquad n_y^L \le \dfrac{n_y^{L-1} + 1}{2}$    (6)
    The pyramid model $\{I^L\}_{L = 0, \dots, L_m}$ of image I is built from formulas (5) and (6), where $L_m$ is the height of the pyramid model;
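    A direct NumPy sketch of one pyramid level built per formula (5), with the image borders replicated as in the boundary definitions above; the function names are illustrative:

```python
import numpy as np

def pyramid_down(img):
    """One level of formula (5): weighted average of the neighborhood
    around (2x, 2y) in I^{L-1}, borders replicated per the definitions."""
    h, w = img.shape
    nh, nw = (h + 1) // 2, (w + 1) // 2      # satisfies formula (6)
    p = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros((nh, nw))
    for y in range(nh):
        for x in range(nw):
            cy, cx = 2 * y + 1, 2 * x + 1    # (2x, 2y) in padded coordinates
            out[y, x] = (p[cy, cx] / 4.0
                         + (p[cy, cx - 1] + p[cy, cx + 1]
                            + p[cy - 1, cx] + p[cy + 1, cx]) / 8.0
                         + (p[cy - 1, cx - 1] + p[cy + 1, cx + 1]
                            + p[cy - 1, cx + 1] + p[cy + 1, cx - 1]) / 16.0)
    return out

def build_pyramid(img, lm):
    """Pyramid model {I^L}, L = 0, ..., Lm."""
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(lm):
        pyr.append(pyramid_down(pyr[-1]))
    return pyr
```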
    The LK optical-flow computation based on the pyramid model first searches for the matching point of a feature point at the top layer of the image pyramid model; the result computed at layer k then serves as the initial estimate for the matching-point search at layer k-1, and this repeats until layer 0 of the image pyramid model is reached, at which point the optical flow of the feature point has been computed;
    The detection goal of the optical-flow method is: given two successive frames I and J, for a pixel u in image I, find its matching point v = u + d in image J, or equivalently find its offset vector d, computed with formula (7):
    $v = u + d = \left[\, u_x + d_x \quad u_y + d_y \,\right]^T$    (7)
    where u is a pixel in image I, v is the matching pixel in image J, and d is the offset vector between the two;
    First, the pyramid models $\{I^L\}_{L=0,\dots,L_m}$ and $\{J^L\}_{L=0,\dots,L_m}$ of images I and J are built; then the position $u^L$ of pixel u at each pyramid layer of image I is computed for L = 0, ..., $L_m$; then the matching point $v^{L_m}$ of $u^{L_m}$ is computed within a search window in the top-layer image $J^{L_m}$ of image J's pyramid model, and the offset vector $d^{L_m}$ is obtained;
    Next, the pyramid-based optical-flow method is described iteratively: assuming the offset vector $d^{L+1}$ of layer L+1 of the pyramid model is known, $2 d^{L+1}$ is used as the initial value at layer L to search nearby for the matching point $v^L$ of layer L, which in turn yields the offset vector $d^L$ of layer L;
    After the offset vector $d^L$ of each layer has been computed iteratively (L = 0, ..., $L_m$), the final optical flow of the pixel is
    $d = \sum_{L=0}^{L_m} 2^L d^L$    (8)
    where d is the optical-flow value of a given pixel and $d^L$ is its optical-flow value at layer L;
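    In practice the pyramidal LK computation just described is what OpenCV's calcOpticalFlowPyrLK implements; a sketch of its use, with hypothetical frame file names and an assumed pyramid height $L_m$ = 3, might read:

```python
import cv2

# Two consecutive grayscale frames I and J from the roadside camera
# (file names are hypothetical placeholders).
prev_gray = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
cur_gray = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Feature pixels u in image I (corner points are easiest to match).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)

# Pyramidal LK: maxLevel plays the role of Lm, winSize the search window.
nxt, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, cur_gray, pts, None, winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

good_u = pts[status.ravel() == 1].reshape(-1, 2)   # u
good_v = nxt[status.ravel() == 1].reshape(-1, 2)   # v = u + d, formula (7)
flow_d = good_v - good_u                           # offset vectors d
```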
    After the optical-flow vector of every feature pixel in the image has been obtained, it is combined with the motor vehicles detected on the road by the vehicle detection unit and the spatial positions they occupy, i.e. the bounding box of each vehicle in the two-dimensional image plane, each box represented by four values: the top-left position r, c and the height and width h, w. The average of the optical-flow vectors of all feature points within each box is then computed with formula (9):
    $\bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i$    (9)
    where $\bar{d}$ is the average optical-flow vector within a given vehicle box, $d_i$ is the optical-flow vector of a feature pixel within that box, and n is the number of feature pixels within the box;
    After the average optical-flow vector $\bar{d}$ of a vehicle box has been computed, if the value is below a threshold T, the box is treated as a suspected stationary-vehicle box, and timing of that box begins. A suspected stationary-vehicle box is represented here by five values: the top-left position r, c, the height and width h, w, and the stationary time $t_d$. During the program's processing loop, if a suspected stationary-vehicle box appears at the same position in two successive frames, its stationary time is accumulated, i.e. $t_d \leftarrow t_d + t$ (see the bookkeeping sketch below).
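    A minimal bookkeeping sketch for the suspected-box timing; the threshold T, the frame interval t, and the matching of "same position" by exact box equality are all simplifying assumptions (a real system would match boxes by overlap):

```python
import numpy as np

T_FLOW = 0.5     # assumed flow-magnitude threshold T (pixels per frame)
FRAME_T = 0.04   # assumed inter-frame time t in seconds

suspected = {}   # (r, c, h, w) -> accumulated stationary time td

def update_suspects(boxes, mean_flows):
    """boxes: list of (r, c, h, w); mean_flows: the average flow vector
    d-bar of formula (9) for each box, in the same order."""
    still = set()
    for box, d_bar in zip(boxes, mean_flows):
        if np.linalg.norm(d_bar) < T_FLOW:       # d-bar below threshold T
            suspected[box] = suspected.get(box, 0.0) + FRAME_T  # td <- td + t
            still.add(box)
    for box in list(suspected):                  # box moved again:
        if box not in still:                     # stop timing it
            del suspected[box]
```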
  5. The traffic congestion vision detection system based on depth convolutional neural networks as claimed in one of claims 1 to 3, characterized in that: the lane-by-lane stationary-vehicle counting unit is used to count the stationary vehicles in each lane. The lanes in the two-dimensional image have been defined in the road and traffic-direction customization module, and the stationary vehicles are counted lane by lane, from the first lane to the N-th lane. Starting from the first lane, vehicles are checked from near to far for being stationary; the test is to examine the stationary time $t_d$ of each suspected stationary-vehicle box, and if $t_d \ge T_s$ the box is confirmed as a stationary-vehicle box and marked. The next lane is then checked for stationary vehicles, until the rightmost lane has been processed; this yields the stationary vehicles in all lanes of the whole road (see the sketch below).
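    A minimal sketch of the lane-by-lane confirmation; the lane_of mapping (from the road and traffic-direction customization module) and the value of $T_s$ are assumptions here, and near-to-far ordering is approximated by the image row of each box bottom:

```python
T_S = 10.0   # assumed confirmation threshold Ts in seconds

def stationary_by_lane(suspected, lane_of, n_lanes):
    """suspected: {(r, c, h, w): td}; lane_of(box) -> lane index 0..n_lanes-1."""
    confirmed = {lane: [] for lane in range(n_lanes)}
    for box, td in suspected.items():
        if td >= T_S:                     # td >= Ts: confirmed stationary, mark it
            confirmed[lane_of(box)].append(box)
    for lane in confirmed:                # near (bottom of image) to far
        confirmed[lane].sort(key=lambda b: -(b[0] + b[2]))
    return confirmed
```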
  6. The traffic congestion vision detection system based on depth convolutional neural networks as claimed in one of claims 1 to 3, characterized in that: the road congestion computing unit is used to grasp the congestion status of the road as a whole, with attention here primarily on the heavy-congestion state. The specific practice is: according to the key indicator of the heavy-congestion state, namely localized or large-area paralysis of the road, the overwhelming majority of vehicles in all lanes of the road are stationary; the ratio of the area $A_s$ occupied by the stationary vehicles on the road to the summed area $A_d$ of all lanes is therefore computed:
    $\zeta = A_s / A_d$    (10)
    where $A_s$ is the area occupied by all stationary vehicles on the road, $A_d$ is the summed area of all lanes on the road, and ζ is the occupancy ratio of stationary vehicles on the road;
    When $\zeta \ge V_s$, i.e. when ζ is greater than or equal to the threshold, the system automatically judges the road to be in the heavy-congestion state; the threshold is set here to 0.5 (see the sketch below).
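    A minimal sketch of the decision in formula (10), using the threshold $V_s$ = 0.5 stated in the claim; the stationary boxes and lane areas are assumed to come from the preceding units:

```python
V_S = 0.5   # heavy-congestion threshold Vs from the claim

def heavy_congestion(stationary_boxes, lane_areas):
    """stationary_boxes: confirmed (r, c, h, w) boxes on the road;
    lane_areas: per-lane areas in the same pixel units.
    Returns (zeta, is_heavy) per formula (10)."""
    a_s = sum(h * w for (_r, _c, h, w) in stationary_boxes)  # As
    a_d = sum(lane_areas)                                    # Ad
    zeta = a_s / a_d if a_d else 0.0
    return zeta, zeta >= V_S
```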
CN201710440987.2A 2017-06-13 2017-06-13 Traffic congestion vision detection system based on depth convolutional neural networks Pending CN107730881A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710440987.2A CN107730881A (en) 2017-06-13 2017-06-13 Traffic congestion vision detection system based on depth convolutional neural networks

Publications (1)

Publication Number Publication Date
CN107730881A true CN107730881A (en) 2018-02-23

Family

ID=61201673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710440987.2A Pending CN107730881A (en) 2017-06-13 2017-06-13 Traffic congestion vision detection system based on depth convolutional neural networks

Country Status (1)

Country Link
CN (1) CN107730881A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710448A (en) * 2009-12-29 2010-05-19 浙江工业大学 Road traffic state detecting device based on omnibearing computer vision
CN103985250A (en) * 2014-04-04 2014-08-13 浙江工业大学 Light-weight holographic road traffic state visual inspection device
CN106023605A * 2016-07-15 2016-10-12 汤平 Traffic signal lamp control method based on deep convolution neural network
CN106250812A * 2016-07-15 2016-12-21 汤平 A kind of model recognizing method based on Fast R-CNN deep neural network
CN106599939A (en) * 2016-12-30 2017-04-26 深圳市唯特视科技有限公司 Real-time target detection method based on region convolutional neural network

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629279A (en) * 2018-03-27 2018-10-09 哈尔滨理工大学 A method of the vehicle target detection based on convolutional neural networks
CN108717707A (en) * 2018-04-10 2018-10-30 杭州依图医疗技术有限公司 A kind of tubercle matching process and device
CN108898047B (en) * 2018-04-27 2021-03-19 中国科学院自动化研究所 Pedestrian detection method and system based on blocking and shielding perception
CN108898047A (en) * 2018-04-27 2018-11-27 中国科学院自动化研究所 The pedestrian detection method and system of perception are blocked based on piecemeal
CN108615358A (en) * 2018-05-02 2018-10-02 安徽大学 A kind of congestion in road detection method and device
CN110493488A (en) * 2018-05-15 2019-11-22 株式会社理光 Video image stabilization method, Video Stabilization device and computer readable storage medium
CN110493488B (en) * 2018-05-15 2021-11-26 株式会社理光 Video image stabilization method, video image stabilization device and computer readable storage medium
US11748894B2 (en) 2018-05-15 2023-09-05 Ricoh Company, Ltd. Video stabilization method and apparatus and non-transitory computer-readable medium
CN108847023A (en) * 2018-06-13 2018-11-20 新华网股份有限公司 Push the method, apparatus and terminal device of warning information
CN108564791A (en) * 2018-06-13 2018-09-21 新华网股份有限公司 Information processing method, device and computing device
CN108682154A (en) * 2018-06-19 2018-10-19 上海理工大学 Congestion in road detecting system based on the analysis of wagon flow state change deep learning
CN108682154B (en) * 2018-06-19 2021-03-16 上海理工大学 Road congestion detection system based on deep learning analysis of traffic flow state change
CN108628318A (en) * 2018-06-28 2018-10-09 广州视源电子科技股份有限公司 Congestion environment detection method and device, robot and storage medium
CN109086528B (en) * 2018-08-06 2023-04-28 北京市市政工程设计研究总院有限公司 Method for automatically naming netty roads in civil3d according to ordering rule
CN109086528A (en) * 2018-08-06 2018-12-25 北京市市政工程设计研究总院有限公司 Name the method for netted road automatically by ordering rule in civil3d
CN109063673A (en) * 2018-08-21 2018-12-21 北京深瞐科技有限公司 Condition of road surface determines method, apparatus, system and computer-readable medium
CN110909588B (en) * 2018-09-15 2023-08-22 斯特拉德视觉公司 CNN-based method and device for lane line detection
CN110909588A (en) * 2018-09-15 2020-03-24 斯特拉德视觉公司 Method and device for lane line detection based on CNN
CN109147331A (en) * 2018-10-11 2019-01-04 青岛大学 A kind of congestion in road condition detection method based on computer vision
CN109147331B (en) * 2018-10-11 2021-07-27 青岛大学 Road congestion state detection method based on computer vision
CN109635744A (en) * 2018-12-13 2019-04-16 合肥工业大学 A kind of method for detecting lane lines based on depth segmentation network
CN109635744B (en) * 2018-12-13 2020-04-14 合肥工业大学 Lane line detection method based on deep segmentation network
CN110287905B (en) * 2019-06-27 2021-08-03 浙江工业大学 Deep learning-based real-time traffic jam area detection method
CN110287905A (en) * 2019-06-27 2019-09-27 浙江工业大学 A kind of traffic congestion region real-time detection method based on deep learning
CN110322716A (en) * 2019-06-27 2019-10-11 合肥革绿信息科技有限公司 A kind of high speed congestion method of river diversion and system based on real-time map
CN110322716B (en) * 2019-06-27 2021-08-24 合肥革绿信息科技有限公司 High-speed congestion diversion method and system based on real-time map
CN110427899A (en) * 2019-08-07 2019-11-08 网易(杭州)网络有限公司 Video estimation method and device, medium, electronic equipment based on face segmentation
CN111369807A (en) * 2020-03-24 2020-07-03 北京百度网讯科技有限公司 Traffic accident detection method, device, equipment and medium
CN112241806A (en) * 2020-07-31 2021-01-19 深圳市综合交通运行指挥中心 Road damage probability prediction method, device terminal equipment and readable storage medium
CN112132871A (en) * 2020-08-05 2020-12-25 天津(滨海)人工智能军民融合创新中心 Visual feature point tracking method and device based on feature optical flow information, storage medium and terminal
CN112132871B (en) * 2020-08-05 2022-12-06 天津(滨海)人工智能军民融合创新中心 Visual feature point tracking method and device based on feature optical flow information, storage medium and terminal
CN112329515A (en) * 2020-09-11 2021-02-05 博云视觉(北京)科技有限公司 High-point video monitoring congestion event detection method
CN112329515B (en) * 2020-09-11 2024-03-29 博云视觉(北京)科技有限公司 High-point video monitoring congestion event detection method
CN112991719A (en) * 2021-01-28 2021-06-18 北京奥泽尔科技发展有限公司 Traffic congestion prediction method and system based on congestion portrait
CN112818935B (en) * 2021-03-02 2022-08-12 南京邮电大学 Multi-lane congestion detection and duration prediction method and system based on deep learning
CN112818935A (en) * 2021-03-02 2021-05-18 南京邮电大学 Deep learning-based multi-lane congestion detection and duration prediction method and system
CN113095159A (en) * 2021-03-23 2021-07-09 陕西师范大学 Urban road traffic condition analysis method based on CNN
CN113076893A (en) * 2021-04-09 2021-07-06 太原理工大学 Highway drain pipe blocking situation sensing method based on deep learning
CN113111822A (en) * 2021-04-22 2021-07-13 深圳集智数字科技有限公司 Video processing method and device for congestion identification and electronic equipment
CN113111822B (en) * 2021-04-22 2024-02-09 深圳集智数字科技有限公司 Video processing method and device for congestion identification and electronic equipment
CN113762135A (en) * 2021-09-02 2021-12-07 中远海运科技股份有限公司 Video-based traffic jam detection method and device
CN117409381A (en) * 2023-12-14 2024-01-16 杭州像素元科技有限公司 Expressway toll station congestion detection model and method based on scene image segmentation
CN117409381B (en) * 2023-12-14 2024-03-08 杭州像素元科技有限公司 Expressway toll station congestion detection model and method based on scene image segmentation

Similar Documents

Publication Publication Date Title
CN107730881A (en) Traffic congestion vision detection system based on depth convolutional neural networks
CN107730904A (en) Multitask vehicle driving in reverse vision detection system based on depth convolutional neural networks
CN107730903A (en) Parking offense and the car vision detection system that casts anchor based on depth convolutional neural networks
CN107730906A (en) Zebra stripes vehicle does not give precedence to the vision detection system of pedestrian behavior
CN108710875B (en) A kind of take photo by plane road vehicle method of counting and device based on deep learning
CN106875424B (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN106023605B (en) A kind of method for controlling traffic signal lights based on depth convolutional neural networks
CN103366602B (en) Method of determining parking lot occupancy from digital camera images
CN103390164B (en) Method for checking object based on depth image and its realize device
He et al. Obstacle detection of rail transit based on deep learning
Zhang et al. A traffic surveillance system for obtaining comprehensive information of the passing vehicles based on instance segmentation
CN106228125B (en) Method for detecting lane lines based on integrated study cascade classifier
CN105513354A (en) Video-based urban road traffic jam detecting system
CN107729799A Crowd's abnormal behaviour vision-based detection and analyzing and alarming system based on depth convolutional neural networks
CN103198302B (en) A kind of Approach for road detection based on bimodal data fusion
CN107967451A A kind of method for carrying out crowd's counting to static image using multiple dimensioned multitask convolutional neural networks
CN108550259A (en) Congestion in road judgment method, terminal device and computer readable storage medium
CN107576960A (en) The object detection method and system of vision radar Spatial-temporal Information Fusion
CN110197215A (en) A kind of ground perception point cloud semantic segmentation method of autonomous driving
de Silva et al. Automated rip current detection with region based convolutional neural networks
CN107134144A (en) A kind of vehicle checking method for traffic monitoring
CN106326893A (en) Vehicle color recognition method based on area discrimination
CN104361351B (en) A kind of diameter radar image sorting technique based on range statistics similarity
CN101976461A (en) Novel outdoor augmented reality label-free tracking registration algorithm
CN108257154A (en) Polarimetric SAR Image change detecting method based on area information and CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180223)