CN104778727A - Floating car counting method based on video monitoring processing technology - Google Patents


Info

Publication number
CN104778727A
Authority
CN
China
Prior art keywords
image
vehicle
floating car
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510218281.2A
Other languages
Chinese (zh)
Inventor
林华根
王传根
陈岩
施康
芮绍军
邱换春
余斌
徐革新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ANHUI CHAOYUAN INFORMATION TECHNOLOGY Co Ltd
Original Assignee
ANHUI CHAOYUAN INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ANHUI CHAOYUAN INFORMATION TECHNOLOGY Co Ltd filed Critical ANHUI CHAOYUAN INFORMATION TECHNOLOGY Co Ltd
Priority to CN201510218281.2A priority Critical patent/CN104778727A/en
Publication of CN104778727A publication Critical patent/CN104778727A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a floating car counting method based on video monitoring processing technology, and overcomes the defect that floating cars cannot be counted comprehensively with video image techniques in the prior art. The method comprises the following steps: preprocessing, in which a virtual coil and detection lines are drawn for the monitored lane and a manual count is taken under the vehicle-saturated state; building an initial background image BG with a frame-difference method; monitoring vehicles and forming the binary vehicle image Object from the difference image DI; obtaining the vehicle image inside the virtual coil; judging the vehicle state in the virtual coil, where the ratio of the intersection image to each detection line decides whether vehicles are present and whether the coil is in the saturated or unsaturated state; and counting floating cars, where the counts for the unsaturated state and the saturated state are computed separately and combined into the final result. The method obtains the traffic flow not only in the unsaturated state but also in the saturated state, so the counting result is more accurate.

Description

Floating car counting method based on video monitoring processing technology
Technical field
The present invention relates to a floating car counting method, and in particular to a floating car counting method based on video monitoring processing technology.
Background art
With the rapid development of China's economy and society, the number of motor vehicles keeps rising, which aggravates road congestion, and traffic accidents occur frequently. Remote road-monitoring systems are used in traffic management and greatly reduce the manpower required. To monitor road traffic effectively and make traffic-guidance decisions quickly in response to dynamic traffic changes, floating cars on the road must be detected in real time.
The main methods for extracting floating cars are: 1. wireless microwave, radar and similar sensors, typically radar devices installed along the roadside, which obtain vehicle-speed information; their drawback is that they yield only the speed and cannot be integrated with other applications; 2. inductive loops, which obtain the speed and floating car information and are widely used in domestic road monitoring; their drawback is that burying the loop damages the road surface to some extent and makes them unsuitable for elevated roads; 3. video detectors, which are flexible to configure, simple to install, easy to use and do not damage the road surface, while keeping speed measurement and traffic counting at a relatively high level of accuracy.
Compared with other traffic-flow detection methods, video detection yields rich traffic parameters, covers a large sensing range, is convenient to install and maintain, and extracts vehicle information accurately, efficiently, safely and reliably, which favours the monitoring of the road traffic network. Although some existing techniques count floating cars from video images, they all have shortcomings, for example:
1. Patent CN101510358A, a method and device for real-time floating car counting from video images, processes the pixels of the video image inside a virtual detection coil and analyses the changes of their values to obtain moving vehicle targets and hence the vehicle count;
2. Patent publication CN103310638A, a video floating car counting technique based on virtual coil technology, uses an improved ViBe algorithm for background modelling and updating, and detects and counts the vehicles inside the virtual coil;
3. Patent publication CN103413046A, a floating car counting method, computes the degree of association between the vehicles inside the virtual coil and the vehicles in an existing vehicle list, so as to track vehicles along the lane and count them accurately.
All of the above techniques count floating cars by updating the virtual-coil region and applying image detection and tracking. Although a virtual coil speeds up image processing, its limited size easily cuts vehicles apart, so a single vehicle may be split and detected as several vehicles, which harms detection and tracking accuracy and causes counting errors. Moreover, these techniques focus on counting under the unsaturated traffic state; for counting under the saturated state, where image processing and pattern recognition are limited, especially when vehicles occlude each other severely, they give no method or steps. Developing a comprehensive floating car counting method has therefore become an urgent technical problem.
Summary of the invention
The technical problem to be solved by the present invention is to provide a floating car counting method based on video monitoring processing technology, overcoming the defect in the prior art that floating cars cannot be counted comprehensively from video images.
The present invention is achieved through the following technical solution.
A floating car counting method based on video monitoring processing technology comprises the following steps:
Preprocessing: draw a virtual coil and detection lines for the monitored lane, and perform a manual count under the vehicle-saturated state;
Build an initial background image: construct the background image BG with the frame-difference method;
Monitor vehicles: form the binary vehicle image Object from the difference image DI;
Obtain the vehicle image inside the virtual coil: intersect the binary image Object with the virtual coil binary template M_1 to obtain the in-coil vehicle image M_2;
Judge the vehicle state in the virtual coil: from the ratio of the intersection image to each detection line, decide whether vehicles are present in the coil and whether the coil is in the saturated or unsaturated state;
Count floating cars: compute the counts for the unsaturated state and the saturated state separately and obtain the floating car counting result.
The preprocessing comprises the following steps:
Obtain the camera video and draw a virtual coil per lane;
Divide the virtual coil into three equal parts along the direction of travel, connect the division points in turn, and thus build three detection lines inside the coil;
Manually count the floating cars when the virtual coil is in the saturated state, using 1-minute video samples, and record 20 samples n_i, i ∈ {1, 2, 3, ..., 20};
The number of floating cars passing the virtual coil per minute of saturated state, N_0, is computed as:
N_0 = (Σ_{i=1}^{20} n_i) / 20.
Building the initial background image comprises the following steps:
Let I(x, y, t) denote the current frame at time t and I(x, y, t-1) the frame at time t-1; the background pixel value B(x, y, t) at time t is computed as:
B(x, y, t) = α·I(x, y, t) + (1 - α)·I(x, y, t-1),  if |I(x, y, t) - I(x, y, t-1)| > T
B(x, y, t) = (1 - α)·I(x, y, t) + α·I(x, y, t-1),  if |I(x, y, t) - I(x, y, t-1)| ≤ T
where α ∈ (0, 1) is a weight parameter, taken small, and T is a preset threshold;
Compute the background image BG(x, y) over the period [0, T] by cumulatively averaging the background image sequence B(x, y, t), t ∈ [0, T]:
BG(x, y) = Σ_{t=0}^{T} B(x, y, t) / N_T,
where N_T is the number of frames of the background sequence in the period [0, T].
Vehicle monitoring comprises the following steps:
Let I(x, y) be the current frame at time t and BG(x, y) the background image; construct the neighbourhood-statistics image I_N(x, y) of the current frame:
I_N(x, y) = (Σ_{N(x,y)∈Ω} I(x, y)) / sum(N(x, y));
Construct the neighbourhood-statistics image BG_N(x, y) of the background image BG(x, y):
BG_N(x, y) = (Σ_{N(x,y)∈Ω} BG(x, y)) / sum(N(x, y));
Compute the absolute difference image DI of the two:
DI = |I_N(x, y) - BG_N(x, y)|;
Compute the image threshold T_best;
Segment the difference image DI with the threshold T_best to obtain the binary image Object:
Object(x, y) = 1,  if DI(x, y) > T_best
Object(x, y) = 0,  otherwise.
Obtaining the vehicle image inside the virtual coil comprises the following steps:
Establish an image coordinate system on the binary image Object, with the top-left vertex of the image as the origin; m is the width of the image and n its height, and every pixel is represented by its two-dimensional coordinates in the image plane;
Fill image holes: connect the nodes on the left and right edges with the rule (0,0)-(m,0), (0,1)-(m,1), ..., (0,n)-(m,n), and fill the image; the filling criterion is to fill holes whose area is less than 100 pixels and leave larger ones unfilled;
Connect the nodes on the top and bottom edges with the rule (0,0)-(0,n), (1,0)-(1,n), ..., (m,0)-(m,n), and fill the image with the same criterion (fill holes smaller than 100 pixels);
Remove noise: eliminate connected regions whose area is less than 1000 pixels;
Smooth the image: apply an image-smoothing operation to the binary image Object;
Obtain the target image: intersect the binary image Object with the virtual coil binary template M_1 to obtain the in-coil vehicle image M_2, as follows:
If Object(i, j) = 255 and M_1(i, j) = 255 (the intersection exists), then M_2(i, j) = 255;
otherwise M_2(i, j) = 0.
Judging the vehicle state in the virtual coil comprises the following steps:
Intersect each of the three detection lines in the virtual coil template with the vehicle region M_2, and compute the ratios of the intersection image to the detection lines, rito1, rito2 and rito3;
Take ritomin = min(rito1, rito2, rito3);
If ritomin = 0, the current image contains no vehicle and no floating car count is made;
If 0 < ritomin < 0.8, the vehicles in the virtual coil are in the unsaturated state;
If 0.8 < ritomin < 1, the vehicles in the virtual coil are in the saturated state.
Counting floating cars comprises the following steps:
Count the floating cars in the unsaturated state, as follows:
Acquire a frame, extract the moving-vehicle template in the current image, and compute the area S(k) of each moving vehicle and its centroid coordinates p(k)(x, y), forming the moving-target feature sequence of the current frame;
If the system is in the initial stage, initialise the tracking sequence with the feature sequence of the moving targets of the current frame, and set the initial vehicle count N_3 = 0;
Compute the area difference Dif(S(k), S(k+1)) of a moving vehicle in two adjacent frames:
Dif(S(k), S(k+1)) = |S(k) - S(k+1)|
where S(k) is the area of the moving vehicle in frame k and S(k+1) its area in frame k+1;
Compute the centroid distance Dis(p(k), p(k+1)) of the moving vehicle in two adjacent frames:
Dis(p(k), p(k+1)) = √[(p(k)(x) - p(k+1)(x))² + (p(k)(y) - p(k+1)(y))²]
where p(k)(x, y) is the centroid of the moving vehicle in frame k and p(k+1)(x, y) its centroid in frame k+1;
Determine whether a moving vehicle within the match search range and a vehicle in the tracking sequence are the same moving vehicle:
If Dif(S(k), S(k+1)) < 30 and Dis(p(k), p(k+1)) < 20, the vehicle within the search range and the vehicle in the tracking sequence are the same moving vehicle and no count is made;
otherwise they are not the same vehicle; the current vehicle is judged to be a newly entering vehicle or a target produced by vehicle splitting, the feature values of the tracking sequence are updated, and the count is updated as N_3 = N_3 + 1;
Count the floating cars in the saturated state by counting the number of image frames in the saturated vehicle state;
Given that the video has N_1 frames per second, that N_0 floating cars pass the virtual coil per minute of saturated state, and that N_2 frames are counted in the saturated state, the saturated-state vehicle count N_4 is:
N_4 = (N_2 / N_1) × (N_0 / 60);
Compute the total floating car count N_5 by summing the unsaturated-state count and the saturated-state count:
N_5 = N_4 + N_3.
Computing the image threshold T_best comprises the following steps:
Let the image have L grey levels, let the number of pixels with grey value i be n_i, the total number of pixels be N, and the probability of grey value i be p_i = n_i / N;
Let a threshold T split the image into two classes, the background class A = (0, 1, 2, ..., T) and the target class B = (T+1, T+2, ..., L-1);
Compute the probability of the background class A:
p_A = Σ_{i=0}^{T} p_i;
Compute the probability of the target class B:
p_B = Σ_{i=T+1}^{L-1} p_i;
Compute the grey-level mean of the background class A:
ω_A = (Σ_{i=0}^{T} i·p_i) / p_A;
Compute the grey-level mean of the target class B:
ω_B = (Σ_{i=T+1}^{L-1} i·p_i) / p_B;
Compute the grey-level mean of the whole image:
ω_0 = p_A·ω_A + p_B·ω_B = Σ_{i=0}^{L-1} i·p_i;
Compute the between-class variance of regions A and B:
σ² = p_A·(ω_A - ω_0)² + p_B·(ω_B - ω_0)²
Based on the principle that a larger between-class variance means a larger grey-level difference between the two classes, maximise the above expression to obtain the optimal threshold T_best:
T_best = argmax_{0 ≤ T ≤ L-1} [p_A·(ω_A - ω_0)² + p_B·(ω_B - ω_0)²].
The noise removal operation comprises the following steps:
Create a dummy template equal in size to the image Object;
Compute the area of each connected region in the image;
Copy the connected regions with fewer than 1000 pixels to the dummy template;
Subtract the dummy template from the binary image Object to obtain the new binary image Object, and reset the dummy template to zero.
Beneficial effects of the present invention:
Compared with the prior art, which uses the whole video image background as the processing template, the method extracts moving vehicles from the neighbourhood difference between the current frame and the background image, so it obtains the traffic flow not only under the unsaturated state but also under the saturated state, and the counting result is more accurate. The complete binary image of a moving vehicle is obtained accurately and then combined with the virtual coil template by a logical AND, giving a binary vehicle template that preserves the complete vehicle information and guarantees the accuracy of the subsequent detection and tracking. The detection lines built into the virtual coil distinguish the traffic condition of the road section into the unsaturated state and the saturated state, so the traffic flow state can be distinguished accurately. For counting under the unsaturated state, the centroid position and size of the vehicle binary template support a simple and effective tracking scheme that counts the floating cars in this state accurately. For counting under the saturated state, the motion characteristics of vehicles in that state are used, so the floating cars in this state can also be counted accurately.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and embodiments.
As shown in Fig. 1, a floating car counting method based on video monitoring processing technology of the present invention comprises the following steps:
Step 1: preprocessing. Draw a virtual coil and detection lines for the monitored lane and perform a manual count under the vehicle-saturated state. The specific steps are as follows (a sketch of this step is given after the list):
(1) Obtain the camera video and draw a virtual coil per lane; the coil should cover the whole lane as far as possible, and its length may be about 1.5 times that of an ordinary car.
(2) Divide the virtual coil into three equal parts lengthwise along the direction of travel, connect the division points in turn, and thus build three detection lines inside the coil.
(3) Manually count the floating cars when the virtual coil is in the saturated state, using 1-minute video samples, and record 20 samples n_i, i ∈ {1, 2, 3, ..., 20}. Because vehicles travel at approximately the same speed under the saturated state, the 20 sample counts are averaged, and the number of floating cars passing the virtual coil per minute of saturated state, N_0, is computed as:
N_0 = (Σ_{i=1}^{20} n_i) / 20.
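A minimal sketch of this preprocessing step, assuming a rectangular, axis-aligned virtual coil and three evenly spaced detection lines drawn across it; the exact coil geometry, the way the division points are connected and the line positions are not fixed by the text, so they are illustrative assumptions here:

```python
import numpy as np

def saturated_baseline(manual_counts):
    """N_0: mean of the 20 one-minute manual floating car counts taken in the saturated state."""
    return float(np.mean(manual_counts))            # N_0 = (sum of n_i) / 20

def build_coil_and_lines(frame_shape, x0, y0, x1, y1):
    """Build the binary coil template M_1 and three detection-line masks.

    Assumes a rectangular coil spanning (x0, y0)-(x1, y1) with traffic moving
    along the y axis; the lines are placed at 1/4, 2/4 and 3/4 of the coil
    length, an illustrative choice for 'three detection lines inside the coil'.
    """
    h, w = frame_shape[:2]
    m1 = np.zeros((h, w), np.uint8)
    m1[y0:y1, x0:x1] = 255
    lines = []
    for k in (1, 2, 3):
        y = y0 + k * (y1 - y0) // 4
        line = np.zeros((h, w), np.uint8)
        line[y, x0:x1] = 255
        lines.append(line)
    return m1, lines

# Example: n0 = saturated_baseline([32, 30, 31, 29, ...]) for 20 saturated minutes
```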
Step 2: build the initial background image using the frame-difference method. Because frame differencing is fairly robust to changes in ambient light and extracts moving regions quickly, it is adopted here to construct the background image. The idea is that the grey level of background pixels changes slowly, whereas moving regions change noticeably between two consecutive frames; subtracting the two frames gives an absolute luminance-difference image, and thresholding it extracts the moving-target region. The specific steps are as follows (a sketch follows this step):
(1) Let I(x, y, t) denote the current frame at time t and I(x, y, t-1) the frame at time t-1; the background pixel value B(x, y, t) at time t, i.e. the relation between the inter-frame difference image and the binary template image, is computed as:
B(x, y, t) = α·I(x, y, t) + (1 - α)·I(x, y, t-1),  if |I(x, y, t) - I(x, y, t-1)| > T
B(x, y, t) = (1 - α)·I(x, y, t) + α·I(x, y, t-1),  if |I(x, y, t) - I(x, y, t-1)| ≤ T
where α ∈ (0, 1) is a weight parameter, taken small, and T is a preset threshold.
As the formula shows, B(x, y, t) makes full use of the two adjacent frames: it retains the parts that change little between frames, i.e. the image background, and suppresses the parts with large frame differences, i.e. the vehicle motion regions.
(2) Compute the background image BG(x, y) over the period [0, T] by cumulatively averaging the background image sequence B(x, y, t), t ∈ [0, T]:
BG(x, y) = Σ_{t=0}^{T} B(x, y, t) / N_T,
where N_T is the number of frames of the background sequence in the period [0, T].
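A sketch of this background construction in Python with NumPy, following the two formulas above; the weight α and the threshold T are free parameters of the method, and the values used below are illustrative:

```python
import numpy as np

def background_update(prev_frame, cur_frame, alpha=0.1, T=15):
    """B(x,y,t): where the inter-frame difference is large (moving region),
    weight the previous frame heavily (alpha is small); where it is small
    (background), weight the current frame heavily."""
    prev = prev_frame.astype(np.float32)
    cur = cur_frame.astype(np.float32)
    moving = np.abs(cur - prev) > T
    return np.where(moving,
                    alpha * cur + (1.0 - alpha) * prev,     # |I_t - I_{t-1}| > T
                    (1.0 - alpha) * cur + alpha * prev)     # |I_t - I_{t-1}| <= T

def build_initial_background(gray_frames, alpha=0.1, T=15):
    """BG(x,y): cumulative average of B(x,y,t) over the initial period."""
    acc, n = None, 0
    for prev, cur in zip(gray_frames, gray_frames[1:]):
        b = background_update(prev, cur, alpha, T)
        acc = b if acc is None else acc + b
        n += 1
    return (acc / n).astype(np.uint8)
```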
Step 3: vehicle monitoring. The binary vehicle image Object is formed from the difference image DI. A conventional grey-level detection model relies only on grey-level statistics and has difficulty handling complex background changes. Neighbouring pixels in the monitored image are strongly correlated, and this correlation carries image structure information. The imaged brightness is the product of illumination and surface reflectance; illumination varies strongly over the whole image, but local structure is affected by it much less. Following the two-frame difference idea, when a moving target appears in the monitored image, the moving region changes noticeably in grey level between two frames; subtracting them gives the absolute luminance-difference image, and thresholding it extracts the moving-target region. The specific steps are as follows:
(1) Let I(x, y) be the current frame at time t and BG(x, y) the background image; construct the neighbourhood-statistics image I_N(x, y) of the current frame:
I_N(x, y) = (Σ_{N(x,y)∈Ω} I(x, y)) / sum(N(x, y));
Construct the neighbourhood-statistics image BG_N(x, y) of the background image BG(x, y):
BG_N(x, y) = (Σ_{N(x,y)∈Ω} BG(x, y)) / sum(N(x, y)).
(2) Construct the grey-level statistics measure: compute the local-mean images of the current frame and the background, threshold their difference, and extract the moving-target region. The absolute difference image DI of the two is:
DI = |I_N(x, y) - BG_N(x, y)|.
The difference image DI reflects the marked local grey-level difference between I_N(x, y) and BG_N(x, y); by comparing each neighbourhood pixel value with its local mean it greatly reduces the influence of the background, highlights the moving-target part and makes the detection more robust. A sketch of this step is given below.
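A sketch of the neighbourhood-statistics comparison, using a box (mean) filter over a square neighbourhood Ω; the neighbourhood size is not specified in the text, so the 5×5 window is an assumption:

```python
import cv2
import numpy as np

def neighborhood_mean(gray, ksize=5):
    """I_N / BG_N: mean of the pixel values over the neighbourhood Omega."""
    return cv2.boxFilter(gray.astype(np.float32), -1, (ksize, ksize))

def difference_image(cur_gray, background_gray, ksize=5):
    """DI = |I_N(x,y) - BG_N(x,y)|: local-mean difference between the current
    frame and the background image."""
    return np.abs(neighborhood_mean(cur_gray, ksize) -
                  neighborhood_mean(background_gray, ksize))
```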
(3) Compute the image threshold T_best; computing T_best extracts the moving-target region automatically and avoids the shortcoming of setting the threshold by hand. The steps are as follows:
A. Let the image have L grey levels, let the number of pixels with grey value i be n_i, the total number of pixels be N, and the probability of grey value i be p_i = n_i / N.
B. Let a threshold T split the image into two classes, the background class A = (0, 1, 2, ..., T) and the target class B = (T+1, T+2, ..., L-1);
C. Compute the probability of the background class A:
p_A = Σ_{i=0}^{T} p_i;
Compute the probability of the target class B:
p_B = Σ_{i=T+1}^{L-1} p_i;
D. Compute the grey-level mean of the background class A:
ω_A = (Σ_{i=0}^{T} i·p_i) / p_A;
Compute the grey-level mean of the target class B:
ω_B = (Σ_{i=T+1}^{L-1} i·p_i) / p_B;
E. Compute the grey-level mean of the whole image:
ω_0 = p_A·ω_A + p_B·ω_B = Σ_{i=0}^{L-1} i·p_i;
F. Compute the between-class variance of regions A and B:
σ² = p_A·(ω_A - ω_0)² + p_B·(ω_B - ω_0)²
G. Based on the principle that a larger between-class variance means a larger grey-level difference between the two classes, maximise the above expression to obtain the optimal threshold T_best:
T_best = argmax_{0 ≤ T ≤ L-1} [p_A·(ω_A - ω_0)² + p_B·(ω_B - ω_0)²].
(4) Segment the difference image DI with the threshold T_best to obtain the binary image Object:
Object(x, y) = 1,  if DI(x, y) > T_best
Object(x, y) = 0,  otherwise.
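Steps A-G are the classic Otsu criterion (maximising the between-class variance). A sketch that follows the formulas directly and then binarises DI; on an 8-bit image a library call such as OpenCV's THRESH_OTSU should give the same threshold:

```python
import numpy as np

def otsu_threshold(gray_u8, L=256):
    """T_best = argmax_T [ p_A (w_A - w_0)^2 + p_B (w_B - w_0)^2 ]."""
    hist = np.bincount(gray_u8.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()                        # p_i = n_i / N
    w0 = float(np.sum(np.arange(L) * p))         # whole-image grey mean
    best_t, best_var = 0, -1.0
    for t in range(L - 1):
        p_a, p_b = p[:t + 1].sum(), p[t + 1:].sum()
        if p_a == 0 or p_b == 0:
            continue
        w_a = np.sum(np.arange(0, t + 1) * p[:t + 1]) / p_a
        w_b = np.sum(np.arange(t + 1, L) * p[t + 1:]) / p_b
        var = p_a * (w_a - w0) ** 2 + p_b * (w_b - w0) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(di, t_best):
    """Object(x,y): 255 where DI > T_best, 0 otherwise (stored as 0/255)."""
    return np.where(di > t_best, 255, 0).astype(np.uint8)

# Usage: t = otsu_threshold(np.clip(di, 0, 255).astype(np.uint8)); obj = binarize(di, t)
```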
Step 4: obtain the vehicle image inside the virtual coil by intersecting the binary image Object with the virtual coil binary template M_1 to obtain the in-coil vehicle image M_2. The specific steps are as follows:
(1) Establish an image coordinate system on the binary image Object, with the top-left vertex of the image as the origin; m is the width of the image and n its height, and every pixel in the image plane is represented by two-dimensional coordinates as in Table 1.
Table 1
The coordinate system shows the position of a vehicle in the image (the white part) and provides directional information for the subsequent merging and noise-removal operations, for example for recognising vertically separated, unconnected vehicle parts as the same vehicle.
(2) Fill image holes. As indicated in Table 1, connect the nodes on the left and right edges with the rule (0,0)-(m,0), (0,1)-(m,1), ..., (0,n)-(m,n), and fill the image; the filling criterion is to fill holes whose area is less than 100 pixels and leave larger ones unfilled.
Connect the nodes on the top and bottom edges with the rule (0,0)-(0,n), (1,0)-(1,n), ..., (m,0)-(m,n), and fill the image with the same criterion (fill holes smaller than 100 pixels). A sketch of this hole filling is given below.
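A sketch of the small-hole filling, approximated here by labelling the enclosed background regions of the binary image and filling those whose area is below 100 pixels; the edge-to-edge connection rule in the text is one way of locating such holes, and the connected-component formulation used here is an assumption that produces the same effect for enclosed holes:

```python
import cv2
import numpy as np

def fill_small_holes(binary, max_hole_area=100):
    """Fill enclosed background regions ('holes') smaller than max_hole_area pixels."""
    inv = cv2.bitwise_not(binary)                    # holes become foreground components
    n, labels, stats, _ = cv2.connectedComponentsWithStats(inv, connectivity=4)
    filled = binary.copy()
    h, w = binary.shape
    for lbl in range(1, n):
        x, y, bw, bh, area = stats[lbl]
        touches_border = x == 0 or y == 0 or x + bw == w or y + bh == h
        if not touches_border and area < max_hole_area:
            filled[labels == lbl] = 255              # fill the small hole
    return filled
```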
(3) Remove noise: eliminate connected regions whose area is less than 1000 pixels. After filling, the image may still contain noise or small non-target patches; to obtain a clean binary image Object, the connected regions are analysed, and since a vehicle occupies a relatively large connected area, regions smaller than 1000 pixels are removed. The specific steps are as follows:
A. Create a dummy template equal in size to the image Object;
B. Compute the area of each connected region in the image;
C. Copy the connected regions with fewer than 1000 pixels to the dummy template;
D. Subtract the dummy template from the binary image Object to obtain the new binary image Object, and reset the dummy template to zero.
The advantage of this method over morphological noise removal such as image erosion is that it removes noise without destroying the integrity of the vehicle regions. A sketch is given below.
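A sketch of this connected-region noise removal, kept close to the dummy-template wording above (copy regions under 1000 pixels to a zero template, then subtract it), assuming 0/255 binary images:

```python
import cv2
import numpy as np

def remove_small_regions(obj, min_area=1000):
    """Delete connected regions of Object smaller than min_area pixels without
    eroding the remaining vehicle regions."""
    dummy = np.zeros_like(obj)                       # the 'dummy template'
    n, labels, stats, _ = cv2.connectedComponentsWithStats(obj, connectivity=8)
    for lbl in range(1, n):
        if stats[lbl, cv2.CC_STAT_AREA] < min_area:
            dummy[labels == lbl] = 255               # copy the small region
    return cv2.subtract(obj, dummy)                  # Object minus the dummy template
```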
(4) Smooth the image: apply an image-smoothing operation to the binary image Object. Owing to the filling operation, the vehicle boundary may show sharp corners or irregularities; an erosion-based smoothing is applied to obtain a more regular vehicle region.
(5) Obtain the target image: intersect the binary image Object with the virtual coil binary template M_1 to obtain the in-coil vehicle image M_2, as follows:
If Object(i, j) = 255 and M_1(i, j) = 255 (the intersection exists), then M_2(i, j) = 255;
otherwise M_2(i, j) = 0.
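This intersection is a pixelwise logical AND of the two 0/255 images; a one-line sketch (M_1 is the coil template from the preprocessing step):

```python
import cv2

def vehicle_in_coil(obj, m1):
    """M_2(i,j) = 255 where both Object(i,j) and M_1(i,j) are 255, else 0."""
    return cv2.bitwise_and(obj, m1)
```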
Step 5: judge the vehicle state in the virtual coil. From the ratio of the intersection image to each detection line, decide whether vehicles are present in the coil and whether the coil is in the saturated or unsaturated state. The specific steps are as follows (a sketch follows the list):
(1) Intersect each of the three detection lines in the virtual coil template with the vehicle region M_2, and compute the ratios of the intersection image to the detection lines, rito1, rito2 and rito3;
(2) Take ritomin = min(rito1, rito2, rito3);
(3) If ritomin = 0, the current image contains no vehicle and no floating car count is made;
If 0 < ritomin < 0.8, the vehicles in the virtual coil are in the unsaturated state;
If 0.8 < ritomin < 1, the vehicles in the virtual coil are in the saturated state.
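A sketch of this state decision, where each ratio is the fraction of a detection line's pixels covered by the vehicle region M_2; the handling of the boundary values at exactly 0.8 and 1 is a simplification:

```python
import cv2
import numpy as np

def coil_state(m2, detection_lines, saturated_ratio=0.8):
    """Return 'empty', 'unsaturated' or 'saturated' from the minimum
    detection-line occupancy ratio ritomin."""
    ratios = []
    for line in detection_lines:                     # the three 0/255 line masks
        line_px = np.count_nonzero(line)
        covered = np.count_nonzero(cv2.bitwise_and(m2, line))
        ratios.append(covered / line_px if line_px else 0.0)
    ritomin = min(ratios)
    if ritomin == 0:
        return "empty"                               # no vehicle, no counting
    return "saturated" if ritomin >= saturated_ratio else "unsaturated"
```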
Step 6: count floating cars. Compute the counts for the unsaturated state and the saturated state separately and obtain the floating car counting result. The specific steps are as follows:
(1) Count the floating cars in the unsaturated state. Tracking a moving vehicle means determining the position of the same vehicle in different frames; once moving vehicles are detected correctly, this reduces to matching the detected vehicles across adjacent frames. The matching in this patent relies mainly on the centroid position and size of the object. Assuming that the trajectory of a tracked target is smooth within one frame interval, i.e. its motion parameters change as little as possible, a composite-feature tracking method is used here, with two parameters of the mixed-model tracking algorithm selected for matching. The specific steps are as follows (a sketch of the counter follows the list):
A. Acquire a frame, extract the moving-vehicle template in the current image, and compute the area S(k) of each moving vehicle and its centroid coordinates p(k)(x, y), forming the moving-target feature sequence of the current frame.
B. If the system is in the initial stage, initialise the tracking sequence with the feature sequence of the moving targets of the current frame, and set the initial vehicle count N_3 = 0.
C. Compute the area difference Dif(S(k), S(k+1)) of a moving vehicle in two adjacent frames:
Dif(S(k), S(k+1)) = |S(k) - S(k+1)|
where S(k) is the area of the moving vehicle in frame k and S(k+1) its area in frame k+1;
Compute the centroid distance Dis(p(k), p(k+1)) of the moving vehicle in two adjacent frames:
Dis(p(k), p(k+1)) = √[(p(k)(x) - p(k+1)(x))² + (p(k)(y) - p(k+1)(y))²]
where p(k)(x, y) is the centroid of the moving vehicle in frame k and p(k+1)(x, y) its centroid in frame k+1;
D. Determine whether a moving vehicle within the match search range and a vehicle in the tracking sequence are the same moving vehicle. According to the feature-similarity rule, determine the set of vehicles in the current frame that may match a vehicle in the tracking sequence; then, using the multi-feature matching rule, find the best-matching vehicle in that set by comparing the corresponding feature differences between the moving vehicle and each vehicle in the set against the given thresholds; if the feature differences are below the thresholds, the matching degree is high and the two belong to the same moving vehicle.
If Dif(S(k), S(k+1)) < 30 and Dis(p(k), p(k+1)) < 20, the vehicle within the search range and the vehicle in the tracking sequence are the same moving vehicle and no count is made;
otherwise they are not the same vehicle; the current vehicle is judged to be a newly entering vehicle or a target produced by vehicle splitting, the feature values of the tracking sequence are updated, and the count is updated as N_3 = N_3 + 1.
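A sketch of the unsaturated-state counter, matching detections across adjacent frames on area difference and centroid distance with the thresholds stated above (30 for the area, 20 for the distance); each detection is assumed to be an (area, (cx, cy)) pair extracted from M_2:

```python
import math

AREA_DIFF_MAX = 30      # threshold on Dif(S(k), S(k+1))
DIST_MAX = 20           # threshold on Dis(p(k), p(k+1))

class UnsaturatedCounter:
    """Count a vehicle whenever a detection matches nothing in the tracking sequence."""

    def __init__(self):
        self.tracks = []    # previous-frame features: list of (area, (cx, cy))
        self.n3 = 0         # unsaturated-state floating car count N_3

    def update(self, detections):
        """detections: list of (area, (cx, cy)) tuples for the current frame."""
        for area, (cx, cy) in detections:
            matched = any(abs(area - a) < AREA_DIFF_MAX and
                          math.hypot(cx - px, cy - py) < DIST_MAX
                          for a, (px, py) in self.tracks)
            if not matched:                 # newly entering vehicle (or split target)
                self.n3 += 1
        self.tracks = detections            # refresh the tracking sequence
        return self.n3
```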
(2) Count the floating cars in the saturated state by counting the number of image frames in the saturated vehicle state.
Given that the video has N_1 frames per second, that N_0 floating cars pass the virtual coil per minute of saturated state, and that N_2 frames are counted in the saturated state, the saturated-state vehicle count N_4 is:
N_4 = (N_2 / N_1) × (N_0 / 60).
(3) Count the total number of floating cars N_5 by summing the unsaturated-state count and the saturated-state count:
N_5 = N_4 + N_3
N_5 is the final floating car counting result and comprises the unsaturated-state count and the saturated-state count. A sketch of these tallies is given below.
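A sketch of the final tallies, taken directly from the two formulas above; N_1 is the frame rate, N_2 the number of saturated frames counted, N_0 the manual per-minute baseline and N_3 the unsaturated-state count:

```python
def saturated_count(n2_saturated_frames, n1_fps, n0_per_minute):
    """N_4 = (N_2 / N_1) * (N_0 / 60): saturated seconds times vehicles per saturated second."""
    return (n2_saturated_frames / n1_fps) * (n0_per_minute / 60.0)

def total_floating_cars(n3_unsaturated, n4_saturated):
    """N_5 = N_4 + N_3: total floating car count."""
    return n4_saturated + n3_unsaturated

# Illustrative numbers: 25 fps video, 1500 saturated frames, N_0 = 30, N_3 = 42
# n4 = saturated_count(1500, 25, 30)    # -> 30.0 vehicles attributed to saturation
# n5 = total_floating_cars(42, n4)      # -> 72.0
```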
The above embodiment only illustrates the technical concept and features of the present invention; its purpose is to enable those familiar with the art to understand and implement the invention, and it does not limit the scope of protection of the invention. All equivalent changes or modifications made according to the spirit and essence of the present invention shall fall within the scope of protection of the present invention.

Claims (9)

1. A floating car counting method based on video monitoring processing technology, characterised by comprising the following steps:
preprocessing: drawing a virtual coil and detection lines for the monitored lane, and performing a manual count under the vehicle-saturated state;
building an initial background image: constructing the background image BG with the frame-difference method;
monitoring vehicles: forming the binary vehicle image Object from the difference image DI;
obtaining the vehicle image inside the virtual coil: intersecting the binary image Object with the virtual coil binary template M_1 to obtain the in-coil vehicle image M_2;
judging the vehicle state in the virtual coil: from the ratio of the intersection image to each detection line, deciding whether vehicles are present in the coil and whether the coil is in the saturated or unsaturated state;
counting floating cars: computing the counts for the unsaturated state and the saturated state separately and obtaining the floating car counting result.
2. The floating car counting method based on video monitoring processing technology according to claim 1, characterised in that the preprocessing comprises the following steps:
obtaining the camera video and drawing a virtual coil per lane;
dividing the virtual coil into three equal parts along the direction of travel, connecting the division points in turn, and thus building three detection lines inside the coil;
manually counting the floating cars when the virtual coil is in the saturated state, using 1-minute video samples, and recording 20 samples n_i, i ∈ {1, 2, 3, ..., 20};
computing the number of floating cars passing the virtual coil per minute of saturated state, N_0, as:
N_0 = (Σ_{i=1}^{20} n_i) / 20.
3. The floating car counting method based on video monitoring processing technology according to claim 1, characterised in that building the initial background image comprises the following steps:
letting I(x, y, t) denote the current frame at time t and I(x, y, t-1) the frame at time t-1, and computing the background pixel value B(x, y, t) at time t as:
B(x, y, t) = α·I(x, y, t) + (1 - α)·I(x, y, t-1),  if |I(x, y, t) - I(x, y, t-1)| > T
B(x, y, t) = (1 - α)·I(x, y, t) + α·I(x, y, t-1),  if |I(x, y, t) - I(x, y, t-1)| ≤ T
where α ∈ (0, 1) is a weight parameter, taken small, and T is a preset threshold;
computing the background image BG(x, y) over the period [0, T] by cumulatively averaging the background image sequence B(x, y, t), t ∈ [0, T]:
BG(x, y) = Σ_{t=0}^{T} B(x, y, t) / N_T,
where N_T is the number of frames of the background sequence in the period [0, T].
4. The floating car counting method based on video monitoring processing technology according to claim 1, characterised in that the vehicle monitoring comprises the following steps:
letting I(x, y) be the current frame at time t and BG(x, y) the background image, and constructing the neighbourhood-statistics image I_N(x, y) of the current frame:
I_N(x, y) = (Σ_{N(x,y)∈Ω} I(x, y)) / sum(N(x, y));
constructing the neighbourhood-statistics image BG_N(x, y) of the background image BG(x, y):
BG_N(x, y) = (Σ_{N(x,y)∈Ω} BG(x, y)) / sum(N(x, y));
computing the absolute difference image DI of the two:
DI = |I_N(x, y) - BG_N(x, y)|;
computing the image threshold T_best;
segmenting the difference image DI with the threshold T_best to obtain the binary image Object:
Object(x, y) = 1,  if DI(x, y) > T_best
Object(x, y) = 0,  otherwise.
5. The floating car counting method based on video monitoring processing technology according to claim 1, characterised in that obtaining the vehicle image inside the virtual coil comprises the following steps:
establishing an image coordinate system on the binary image Object, with the top-left vertex of the image as the origin, where m is the width of the image and n its height, and every pixel is represented by its two-dimensional coordinates in the image plane;
filling image holes: connecting the nodes on the left and right edges with the rule (0,0)-(m,0), (0,1)-(m,1), ..., (0,n)-(m,n) and filling the image, where the filling criterion is to fill holes whose area is less than 100 pixels and leave larger ones unfilled;
connecting the nodes on the top and bottom edges with the rule (0,0)-(0,n), (1,0)-(1,n), ..., (m,0)-(m,n) and filling the image with the same criterion (filling holes smaller than 100 pixels);
removing noise: eliminating connected regions whose area is less than 1000 pixels;
smoothing the image: applying an image-smoothing operation to the binary image Object;
obtaining the target image: intersecting the binary image Object with the virtual coil binary template M_1 to obtain the in-coil vehicle image M_2, as follows:
if Object(i, j) = 255 and M_1(i, j) = 255 (the intersection exists), then M_2(i, j) = 255;
otherwise M_2(i, j) = 0.
6. The floating car counting method based on video monitoring processing technology according to claim 1, characterised in that judging the vehicle state in the virtual coil comprises the following steps:
intersecting each of the three detection lines in the virtual coil template with the vehicle region M_2, and computing the ratios of the intersection image to the detection lines, rito1, rito2 and rito3;
taking ritomin = min(rito1, rito2, rito3);
if ritomin = 0, the current image contains no vehicle and no floating car count is made;
if 0 < ritomin < 0.8, the vehicles in the virtual coil are in the unsaturated state;
if 0.8 < ritomin < 1, the vehicles in the virtual coil are in the saturated state.
7. The floating car counting method based on video monitoring processing technology according to claim 1, characterised in that counting floating cars comprises the following steps:
counting the floating cars in the unsaturated state, as follows:
acquiring a frame, extracting the moving-vehicle template in the current image, and computing the area S(k) of each moving vehicle and its centroid coordinates p(k)(x, y), forming the moving-target feature sequence of the current frame;
if the system is in the initial stage, initialising the tracking sequence with the feature sequence of the moving targets of the current frame, and setting the initial vehicle count N_3 = 0;
computing the area difference Dif(S(k), S(k+1)) of a moving vehicle in two adjacent frames:
Dif(S(k), S(k+1)) = |S(k) - S(k+1)|
where S(k) is the area of the moving vehicle in frame k and S(k+1) its area in frame k+1;
computing the centroid distance Dis(p(k), p(k+1)) of the moving vehicle in two adjacent frames:
Dis(p(k), p(k+1)) = √[(p(k)(x) - p(k+1)(x))² + (p(k)(y) - p(k+1)(y))²]
where p(k)(x, y) is the centroid of the moving vehicle in frame k and p(k+1)(x, y) its centroid in frame k+1;
determining whether a moving vehicle within the match search range and a vehicle in the tracking sequence are the same moving vehicle:
if Dif(S(k), S(k+1)) < 30 and Dis(p(k), p(k+1)) < 20, the vehicle within the search range and the vehicle in the tracking sequence are the same moving vehicle and no count is made;
otherwise they are not the same vehicle; the current vehicle is judged to be a newly entering vehicle or a target produced by vehicle splitting, the feature values of the tracking sequence are updated, and the count is updated as N_3 = N_3 + 1;
counting the floating cars in the saturated state by counting the number of image frames in the saturated vehicle state;
given that the video has N_1 frames per second, that N_0 floating cars pass the virtual coil per minute of saturated state, and that N_2 frames are counted in the saturated state, computing the saturated-state vehicle count N_4 as:
N_4 = (N_2 / N_1) × (N_0 / 60);
computing the total floating car count N_5 by summing the unsaturated-state count and the saturated-state count:
N_5 = N_4 + N_3.
8. The floating car counting method based on video monitoring processing technology according to claim 4, characterised in that computing the image threshold T_best comprises the following steps:
letting the image have L grey levels, the number of pixels with grey value i be n_i, the total number of pixels be N, and the probability of grey value i be p_i = n_i / N;
letting a threshold T split the image into two classes, the background class A = (0, 1, 2, ..., T) and the target class B = (T+1, T+2, ..., L-1);
computing the probability of the background class A:
p_A = Σ_{i=0}^{T} p_i;
computing the probability of the target class B:
p_B = Σ_{i=T+1}^{L-1} p_i;
computing the grey-level mean of the background class A:
ω_A = (Σ_{i=0}^{T} i·p_i) / p_A;
computing the grey-level mean of the target class B:
ω_B = (Σ_{i=T+1}^{L-1} i·p_i) / p_B;
computing the grey-level mean of the whole image:
ω_0 = p_A·ω_A + p_B·ω_B = Σ_{i=0}^{L-1} i·p_i;
computing the between-class variance of regions A and B:
σ² = p_A·(ω_A - ω_0)² + p_B·(ω_B - ω_0)²
based on the principle that a larger between-class variance means a larger grey-level difference between the two classes, maximising the above expression to obtain the optimal threshold T_best:
T_best = argmax_{0 ≤ T ≤ L-1} [p_A·(ω_A - ω_0)² + p_B·(ω_B - ω_0)²].
9. The floating car counting method based on video monitoring processing technology according to claim 5, characterised in that the noise removal operation comprises the following steps:
creating a dummy template equal in size to the image Object;
computing the area of each connected region in the image;
copying the connected regions with fewer than 1000 pixels to the dummy template;
subtracting the dummy template from the binary image Object to obtain the new binary image Object, and resetting the dummy template to zero.
CN201510218281.2A 2015-04-30 2015-04-30 Floating car counting method based on video monitoring processing technology Pending CN104778727A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510218281.2A CN104778727A (en) 2015-04-30 2015-04-30 Floating car counting method based on video monitoring processing technology


Publications (1)

Publication Number Publication Date
CN104778727A true CN104778727A (en) 2015-07-15

Family

ID=53620173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510218281.2A Pending CN104778727A (en) 2015-04-30 2015-04-30 Floating car counting method based on video monitoring processing technology

Country Status (1)

Country Link
CN (1) CN104778727A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310638A (en) * 2013-05-24 2013-09-18 江苏引跑网络科技有限公司 Video traffic flow counting technique based on virtual coil technology
CN104183142A (en) * 2014-08-18 2014-12-03 安徽科力信息产业有限责任公司 Traffic flow statistics method based on image visual processing technology
CN104282157A (en) * 2014-10-16 2015-01-14 银江股份有限公司 Main line video traffic detecting method for traffic signal control

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106611496A (en) * 2015-10-27 2017-05-03 北京航天长峰科技工业集团有限公司 Traffic flow monitoring method based on GPS positioning technology
CN105654737B (en) * 2016-02-05 2017-12-29 浙江浙大中控信息技术有限公司 A kind of video car flow quantity measuring method of block background modeling
CN105654737A (en) * 2016-02-05 2016-06-08 浙江浙大中控信息技术有限公司 Video traffic flow detection method by block background modeling
CN110490898A (en) * 2018-05-15 2019-11-22 苏州欧菲光科技有限公司 Animation play processing method, liquid crystal instrument system and vehicle based on sequence frame
CN109739220A (en) * 2018-12-06 2019-05-10 珠海格力电器股份有限公司 Positioning control method and device, storage medium and robot
CN111462478B (en) * 2019-01-22 2021-07-27 北京中合云通科技发展有限公司 Method and device for dividing urban road network signal control subareas
CN111462478A (en) * 2019-01-22 2020-07-28 北京中合云通科技发展有限公司 Method and device for dividing urban road network signal control subareas
CN111739283A (en) * 2019-10-30 2020-10-02 腾讯科技(深圳)有限公司 Road condition calculation method, device, equipment and medium based on clustering
CN111739283B (en) * 2019-10-30 2022-05-20 腾讯科技(深圳)有限公司 Road condition calculation method, device, equipment and medium based on clustering
CN112562327A (en) * 2020-11-27 2021-03-26 石家庄铁道大学 Traffic operation information detection method and device based on video data and terminal equipment
CN114613143A (en) * 2021-05-28 2022-06-10 三峡大学 Road vehicle counting method based on YOLOv3 model
CN114613143B (en) * 2021-05-28 2023-08-25 三峡大学 Road vehicle counting method based on YOLOv3 model
CN117521869A (en) * 2024-01-08 2024-02-06 广东拓迪智能科技有限公司 Library space management method, device and system and storage medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by SIPO to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20150715)