CN104794907A - Traffic volume detection method using lane splitting and combining - Google Patents

Traffic volume detection method using lane splitting and combining

Info

Publication number
CN104794907A
CN104794907A (application CN201510223898.3A)
Authority
CN
China
Prior art keywords
lane
moving target
condition
image
detection area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510223898.3A
Other languages
Chinese (zh)
Other versions
CN104794907B (en)
Inventor
陈涛
狄明珠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIANGSU DAWAY TECHNOLOGIES Co Ltd
Original Assignee
JIANGSU DAWAY TECHNOLOGIES Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JIANGSU DAWAY TECHNOLOGIES Co Ltd filed Critical JIANGSU DAWAY TECHNOLOGIES Co Ltd
Priority to CN201510223898.3A priority Critical patent/CN104794907B/en
Publication of CN104794907A publication Critical patent/CN104794907A/en
Application granted granted Critical
Publication of CN104794907B publication Critical patent/CN104794907B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/065 Traffic control systems for road vehicles by counting the vehicles in a section of the road or in a parking area, i.e. comparing incoming count with outgoing count

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a traffic volume detection method using lane splitting and combining. The method includes the steps of: determining a detection area in a coordinate image of a video image and splitting each lane into two half-lanes; determining the background threshold intervals of the R, G and B channels of the background image of the detection area; using these threshold intervals to filter and binarize each pixel of the detection image within the detection area, producing a binary image for each single color channel; combining the binary images of the R, G and B channels with an OR operation to obtain the final binary image; counting the number of active pixels in each half-lane; and judging, by merging the half-lane results, which lane each vehicle travels in. Splitting and then recombining the lanes greatly improves detection accuracy.

Description

Traffic flow detection method using lane splitting and merging
Technical field
The present invention relates to the field of road traffic management, and in particular to a method of video-based vehicle detection.
Background art
At present, video surveillance has become an important means of intelligent traffic management, and image processing techniques are widely used to study traffic flow, with some significant results already achieved. Traffic flow detection technology is widely applicable to collecting traffic data and monitoring vehicle flow on highways, urban roads and intersections. It generally comprises three parts: building the image background, setting the detection line, and judging when a vehicle crosses the detection area. The accuracy of vehicle detection is generally around 90%.
Current traffic flow detection technology generally supports at most four lanes of traffic data and requires a static traffic scene with a fixed camera, i.e. apart from the moving targets, the natural background in the image sequence must not change significantly; the image background is therefore usually built with a frame-differencing method. In practice, however, camera shake, changes in road-surface lighting, the shadow of the vehicle itself and other factors often cause vehicle detection errors, producing large inaccuracies.
Traditional shadow-removal methods generally determine, from the time of day, on which side of the vehicle the shadow lies, and then scan the image from the shadow side to decide whether each pixel belongs to the shadow region. Although most of a vehicle's shadow can be eliminated in this way, under more complex and changeable backgrounds residual shadows still cause vehicles to stick together (adhesion).
In some traffic scenes, many parts of the road surface are shaded by roadside trees, and these shadows limit the choice of detection area. When the traffic volume in the video is large, vehicles frequently follow one another closely. In addition, because of the imaging viewing angle, many vehicles affect several lanes as they pass. Every passing vehicle carries a shadow, and these shadows easily cause over-counting, for example when a motorcycle passes, disturbing vehicle detection. If a water sprinkler truck passes through the video, the residual water on the road causes a sudden drop in the road-surface RGB values in its lane, making vehicle detection difficult. In particular, some vehicles do not keep to a single marked lane but travel between two lanes, which may cause the vehicle count to be duplicated.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a traffic flow detection method that splits each lane into half-lanes and then merges the results, which greatly improves detection accuracy. The technical solution adopted by the present invention is as follows:
A traffic flow detection method using lane splitting and merging, comprising the following steps:
Step S1: establish the coordinate system of the video image and generate the coordinate image of the video image; in the coordinate image, determine a detection area that covers all lanes to be detected; divide each lane into two half-lanes;
Step S2: determine the background threshold interval of each of the R, G and B channels of the background image of the detection area, and periodically update the upper and lower limits of these intervals;
Step S3: use the background threshold intervals of the R, G and B channels of the detection-area background image as filters, apply filtering and binarization to each pixel of the detection image within the detection area, and generate a binary image for each single color channel; then combine the binary images of the three RGB channels with an OR operation to obtain the final binary image;
then count the number of active pixels in each half-lane region, i.e. the number of white pixels of the final binary image falling in each half-lane region;
Step S4: based on the active pixels in each half-lane region, judge when a moving target enters the half-lane region, when a moving target is present in the half-lane region, and when a moving target leaves the half-lane region; record the entry time at which the moving target enters the half-lane region and the departure time at which it leaves;
Step S5: assign the moving targets in contiguous adjacent half-lanes whose entry times are consistent to the same first group; entry times are consistent when the absolute difference between the entry times of the moving targets in the adjacent half-lanes is less than or equal to an entry-time difference threshold;
when the number of moving targets in a first group is greater than or equal to 3, split the group along the prescribed merging direction, using two half-lanes as the splitting unit, so that each first group after splitting contains 2 or 1 moving targets;
assign the moving targets in contiguous adjacent half-lanes whose departure times are consistent to the same second group; departure times are consistent when the absolute difference between the departure times of the moving targets in the adjacent half-lanes is less than or equal to a departure-time difference threshold;
when the number of moving targets in a second group is greater than or equal to 3, split the group along the prescribed merging direction, using two half-lanes as the splitting unit, so that each second group after splitting contains 2 or 1 moving targets;
Step S6: judge, according to the entry-time condition and the departure-time condition, whether the moving targets in adjacent half-lane regions belong to the same target vehicle.
The invention has the following advantages: when ordinary video suffers from scene shadows, vehicle shadows or viewing-angle problems, most existing methods resort to shadow removal. By splitting lanes and then merging the results, the present method can count the traffic flow with a small amount of computation, without considering the shadow direction or whether a vehicle keeps to its lane, and it also yields the lane position of each vehicle, considerably reducing the influence of background and vehicle shadows on traffic flow detection.
Brief description of the drawings
Fig. 1 is a schematic diagram of the coordinate system of the video image according to the present invention.
Fig. 2 is a schematic diagram of the detection area in the video image according to the present invention.
Fig. 3a and Fig. 3b are examples of dividing four lanes within the detection area according to the present invention.
Fig. 4 is a schematic diagram of active pixel detection in a single color channel according to the present invention.
Fig. 5 is an example of the binary image according to the present invention.
Fig. 6 is a schematic diagram of the half-lane merging direction according to the present invention.
Fig. 7 is a schematic diagram of moving targets in contiguous adjacent half-lanes according to the present invention.
Fig. 8 is a schematic diagram of contiguous adjacent moving targets spanning three or more half-lanes according to the present invention.
Fig. 9 is a schematic diagram of separating a group of moving targets according to the present invention.
Fig. 10 is a schematic diagram of the case in which the entry-time and departure-time conditions are both consistent and the number of spanned half-lanes equals 2, where the moving targets are judged to be one target vehicle.
Fig. 11 is a schematic diagram of the case in which one of the entry-time and departure-time conditions is consistent and the other is not, and the number of spanned half-lanes equals 2, where the moving targets are judged to be one target vehicle.
Fig. 12 is a schematic diagram of the case in which the entry-time and departure-time conditions of moving targets overlap, where the moving targets are judged to be one target vehicle.
Fig. 13 is a flow chart of the present invention.
Detailed description of the embodiments
The invention is further described below with reference to the specific drawings and embodiments.
The traffic flow detection method using lane splitting and merging proposed by the present invention comprises the following steps:
Step S1: establish the coordinate system of the video image and generate the coordinate image of the video image; in the coordinate image, determine a detection area that covers all lanes to be detected; divide each lane into two half-lanes.
Step S1 is implemented as follows:
First, the first frame of the video is read. For every pixel whose row index is a multiple of 10, the RGB value is changed to mark the row red or blue, so that red and blue alternate; the same is done for every pixel whose column index is a multiple of 10. The RGB values of rows 1 to 20 and columns 1 to 20 are then all set to 0, so that the coordinate values of the rows and columns can be displayed there. A row coordinate is displayed every 20 rows and a column coordinate every 20 columns, as shown in Fig. 1. This establishes the coordinate system of the video image and at the same time generates the coordinate image of the video image. In Fig. 1, the solid lines represent red lines and the dashed lines represent blue lines.
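The grid-marking step described above can be summarized in a minimal Python/NumPy sketch. It assumes the frame is an H x W x 3 RGB array; the function name is illustrative and the rendering of the numeric coordinate labels is omitted.

```python
import numpy as np

def make_coordinate_image(frame: np.ndarray) -> np.ndarray:
    """Overlay the red/blue coordinate grid of step S1 on a copy of the frame.

    Every row and every column whose index is a multiple of 10 is recolored,
    alternating red and blue; the first 20 rows and 20 columns are blacked out
    to leave room for the coordinate labels (label text itself not drawn here).
    """
    img = frame.copy()
    h, w, _ = img.shape
    red, blue = (255, 0, 0), (0, 0, 255)
    for r in range(0, h, 10):
        img[r, :] = red if (r // 10) % 2 == 0 else blue
    for c in range(0, w, 10):
        img[:, c] = red if (c // 10) % 2 == 0 else blue
    img[:20, :] = 0   # strip reserved for column labels
    img[:, :20] = 0   # strip reserved for row labels
    return img
```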
The coordinate image of the first frame is displayed and four vertex coordinates are chosen in it; the four vertices are connected by four line segments, and the closed rectangular region they bound, which must cover all lanes to be detected, is taken as the detection area, as shown in Fig. 2.
Here, the present invention does not use a complete lane as the unit of segmentation, because vehicles quite often drive between two lanes; with the traditional lane-splitting scheme, a vehicle straddling the lane marking would inevitably be recorded in two lanes at the same time.
In general, the width of a vehicle is greater than half a lane but less than a complete lane. Based on this prior knowledge, each complete lane is divided into two half-lanes; that is, the detection area is split longitudinally into twice the number of lanes, producing multiple half-lane regions that are detected separately. Fig. 3a and Fig. 3b show an example of dividing four lanes within the detection area.
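A minimal sketch of the half-lane split, assuming a rectangular detection area aligned with the image axes and lanes running top to bottom; the function name and the (x0, x1, y0, y1) bounds format are illustrative assumptions, not part of the patent.

```python
import numpy as np

def split_into_half_lanes(x_left, x_right, y_top, y_bottom, num_lanes):
    """Split a rectangular detection area into 2 * num_lanes half-lane regions
    of equal width (lanes assumed to run top to bottom).

    Returns a list of (x0, x1, y0, y1) bounds ordered from left to right.
    """
    edges = np.linspace(x_left, x_right, 2 * num_lanes + 1)
    return [(int(edges[i]), int(edges[i + 1]), y_top, y_bottom)
            for i in range(2 * num_lanes)]

# Example: a four-lane detection area split into eight half-lane regions.
half_lanes = split_into_half_lanes(100, 500, 200, 260, num_lanes=4)
```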
Step S2: determine the background threshold interval of each of the R, G and B channels of the background image of the detection area, and periodically update the upper and lower limits of these intervals.
Step S2 is implemented as follows:
To filter out moving vehicle targets during video detection, active pixels must be separated from background pixels; to isolate the active pixels more reliably, an analysis based on high and low thresholds for each of the three RGB channels is adopted. The detection-area background image is an image of the detection area containing no moving vehicle.
In the detection-area background image, the chromaticity of the road itself is not constant as the lighting and shadows change, but varies within a certain range; the background threshold of the detection area should therefore not be a fixed constant either, but a continuous range of variation, namely:
B(n)∈{B|b1(n)~b2(n)} (1)
where n ∈ {R, G, B}, and R, G, B denote red, green and blue respectively; b1(n) and b2(n) denote respectively the lower limit and the upper limit of the background threshold interval of color channel n of the detection-area background image.
For example, b1(R) is the minimum R value over the pixels of the red channel of the detection-area background image, and b2(R) is the maximum R value.
Meanwhile, so that the background threshold intervals of the detection area adapt automatically over time, the upper and lower limits of the RGB background threshold intervals are refreshed every 200 frames, using frames in which no moving vehicle is present in the detection area.
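A minimal sketch of determining and periodically refreshing the background threshold intervals; the 200-frame refresh period and the vehicle-free requirement follow the text, while the function names and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def background_intervals(background_roi: np.ndarray) -> dict:
    """Return {channel: (b1, b2)} with the per-channel minimum b1 and
    maximum b2 pixel value over a vehicle-free detection-area image."""
    return {ch: (int(background_roi[..., i].min()),
                 int(background_roi[..., i].max()))
            for i, ch in enumerate("RGB")}

def maybe_refresh(intervals, roi, frame_idx, vehicle_present, period=200):
    """Refresh the threshold intervals every `period` frames, but only from
    frames in which no moving vehicle occupies the detection area."""
    if frame_idx % period == 0 and not vehicle_present:
        return background_intervals(roi)
    return intervals
```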
Step S3: use the background threshold intervals of the R, G and B channels of the detection-area background image as filters, apply filtering and binarization to each pixel of the detection image within the detection area, and generate a binary image for each single color channel; then combine the binary images of the three RGB channels with an OR operation to obtain the final binary image; finally, count the number of active pixels in each half-lane region.
Step S3 is implemented as follows:
The previous step used the high and low RGB thresholds to segment the detection-area background; this step counts, within the detection area, the number of active pixels of the detection image in each half-lane region.
The RGB values of a moving vehicle differ from those of the background road surface, i.e. there is a certain chromatic distance from the background, which may be higher or lower depending on the vehicle's color.
First, the background threshold intervals of the R, G and B channels of the detection-area background image are used as filters to binarize each pixel of the detection image within the detection area, producing a binary image for each single color channel, as shown in Fig. 4. For a single color channel, the active pixel detection formula is:
F_n(v) = 1, if v < b1(n) or v > b2(n); F_n(v) = 0, if b1(n) ≤ v ≤ b2(n); n ∈ {R, G, B}   (2)
Here v is the value of the pixel in the current color channel; in the binary image, a 1 appears as a white (bright) point and a 0 as a black point.
Afterwards, the binary results of the three RGB channels are combined with an OR operation to obtain the final binary image, an example of which is shown in Fig. 5. In the final binary image, the active pixel detection formula is:
F(v) = F_R(v) | F_G(v) | F_B(v)   (3)
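Formulas (2) and (3) amount to an out-of-interval test in each channel followed by an OR of the three channel masks. A minimal NumPy sketch, assuming the detection image of the detection area is an RGB array and the intervals are stored as {channel: (b1, b2)} as in the earlier sketch:

```python
import numpy as np

def binarize(roi: np.ndarray, intervals: dict) -> np.ndarray:
    """Apply formulas (2) and (3): per-channel out-of-interval test, then OR.

    roi       -- H x W x 3 RGB detection image of the detection area
    intervals -- {'R': (b1, b2), 'G': (b1, b2), 'B': (b1, b2)}
    Returns a boolean H x W mask (True = active / white pixel).
    """
    masks = []
    for i, ch in enumerate("RGB"):
        b1, b2 = intervals[ch]
        v = roi[..., i].astype(np.int32)
        masks.append((v < b1) | (v > b2))    # F_n(v) of formula (2)
    return masks[0] | masks[1] | masks[2]     # F(v) of formula (3)
```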
After binarization, the active pixel count of each half-lane region is accumulated, i.e. the number of white (non-zero) pixels of the final binary image falling in each half-lane region is counted:
where s is the total number of pixels in a half-lane region and i is the number of half-lane regions.
After this step, a contiguous patch of white pixels, or a roughly contiguous patch containing only very thin black breaks in its interior, corresponds to a moving target entering the detection area.
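Counting the active pixels then reduces to summing the mask inside each half-lane's bounds. A minimal sketch, reusing the boolean mask and the (x0, x1, y0, y1) bounds format of the sketches above (both illustrative assumptions); the bounds must be expressed in the same coordinate frame as the mask.

```python
def count_active_pixels(mask, half_lanes):
    """Number of white (non-zero) pixels falling in each half-lane region.

    mask       -- boolean image produced by the binarization step
    half_lanes -- list of (x0, x1, y0, y1) half-lane bounds
    """
    return [int(mask[y0:y1, x0:x1].sum()) for (x0, x1, y0, y1) in half_lanes]
```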
Step S4: based on the active pixels in each half-lane region, judge when a moving target enters the half-lane region, when a moving target is present in the half-lane region, and when a moving target leaves the half-lane region; record the entry time at which the moving target enters the half-lane region and the departure time at which it leaves.
Step S4 is implemented as follows:
To judge that a moving target has entered a half-lane region, the two conditions below must be met simultaneously.
S4.1: the total number of active pixels (non-zero pixels after binarization) in the half-lane region must be greater than or equal to a threshold, referred to as the entry condition threshold; this threshold is set empirically and is generally 20% of the total pixel count of the half-lane region.
S4.2: to prevent black breaks inside a patch of white moving-target pixels from wrongly splitting what should be one target into two consecutive targets, the current video frame number minus the frame number at which the previous moving target left the detection area must be greater than or equal to a distinguishing threshold; from the video frame rate and typical vehicle speeds, this threshold is generally 3 to 12 frames.
If both conditions are met simultaneously, a moving target is considered to have entered the half-lane region.
To judge that a moving target is present in the half-lane region, the following condition must be met:
S4.3: when the active pixel count of the binary image in the half-lane region (pixels whose value is 1 after thresholding) is greater than or equal to a presence condition threshold, the moving target is considered present; an accumulator then counts the video frames until the moving target enters the leaving state. If at that point the accumulator value is greater than or equal to a given number, generally 3 to 5, the moving target is regarded as genuinely present rather than interference; otherwise it is judged to be a spurious target and the judgment is abandoned.
To judge that a moving target has left the half-lane region, one of the following two conditions must be met:
S4.4: when the active pixel count of the binary image in the half-lane region is less than or equal to a first leaving condition threshold, the moving target is judged to have left the half-lane region. The first leaving condition threshold is generally set to 0.
Judging by condition S4.4 is the simple method. It is suitable for scenes in which the vehicle shadow appears only to the left or right of the vehicle, the front and rear are unaffected by shadow, the road surface is stable with few sudden jumps in RGB values, and the interior of the moving target in the thresholded binary image contains essentially no breaks or holes.
S4.5: when the active pixel count of the binary image in the half-lane region (pixels whose value is 1 after thresholding) is less than or equal to a second leaving condition threshold, the moving target is considered to be in the leaving state; an accumulator is then started, and if the leaving state persists for at least 3 to 5 consecutive frames the moving target is considered to have left. The second leaving condition threshold may be set to 3% of the total pixel count of the half-lane region.
Judging by condition S4.5 is the more elaborate method. It is very robust and suitable for harsh environments, for example when the interior of the moving target contains many holes, when there is severe shadow interference, or when the road-surface RGB values jump sharply; under such interference the method still detects vehicles and generally neither over-counts nor misses them.
For step S4 as a whole, the difference in frame number between the finally judged departure time of a moving target and its previously judged entry time must be greater than a frame-difference threshold.
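The enter / present / leave logic of step S4 can be summarized as a small per-half-lane state machine. The sketch below uses the thresholds quoted in the text (entry threshold 20% of the region, second leaving threshold 3%, distinguishing gap of 3 to 12 frames, persistence of 3 to 5 frames); the class structure, state names and default values are illustrative assumptions rather than the patent's own formulation.

```python
class HalfLaneDetector:
    """Per-half-lane enter / present / leave logic of step S4 (sketch)."""

    def __init__(self, region_pixels, distinguish=6, persist=4):
        self.enter_thresh = 0.20 * region_pixels  # S4.1: 20% of region pixels
        self.leave_thresh = 0.03 * region_pixels  # S4.5: 3% of region pixels
        self.distinguish = distinguish            # S4.2: typically 3-12 frames
        self.persist = persist                    # S4.3 / S4.5: typically 3-5 frames
        self.state = "empty"
        self.present_frames = 0
        self.leave_frames = 0
        self.last_leave_frame = -10**9
        self.enter_frame = None

    def update(self, active_count, frame_idx):
        """Feed one frame's active-pixel count; return 'enter', 'leave' or None."""
        event = None
        if self.state == "empty":
            if (active_count >= self.enter_thresh and
                    frame_idx - self.last_leave_frame >= self.distinguish):
                self.state = "present"
                self.present_frames, self.leave_frames = 0, 0
                self.enter_frame = frame_idx
                event = "enter"
        elif self.state == "present":
            if active_count >= self.enter_thresh:
                self.present_frames += 1          # S4.3 presence accumulator
            if active_count <= self.leave_thresh:
                self.state, self.leave_frames = "leaving", 1
        elif self.state == "leaving":
            if active_count <= self.leave_thresh:
                self.leave_frames += 1
                if self.leave_frames >= self.persist:
                    self.state = "empty"
                    self.last_leave_frame = frame_idx
                    # S4.3: only report targets that persisted long enough
                    if self.present_frames >= self.persist:
                        event = "leave"
            else:
                self.state = "present"            # target reappeared
        return event
```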
Once a moving target has been judged to have left the detection area, the half-lanes must be merged in order to determine the actual lane position of the target vehicle and whether the moving targets in adjacent half-lane regions belong to the same target vehicle, and finally to output the detection result.
Depending on the direction of travel, merging is performed either to the left or to the right; by default, the first moving target on the right side relative to the vehicle's direction of travel is taken as the reference position for merging. That is, if the vehicles in the video travel from top to bottom, merging proceeds to the right; otherwise, merging proceeds to the left.
After the moving target in a half-lane region is confirmed to have left, the merging judgment can begin. From the way vehicles actually travel, when one target vehicle leaves the detection area, its active pixels in the several adjacent half-lanes it occupies should all leave the detection area within an extremely short time difference; likewise, the time differences at which they enter the detection area should also be very small.
Step S5 mainly carries out two analyses: the time at which each moving target enters a half-lane region is analyzed as one merging condition, and the time at which each moving target leaves a half-lane region is analyzed as another.
First, the entry times of the moving targets in the half-lane regions are analyzed, as shown in Fig. 7: moving targets in contiguous adjacent half-lanes whose entry times are consistent are assigned to the same first group; entry times are consistent when the absolute difference between the entry times of the moving targets in the adjacent half-lanes is less than or equal to an entry-time difference threshold. Targets in the same group may belong to the same vehicle; the reference entry-time difference threshold is generally 3 to 5 frames. In Fig. 7 the dashed rectangles represent moving targets and the direction of travel is from top to bottom.
If contiguous adjacent moving targets span three or more half-lanes, as shown in Fig. 8, with consistent entry times and a spanned half-lane count greater than or equal to 3, this indicates that two or more vehicles are running in parallel at the same time. The half-lane regions in the first group are then scanned in turn from the reference position along the merging direction and split with two half-lanes as the splitting unit, so that each first group after splitting contains 2 or 1 moving targets; after splitting, the last first group may contain either two contiguous moving targets or a single one, while every other first group contains exactly two contiguous moving targets, as shown in Fig. 9.
In Figs. 7 to 9, the dashed boxes representing moving targets all travel from top to bottom.
The lane traversed by each moving target is taken to be the lane to which the first half-lane in the merging direction belongs; if the two half-lanes belong to two different lane numbers, the vehicle is regarded as straddling the lane marking.
The departure times are analyzed in the same way: moving targets in contiguous adjacent half-lanes whose departure times are consistent are assigned to the same second group; departure times are consistent when the absolute difference between the departure times of the moving targets in the adjacent half-lanes is less than or equal to a departure-time difference threshold. When the number of moving targets in a second group is greater than or equal to 3, the group is split along the prescribed merging direction so that each second group after splitting contains 2 or 1 moving targets.
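The grouping and splitting of step S5 can be sketched as follows. The list representation (one entry per half-lane holding the entry or departure frame, or None for an empty half-lane) and the function names are illustrative assumptions.

```python
def group_half_lanes(times, diff_thresh=4):
    """Group targets in contiguous adjacent half-lanes whose entry (or
    departure) frame numbers differ by at most diff_thresh (step S5).

    times -- list indexed by half-lane: entry/departure frame, or None if empty.
    Returns a list of groups, each a list of half-lane indices.
    """
    groups, current = [], []
    for i, t in enumerate(times):
        if t is None:
            if current:
                groups.append(current)
            current = []
        elif current and abs(t - times[current[-1]]) <= diff_thresh:
            current.append(i)
        else:
            if current:
                groups.append(current)
            current = [i]
    if current:
        groups.append(current)
    return groups

def split_group(group, start_from_right=True):
    """Split a group spanning three or more half-lanes into pieces of at most
    two half-lanes, pairing from the reference (merging) side."""
    if len(group) <= 2:
        return [group]
    ordered = list(reversed(group)) if start_from_right else list(group)
    return [sorted(ordered[i:i + 2]) for i in range(0, len(ordered), 2)]
```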
Step S6: judge, according to the entry-time condition and the departure-time condition, whether the moving targets in adjacent half-lane regions belong to the same target vehicle.
Step S6 comprises the following judgment rules:
S6.1: when there is no moving target in an adjacent half-lane of the same group, the moving target occupying a single half-lane region is a single target vehicle;
the lane traversed by this single target vehicle is simply the lane to which that half-lane belongs;
S6.2: when both the entry-time condition and the departure-time condition of the moving targets are consistent and the number of spanned half-lanes equals 2, the situation is the simplest and the targets are regarded as the same target vehicle; the lane traversed is the lane to which the first half-lane in the merging direction belongs, and if the two half-lanes belong to two different lane numbers the vehicle is regarded as straddling the lane marking, as shown in Fig. 10.
S6.3: when the entry times of the moving targets are consistent but the departure-time condition is not, or the entry-time condition is not consistent but the departure times are, and the number of spanned half-lanes equals 2, as shown in Fig. 11:
When the departure times differ, the departure-time difference between the two half-lanes is examined; if its absolute value is less than or equal to a specified threshold, the moving targets in the two half-lane regions are considered to be the same object, otherwise they are treated as two independently moving objects, i.e. two separate vehicles.
Likewise, when the entry times differ, the judgment is made on the same principle from the absolute value of the entry-time difference between the two half-lanes.
Extensive observation shows that, for this judgment, the condition that a moving target leaves a half-lane region carries more weight than the condition that it enters one; accordingly, the tolerance for the absolute departure-time difference differs from the tolerance for the absolute entry-time difference.
The concrete judgment method is as follows:
Let:
Vin1 be the entry time of the moving target in the left half-lane and Vin2 the entry time of the moving target in the right half-lane;
Vout1 be the departure time of the moving target in the left half-lane and Vout2 the departure time of the moving target in the right half-lane;
Using a video frame threshold δFbase, the reference value δT is computed by the following formula:
As stated above, the condition that a moving target leaves a half-lane region is weighted more heavily than the condition that it enters one.
Let the judgment threshold for the entry or departure time difference be D; the formula is as follows:
W1 and W2 are weight coefficients with W1 > W2; typical values are W1 = 3 and W2 = 1.5;
finally, whether the targets belong to the same target vehicle is decided according to the judgment threshold D;
if the judgment result is denoted R, then R = 1 means the same target vehicle and R = 0 means different vehicles.
S6.4: when the entry-time condition and the departure-time condition of the moving targets overlap, as shown in Fig. 12, the middle moving target spans two groups: its entry time is consistent with the moving target on its left, while its departure time is consistent with the moving target on its right.
In this case, departure-time priority is adopted: the moving target is grouped as the same target vehicle with the half-lane moving target whose departure time is consistent, and the groups are separated accordingly.
For each moving target, the lane traversed is the lane to which the first half-lane in the merging direction belongs; if the two half-lanes belong to two different lane numbers, the vehicle is regarded as straddling the lane marking.
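The decision of step S6 for two adjacent half-lanes can be sketched as below. The patent's exact formulas for δT and D are not reproduced in this text, so the weighted comparison used here is only a hypothetical stand-in; it follows the stated principle that the departure-time difference (weight W1 = 3) counts more than the entry-time difference (weight W2 = 1.5), and the tolerance `frame_base` is likewise an assumed placeholder for δFbase.

```python
W1, W2 = 3.0, 1.5   # weight coefficients quoted in the text, W1 > W2

def same_vehicle(vin1, vout1, vin2, vout2, frame_base=4):
    """Decide whether the moving targets in two adjacent half-lanes belong to
    one target vehicle (step S6, hypothetical stand-in for the D formula).

    vin1 / vout1 -- entry and departure frames of the left half-lane target
    vin2 / vout2 -- entry and departure frames of the right half-lane target
    Returns 1 for the same target vehicle, 0 for separate vehicles.
    """
    d_enter = abs(vin1 - vin2)            # entry-time difference
    d_leave = abs(vout1 - vout2)          # departure-time difference
    score = W1 * d_leave + W2 * d_enter   # departure weighted more heavily
    budget = (W1 + W2) * frame_base       # assumed tolerance from frame_base
    return 1 if score <= budget else 0
```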
After the above process, a set of results is obtained: the number of valid moving target vehicles and the lane position of each target vehicle.
By accumulating, for each lane, the number of moving target vehicles passing during the detection period, the traffic volume of the road is obtained.
Some measured values from actual vehicle detection with the present invention are as follows:
Vehicle detection results
Overall, the vehicle detection accuracy reaches 97%, higher than currently known vehicle detection levels.

Claims (5)

1. A traffic flow detection method using lane splitting and merging, characterized by comprising the following steps:
Step S1: establish the coordinate system of the video image and generate the coordinate image of the video image; in the coordinate image, determine a detection area that covers all lanes to be detected; divide each lane into two half-lanes;
Step S2: determine the background threshold interval of each of the R, G and B channels of the background image of the detection area, and periodically update the upper and lower limits of these intervals;
Step S3: use the background threshold intervals of the R, G and B channels of the detection-area background image as filters, apply filtering and binarization to each pixel of the detection image within the detection area, and generate a binary image for each single color channel; then combine the binary images of the three RGB channels with an OR operation to obtain the final binary image;
then count the number of active pixels in each half-lane region, i.e. the number of white pixels of the final binary image falling in each half-lane region;
Step S4: based on the active pixels in each half-lane region, judge when a moving target enters the half-lane region, when a moving target is present in the half-lane region, and when a moving target leaves the half-lane region; record the entry time at which the moving target enters the half-lane region and the departure time at which it leaves;
Step S5: assign the moving targets in contiguous adjacent half-lanes whose entry times are consistent to the same first group; entry times are consistent when the absolute difference between the entry times of the moving targets in the adjacent half-lanes is less than or equal to an entry-time difference threshold;
when the number of moving targets in a first group is greater than or equal to 3, split the group along the prescribed merging direction, using two half-lanes as the splitting unit, so that each first group after splitting contains 2 or 1 moving targets;
assign the moving targets in contiguous adjacent half-lanes whose departure times are consistent to the same second group; departure times are consistent when the absolute difference between the departure times of the moving targets in the adjacent half-lanes is less than or equal to a departure-time difference threshold;
when the number of moving targets in a second group is greater than or equal to 3, split the group along the prescribed merging direction, using two half-lanes as the splitting unit, so that each second group after splitting contains 2 or 1 moving targets;
Step S6: judge, according to the entry-time condition and the departure-time condition, whether the moving targets in adjacent half-lane regions belong to the same target vehicle.
2. The traffic flow detection method using lane splitting and merging according to claim 1, characterized in that:
in step S1, four vertex coordinates are chosen in the coordinate image of the video image; the rectangular area bounded by these four vertices covers all lanes to be detected and is taken as the detection area.
3. The traffic flow detection method using lane splitting and merging according to claim 1, characterized in that:
in step S2, the background threshold interval of each of the R, G and B channels of the background image of the detection area is expressed as:
B(n)∈{B|b1(n)~b2(n)} (1)
wherein n ∈ {R, G, B}, and R, G, B denote red, green and blue respectively; b1(n) and b2(n) denote respectively the lower limit and the upper limit of the background threshold interval of color channel n of the detection-area background image.
4. The traffic flow detection method using lane splitting and merging according to claim 1, characterized in that:
in step S4, the condition for judging that a moving target enters a half-lane region is:
the total number of active pixels in the half-lane region is greater than or equal to an entry condition threshold, and the current video frame number minus the frame number at which the previous moving target left the detection area is greater than or equal to a distinguishing threshold;
in step S4, the condition for judging that a moving target is present in a half-lane region is:
the active pixel count of the binary image in the half-lane region is greater than or equal to a presence condition threshold, and this state persists for more than a set number of frames;
in step S4, the condition for judging that a moving target leaves a half-lane region is one of the following two conditions:
condition one: the active pixel count of the binary image in the half-lane region is less than or equal to a first leaving condition threshold, in which case the moving target is judged to have left the half-lane region;
or:
condition two: the active pixel count of the binary image in the half-lane region is less than or equal to a second leaving condition threshold, and this state persists for more than a set number of frames;
for step S4 as a whole, the difference in frame number between the finally judged departure time of a moving target and its previously judged entry time must be greater than a frame-difference threshold.
5. The traffic flow detection method using lane splitting and merging according to claim 1, characterized in that:
step S6 comprises the following judgment rules:
S6.1: when there is no moving target in an adjacent half-lane of the same group, the moving target occupying a single half-lane region is a single target vehicle;
S6.2: when both the entry-time condition and the departure-time condition of the moving targets are consistent and the number of spanned half-lanes equals 2, the targets are regarded as the same target vehicle;
S6.3: when the entry times of the moving targets are consistent but the departure-time condition is not, or the entry-time condition is not consistent but the departure times are, and the number of spanned half-lanes equals 2,
the concrete judgment method is as follows:
let:
Vin1 be the entry time of the moving target in the left half-lane and Vin2 the entry time of the moving target in the right half-lane;
Vout1 be the departure time of the moving target in the left half-lane and Vout2 the departure time of the moving target in the right half-lane;
using a video frame threshold δFbase, the reference value δT is computed by the following formula:
let the judgment threshold for the entry or departure time difference be D; the formula is as follows:
W1 and W2 are weight coefficients, with W1 > W2;
finally, whether the targets belong to the same target vehicle is decided according to the judgment threshold D;
if the judgment result is denoted R, then R = 1 means the same target vehicle and R = 0 means different vehicles;
then
S6.4: when the entry-time condition and the departure-time condition of the moving targets overlap, the moving target is grouped as the same target vehicle with the half-lane moving target whose departure time is consistent.
CN201510223898.3A 2015-05-05 2015-05-05 Traffic volume detection method using lane splitting and combining Active CN104794907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510223898.3A CN104794907B (en) 2015-05-05 2015-05-05 Traffic volume detection method using lane splitting and combining

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510223898.3A CN104794907B (en) 2015-05-05 2015-05-05 Traffic volume detection method using lane splitting and combining

Publications (2)

Publication Number Publication Date
CN104794907A true CN104794907A (en) 2015-07-22
CN104794907B CN104794907B (en) 2017-05-03

Family

ID=53559677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510223898.3A Active CN104794907B (en) 2015-05-05 2015-05-05 Traffic volume detection method using lane splitting and combining

Country Status (1)

Country Link
CN (1) CN104794907B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230343A (en) * 2018-01-05 2018-06-29 厦门华联电子股份有限公司 A kind of image processing method and device
CN109509205A (en) * 2017-09-14 2019-03-22 北京君正集成电路股份有限公司 Foreground detection method and device
CN109979204A (en) * 2019-04-02 2019-07-05 浙江多普勒环保科技有限公司 Light cuts multilane speed and acceleration detecting and its method
CN112232284A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle system based on automatic inspection of highway
CN112232286A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle image recognition system and unmanned aerial vehicle are patrolled and examined to road
CN112232285A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle system that highway emergency driveway was patrolled and examined
CN112329631A (en) * 2020-11-05 2021-02-05 浙江点辰航空科技有限公司 Method for carrying out traffic flow statistics on expressway by using unmanned aerial vehicle
CN114333356A (en) * 2021-11-30 2022-04-12 中交第二公路勘察设计研究院有限公司 Road plane intersection traffic volume statistical method based on video multi-region marks

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615880B (en) * 2018-10-29 2020-10-23 浙江浙大列车智能化工程技术研究中心有限公司 Vehicle flow measuring method based on radar image processing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510358A (en) * 2009-03-20 2009-08-19 吉林大学 Method and apparatus for processing real time statistical vehicle flowrate using video image
US20120148094A1 (en) * 2010-12-09 2012-06-14 Chung-Hsien Huang Image based detecting system and method for traffic parameters and computer program product thereof
CN103177586A (en) * 2013-03-05 2013-06-26 天津工业大学 Machine-vision-based urban intersection multilane traffic flow detection method
WO2013187748A1 (en) * 2012-06-12 2013-12-19 Institute Of Electronics And Computer Science System and method for video-based vehicle detection
CN103730015A (en) * 2013-12-27 2014-04-16 株洲南车时代电气股份有限公司 Method and device for detecting traffic flow at intersection
CN103871253A (en) * 2014-03-03 2014-06-18 杭州电子科技大学 Vehicle flow detection method based on self-adaptive background difference
CN104504913A (en) * 2014-12-25 2015-04-08 珠海高凌环境科技有限公司 Video traffic stream detection method and video traffic stream detection device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509205A (en) * 2017-09-14 2019-03-22 北京君正集成电路股份有限公司 Foreground detection method and device
CN109509205B (en) * 2017-09-14 2022-04-12 北京君正集成电路股份有限公司 Foreground detection method and device
CN108230343A (en) * 2018-01-05 2018-06-29 厦门华联电子股份有限公司 A kind of image processing method and device
CN108230343B (en) * 2018-01-05 2020-06-05 厦门华联电子股份有限公司 Image processing method and device
CN109979204A (en) * 2019-04-02 2019-07-05 浙江多普勒环保科技有限公司 Light cuts multilane speed and acceleration detecting and its method
CN112232284A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle system based on automatic inspection of highway
CN112232286A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle image recognition system and unmanned aerial vehicle are patrolled and examined to road
CN112232285A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle system that highway emergency driveway was patrolled and examined
CN112329631A (en) * 2020-11-05 2021-02-05 浙江点辰航空科技有限公司 Method for carrying out traffic flow statistics on expressway by using unmanned aerial vehicle
CN114333356A (en) * 2021-11-30 2022-04-12 中交第二公路勘察设计研究院有限公司 Road plane intersection traffic volume statistical method based on video multi-region marks
CN114333356B (en) * 2021-11-30 2023-12-15 中交第二公路勘察设计研究院有限公司 Road plane intersection traffic volume statistical method based on video multi-region marking

Also Published As

Publication number Publication date
CN104794907B (en) 2017-05-03

Similar Documents

Publication Publication Date Title
CN104794907A (en) Traffic volume detection method using lane splitting and combining
CN104246821B (en) Three-dimensional body detection device and three-dimensional body detection method
CN105005771B (en) A kind of detection method of the lane line solid line based on light stream locus of points statistics
CN108320510B (en) Traffic information statistical method and system based on aerial video shot by unmanned aerial vehicle
CN103797529B (en) Three-dimensional body detects device
CN103778786B (en) A kind of break in traffic rules and regulations detection method based on remarkable vehicle part model
CN103971521B (en) Road traffic anomalous event real-time detection method and device
US8750567B2 (en) Road structure detection and tracking
US9773317B2 (en) Pedestrian tracking and counting method and device for near-front top-view monitoring video
CN101639983B (en) Multilane traffic volume detection method based on image information entropy
CN101030256B (en) Method and apparatus for cutting vehicle image
KR101589711B1 (en) Methods and systems for processing of video data
KR100969995B1 (en) System of traffic conflict decision for signalized intersections using image processing technique
CN104504913B (en) Video car flow detection method and device
CN104282020A (en) Vehicle speed detection method based on target motion track
CN100452110C (en) Automobile video frequency discrimination speed-testing method
Chen et al. Conflict analytics through the vehicle safety space in mixed traffic flows using UAV image sequences
CN102509101B (en) Background updating method and vehicle target extracting method in traffic video monitoring
CN107330373A (en) A kind of parking offense monitoring system based on video
CN105740809A (en) Expressway lane line detection method based on onboard camera
CN104029680A (en) Lane departure warning system and method based on monocular camera
CN104463903A (en) Pedestrian image real-time detection method based on target behavior analysis
CN103324930A (en) License plate character segmentation method based on grey level histogram binaryzation
CN101727748A (en) Method, system and equipment for monitoring vehicles based on vehicle taillight detection
CN103985182A (en) Automatic public transport passenger flow counting method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 214101 Xishan Economic Development Zone, Jiangsu Province, science and Technology Industrial Park, No. 1, No.

Applicant after: Jiangsu aerospace Polytron Technologies Inc

Address before: 214101 Xishan, Jiangsu, East Road, South District, No. 39, No.

Applicant before: Jiangsu Daway Technologies Co., Ltd.

GR01 Patent grant
GR01 Patent grant