CN113140110A - Intelligent traffic control method, light-emitting device and monitoring device - Google Patents
- Publication number
- CN113140110A (application number CN202110450307.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- traffic
- pixel
- current
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G08G1/0133—Traffic data processing for classifying traffic situation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses an intelligent traffic control method, a light-emitting device and a monitoring device. The method comprises the following steps: acquiring vehicle images and pedestrian images in real-time traffic information through a target detection algorithm; obtaining a road image through multi-frame fusion and obtaining a space occupation ratio parameter from the pixel points of the pedestrian and vehicle images and the pixel points of the road image; counting the road traffic flow and pedestrian flow in unit time to obtain a traffic flow parameter; calculating the global vector motion speed by combining vector motion with angular point detection; identifying the current traffic passing state parameter with a clustering discrimination model; and indicating the passing state. The invention also provides a light-emitting device and a monitoring device for executing the method. By this method, passing groups can be effectively guided in advance to divert traffic, improving road capacity and passing efficiency.
Description
The present application is a divisional application of an invention patent application with application number 202010367593.0, entitled "intelligent traffic indication light-emitting device, monitoring device, system and method", filed on 30.4.2020.
Technical Field
The invention relates to the field of intelligent traffic, in particular to an intelligent traffic control method, a light-emitting device and a monitoring device.
Background
In recent years, China's economy has developed rapidly: the scale of cities and towns keeps expanding, road mileage and traffic networks keep growing, and the number of vehicles has increased rapidly, bringing great convenience to people's lives. But as urban traffic develops rapidly, traffic flow increases rapidly as well, and traffic jams at rush hours and during holidays have become a common phenomenon. To address this problem, traffic flow needs to be warned about, managed and even induced, and the basis for achieving these targets is judging the road traffic congestion condition.
For urban traffic, quality basically depends on whether intersections can operate effectively, because intersections are the most important distribution points in the whole urban traffic network and, in many cities, an important cause of and link in traffic congestion. How to analyze and control intersections reasonably with more effective measures and methods is therefore very important; if this problem can be solved effectively, the problem of traffic congestion can be solved to a great extent.
In addition, road monitoring cameras have been installed on a large scale in various cities in order to improve current road conditions and strengthen the monitoring and management of road traffic. Traffic management departments and citizens can obtain real-time video of the current road timely, effectively and intuitively. The acquired real-time video contains a large amount of traffic information, which greatly facilitates judging the current road congestion condition. At the same time, road monitoring video systems inevitably suffer from inaccurate and lost data, mainly caused by system faults or insufficient detection precision, so repairing lost traffic flow data is of great practical significance.
Disclosure of Invention
The present invention aims to solve the above problems by providing an intelligent traffic control method, a light-emitting device and a monitoring device. The intelligent traffic control method can accurately and quickly analyze the current road passing state; the intelligent traffic indication light-emitting device can display the congestion state of each intersection in real time; the intelligent traffic indication system is convenient to install and achieves automatic intelligent judgment; and the road congestion discrimination method greatly reduces the inaccuracy of current road monitoring systems and provides effective congestion state assessment.
In order to achieve the purpose, the invention provides the following technical scheme:
the invention provides an intelligent traffic control method, which comprises the following steps:
s1: acquiring real-time traffic information;
s2: obtaining a vehicle image and a pedestrian image in the traffic information through a target detection algorithm according to the traffic information; obtaining a road image through multi-frame fusion, and obtaining a space occupation ratio parameter through the ratio of the pixel number of the pedestrian image and the vehicle image to the pixel number of the road image;
s3: according to the space occupation ratio parameter and the target detection algorithm, in combination with the virtual detection coil, the road traffic flow and the pedestrian flow in unit time are counted to obtain a traffic flow parameter;
s4: by combining vector motion with angular point detection, obtaining key characteristic points of moving vehicles and pedestrians according to the traffic flow parameters to obtain a global vector motion speed;
s5: establishing a clustering discrimination model through the global vector motion speed to obtain a traffic passing state parameter;
s6: obtaining road section indication signals comprising traffic groups and the traffic states of the traffic groups according to the traffic state parameters;
wherein the target detection algorithm is used for detecting the currently moving vehicle image or pedestrian image in the traffic video sequence; the space occupation ratio parameter is the ratio of the vehicle width or the pedestrian width to the road width at its position; the virtual detection coil is arranged in the road monitoring video system, perpendicular to the lane and close to the camera, and the vehicle images passing through the detection coil are counted by the target detection algorithm; the global vector motion speed is the average motion speed of the target in the horizontal and vertical directions represented by the target vector; and the clustering discrimination model is used for outputting the current traffic passing state, which comprises at least one of the following: congested, unblocked, or slow.
Preferably, the step S2 includes the following sub-steps:
s21: converting the vehicle image and the pedestrian image into multi-frame binary images through the target detection algorithm, fusing the binary images with the motion trail, and performing denoising, filling and opening and closing operations to obtain a complete road image;
s22: obtaining the space occupation ratio parameter by the formula:

c = (1/N) Σ_{i=1}^{N} (A_vehicle_i / A_road_i)

where c is the space occupation ratio parameter, N is the number of video frames in unit time T, A_vehicle_i is the number of pixel points of the vehicle image or the pedestrian image in the i-th frame, A_road_i is the number of pixel points of the lane image in the i-th frame, and i is the frame index.
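As a minimal sketch of the S22 averaging (pure Python; the function and parameter names are illustrative, not taken from the patent), the space occupation ratio parameter c can be computed from per-frame pixel counts:

```python
def space_occupancy_ratio(vehicle_pixels, road_pixels):
    """Average per-frame ratio of moving-object pixels (A_vehicle) to
    lane pixels (A_road) over the N frames of one unit time T."""
    if not vehicle_pixels or len(vehicle_pixels) != len(road_pixels):
        raise ValueError("need equal-length, non-empty per-frame counts")
    n = len(vehicle_pixels)
    return sum(v / r for v, r in zip(vehicle_pixels, road_pixels)) / n
```

For two frames where the detected objects cover 50 of 200 and 100 of 200 lane pixels, c = (0.25 + 0.5) / 2 = 0.375.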
Preferably, the target detection algorithm in S21 includes:
s211: background modeling is carried out according to the 1st frame image of the video sequence,
creating for each pixel point of the image a background sample set P{x} containing M gray values:

P{x} = {x_1, x_2, x_3, ..., x_M}

where x_i is the i-th background gray value of the pixel point; each background gray value x_i in the background sample set P{x} corresponding to a pixel point is generated at random from the gray values of the current pixel point and the 8 pixel points of its neighborhood, and this random generation process is looped M times to complete the initialization of the background sample set corresponding to the current pixel point;
s212: detecting the foreground of the image:
according to the i-th frame image (i > 1) of the video sequence, the similarity between the current pixel point and its corresponding background sample set P{x} is measured by defining a sphere space S_R(x) with the current pixel gray value x as center and radius R, and counting the number of samples C# in which the sphere space S_R(x) intersects the background sample set P{x}:

C# = #(S_R(x) ∩ P{x});
S213: presetting an intersection threshold C#min,C#>C#minJudging the current pixel point as a background point, otherwise, judging the current pixel point as a foreground point;
s214: calculating the optimal segmentation threshold of the ith frame of image;
s215: the second time of discrimination is carried out,
randomly selecting K from pixel points of the current imagerandomTo calculate KrandomAverage value of gray levels of individual pixels
s216: performing OR operation on the background pixel points determined in the steps 3 and 5 to obtain an accurate foreground target image;
s217: 6, carrying out binarization on the foreground target image obtained in the step 6;
s218: and filling holes in the binarized foreground target image to obtain a foreground image.
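The sample-set initialization (S211) and sphere-intersection test (S212/S213) resemble ViBe-style background subtraction. The sketch below is a minimal illustration under assumed defaults; the sample count m, the radius and the C#_min values are placeholders, not values from the patent:

```python
import random

def init_background_samples(gray, x, y, m=20):
    """S211: build P{x} for pixel (x, y) by drawing m gray values at
    random from the pixel and its 8-neighborhood (edge-clipped)."""
    h, w = len(gray), len(gray[0])
    neigh = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    neigh = [(i, j) for i, j in neigh if 0 <= i < h and 0 <= j < w]
    return [gray[i][j] for i, j in (random.choice(neigh) for _ in range(m))]

def is_background(value, samples, radius=20, c_min=2):
    """S212/S213: count samples inside the sphere S_R(x) around the
    current gray value; background if the count C# exceeds C#_min."""
    c = sum(1 for s in samples if abs(s - value) <= radius)
    return c > c_min
```

A pixel far from every stored sample is flagged as foreground; a pixel close to enough samples is absorbed into the background.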
Preferably, the S214 includes:
s2141: assuming that the number of gray levels of the current video image is L, the corresponding gray range is [0, L-1]; let the number of pixel points in the whole video frame be K and the number of pixel points at gray level i be K_i, so that

K = Σ_{i=0}^{L-1} K_i,

from which the probability P_i that a pixel point has gray level i is obtained as P_i = K_i / K.
With L0 the segmentation threshold between foreground and background, the gray range of the foreground region is [0, L0], the gray range of the background region is [L0+1, L-1], and ω0 + ω1 = 1. The foreground region probability ω0 is:

ω0 = Σ_{i=0}^{L0} P_i, with mean gray level μ0 = (1/ω0) Σ_{i=0}^{L0} i·P_i;

the background region probability ω1 is:

ω1 = Σ_{i=L0+1}^{L-1} P_i, with mean gray level μ1 = (1/ω1) Σ_{i=L0+1}^{L-1} i·P_i;

s2143: calculating the between-class variance σ² of the foreground region and the background region:

σ² = ω0·ω1·(μ0 − μ1)²,

where the larger the value of the between-class variance σ², the larger the difference between the two regions and the better foreground and background can be distinguished; the optimal segmentation is achieved at the maximum value, and the corresponding gray value is then the optimal threshold.
s2144: determining the optimal segmentation threshold: traverse L0 over [0, L-1]; when σ² reaches its maximum value, the corresponding L0 is the optimal segmentation threshold.
Preferably, the S218 includes:
s2181: establishing an integer marking matrix D corresponding to all pixel points in the foreground target image, simultaneously initializing all elements to 0, and establishing a linear sequence G which is all 0 and is used for storing the seed points and the points in a connected domain thereof;
s2182: scanning the pixel points of the binarized whole-frame image line by line, searching for the first pixel point with gray value 255 appearing in the whole frame, and taking it as the initial pixel point S of the moving target region to be processed;
s2183: carrying out region growth to complete the searching process of a connected domain of the initial pixel point S by taking the initial pixel point S obtained by the previous scanning as a growing seed, wherein the initial pixel point S cannot be the edge of a detection target, or replacing the initial pixel point S by a pixel point which is not at the edge in eight neighborhoods of the initial pixel point S, storing the initial pixel point S in a linear sequence G, and resetting the value of the corresponding position of the initial pixel point S in an integer marking matrix D as 1;
s2184: comprehensively scanning the value of each pixel point of the linear sequence G, if data with the value of 0 exists in the eight neighborhoods of the pixel points of the linear sequence G, modifying the corresponding position in the integer marking matrix D into 2, and determining the peripheral outline of the current region;
s2185: searching for the j-th eight-neighborhood pixel point S_j of pixel point S whose marking value is 2, i.e. a point located on the peripheral outline of the target region; updating the linear sequence G with pixel point S_j and emptying the other values; taking pixel point S_j as a seed for region growing, where the growing rule is: take a pixel point out of the linear sequence G, scan its corresponding four-neighborhood pixel points S_i (i = 1, 2, 3, 4), and simultaneously look up the gray values of the corresponding eight-neighborhood pixel points through the values of the positions corresponding to S_i in the integer marking matrix D.
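The connected-domain search of S2183 is a standard region-growing (flood-fill) pass. The sketch below is a simplified 8-neighborhood version that omits the integer marking matrix D and the contour marking of S2184/S2185:

```python
from collections import deque

def region_grow(img, seed):
    """Collect the 8-connected component of foreground (255) pixels
    containing `seed`, breadth-first; img is a 2-D list of gray values."""
    h, w = len(img), len(img[0])
    region, queue = {seed}, deque([seed])
    while queue:
        x, y = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (0 <= nx < h and 0 <= ny < w
                        and img[nx][ny] == 255 and (nx, ny) not in region):
                    region.add((nx, ny))
                    queue.append((nx, ny))
    return region
```

Holes are then filled by inverting the image, growing the background component from the border, and marking every unreached pixel as foreground.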
Preferably, the S3 includes:
s31: a virtual detection coil vertical to a road is arranged in a road monitoring system, and the vehicle images and/or pedestrian images passing through the virtual detection coil are counted by using the target detection algorithm;
s32: initializing data: determine the unit time T and obtain the number of video frames N = T × f, where f denotes the video frame rate; the initial value of the number of vehicles and/or pedestrians N_vehicle is 0; the judgment result J_vehicle_i for vehicles and/or pedestrians in the i-th frame has initial value 0; i = 0;
s33: calculating the judgment result J_vehicle_i for vehicles and/or pedestrians in the current virtual detection coil for the i-th frame, namely:

J_vehicle_i = 1 if A_refresh_i > A_threshold, otherwise J_vehicle_i = 0,

where A_refresh_i is the number of updated pixel points in the detection coil area of the i-th frame, and A_threshold is the threshold on the number of updated pixel points;
s34: if J_vehicle_i = 0 for the i-th frame, no count is performed, N_vehicle = N_vehicle, and the routine proceeds to step S37;
s35: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i-1} = 0 for the (i-1)-th frame, a count is performed, N_vehicle = N_vehicle + 1, and the routine proceeds to step S37;
s36: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i-1} = 1 for the (i-1)-th frame, the same target is still within the coil, so no count is performed, N_vehicle = N_vehicle, and the routine proceeds to step S37;
s37: if i > N, the detection count ends, the number of vehicles and/or pedestrians N_vehicle is output, and the routine proceeds to step S38; otherwise, set i = i + 1 and return to step S33;
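Steps S33 to S36 amount to rising-edge counting on the coil activation signal. A minimal sketch (names are illustrative):

```python
def count_vehicles(refresh_counts, a_threshold):
    """Count 0 -> 1 transitions of J_vehicle_i, where frame i activates
    the virtual coil when A_refresh_i exceeds A_threshold (S33); only a
    rising edge counts a new vehicle/pedestrian (S35), and a sustained 1
    means the same target is still inside the coil (S36)."""
    n_vehicle, prev = 0, 0
    for a_refresh in refresh_counts:
        j = 1 if a_refresh > a_threshold else 0
        if j == 1 and prev == 0:
            n_vehicle += 1
        prev = j
    return n_vehicle
```

A pulse train with three separate coil activations yields a count of 3, regardless of how long each target remains on the coil.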
preferably, the S4 includes:
s41: the method for acquiring the motion characteristic points specifically comprises the following steps:
s411: selecting a pixel point at position (x, y) and iteratively calculating the motion speeds in the x and y directions at (x, y) as:

u_i = u_{i-1} − I_x·(I_x·u_{i-1} + I_y·v_{i-1} + I_t) / (λ + I_x² + I_y²),
v_i = v_{i-1} − I_y·(I_x·u_{i-1} + I_y·v_{i-1} + I_t) / (λ + I_x² + I_y²),

where u_i and u_{i-1} are the x-direction motion speeds at the i-th and (i-1)-th iterations respectively, v_i and v_{i-1} are the y-direction motion speeds at the i-th and (i-1)-th iterations respectively, I_x is the rate of change of the image gray level in the x direction, I_y is the rate of change of the image gray level in the y direction, I_t is the rate of change of the image gray level with time t, and λ is a Lagrange constant;
s412: if |u_i − u_{i-1}| + |v_i − v_{i-1}| > G_threshold and i ≤ N_iteration, set i = i + 1 and return to step S411 to continue iterating on the current pixel point, where G_threshold is a difference threshold and N_iteration is an iteration threshold;
s413: if |u_i − u_{i-1}| + |v_i − v_{i-1}| ≤ G_threshold and i ≤ N_iteration, select the current (x, y) as a motion feature point and end the iteration; set i = 0, return to step S411 and select other pixel points for calculation to determine whether they are motion feature points;
s414: if i > N_iteration, the current (x, y) is not a motion feature point and the iteration ends; set i = 0, return to step S411 and select other pixel points for calculation to judge whether they are motion feature points;
s415: repeating the steps S411 to S414 until all the motion characteristic points are obtained;
s42: detecting local extreme points by adopting angular point detection;
s421: processing pixel points in the image, calculating horizontal and vertical gradients, and calculating the product of the horizontal and vertical gradients;
s422: adopting a Gaussian filter to filter images and smooth noise interference;
s423: calculating an interest value for each pixel point in the image;
s424: repeating the steps S421 to S423 until all local extreme points are obtained;
s43: obtaining the overlapped pixel points from the motion feature points and the local extreme points to form the key feature points (x_key, y_key);
S44: the calculation of the global vector motion velocity is performed,
s441: determining the motion direction of the main vector;
s442: obtaining the feature points (x'_key, y'_key) lying in the main vector motion direction from the key feature points (x_key, y_key) and the main vector motion direction;
s443: calculating the global vector motion speed e:

e = sqrt(ū² + v̄²), with ū = (1/N_key) Σ_{j=1}^{N_key} u_j(x'_key, y'_key) and v̄ = (1/N_key) Σ_{j=1}^{N_key} v_j(x'_key, y'_key),

where e is the global vector motion speed, ū and v̄ are the average vector motion speeds in the horizontal and vertical directions respectively, N_key is the total number of feature points in the main vector motion direction, j is the ordinal number of a feature point in the main vector motion direction, and u_j(x'_key, y'_key) and v_j(x'_key, y'_key) are the x- and y-direction motion speeds at (x'_key, y'_key).
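One plausible reading of S443 (assumed here: the horizontal and vertical averages ū and v̄ are combined by Euclidean norm into a single speed; the patent text only defines the averages):

```python
import math

def global_vector_speed(velocities):
    """e = sqrt(u_bar^2 + v_bar^2): average the per-feature-point x/y
    motion speeds u_j, v_j over the N_key key feature points lying on
    the main motion direction, then take the vector magnitude."""
    n_key = len(velocities)
    u_bar = sum(u for u, _ in velocities) / n_key
    v_bar = sum(v for _, v in velocities) / n_key
    return math.sqrt(u_bar ** 2 + v_bar ** 2)
```

Averaging before taking the magnitude makes opposing motions cancel, so e reflects the net speed of the dominant traffic stream rather than per-point speeds.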
Preferably, the S5 includes:
s51: forming a current traffic feature vector V_traffic_current = [c, d, e]^T and a historical traffic feature vector V_traffic_history = [c, d, e]^T from the space occupation ratio parameter c, the traffic flow parameter d and the global vector motion speed e;
s52: in the current traffic feature vector V_traffic_current, if c > 0.8, judge congested and end the discrimination; if c < 0.1, judge unblocked and end the discrimination; otherwise go to step S53;
s53: clustering the historical traffic feature vectors V_traffic_history to obtain the discrimination centers of the three traffic states unblocked, slow and congested: V_traffic_smooth, V_traffic_normal and V_traffic_jam;
s54: calculating the Euclidean distances between V_traffic_current and V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_current_euler_smooth, D_current_euler_normal and D_current_euler_jam respectively;
s55: calculating the Euclidean distances between V_traffic_history and V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_history_euler_smooth, D_history_euler_normal and D_history_euler_jam respectively;
s56: if D_current_euler_smooth < D_history_euler_smooth, judge unblocked and end the discrimination; otherwise go to step S57;
s57: if D_current_euler_normal < D_history_euler_normal, judge slow and end the discrimination; otherwise go to step S58;
s58: if D_current_euler_jam < D_history_euler_jam, judge congested and end the discrimination; otherwise return to step S51 to obtain the current traffic feature vector V_traffic_current.
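The S51 to S58 discrimination can be sketched as a short-circuit on the occupancy c followed by cluster-center matching. The sketch below simplifies the distance-to-history comparisons of S56 to S58 into a plain nearest-center rule; all names and center values are illustrative:

```python
import math

def classify_traffic(current, centers):
    """current = [c, d, e]; centers maps a state name to its cluster
    center. c > 0.8 or c < 0.1 decides immediately (S52); otherwise
    return the state whose center has minimal Euclidean distance."""
    c = current[0]
    if c > 0.8:
        return "congested"
    if c < 0.1:
        return "unblocked"
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centers, key=lambda state: dist(current, centers[state]))
```

The early exits on c make the common extreme cases cheap; the clustering step only has to separate the ambiguous middle band.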
Advantageous effects:
1. the intelligent traffic indication light-emitting device receives traffic jam information transmitted from the video monitoring and detecting system, can display the traffic jam information in three colors of red, green and yellow, represents jam, smoothness and slowness, and provides real-time clear traffic road conditions for a driver;
2. the luminous sign is assembled from detachable units and processed from aluminum profiles, giving a simple structure, reasonable strength, greatly reduced weight and a uniform size specification convenient for large-scale production; the luminous light source of the intelligent traffic indication light-emitting device adopts a special design with main and auxiliary light sources, ensuring a product service life of more than 5 years and avoiding the non-luminous and non-uniform-luminance failures caused by damaged circuits and light sources in products on the market; all control units are mounted at ground level, so that after the luminous sign fails, all problems can be resolved entirely on the ground without disassembly for maintenance;
3. the controller of the intelligent traffic indication light-emitting device adjusts the current through the background control chip to realize the automatic brightness adjustment of the light-emitting sign, and can set the working time through the background to automatically detect the brightness to realize the energy-saving function; the light-emitting part is stuck with a light-transmitting film, so that the light can be reflected passively when power is cut off on the premise of ensuring the brightness; on the premise of ensuring the brightness, the unit power is greatly reduced;
4. the intelligent traffic indicating system adopts a road congestion judging method to effectively judge the congestion condition of the lane and realize intelligent congestion judgment;
5. the road congestion judging method adopts a target detection algorithm to realize accurate segmentation of foreground images by means of secondary superposition, and provides accurate basis for obtaining relevant parameters of moving vehicles;
6. the road congestion judging method adopts a vehicle track fusion algorithm and a target detection algorithm to be combined for calculation to obtain an accurate space occupation ratio, adopts a virtual detection coil and a target detection algorithm to be combined for quickly obtaining traffic flow parameters, and combines a vector motion method and angular point detection to calculate the global vector motion speed so as to reflect the overall speed of traffic flow;
7. the direct judgment of the space occupation ratio is combined with the judgment of the clustering center, so that the rapid and accurate jam condition evaluation is realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a general assembly structure view of an intelligent traffic indicating light emitting device of the present invention;
FIG. 2 is a block diagram of a cell plate of the present invention;
FIG. 3 is a schematic representation of a landmark indicator region of the present invention;
FIG. 4 is a block diagram of the mounting slot of the present invention;
FIG. 5 is a schematic view of the middle traffic indicating area of the present invention;
FIG. 6 is a block diagram of the inner closure plate of the present invention;
FIG. 7 is a block diagram of an LED light bar of the present invention;
FIG. 8 is a structural view of a light guide plate according to the present invention;
FIG. 9 is a schematic view of a display of the intelligent traffic indicating light of the present invention;
FIG. 10 is a schematic diagram of the controller assembly of the present invention;
FIG. 11 is a flow chart of a method of determining road congestion according to the present invention;
FIG. 12 is a flow chart of an object detection algorithm of the present invention;
part numbers in the drawings:
1. an outer sealing plate; 2. a controller; 3. a unit plate; 31. an LED light bar; 311. a main light source lamp bead; 312. an auxiliary light source lamp bead; 314. pressing the edge groove; 32. a lead of the LED light bar; 33. installing a clamping groove; 331. a screw hole; 34. a lamp strip clamping groove; 35. an inner sealing plate; 351. threading holes; 36. a pressurized backing plate; 37. a light guide plate; 372. a waterproof sheet; 373. a light transmissive film; 374. a light shielding film; 375. a light-reflecting film; 4. an aluminum plate; 5. an angle aluminum; 6. a light guide plate; 7. a traffic indication area; 71. red; 72. yellow; 73. green; 8. a population indicator region.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
Example 1
The invention provides an intelligent traffic indication light-emitting device, as shown in figs. 1 to 9, comprising a controller 2, an outer sealing plate 1, angle aluminum 5, an aluminum plate 4 and a plurality of unit plates 3. The unit plates 3 are assembled and fixed to one another by bolts; the angle aluminum 5 is assembled and fixed to the unit plates 3 by bolts; the outer sealing plate 1 is fixed outside the angle aluminum 5 and encloses the controller 2, the angle aluminum 5 and the plurality of unit plates 3; and the aluminum plate 4 covers the connecting portion between the angle aluminum 5 and the outer sealing plate 1.
The controller 2 controls the light emission of the unit panels 3, the plurality of unit panels 3 constituting one light emitting surface;
the unit plate 3 includes: a plurality of first cell plates for road sign display, a plurality of second cell plates for road sign indication, a plurality of third cell plates for displaying traffic groups, and a fourth cell plate disposed between the first, second, and third cell plates;
the first unit plate is internally provided with a monochromatic light source, the second unit plate is internally provided with a multicolor light source for displaying different traffic flows, the third unit plate is provided with a multicolor light source for displaying different traffic groups and different traffic group flows, and the fourth unit plate is not provided with a light source and/or is provided with a monochromatic light source different from the light sources of the first unit plate, the second unit plate and the third unit plate.
Each unit plate comprises a light guide plate 37, pressurized backing plates 36, an inner sealing plate 35, LED light bars 31 and mounting clamping grooves 33. The inner sealing plate 35 is buckled onto the middle part between two mounting clamping grooves 33, whose top openings are covered by the light guide plate 37. Pressurized backing plates 36 are arranged between the edge-pressing grooves 314 on both sides of the light guide plate 37 and the light guide plate 37 and inner sealing plate 35. The LED light bars 31 are installed in the light bar clamping grooves 34 of the light guide plate 37 and arranged opposite each other in the light guide plate 37, so that their illumination directions cross and light is emitted outward through the light guide plate 37. The lower part of each mounting clamping groove 33 is provided with assembly screw holes 331; bolts pass through the assembly screw holes 331 to fix the unit plates 3 to one another and the angle aluminum 5 to the unit plates 3. The bottom of the inner sealing plate 35 is provided with threading holes 351, through which the LED light bars are connected to the controller 2.
The brightness of the LED light bars 31 of the light-emitting device can be adjusted through a mobile phone, or adjusted automatically according to the ambient brightness or the time of day.
The LED light bar 31 consists of main light source lamp beads 311 and auxiliary light source lamp beads 312, which use monochromatic and tricolor light sources. Specifically, the LED light bars of the first unit plates use a monochromatic light source; the LED light bars of the second and third unit plates use tricolor light sources, all different from the monochromatic light source of the first unit plates, preferably red, yellow and green, with the light sources of the second and third unit plates being the same; and the LED light bars of the fourth unit plates use a monochromatic light source and/or no light source, different from the light sources of the first, second and third unit plates. The illumination mode of the light source of the third unit plates corresponds to the traffic group: for example, a children's pattern is displayed for children, in a color that depends on the density of children (red when crowded, yellow when there are more, green when there are fewer), and the display mode for vehicles and/or adults, the elderly and other groups is the same.
The light guide plate comprises a traffic indication area, a landmark indication area, a non-display area and a group indication area. Three-color light sources capable of displaying three colors are arranged under the middle traffic indication area; the landmark indication area adopts a monochromatic light source; no light source is arranged in the non-display area. The middle traffic indication area and the landmark indication area comprise a surface layer formed by a light-transmitting layer and a film waterproof layer, and a bottom layer formed by the LED light bars; the non-display area comprises a surface layer formed by a reflective film and a bottom layer formed by a light-shielding film. The controller receives the traffic jam information transmitted by the intelligent video monitoring device and generates corresponding control signals, so that the arrow of the corresponding direction (front, left or right) in the middle traffic indication area is displayed in red 71, green 73 or yellow 72, the three colors representing jammed, smooth and slow traffic respectively.
Example 2
The invention also provides an intelligent traffic indication monitoring device, as shown in fig. 10, the controller includes a processor (DSP), a clock unit, a programmable logic device, a liquid crystal display, a keyboard, a memory, and a driving circuit of the LED light bar 31.
The DSP controls and manages the operation of the whole controller; it reads, via the programmable logic device, the congestion signals from the intelligent video monitoring devices in each direction together with the serial numbers of the camera modules, and controls the corresponding LED light bars 31 of the middle traffic indication area to display the corresponding colors, indicating the congestion condition in each direction. The clock unit contains a crystal oscillator and a battery port and provides a synchronous clock for all units of the controller. The programmable logic device, based on SRAM real-time programming technology, uses SRAM lookup tables to realize digital logic functions in a large-scale integrated programmable device; it receives the congestion signals of the intelligent video monitoring device and performs timing conversion and chip-select decoding. The timing conversion comprises generating the SPI bus interface of the LED light bar 31 driving circuit and generating the read-write timing of the memory; the chip-select decoding mainly provides chip-select addresses for the memory, the clock unit and the LED light bar 31 driving circuit. The keyboard and the liquid crystal display serve as the human-machine interface: because the controller handles a large amount of timing information, the liquid crystal display adopts a 240×128 dot-matrix screen accessed by the core processor DSP through IO ports, and the keyboard circuit adopts a 25-key touch keyboard whose key values the DSP reads by scanning. The memory is a nonvolatile memory used for storing the parameters of the controller, its read-write timing being completed by the programmable logic device and the DSP. The LED light bar 31 driving circuit uses a serial shift chip with a latching function as the driving chip; the programmable logic device communicates with the driving chip over the SPI bus, and the driving circuit first isolates the output signal of the programmable logic device from the mains through an optocoupler, then amplifies it with a triode to drive a thyristor, thereby controlling the LED light bar 31.
The hardware structure avoids the problem of resetting the controller caused by mains supply interference, and the signal machine works more stably and reliably.
Example 3
The invention also provides an intelligent traffic indication system, which comprises the intelligent traffic indication light-emitting device and the intelligent traffic indication monitoring device; the monitoring device comprises an image acquisition module, an image processing module, an image storage module and a control module of the light-emitting device;
the image acquisition module acquires the traffic information of the light-emitting device area through the image shooting device, converts the acquired traffic information into a traffic video sequence and sends the traffic video sequence to the image processing module; the image processing module judges the traffic state of the road section according to the traffic information to obtain road section passing information, and the image processing module comprises a processor chip, a watchdog module, a power supply module, a memory module and a clock circuit module; the image storage module is used for storing an original image and the data processed by the image processing module; the control module sends the road section indication information to a controller of the light-emitting device.
The image acquisition part acquires a traffic image of a current lane by using a camera module, converts the acquired analog image into a traffic video sequence and sends the traffic video sequence to the image processing part, the image processing part judges the congestion state of the road section by using a road congestion judging method on a digital image, the image processing part consists of a processor chip, a watchdog module, a power supply module, a memory module and a clock circuit module, the image storage part stores an original image and a data result after image processing to ensure data safety and facilitate data viewing, and the light-emitting device control part generates a corresponding congestion signal according to the congestion state obtained by the image processing part and sends the serial number of the camera module generating the congestion signal to a controller of the intelligent traffic indication light-emitting device.
Example 4
The invention also provides an intelligent traffic indicating method, as shown in fig. 11 and 12, comprising the following steps:
s1, acquiring real-time traffic information;
specifically, a real-time traffic video sequence is obtained;
s2, obtaining vehicle images and pedestrian images in the traffic information through a target detection algorithm according to the traffic information; obtaining a road image through multi-frame fusion, and obtaining a space occupation ratio parameter through the ratio of the pixel number of the pedestrian image and the vehicle image to the pixel number of the road image;
s3, according to the space occupation ratio parameter and the target detection algorithm, combining the virtual detection coil, counting the road traffic flow and the pedestrian flow in unit time to obtain a traffic flow parameter;
s4, obtaining key characteristic points of moving vehicles and pedestrians according to the traffic flow parameters by combining vector motion and corner detection to obtain a global vector motion speed;
s5, establishing a cluster discrimination model through the global vector motion speed to obtain traffic passing state parameters (namely discriminating the traffic jam condition based on the cluster discrimination model);
and S6, obtaining road section indication signals, including the traffic groups and their traffic states, according to the traffic state parameters (i.e., outputting the judgment result: whether the road section is smooth, slow or congested).
Preferably, the step S2 includes the following sub-steps:
s21: detecting the current moving vehicle image in the traffic video sequence by the target detection algorithm and performing lane detection by a moving-vehicle track fusion method: the vehicle images in a multi-frame traffic video sequence are detected by the target detection algorithm to generate multi-frame moving-vehicle binary images, an OR operation is performed on the multi-frame binary images to complete the track fusion, and denoising, hole filling and morphological opening and closing operations are performed to obtain a complete lane image;
s22: the space lane occupancy ratio parameter is obtained by the formula c = (1/N) Σ_{i=1}^{N} (A_vehicle_i / A_road), where c is the space lane occupancy ratio parameter, N is the number of video frames in the unit time T, A_vehicle_i is the number of pixel points of the vehicle image or pedestrian image in frame i, A_road is the number of pixel points of the lane image, and i is the frame index.
Wherein N is 40 frames;
in an actual road monitoring system, the camera typically shoots a long stretch of road, which therefore appears narrow in the distance and wide up close. The method above detects the lane completely and handles side lanes more accurately; because the ratio compares the vehicle width with the road width at the same position, it describes the occupancy truthfully, and the resulting traffic parameter is stable.
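The ratio in step S22 reduces to simple pixel counting once binary masks are available. Below is a minimal Python sketch (the function name, mask format and toy data are illustrative assumptions, not part of the disclosure):

```python
import numpy as np

def space_occupancy_ratio(vehicle_masks, road_mask):
    """Average per-frame ratio of vehicle/pedestrian pixels to lane pixels.

    vehicle_masks: list of N binary masks, one per frame in unit time T.
    road_mask: binary mask of the fused lane image.
    Implements c = (1/N) * sum_i A_vehicle_i / A_road.
    """
    a_road = int(np.count_nonzero(road_mask))
    if a_road == 0:
        return 0.0
    n = len(vehicle_masks)
    return sum(np.count_nonzero(m) for m in vehicle_masks) / (n * a_road)

# Toy example: a lane mask with 8 lane pixels, one frame with 4 occupied.
road = np.array([[0, 1, 1, 0]] * 4, dtype=np.uint8)
frame = np.zeros((4, 4), dtype=np.uint8)
frame[0:2, 1:3] = 1
print(space_occupancy_ratio([frame], road))  # 0.5
```

With N = 40 frames as in the text, `vehicle_masks` would hold the 40 per-frame detections over the unit time T.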
Preferably, the step S3 includes the following sub-steps:
s31: setting a virtual detection coil perpendicular to the road in the road monitoring system, and counting the vehicle images and/or pedestrian images passing through the virtual detection coil by using the target detection algorithm;
specifically, the virtual detection coil is arranged in the road monitoring video system, perpendicular to the lane and close to the camera, and the vehicle images passing through the detection coil are counted by the target detection algorithm;
s32: initializing the data: determining the unit time T and obtaining the number of video frames N = T × f, where f is the video frame rate; the initial value of the number of vehicles and/or pedestrians N_vehicle is 0; the judgment result J_vehicle_i of the i-th frame of vehicles and/or pedestrians has initial value 0; and i = 0;
s33: calculating the judgment result J_vehicle_i of the vehicle and/or pedestrian in the current virtual detection coil for the i-th frame, namely J_vehicle_i = 1 if A_refresh_i > A_threshold, and J_vehicle_i = 0 otherwise,
where A_refresh_i is the number of updated pixel points in the detection coil area of the i-th frame and A_threshold is the threshold on the number of updated pixel points;
s34: if J_vehicle_i of the i-th frame is 0, no count is performed, N_vehicle = N_vehicle, and the process proceeds to step S37;
s35: if J_vehicle_i of the i-th frame is 1 and J_vehicle_{i-1} of the (i-1)-th frame is 0, a count is performed, N_vehicle = N_vehicle + 1, and the process proceeds to step S37;
s36: if J_vehicle_i of the i-th frame is 1 and J_vehicle_{i-1} of the (i-1)-th frame is 1, no count is performed, N_vehicle = N_vehicle, and the process proceeds to step S37;
s37: if i > N, the detection count ends and the number of vehicles and/or pedestrians N_vehicle is output; otherwise, i = i + 1 and the process returns to step S33;
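The counting logic of steps S33–S37 is a rising-edge detector on the coil's pixel-update signal: a passing is counted only when the coil changes from empty to occupied. A minimal Python sketch (function and variable names are assumed for illustration):

```python
def count_vehicles(refresh_counts, a_threshold):
    """Count passings over a virtual detection coil in one unit time.

    refresh_counts[i] is A_refresh_i, the number of updated pixels in the
    coil region of frame i. J_i = (A_refresh_i > A_threshold); a passing is
    counted on each 0 -> 1 transition of J_i, matching steps S33-S37.
    """
    n_vehicle = 0
    j_prev = 0
    for a_refresh in refresh_counts:
        j = 1 if a_refresh > a_threshold else 0
        if j == 1 and j_prev == 0:  # a new object has entered the coil
            n_vehicle += 1
        j_prev = j
    return n_vehicle

# Two occupancy episodes (frames 1-2 and 5-6) yield a count of 2.
print(count_vehicles([0, 120, 130, 5, 0, 200, 210, 0], a_threshold=50))  # 2
```

Counting on transitions rather than on every occupied frame is what prevents a slow vehicle that sits on the coil for many frames from being counted repeatedly.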
preferably, the step S4 includes the following sub-steps:
s41: the method for acquiring the motion characteristic points specifically comprises the following steps:
s411: selecting a pixel point at position (x, y), and iteratively calculating the x- and y-direction motion speeds at (x, y):
u_i = u_{i-1} − I_x (I_x u_{i-1} + I_y v_{i-1} + I_t) / (λ + I_x² + I_y²),
v_i = v_{i-1} − I_y (I_x u_{i-1} + I_y v_{i-1} + I_t) / (λ + I_x² + I_y²),
where u_i and u_{i-1} are the x-direction motion speeds at the i-th and (i-1)-th iterations respectively, v_i and v_{i-1} are the y-direction motion speeds at the i-th and (i-1)-th iterations respectively, I_x is the rate of change of the image gray scale in the x direction, I_y is the rate of change in the y direction, I_t is the rate of change with time t, and λ is the Lagrange constant;
s412: if √((u_i − u_{i-1})² + (v_i − v_{i-1})²) > G_threshold and i ≤ N_iteration, then i = i + 1 and the process returns to step S411 to continue iterating the current pixel point, where G_threshold is the difference threshold and N_iteration is the iteration threshold;
s413: if √((u_i − u_{i-1})² + (v_i − v_{i-1})²) ≤ G_threshold and i ≤ N_iteration, the current (x, y) is selected as a motion feature point and the iteration ends; i is reset to 0 and the process returns to step S411 to select another pixel point and determine whether it is a motion feature point;
s414: if i > N_iteration, the current (x, y) is not a motion feature point and the iteration ends; i is reset to 0 and the process returns to step S411 to select another pixel point and determine whether it is a motion feature point;
s415: repeating the steps S411 to S414 until all the motion characteristic points are obtained;
s42: detecting local extreme points by adopting angular point detection;
s421: processing pixel points in the image, calculating horizontal and vertical gradients, and calculating the product of the horizontal and vertical gradients;
s422: adopting a Gaussian filter to filter images and smooth noise interference;
s423: calculating an interest value for each pixel point in the image;
s424: repeating the steps S421 to S423 until all local extreme points are obtained;
s43: obtaining the pixel points where the motion feature points and the local extreme points overlap, forming the key feature points (x_key, y_key);
S44: calculating the global vector motion speed:
s441: determining the primary vector motion direction;
s442: obtaining, from the key feature points (x_key, y_key) and the primary vector motion direction, the feature points (x'_key, y'_key) lying in the primary motion direction;
s443: calculating the global vector motion speed e = √(ū² + v̄²), with ū = (1/N_key) Σ_{j=1}^{N_key} u_j(x'_key, y'_key) and v̄ = (1/N_key) Σ_{j=1}^{N_key} v_j(x'_key, y'_key),
where e is the global vector motion speed, ū and v̄ are the average vector motion speeds in the horizontal and vertical directions respectively, N_key is the total number of feature points in the primary vector motion direction, j is the ordinal number of a feature point in the primary vector motion direction, and u_j(x'_key, y'_key) and v_j(x'_key, y'_key) are the x- and y-direction motion speeds at (x'_key, y'_key).
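Step S443 is an average of the per-point flow vectors followed by a Euclidean norm. A small Python sketch under that reading (function name and toy flows are illustrative assumptions):

```python
import math

def global_vector_speed(flows):
    """Global vector motion speed e from the flow at key feature points.

    flows: list of (u_j, v_j) motion speeds at the N_key feature points
    lying in the primary motion direction. Computes e = sqrt(u_bar^2 +
    v_bar^2), where u_bar and v_bar are the mean horizontal and vertical
    speeds, as in step S443.
    """
    n_key = len(flows)
    u_bar = sum(u for u, _ in flows) / n_key
    v_bar = sum(v for _, v in flows) / n_key
    return math.hypot(u_bar, v_bar)

# Two identical flow vectors (3, 4) give mean (3, 4) and speed 5.
print(global_vector_speed([(3.0, 4.0), (3.0, 4.0)]))  # 5.0
```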
Preferably, the step S5 includes the following sub-steps:
s51: forming the current traffic feature vector V_traffic_current = [c, d, e]^T and the historical traffic feature vectors V_traffic_history = [c, d, e]^T from the space occupancy ratio parameter c, the traffic flow parameter d and the global vector motion speed e;
s52: for the current traffic feature vector V_traffic_current, if c > 0.8, congestion is judged and the discrimination ends; if c < 0.1, smooth traffic is judged and the discrimination ends; otherwise, the process proceeds to step S53;
s53: clustering the historical traffic feature vectors V_traffic_history to obtain the discrimination centers of the three traffic states smooth, slow and congested: V_traffic_smooth, V_traffic_normal and V_traffic_jam;
s54: calculating the Euclidean distances of V_traffic_current to V_traffic_smooth, V_traffic_normal and V_traffic_jam, namely D_current_euler_smooth, D_current_euler_normal and D_current_euler_jam;
s55: calculating the Euclidean distances of V_traffic_history to V_traffic_smooth, V_traffic_normal and V_traffic_jam, namely D_history_euler_smooth, D_history_euler_normal and D_history_euler_jam;
s56: if D_current_euler_smooth < D_history_euler_smooth, smooth traffic is judged and the discrimination ends; otherwise, the process proceeds to step S57;
s57: if D_current_euler_normal < D_history_euler_normal, slow traffic is judged and the discrimination ends; otherwise, the process proceeds to step S58;
s58: if D_current_euler_jam < D_history_euler_jam, congestion is judged and the discrimination ends; otherwise, the process returns to step S51 to obtain a new current traffic feature vector V_traffic_current.
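Steps S52–S58 amount to threshold shortcuts on c plus classification against clustered state centers. The sketch below substitutes a plain nearest-center rule for the historical-distance comparisons of S56–S58; the cluster centers, state names and c-thresholds (0.8 and 0.1, taken from the text) are otherwise assumptions for illustration:

```python
import numpy as np

def classify_traffic(v_current, centers):
    """Discriminate the traffic state from a [c, d, e] feature vector.

    centers: dict mapping state name -> cluster center, as would be
    obtained by clustering the historical vectors V_traffic_history
    (e.g. with k-means, k = 3). The c-thresholds shortcut the clustering,
    as in step S52.
    """
    c = v_current[0]
    if c > 0.8:
        return "congested"
    if c < 0.1:
        return "smooth"
    v = np.asarray(v_current, dtype=float)
    # Otherwise pick the state whose center is nearest in Euclidean distance.
    return min(centers, key=lambda s: np.linalg.norm(v - centers[s]))

centers = {"smooth":    np.array([0.15, 30.0, 8.0]),
           "slow":      np.array([0.45, 20.0, 4.0]),
           "congested": np.array([0.75, 10.0, 1.0])}
print(classify_traffic([0.5, 18.0, 3.5], centers))  # slow
```

In the patented method the comparison is against the historical vector's own distances rather than a pure nearest-center rule, but the data flow (cluster, then compare Euclidean distances) is the same.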
Preferably, the step S2 includes the following sub-steps:
s21: converting the vehicle image and the pedestrian image into multi-frame binary images through the target detection algorithm, fusing the binary images and the motion trail, and performing denoising, filling and opening and closing operations; obtaining a complete road image;
wherein the target detection algorithm comprises the steps of:
s211: background modeling is performed according to the 1st frame image of the video sequence:
a background sample set P{x} containing M gray values is created for each pixel point of the image,
P{x} = {x_1, x_2, x_3, ..., x_M},
where x_i is the i-th background sample gray value of the pixel point; every background gray value x_i in the background sample set P{x} of a pixel point is generated randomly from the gray value of the current pixel point and the gray values of its 8 neighboring pixel points. This random generation is repeated M times, completing the initialization of the background sample set of the current pixel point;
s212: the foreground of the image is detected:
for the i-th frame image (i > 1) of the video sequence, the similarity between the current pixel point and its background sample set P{x} is measured: a sphere space S_R(x) centered at the current gray value x with radius R is defined, and the number of samples C_# in the intersection of S_R(x) with the background sample set P{x} is
C_# = #(S_R(x) ∩ P{x});
s213: an intersection threshold C_#min is preset; if C_# > C_#min, the current pixel point is judged a background point, otherwise a foreground point;
s214: calculating the optimal segmentation threshold of the i-th frame image, specifically:
s2141: let the number of gray levels of the current video image be L, so the corresponding gray range is [0, L−1]; let the number of pixel points of the whole video frame be K and the number of pixel points at gray level i be K_i. Then the probability that a pixel point has gray level i is P_i = K_i / K;
s2142: the foreground region probability is ω_0 = Σ_{i=0}^{L_0} P_i, with foreground mean gray level μ_0 = (Σ_{i=0}^{L_0} i·P_i) / ω_0; the background region probability is ω_1 = Σ_{i=L_0+1}^{L−1} P_i, with background mean gray level μ_1 = (Σ_{i=L_0+1}^{L−1} i·P_i) / ω_1,
where L_0 is the segmentation threshold between foreground and background, the gray range of the foreground region is [0, L_0], the gray range of the background region is [L_0+1, L−1], and ω_0 + ω_1 = 1;
s2143: calculating the between-class variance of the foreground and background regions,
σ² = ω_0 ω_1 (μ_0 − μ_1)²;
the larger the between-class variance σ², the larger the difference between the two regions and the better foreground and background are distinguished, so the optimal segmentation is reached at the maximum of σ², and the corresponding gray value is then the optimal threshold;
s2144: determining the optimal segmentation threshold: L_0 is traversed over [0, L−1], and the L_0 at which σ² reaches its maximum is the optimal segmentation threshold.
S215: the second time of discrimination is carried out,
randomly selecting K from pixel points of the current imagerandomTo calculate KrandomAverage value of gray levels of individual pixels
s216: performing an OR operation on the background pixel points determined in steps S213 and S215 to obtain an accurate foreground target image;
s217: binarizing the foreground target image obtained in step S216;
s218: filling holes in the binarized foreground target image to obtain a foreground image, and specifically comprising the following steps of:
s2181: establishing an integer marking matrix D corresponding to all pixel points in the foreground target image, simultaneously initializing all elements to 0, and establishing a linear sequence G which is all 0 and is used for storing the seed points and the points in a connected domain thereof;
s2182: scanning the pixel points of the binarized whole-frame image line by line, searching for the first pixel point whose gray value is 255, and taking it as the initial pixel point S of the moving target region to be processed;
s2183: region growing is performed with the initial pixel point S obtained by the scan as the growing seed, completing the search of the connected domain of S. The initial pixel point S must not lie on the edge of the detected target; otherwise it is replaced by a non-edge pixel point in its eight-neighborhood. S is stored in the linear sequence G, and the value at the corresponding position of S in the integer mark matrix D is set to 1;
s2184: comprehensively scanning the value of each pixel point of the linear sequence G, if data with the value of 0 exists in the eight neighborhoods of the pixel points of the linear sequence G, modifying the corresponding position in the integer marking matrix D into 2, and determining the peripheral outline of the current region;
s2185: the j-th eight-neighborhood pixel point S_j of pixel point S whose mark value is 2, which lies on the peripheral outline of the target region, is searched; the linear sequence G is updated with S_j (its other values cleared), and region growing is performed with S_j as the seed. The growth rule is: a pixel point is taken out of the linear sequence G, its four-neighborhood pixel points S_i (i = 1, 2, 3, 4) are scanned, and the gray values L_8 of the corresponding eight-neighborhood pixel points are looked up via the values at the corresponding positions of S_i in the integer mark matrix D.
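The hole filling of steps S2181–S2185 can be realized more compactly by the equivalent border flood-fill: any background pixel not reachable from the image border is an interior hole of the foreground. The sketch below uses that substitution rather than the patent's seed/mark-matrix bookkeeping (function name and toy image are illustrative):

```python
from collections import deque

def fill_holes(binary):
    """Fill interior holes of a binarized foreground image (step S218).

    binary: 2D list of 0 / 255 values. Flood-fills the background (0)
    from the image border; any 0-pixel never reached is an interior
    hole and is set to 255.
    """
    h, w = len(binary), len(binary[0])
    reached = [[False] * w for _ in range(h)]
    # Seed the flood fill with every background pixel on the border.
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and binary[y][x] == 0)
    for y, x in q:
        reached[y][x] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not reached[ny][nx] \
                    and binary[ny][nx] == 0:
                reached[ny][nx] = True
                q.append((ny, nx))
    return [[255 if binary[y][x] == 255 or not reached[y][x] else 0
             for x in range(w)] for y in range(h)]

img = [[0,   0,   0,   0,   0],
       [0, 255, 255, 255,   0],
       [0, 255,   0, 255,   0],
       [0, 255, 255, 255,   0],
       [0,   0,   0,   0,   0]]
print(fill_holes(img)[2][2])  # 255: the interior hole is filled
```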
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. An intelligent traffic control method is characterized by comprising the following steps:
s1: acquiring real-time traffic information;
s2: obtaining a vehicle image and a pedestrian image in the traffic information through a target detection algorithm according to the traffic information; obtaining a road image through multi-frame fusion, and obtaining a space occupation ratio parameter through the ratio of the pixel number of the pedestrian image and the vehicle image to the pixel number of the road image;
s3: according to the space occupation ratio parameter and the target detection algorithm, in combination with the virtual detection coil, the road traffic flow and the pedestrian flow in unit time are counted to obtain a traffic flow parameter;
s4: by combining vector motion with angular point detection, obtaining key characteristic points of moving vehicles and pedestrians according to the traffic flow parameters to obtain a global vector motion speed;
s5: establishing a clustering discrimination model through the global vector motion speed to obtain a traffic passing state parameter;
s6: obtaining road section indication signals comprising traffic groups and the traffic states of the traffic groups according to the traffic state parameters;
wherein the target detection algorithm is used for detecting the current vehicle image or pedestrian image moving in the traffic video sequence; the space lane occupancy ratio parameter is the ratio of the vehicle width or pedestrian width to the road width at its position; the virtual detection coil is arranged in the road monitoring video system, perpendicular to the lane and close to the camera, and the vehicle images passing through the detection coil are counted by the target detection algorithm; the global vector motion speed is the average motion speed of the target in the horizontal and vertical directions represented by the target vector; the clustering discrimination model is used for outputting the current traffic state, and the traffic state comprises at least one of the following: congested, smooth, or slow.
2. The intelligent traffic control method according to claim 1, wherein the step S2 includes the sub-steps of:
s21: converting the vehicle image and the pedestrian image into multi-frame binary images through the target detection algorithm, fusing the binary images and the motion trail, and performing denoising, filling and opening and closing operations; obtaining a complete road image;
s22: the space lane occupancy ratio parameter is obtained by the formula c = (1/N) Σ_{i=1}^{N} (A_vehicle_i / A_road), where c is the space lane occupancy ratio parameter, N is the number of video frames in the unit time T, A_vehicle_i is the number of pixel points of the vehicle image or pedestrian image in frame i, A_road is the number of pixel points of the lane image, and i is the frame index.
3. The intelligent traffic control method according to claim 2, wherein the target detection algorithm in S21 includes:
s211: background modeling is performed according to the 1st frame image of the video sequence:
a background sample set P{x} containing M gray values is created for each pixel point of the image,
P{x} = {x_1, x_2, x_3, ..., x_M},
where x_i is the i-th background sample gray value of the pixel point; every background gray value x_i in the background sample set P{x} of a pixel point is generated randomly from the gray value of the current pixel point and the gray values of its 8 neighboring pixel points, and this random generation is repeated M times, completing the initialization of the background sample set of the current pixel point;
s212: the foreground of the image is detected:
for the i-th frame image (i > 1) of the video sequence, the similarity between the current pixel point and its background sample set P{x} is measured: a sphere space S_R(x) centered at the current gray value x with radius R is defined, and the number of samples C_# in the intersection of S_R(x) with the background sample set P{x} is
C_# = #(S_R(x) ∩ P{x});
s213: an intersection threshold C_#min is preset; if C_# > C_#min, the current pixel point is judged a background point, otherwise a foreground point;
s214: calculating the optimal segmentation threshold of the ith frame of image;
s215: a second discrimination is performed:
K_random pixel points are randomly selected from the current image, and the average gray value of these K_random pixel points is calculated;
s216: performing an OR operation on the background pixel points determined in steps S213 and S215 to obtain an accurate foreground target image;
s217: binarizing the foreground target image obtained in step S216;
s218: and filling holes in the binarized foreground target image to obtain a foreground image.
4. The intelligent traffic control method according to claim 3, wherein the S214 includes:
s2141: let the number of gray levels of the current video image be L, so the corresponding gray range is [0, L−1]; let the number of pixel points of the whole video frame be K and the number of pixel points at gray level i be K_i. Then the probability that a pixel point has gray level i is P_i = K_i / K;
s2142: the foreground region probability is ω_0 = Σ_{i=0}^{L_0} P_i, with foreground mean gray level μ_0 = (Σ_{i=0}^{L_0} i·P_i) / ω_0; the background region probability is ω_1 = Σ_{i=L_0+1}^{L−1} P_i, with background mean gray level μ_1 = (Σ_{i=L_0+1}^{L−1} i·P_i) / ω_1,
where L_0 is the segmentation threshold between foreground and background, the gray range of the foreground region is [0, L_0], the gray range of the background region is [L_0+1, L−1], and ω_0 + ω_1 = 1;
s2143: calculating the between-class variance of the foreground and background regions,
σ² = ω_0 ω_1 (μ_0 − μ_1)²;
the larger the between-class variance σ², the larger the difference between the two regions and the better foreground and background are distinguished, so the optimal segmentation is reached at the maximum of σ², and the corresponding gray value is then the optimal threshold;
5. The intelligent traffic control method according to claim 3, wherein the S218 includes:
s2181: establishing an integer marking matrix D corresponding to all pixel points in the foreground target image, simultaneously initializing all elements to 0, and establishing a linear sequence G which is all 0 and is used for storing the seed points and the points in a connected domain thereof;
s2182: scanning the pixel points of the binarized whole-frame image line by line, searching for the first pixel point whose gray value is 255, and taking it as the initial pixel point S of the moving target region to be processed;
s2183: region growing is performed with the initial pixel point S obtained by the scan as the growing seed, completing the search of the connected domain of S. The initial pixel point S must not lie on the edge of the detected target; otherwise it is replaced by a non-edge pixel point in its eight-neighborhood. S is stored in the linear sequence G, and the value at the corresponding position of S in the integer mark matrix D is set to 1;
s2184: comprehensively scanning the value of each pixel point of the linear sequence G, if data with the value of 0 exists in the eight neighborhoods of the pixel points of the linear sequence G, modifying the corresponding position in the integer marking matrix D into 2, and determining the peripheral outline of the current region;
s2185: the j-th eight-neighborhood pixel point S_j of pixel point S whose mark value is 2, which lies on the peripheral outline of the target region, is searched; the linear sequence G is updated with S_j (its other values cleared), and region growing is performed with S_j as the seed. The growth rule is: a pixel point is taken out of the linear sequence G, its four-neighborhood pixel points S_i (i = 1, 2, 3, 4) are scanned, and the gray values L_8 of the corresponding eight-neighborhood pixel points are looked up via the values at the corresponding positions of S_i in the integer mark matrix D.
6. The intelligent traffic control method according to claim 1, wherein the S3 includes:
s31: a virtual detection coil vertical to a road is arranged in a road monitoring system, and the vehicle images and/or pedestrian images passing through the virtual detection coil are counted by using the target detection algorithm;
s32: initializing the data: determining the unit time T and obtaining the number of video frames N = T × f, where f is the video frame rate; the initial value of the number of vehicles and/or pedestrians N_vehicle is 0; the judgment result J_vehicle_i of the i-th frame of vehicles and/or pedestrians has initial value 0; and i = 0;
s33: calculating the judgment result J_vehicle_i of the vehicle and/or pedestrian in the current virtual detection coil for the i-th frame, namely J_vehicle_i = 1 if A_refresh_i > A_threshold, and J_vehicle_i = 0 otherwise,
where A_refresh_i is the number of updated pixel points in the detection coil area of the i-th frame and A_threshold is the threshold on the number of updated pixel points;
S34: if J_vehicle_i of the i-th frame is 0, no count is made, N_vehicle = N_vehicle, and the process proceeds to step S37;
S35: if J_vehicle_i of the i-th frame is 1 and J_vehicle_{i-1} of the (i-1)-th frame is 0, a count is made, N_vehicle = N_vehicle + 1, and the process proceeds to step S37;
S36: if J_vehicle_i of the i-th frame is 1 and J_vehicle_{i-1} of the (i-1)-th frame is 1, no count is made, N_vehicle = N_vehicle, and the process proceeds to step S37;
S37: if i > N, ending the detection count, outputting the number of vehicles and/or pedestrians N_vehicle, and proceeding to step S38; otherwise, letting i = i + 1 and returning to step S33;
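Steps S32 to S37 amount to edge-triggered counting on the coil's per-frame occupancy judgment: a vehicle or pedestrian is counted only when the judgment flips from 0 to 1. A minimal Python sketch (the function name and the list-of-counts input are illustrative; the claim operates on live video frames):

```python
def count_vehicles(A_refresh, A_threshold):
    """Count 0 -> 1 transitions of the per-frame coil judgment.
    A_refresh: per-frame counts of updated pixel points in the coil area."""
    N_vehicle = 0
    J_prev = 0                                 # judgment of frame i-1
    for A_i in A_refresh:
        J_i = 1 if A_i > A_threshold else 0    # S33: occupancy judgment
        if J_i == 1 and J_prev == 0:           # S35: rising edge -> count
            N_vehicle += 1
        # S34 / S36: no count while the coil stays empty or stays occupied
        J_prev = J_i
    return N_vehicle
```

The rising-edge rule prevents a slow target that occupies the coil over many consecutive frames from being counted more than once.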
7. the intelligent traffic control method according to any one of claims 1 to 6, wherein the S4 includes:
s41: the method for acquiring the motion characteristic points specifically comprises the following steps:
S411: selecting a pixel point at position (x, y), and calculating the x- and y-direction motion speeds at (x, y) as:

u_i = u_{i-1} − I_x (I_x u_{i-1} + I_y v_{i-1} + I_t) / (λ + I_x² + I_y²)
v_i = v_{i-1} − I_y (I_x u_{i-1} + I_y v_{i-1} + I_t) / (λ + I_x² + I_y²)

wherein u_i and u_{i-1} are the x-direction motion speeds of the i-th and (i-1)-th frames respectively, v_i and v_{i-1} are the y-direction motion speeds of the i-th and (i-1)-th frames respectively, I_x is the rate of change of the image gray level in the x direction, I_y is the rate of change of the image gray level in the y direction, I_t is the rate of change of the image gray level with time t, and λ is a Lagrange constant;
S412: if |u_i − u_{i-1}| + |v_i − v_{i-1}| > G_threshold and i ≤ N_iteration, letting i = i + 1 and returning to step S411 to continue iterating on the current pixel point, wherein G_threshold is the difference threshold and N_iteration is the iteration threshold;
S413: if |u_i − u_{i-1}| + |v_i − v_{i-1}| ≤ G_threshold and i ≤ N_iteration, selecting the current (x, y) as a motion feature point and ending the iteration; then setting i = 0, returning to step S411, and selecting another pixel point to calculate and determine whether it is a motion feature point;
S414: if i > N_iteration, the current (x, y) is not a motion feature point and the iteration ends; then setting i = 0, returning to step S411, and selecting another pixel point to calculate and determine whether it is a motion feature point;
S415: repeating steps S411 to S414 until all motion feature points are obtained;
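The per-pixel iteration of S411 to S414 resembles a Horn–Schunck-style relaxation. Since the claim's equation image is not reproduced in the extracted text, the update form below is an assumption consistent with the listed symbols (I_x, I_y, I_t, λ), and the function and parameter names are illustrative:

```python
def flow_at_pixel(Ix, Iy, It, lam=0.1, G_threshold=1e-3, N_iteration=100):
    """Iterate the (assumed Horn-Schunck-style) flow update at one pixel.
    Returns (u, v, converged): converged=True marks a motion feature
    point candidate per S413; False corresponds to S414."""
    u, v = 0.0, 0.0
    for _ in range(N_iteration):
        # shared residual term of the update (assumed form)
        common = (Ix * u + Iy * v + It) / (lam + Ix**2 + Iy**2)
        u_new, v_new = u - Ix * common, v - Iy * common
        # S412/S413: stop when the change falls below the difference threshold
        if abs(u_new - u) + abs(v_new - v) <= G_threshold:
            return u_new, v_new, True
        u, v = u_new, v_new
    return u, v, False    # S414: iteration budget exhausted
```

For a pure horizontal gradient (I_y = 0) the vertical speed stays zero and u converges toward −I_t/I_x, which matches the brightness-constancy intuition behind the update.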
S42: detecting local extreme points by corner detection:
S421: processing the pixel points in the image: calculating the horizontal and vertical gradients, and calculating the product of the horizontal and vertical gradients;
S422: filtering the image with a Gaussian filter to smooth noise interference;
S423: calculating an interest value for each pixel point in the image;
S424: repeating steps S421 to S423 until all local extreme points are obtained;
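S421 to S423 describe the classic corner-detection pipeline: gradients, gradient products, smoothing, then an interest value per pixel. The sketch below uses the Harris response as the interest value and a 3×3 box filter as a stand-in for the Gaussian filter; the claim does not name the exact measure, so both choices are assumptions:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel interest value: R = det(M) - k * trace(M)^2, where M is
    the smoothed structure tensor built from gradient products."""
    Iy, Ix = np.gradient(img.astype(float))       # S421: gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy     # S421: gradient products

    def smooth(a):                                # S422: 3x3 box smoothing
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = smooth(Ixx), smooth(Iyy), smooth(Ixy)
    det = Sxx * Syy - Sxy * Sxy                   # S423: interest value
    return det - k * (Sxx + Syy) ** 2
```

Flat regions score zero, edges score negative, and corners score positive, so the local extreme points of S424 can be taken as local maxima of this response above a threshold.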
S43: taking the pixel points where the motion feature points and the local extreme points overlap as the key feature points (x_key, y_key);
S44: calculating the global vector motion speed:
S441: determining the main vector motion direction;
S442: obtaining the feature points (x'_key, y'_key) in the main vector motion direction from the key feature points (x_key, y_key) and the main vector motion direction;
S443: calculating the global vector motion speed e:

e = sqrt(ū² + v̄²), with ū = (1/N_key) Σ_{j=1}^{N_key} u_j(x'_key, y'_key) and v̄ = (1/N_key) Σ_{j=1}^{N_key} v_j(x'_key, y'_key),

wherein e is the global vector motion speed, ū and v̄ are the average vector motion speeds in the horizontal and vertical directions respectively, N_key is the total number of feature points in the main vector motion direction, j is the ordinal number of a feature point in the main vector motion direction, and u_j(x'_key, y'_key) and v_j(x'_key, y'_key) are the x- and y-direction motion speeds at (x'_key, y'_key).
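Under the reading that S443 averages the per-feature-point velocities over the main-direction feature points and takes the magnitude, the global vector motion speed can be computed as follows (a sketch; the magnitude form is an assumption, since the claim's equation image is not reproduced in the text):

```python
import math

def global_vector_velocity(u_vals, v_vals):
    """e = sqrt(u_bar^2 + v_bar^2), averaging the x- and y-direction
    speeds over the N_key feature points in the main vector direction."""
    N_key = len(u_vals)
    u_bar = sum(u_vals) / N_key   # mean horizontal vector motion speed
    v_bar = sum(v_vals) / N_key   # mean vertical vector motion speed
    return math.sqrt(u_bar ** 2 + v_bar ** 2)
```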
8. The intelligent traffic control method according to claim 7, wherein the S5 includes:
S51: forming the current traffic feature vector V_traffic_current = [c, d, e]^T and the historical traffic feature vector V_traffic_history = [c, d, e]^T from the space occupancy parameter c, the traffic flow parameter d, and the global vector motion speed e;
S52: in the current traffic feature vector V_traffic_current, if c > 0.8, the state is judged congested and the judgment ends; if c < 0.1, the state is judged smooth and the judgment ends; otherwise, proceed to step S53;
S53: clustering the historical traffic feature vectors V_traffic_history to obtain the discrimination centers V_traffic_smooth, V_traffic_normal, and V_traffic_jam of the three traffic states smooth, slow, and congested;
S54: calculating the Euclidean distances between V_traffic_current and V_traffic_smooth, V_traffic_normal, V_traffic_jam, denoted D_current_euler_smooth, D_current_euler_normal, and D_current_euler_jam respectively;
S55: calculating the Euclidean distances between V_traffic_history and V_traffic_smooth, V_traffic_normal, V_traffic_jam, denoted D_history_euler_smooth, D_history_euler_normal, and D_history_euler_jam respectively;
S56: if D_current_euler_smooth < D_history_euler_smooth, the state is judged smooth and the judgment ends; otherwise, proceed to step S57;
S57: if D_current_euler_normal < D_history_euler_normal, the state is judged slow and the judgment ends; otherwise, proceed to step S58;
S58: if D_current_euler_jam < D_history_euler_jam, the state is judged congested and the judgment ends; otherwise, return to step S51 to acquire the current traffic feature vector V_traffic_current.
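S52 to S58 combine occupancy thresholds with distance comparisons against the clustered historical state centers. The sketch below simplifies the S56-S58 cascade to a nearest-center rule (the claim compares current against historical distances per state, so the arg-min here is a stated simplification); the center vectors and state names are illustrative:

```python
import math

def classify_traffic(V_current, centers, thresholds=(0.8, 0.1)):
    """Classify a traffic feature vector [c, d, e].
    centers: dict mapping state name -> center vector [c, d, e]
             (assumed obtained by clustering historical vectors, S53)."""
    c = V_current[0]
    jam_thr, smooth_thr = thresholds
    if c > jam_thr:                  # S52: high space occupancy -> congested
        return 'jam'
    if c < smooth_thr:               # S52: low space occupancy -> smooth
        return 'smooth'

    def dist(a, b):                  # Euclidean distance (S54/S55)
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # simplified S56-S58: pick the state whose center is nearest
    return min(centers, key=lambda s: dist(V_current, centers[s]))
```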
9. An intelligent traffic indication light-emitting device applied in the intelligent traffic control method according to claim 1, comprising: an outer sealing plate, a controller, and a plurality of unit plates arranged in a groove of the outer sealing plate, wherein the controller controls the unit plates to emit light and the plurality of unit plates form a light-emitting surface;
the unit plate includes: a plurality of first cell plates for road sign display, a plurality of second cell plates for road sign indication, a plurality of third cell plates for displaying traffic groups, and a fourth cell plate disposed between the first, second, and third cell plates;
the first unit plate is internally provided with a monochromatic light source, the second unit plate is internally provided with a multicolor light source for displaying different traffic flows, the third unit plate is provided with a multicolor light source for displaying different traffic groups and different traffic group flows, and the fourth unit plate is not provided with a light source and/or is provided with a monochromatic light source different from the light sources of the first unit plate, the second unit plate and the third unit plate.
10. An intelligent traffic indication monitoring apparatus for generating control signals to be executed by the intelligent traffic indication light-emitting apparatus according to claims 1 to 3, characterized by comprising a processor, a clock unit, an image capturing device, a programmable logic device, a memory, and a driving circuit of the light source;
the image shooting device is used for acquiring traffic information of the light-emitting device area;
a crystal oscillator and a battery port are arranged in the clock unit to provide a synchronous clock for the controller;
the programmable logic device is used for acquiring the traffic information shot by the image shooting device and sending the traffic information to the processor;
the processor generates a corresponding control signal according to the traffic information and sends the control signal to the light-emitting device;
the driving circuit is used for controlling the light emission of the light source.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110450307.1A CN113140110B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic control method, lighting device and monitoring device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110450307.1A CN113140110B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic control method, lighting device and monitoring device |
CN202010367593.0A CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010367593.0A Division CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113140110A true CN113140110A (en) | 2021-07-20 |
CN113140110B CN113140110B (en) | 2023-06-09 |
Family
ID=71906756
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110450307.1A Active CN113140110B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic control method, lighting device and monitoring device |
CN202010367593.0A Active CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010367593.0A Active CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113140110B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113017687A (en) * | 2021-02-19 | 2021-06-25 | 上海长征医院 | Automatic identification method for B-ultrasonic image of abdominal dropsy |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07326210A (en) * | 1994-05-30 | 1995-12-12 | Matsushita Electric Works Ltd | Tunnel lamp control device |
JP2004030484A (en) * | 2002-06-28 | 2004-01-29 | Mitsubishi Heavy Ind Ltd | Traffic information providing system |
CN102136194A (en) * | 2011-03-22 | 2011-07-27 | 浙江工业大学 | Road traffic condition detection device based on panorama computer vision |
CN103150915A (en) * | 2013-02-05 | 2013-06-12 | 林祥兴 | Integral traffic information display device |
CN203673792U (en) * | 2014-01-03 | 2014-06-25 | 云南路翔市政工程有限公司 | Assembly-type LED variable information board |
CN107886739A (en) * | 2017-10-16 | 2018-04-06 | 王宁 | Traffic flow of the people automatic collecting analysis system |
CN108417057A (en) * | 2018-05-15 | 2018-08-17 | 哈尔滨工业大学 | A kind of intelligent signal lamp timing system |
CN108961756A (en) * | 2018-07-26 | 2018-12-07 | 深圳市赛亿科技开发有限公司 | A kind of automatic real-time traffic vehicle flowrate, people flow rate statistical method and system |
CN109493616A (en) * | 2018-12-06 | 2019-03-19 | 江苏华体照明科技有限公司 | Intelligent traffic lamp |
CN209260596U (en) * | 2018-11-21 | 2019-08-16 | 方显峰 | A kind of long-persistence luminous raised terrestrial reference |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101702263B (en) * | 2009-11-17 | 2011-04-06 | 重庆大学 | Pedestrian crosswalk signal lamp green wave self-adaption control system and method |
CN202473149U (en) * | 2012-03-09 | 2012-10-03 | 云南路翔市政工程有限公司 | Light-guide type active light emitting signboard |
CN103646241B (en) * | 2013-12-30 | 2017-01-18 | 中国科学院自动化研究所 | Real-time taxi identification method based on embedded system |
CN105263026B (en) * | 2015-10-12 | 2018-04-17 | 西安电子科技大学 | Global vector acquisition methods based on probability statistics and image gradient information |
CN105809984A (en) * | 2016-06-02 | 2016-07-27 | 西安费斯达自动化工程有限公司 | Traffic signal control method based on image detection and optimal velocity model |
CN105809992A (en) * | 2016-06-02 | 2016-07-27 | 西安费斯达自动化工程有限公司 | Traffic signal control method based on image detection and full velocity difference model |
CN106710261A (en) * | 2017-03-07 | 2017-05-24 | 翁小翠 | Intelligent traffic indicating device |
CN108320540A (en) * | 2018-01-30 | 2018-07-24 | 江苏瑞沃建设集团有限公司 | A kind of intelligent city's traffic lights of annular |
CN108877234B (en) * | 2018-07-24 | 2021-03-26 | 河北德冠隆电子科技有限公司 | Four-dimensional real-scene traffic simulation vehicle illegal lane occupation tracking detection system and method |
CN108961782A (en) * | 2018-08-21 | 2018-12-07 | 北京深瞐科技有限公司 | Traffic intersection control method and device |
CN109359563B (en) * | 2018-09-29 | 2020-12-29 | 江南大学 | Real-time lane occupation phenomenon detection method based on digital image processing |
- 2020-04-30: CN202110450307.1A, patent CN113140110B (en), status Active
- 2020-04-30: CN202010367593.0A, patent CN111524376B (en), status Active
Non-Patent Citations (1)
Title |
---|
WANG Hui: "Research on Traffic Congestion Discrimination Method Based on Road Surveillance Video", China Master's Theses Full-text Database * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114143940A (en) * | 2022-01-30 | 2022-03-04 | 深圳市奥新科技有限公司 | Tunnel illumination control method, device, equipment and storage medium |
CN114820931A (en) * | 2022-04-24 | 2022-07-29 | 江苏鼎集智能科技股份有限公司 | Virtual reality-based CIM (common information model) visual real-time imaging method for smart city |
CN114820931B (en) * | 2022-04-24 | 2023-03-24 | 江苏鼎集智能科技股份有限公司 | Virtual reality-based CIM (common information model) visual real-time imaging method for smart city |
Also Published As
Publication number | Publication date |
---|---|
CN113140110B (en) | 2023-06-09 |
CN111524376A (en) | 2020-08-11 |
CN111524376B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111524376B (en) | Intelligent traffic indication light-emitting device, monitoring device, system and method | |
CN102074113B (en) | License tag recognizing and vehicle speed measuring method based on videos | |
US9704060B2 (en) | Method for detecting traffic violation | |
CN102867417B (en) | Taxi anti-forgery system and taxi anti-forgery method | |
CN102708378B (en) | Method for diagnosing fault of intelligent traffic capturing equipment based on image abnormal characteristic | |
US7619648B2 (en) | User assisted customization of automated video surveillance systems | |
CN102834309A (en) | Automatic vehicle equipment monitoring, warning, and control system | |
CN111339905B (en) | CIM well lid state visual detection system based on deep learning and multiple visual angles | |
CN105844257A (en) | Early warning system based on machine vision driving-in-fog road denoter missing and early warning method | |
Song et al. | Vehicle behavior analysis using target motion trajectories | |
CN111553201A (en) | Traffic light detection method based on YOLOv3 optimization algorithm | |
CN110135383A (en) | Loading goods train video intelligent monitoring system | |
CN112697814B (en) | Cable surface defect detection system and method based on machine vision | |
CN104851288B (en) | Traffic light positioning method | |
CN107749055A (en) | A kind of fault detection method, system and the device of LED traffic guidances screen | |
CN103268470A (en) | Method for counting video objects in real time based on any scene | |
CN109887276B (en) | Night traffic jam detection method based on fusion of foreground extraction and deep learning | |
CN103198300A (en) | Parking event detection method based on double layers of backgrounds | |
CN109859519A (en) | A kind of parking stall condition detecting system and its detection method | |
Taha et al. | Day/night detector for vehicle tracking in traffic monitoring systems | |
Minnikhanov et al. | Detection of traffic anomalies for a safety system of smart city | |
Zhou et al. | Street-view imagery guided street furniture inventory from mobile laser scanning point clouds | |
CN104091402B (en) | Distinguishing system and distinguishing method of multi-state alarm color-changing lamp of 24V power supply cabinet | |
KR102178202B1 (en) | Method and apparatus for detecting traffic light | |
CN106997685A (en) | A kind of roadside parking space detection device based on microcomputerized visual |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||