CN114333356B - Road plane intersection traffic volume statistical method based on video multi-region marking - Google Patents

Road plane intersection traffic volume statistical method based on video multi-region marking

Info

Publication number
CN114333356B
CN114333356B (application CN202111446985.7A)
Authority
CN
China
Prior art keywords
vehicle
road plane
area
plane intersection
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111446985.7A
Other languages
Chinese (zh)
Other versions
CN114333356A (en)
Inventor
熊文磊
王丽园
马天奕
李正军
罗丰
杨晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCCC Second Highway Consultants Co Ltd
Original Assignee
CCCC Second Highway Consultants Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCCC Second Highway Consultants Co Ltd filed Critical CCCC Second Highway Consultants Co Ltd
Priority to CN202111446985.7A
Publication of CN114333356A
Application granted
Publication of CN114333356B

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a traffic volume statistical method for road plane intersections based on video multi-region marking, which comprises the following steps: collecting video of a road plane intersection; numbering marked areas to obtain road plane intersection marking areas; setting vehicle detection areas; creating a road plane intersection traffic volume statistics summary table; creating a set of marking area vehicle information tables; preprocessing; tracking each vehicle that has undergone the vehicle calibration operation and obtained a vehicle number; performing an in-out inspection operation; performing a screening operation; filling the vehicle information and the marking area numbers into the road plane intersection traffic volume statistics summary table; and outputting the road plane intersection traffic volume statistics summary table. The invention improves the efficiency of traffic investigation at road plane intersections and eliminates the influence of subjective factors on survey results; it avoids the excessive consumption of computing resources caused by extracting whole-course vehicle track data, and greatly reduces the difficulty of software implementation.

Description

Road plane intersection traffic volume statistical method based on video multi-region marking
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a traffic volume statistical method for a road plane intersection based on video multi-region marking.
Background
The intersection is an important node of urban traffic and a bottleneck that affects its smoothness. Road traffic frequently splits, merges and crosses at a plane intersection, so traffic conditions there are particularly complex, and urban congestion is often most prominent at intersections; solving the intersection problem is therefore key to solving the urban traffic problem. Scientific and reasonable investigation of intersection traffic volume provides basic data for the optimal design of the intersection and gives a comprehensive grasp of traffic conditions and problem symptoms.
In the prior art, traffic volume data is mostly obtained through traditional manual field investigation, i.e. through the most basic manual data collection and manual statistics; manual operation has the advantages of being simple and easy, with no high technical threshold.
In the prior art, some methods count traffic volume at road plane intersections based on different types of video, but all of them extract the whole-course track data of each vehicle from the intersection video and then carry out graphical statistics; these methods have the advantage of being independent of manpower and enabling automatic statistics by computer.
The defects of the prior art are as follows:
1. The manual investigation method in the prior art is time-consuming and labor-intensive, requires advance business training for staff, and is strongly influenced by subjective factors, so the results are not completely objective and accurate;
2. because the manual investigation method in the prior art employs a large amount of manpower over a long time, especially in the case of long-term continuous traffic investigation, it incurs a large economic cost;
3. the intersection traffic investigation methods based on whole-course vehicle track data in the prior art place a large demand on system computing power and are complex to implement, so development costs are high, systems are prone to bugs, and application and popularization remain limited.
Disclosure of Invention
Aiming at the above problems, the invention provides a traffic volume statistical method for road plane intersections based on video multi-region marking, which aims to improve the efficiency of traffic investigation at road plane intersections and eliminate the influence of subjective factors on survey results, while avoiding the excessive consumption of computing resources caused by extracting whole-course vehicle track data and greatly reducing the difficulty of software implementation.
In order to solve the problems, the technical scheme provided by the invention is as follows:
A traffic volume statistical method of a road plane intersection based on video multi-region marking comprises the following steps:
s100, collecting a road plane intersection video of a road plane intersection to be counted; the road plane intersection video comprises an unmanned aerial vehicle aerial video and a road plane intersection monitoring bayonet video;
s200, independently calibrating a unique marking area number for each inlet and outlet of each driving direction of the road plane intersection in the road plane intersection video to obtain a road plane intersection marking area; the road plane intersection marking area comprises an inlet marking area and an outlet marking area; the marking area number includes an inlet marking area number for characterizing a position of the inlet marking area and an outlet marking area number for characterizing a position of the outlet marking area;
then, a vehicle detection area for vehicle detection is arranged at each inlet and outlet of each driving direction; two adjacent vehicle detection areas are mutually independent and isolated; the vehicle detection area is a rectangular frame arranged at the road plane intersection for each driving direction; one side of the vehicle detection area coincides with the stop line, and the opposite side lies on the side of the stop line away from the road plane intersection; the lengths of the two sides of the vehicle detection area adjacent to the stop line are preset manually and coincide respectively with the leftmost edge of the first lane on the left and the rightmost edge of the first lane on the right in the same direction; each vehicle detection area covers all traffic lanes in the same direction within the area;
S300, creating a road plane intersection traffic volume statistics summary table for recording traffic volumes of different vehicle types in each driving direction of the road plane intersection;
s400, creating a marked area vehicle information table set; the marked area vehicle information table set comprises a plurality of marked area vehicle information tables for storing vehicle information of vehicles passing through the marked area of the road plane intersection; the marking area vehicle information table and the road plane intersection marking area are in one-to-one correspondence; the vehicle information includes a vehicle type and a vehicle number; the vehicle type contains the character strings "Car", "Bus" and "Truck"; the vehicle numbers and the vehicles are in one-to-one correspondence;
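The tables created in S300 and S400 map naturally onto simple data structures. The sketch below (Python; all names are illustrative and not taken from the patent) shows the vehicle-information record and the m+n per-region tables described above:

```python
from dataclasses import dataclass

# Vehicle types named in S400; the y-codes follow the embodiment (Car=1, Bus=2, Truck=3)
VEHICLE_TYPES = {"Car": 1, "Bus": 2, "Truck": 3}

@dataclass
class VehicleInfo:
    vehicle_id: int      # unique vehicle number within the video sequence
    vehicle_type: str    # one of "Car", "Bus", "Truck"

def make_region_tables(m_entries: int, n_exits: int) -> dict:
    """One vehicle-information table per marking area: m entrance tables
    R1..Rm and n exit tables S1..Sn, i.e. m+n tables in total."""
    tables = {f"R{i}": [] for i in range(1, m_entries + 1)}
    tables.update({f"S{j}": [] for j in range(1, n_exits + 1)})
    return tables
```

Each table starts empty and is appended to during the in-out inspection operation of S600.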
s500, performing new-entry vehicle identification and calibration operation on the road plane intersection video, wherein the method specifically comprises the following steps of:
s510, segmenting the video of the road plane intersection into a video frame stream of the road plane intersection at a manually preset acquisition frequency; the road plane intersection video frame stream comprises video frames which are arranged in a time sequence;
s520, creating a current check frame pointer; the current check frame pointer points to the storage address of the video frame, and the initial value of the offset of the address value of the current check frame pointer is 0;
S530, checking the video frames pointed to by the current check frame pointer, and calibrating each one as an initial frame, a new target occurrence frame or a normal frame according to the following standard:
the video frame pointed when the initial value of the offset of the address value of the current check frame pointer is 0 is marked as the initial frame;
the video frame in which a newly entered vehicle appears is marked as the new target occurrence frame; a newly entered vehicle is a vehicle that is not present in the video frame immediately preceding the one pointed to by the current check frame pointer;
the video frames which meet neither the standard for the initial frame nor the standard for the new target occurrence frame are marked as normal frames;
s540, performing the following operations according to the calibration result:
if the video frame pointed by the current check frame pointer is the normal frame, directly executing S600;
if the video frame pointed by the current check frame pointer is the initial frame or the new target occurrence frame, respectively defining a vehicle boundary frame for each new entering vehicle in the video frame pointed by the current check frame pointer; the vehicle boundary frame is a rectangle framed along the outer edge of the vehicle and synchronously moves along with the vehicle; then performing a vehicle type identification operation for calibrating one of the vehicle types for each of the vehicles and a vehicle calibration operation for assigning one of the vehicle numbers for each of the vehicles once for each of the newly entered vehicles;
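As a minimal sketch of the S530-S540 frame calibration, assuming some detector already yields the set of vehicles visible in each frame, the classification and numbering logic might look like this (illustrative names only, not the patent's implementation):

```python
def classify_frame(index: int, current_ids: set, previous_ids: set) -> str:
    """S530: label the frame pointed to by the check pointer.
    Offset 0 is the initial frame; a frame containing a vehicle absent from
    the immediately preceding frame is a new target occurrence frame;
    everything else is a normal frame."""
    if index == 0:
        return "initial"
    if current_ids - previous_ids:   # at least one newly entered vehicle
        return "new_target"
    return "normal"

def assign_numbers(new_vehicles, next_id: int):
    """S540: assign each newly entered vehicle a unique vehicle number."""
    numbered = {}
    for v in new_vehicles:
        numbered[v] = next_id
        next_id += 1
    return numbered, next_id
```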
S600, performing in-out inspection operation on the video of the road plane intersection, wherein the method specifically comprises the following steps:
s610, tracking each vehicle which has undergone the vehicle calibration operation and successfully obtains the vehicle number;
then checking whether the vehicle bounding box of the vehicle, which has been subjected to the vehicle marking operation and successfully obtained the vehicle number, and the road plane intersection marking area overlap for each of the video frames pointed to by the current check frame pointer, and making the following operations according to the check result:
if no vehicle boundary box of the vehicle overlaps the road plane intersection marking area, adding 1 to the address value of the current check frame pointer; then return to and execute again S530;
if the vehicle boundary frame of the vehicle is overlapped with the road plane intersection marking area, recording the vehicle number of the vehicle corresponding to the vehicle boundary frame overlapped with the road plane intersection marking area into the marking area vehicle information table corresponding to the road plane intersection marking area overlapped with the vehicle boundary frame; then S700 is performed;
S700, performing a screening operation on the marking area vehicle information tables, and retaining the vehicle information of each vehicle that is recorded both in a marking area vehicle information table corresponding to an entrance marking area and in a marking area vehicle information table corresponding to an exit marking area;
S800, filling the vehicle information of each vehicle, the import marking area number corresponding to the import marking area corresponding to the reserved record of each vehicle and the exit marking area number corresponding to the exit marking area corresponding to the reserved record of each vehicle obtained by screening in the S700 into a traffic volume statistics summary table of the road plane intersection;
S900, outputting the road plane intersection traffic volume statistics summary table processed by S800 in real time, and adding 1 to the address value of the current check frame pointer; then returning to and executing S530 again;
the road plane intersection traffic volume statistics summary table output in real time is the final result obtained by the method.
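Taken together, S500 through S800 reduce to a single pass over the frame stream: record which marking areas each vehicle's bounding box overlaps, then keep the vehicles seen at both an entrance and an exit. A toy end-to-end sketch (bounding boxes and regions as (r1, c1, r2, c2) row/column rectangles; all names are illustrative, not from the patent):

```python
def rect_overlap(b, r):
    """Area of intersection between bounding box b and marking area r,
    each given as (r1, c1, r2, c2) with r1 < r2 and c1 < c2."""
    dr = min(b[2], r[2]) - max(b[0], r[0])
    dc = min(b[3], r[3]) - max(b[1], r[1])
    return max(dr, 0) * max(dc, 0)

def run_pipeline(frames, entry_regions, exit_regions):
    """Toy version of S500-S800: each frame maps vehicle_id -> bounding box;
    a vehicle is logged to a region table on its first overlap with that
    region; entry/exit pairs are then matched (the screening of S700)."""
    entry_hits, exit_hits = {}, {}
    for frame in frames:
        for vid, box in frame.items():
            for name, reg in entry_regions.items():
                if rect_overlap(box, reg) > 0:
                    entry_hits.setdefault(vid, name)
            for name, reg in exit_regions.items():
                if rect_overlap(box, reg) > 0:
                    exit_hits.setdefault(vid, name)
    # keep only vehicles recorded at both an entrance and an exit
    return {vid: (entry_hits[vid], exit_hits[vid])
            for vid in entry_hits if vid in exit_hits}
```

Note how the driving direction falls out of the (entrance, exit) pair alone, with no whole-course trajectory ever stored.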
Preferably, the vehicle detection area for vehicle detection is provided at each of the entrance and the exit in each driving direction in S200, specifically including the steps of:
s210, counting the number of inlets in the road plane intersection, and recording the number as the total number of inlet marking areas; counting the number of outlets in the road plane intersection, and recording the number as the total number of outlet marking areas;
s220, marking each of the inlet marking area and the outlet marking area, and setting the vehicle detection area; the vehicle detection areas cover all traffic lanes in the corresponding area, and two adjacent vehicle detection areas are mutually independent and isolated;
S230, adjusting the frames of the vehicle detection area to enable the frames to have intervals with manual preset widths.
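The geometry of S210-S230 (one side of the rectangle on the stop line, the far side a fixed depth away from the intersection, and a preset margin so adjacent areas keep an interval) can be sketched as follows; the coordinate convention (rows growing away from the intersection) and all parameter names are assumptions for illustration:

```python
def detection_area(stop_line_row, left_col, right_col, depth, margin=0):
    """S220-S230: build one vehicle detection area as (r1, c1, r2, c2).
    The near side lies on the stop line; the far side is `depth` rows away
    from the intersection; `margin` pulls the lateral sides inward so that
    adjacent detection areas keep a manually preset interval (S230)."""
    return (stop_line_row, left_col + margin,
            stop_line_row + depth, right_col - margin)
```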
Preferably, the road plane intersection traffic statistics summary table contains the vehicle type, the marking area number and traffic data of corresponding vehicle types for characterizing respective directions in the road plane intersection; the initial value of the traffic data is 0.
Preferably, the marked area vehicle information table set further includes an inlet attribute for characterizing the inlet marked area taken by the vehicle and an outlet attribute for characterizing the outlet marked area taken by the vehicle.
Preferably, in S610, the step of checking whether the vehicle bounding box of the vehicle that has undergone the vehicle calibration operation and successfully obtained the vehicle number overlaps the road plane intersection marking area for each of the video frames pointed to by the current check frame pointer specifically includes the steps of:
s611, calculating the frame overlapping degree of the vehicle boundary frame and the road plane intersection marking area; the frame overlap is expressed as:
wherein: the quantity on the left is the frame overlap degree; vehicle_id is the vehicle number; x is the sequential index representing the position of the video frame in the road plane intersection video frame stream, with initial value 1; [r1, c1], [r2, c2], [r3, c3], [r4, c4] respectively represent the coordinates of the 4 vertices of the vehicle bounding box in a rectangular coordinate system, where r denotes the row coordinate and c the column coordinate; [rw1, cl1], [rw2, cl2], ... respectively represent the coordinates of the vertices of the road plane intersection marking area in the same coordinate system, where rw denotes the row coordinate and cl the column coordinate;
S612, judging one by one whether the relation between the frame overlap degrees of 2 adjacent video frames satisfies the following formula:
then, according to the result of the judgment, performing the following operations:
if it is satisfied, writing the vehicle information into the marking area vehicle information table corresponding to the road plane intersection marking area;
if it is not satisfied, judging whether the relation between the frame overlap degrees of the two adjacent video frames satisfies the following formula:
then, according to the result of the judgment, performing the following operations:
if it is satisfied, writing the vehicle information into the marking area vehicle information table corresponding to the road plane intersection marking area;
if it is not satisfied, not writing the vehicle information into the marking area vehicle information table corresponding to the road plane intersection marking area; then adding 1 to the value of x.
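The two formulas referenced in S611-S612 did not survive extraction into this text. A common convention, assumed here purely for illustration, is to write the vehicle into a region's table at the frame where its overlap degree first becomes positive after being zero, so that each pass through a region is recorded exactly once:

```python
def first_entry_events(overlaps):
    """Given the per-frame overlap degrees O_1..O_k of one vehicle with one
    marking area (x starting at 1, as in S611), return the 1-based frame
    indices at which the vehicle should be written to that area's table:
    the frames where the overlap turns positive after a zero-overlap frame."""
    events = []
    prev = 0.0
    for x, o in enumerate(overlaps, start=1):
        if o > 0 and prev == 0:
            events.append(x)
        prev = o
    return events
```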
Preferably, S700 performs a screening operation on the marking area vehicle information tables, specifically including the following steps:
s710, establishing an import vehicle information traversing pointer; the import vehicle information traversing pointer points to a storage address of the marking area vehicle information table corresponding to the import marking area, and the initial value of the offset of the address value of the import vehicle information traversing pointer is 0;
S720, taking out the vehicle number from the vehicle information most recently recorded in the marking area vehicle information table corresponding to the exit marking area in which the latest record update occurred;
s730, traversing the import vehicle information traversing pointer to point to the marked area vehicle information table by taking the vehicle number as a retrieval key, and then performing the following operations according to a retrieval result:
if the vehicle number can be retrieved in the marked area vehicle information table pointed by the import vehicle information traversing pointer, the exit marked area number corresponding to the exit marked area where the record update happens last time and the import marked area number corresponding to the import marked area corresponding to the marked area vehicle information table pointed by the import vehicle information traversing pointer are taken out; then adding 1 to the address value of the import vehicle information traversing pointer; then S740 is performed;
If the vehicle number cannot be retrieved in the marked area vehicle information table pointed by the import vehicle information traversing pointer, adding 1 to the address value of the import vehicle information traversing pointer; then S740 is performed;
s740, checking whether the address value of the imported vehicle information traversing pointer is larger than the address value of the last marked area vehicle information table, and making the following operations according to the checking result:
if the address value of the import vehicle information traversal pointer is not greater than the address value of the last tag area vehicle information table, returning to and executing again S730;
if the address value of the import vehicle information traversal pointer is greater than the address value of the last marking area vehicle information table, S800 is performed.
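The S710-S740 pointer traversal amounts to searching the m entrance-region tables for the vehicle number that was just appended to an exit-region table. A compact sketch (illustrative names, not the patent's implementation):

```python
def screen_exit_record(vehicle_id, entry_tables, exit_region):
    """S710-S740: after vehicle_id is recorded in exit_region's table,
    traverse the entrance-region vehicle information tables for the same
    number. Returns the matched (entry_region, exit_region) pair, or None
    when the number is found in no entrance table (the traversal pointer
    runs past the last table without a hit)."""
    for entry_region, records in entry_tables.items():
        if vehicle_id in records:
            return entry_region, exit_region
    return None
```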
Preferably, S800, in which the vehicle information of each vehicle obtained in S700 and the marking area numbers of all the road plane intersection marking areas are filled into the road plane intersection traffic volume statistics summary table, further comprises the following step:
s910, after the vehicle information of one vehicle and the marking area numbers of all marking areas of the road plane intersections are successfully filled into the road plane intersection traffic volume statistics summary table, the value of the traffic volume data is increased by 1.
Preferably, in S510, the video of the road plane intersection is segmented into a video frame stream of the road plane intersection by using a convolutional neural network constructed in advance at an artificially preset acquisition frequency.
Preferably, in S540, a vehicle type identification operation for calibrating one of the vehicle types for each vehicle and a vehicle calibration operation for assigning one of the vehicle numbers for each vehicle are performed once for each of the newly entered vehicles using a convolutional neural network constructed in advance.
Preferably, in S610, the DeepSORT algorithm is used to track, in each video frame, each vehicle that has undergone the vehicle calibration operation and successfully obtained a vehicle number.
Compared with the prior art, the invention has the following advantages:
1. Since a large number of personnel no longer need to be assigned to long, high-intensity field work, the panoramic video of the road plane intersection is obtained directly by unmanned aerial vehicle aerial photography or from traffic police departments, and the traffic data of the intersection is obtained by processing this video with the method; this greatly improves the efficiency of traffic investigation at road plane intersections, eliminates the influence of subjective factors on survey results, and makes the results objective and accurate;
2. The application judges the driving direction using the entrance and exit marking areas of the road plane intersection, without the whole-course path data of the vehicle required in the prior art, thereby avoiding the excessive consumption of computing resources caused by extracting whole-course track data; the algorithm flow is simple and clear, and the difficulty of software implementation is greatly reduced.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present application;
fig. 2 is a schematic view of a road-plane intersection according to an embodiment of the present application.
Detailed Description
The present application is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are meant to illustrate the application rather than limit its scope; after reading the application, modifications in various equivalent forms by those skilled in the art fall within the scope defined by the appended claims.
As shown in fig. 1, the traffic statistics method for the road plane intersection based on the video multi-region mark comprises the following steps:
s100, collecting a road plane intersection video of a road plane intersection to be counted; the road plane intersection video comprises unmanned aerial vehicle aerial video and road plane intersection monitoring bayonet video.
It should be noted that the pictures of the road plane intersection video must simultaneously satisfy the following requirements:
a high shooting angle, no occlusion, a fixed camera angle, complete coverage of the road plane intersection, and clearly discernible vehicle contours in the video.
S200, independently calibrating a unique marking area number for each inlet and outlet of each driving direction of a road plane intersection in a road plane intersection video to obtain a road plane intersection marking area; the road plane intersection marking area comprises an inlet marking area and an outlet marking area; the marking area number includes an inlet marking area number for characterizing a position of the inlet marking area and an outlet marking area number for characterizing a position of the outlet marking area.
Then, a vehicle detection area for vehicle detection is arranged at each inlet and outlet of each driving direction; two adjacent vehicle detection areas are mutually independent and isolated; the vehicle detection area is a rectangular frame arranged at the road plane intersection for each driving direction; one side of the vehicle detection area coincides with the stop line, and the opposite side lies on the side of the stop line away from the road plane intersection; the lengths of the two sides of the vehicle detection area adjacent to the stop line are preset manually and coincide respectively with the leftmost edge of the first lane on the left and the rightmost edge of the first lane on the right in the same direction; each vehicle detection area covers all traffic lanes in the same direction within the area.
As shown in FIG. 2, in this embodiment, the entrance marking areas are numbered R_i (1 ≤ i ≤ m) and the exit marking areas are numbered S_j (1 ≤ j ≤ n), wherein: m represents the number of entrance marking areas of the intersection; n represents the number of exit marking areas of the intersection; R denotes an entrance marking area; S denotes an exit marking area; i and j are the corresponding counters identifying the labels of the respective areas.
The inlet marking area number and the outlet marking area number are manually marked by a user using the method.
It should be further noted that, to facilitate use by subsequent users, each road plane intersection marking area in this embodiment is given a unique marking area name in addition to its unique marking area number; the marking area name is custom-defined by the user and can be modified and edited to aid memory and information transfer.
In this embodiment, a vehicle detection area for vehicle detection is provided at each of the entrance and the exit in each driving direction, and specifically includes the following steps:
s210, counting the number of inlets in the road plane intersection, and recording the number as the total number of inlet marking areas; the number of exits in the road plane intersection is counted and recorded as the total number of exit marking areas.
S220, marking each inlet marking area and each outlet marking area, and setting a vehicle detection area; the vehicle detection areas cover all traffic lanes in the corresponding area, and two adjacent vehicle detection areas are mutually independent and isolated.
S230, adjusting the frames of the vehicle detection area to enable the frames to have intervals with manual preset widths.
It should be noted that S230 trims the borders [[rw_1, cl_1], [rw_2, cl_2], ...] of adjacent vehicle detection areas with the vehicle bounding box [[r_1, c_1], [r_2, c_2], [r_3, c_3], [r_4, c_4]] in mind, so that a certain interval is kept between the borders of adjacent vehicle detection areas and misjudgment of the area a vehicle is driving in is avoided.
It should be further noted that the vehicle detection areas should be set perpendicular to the road direction, with each entrance marking area near its corresponding stop line and each exit marking area near its corresponding zebra crossing; no marking area should lie in the area where vehicle paths cross inside the intersection, so that misjudgment of the vehicle's driving area is avoided.
It should be further noted that, since the vehicle bounding box is wider and longer than the actual vehicle, the borders of adjacent vehicle detection areas need to be adjusted to avoid misjudgment of the vehicle's driving area: the interval between vehicle detection areas is enlarged on the premise that each driving lane is still covered by a marking area.
S300, creating a road plane intersection traffic volume statistics summary table for recording traffic volumes of different vehicle types of each driving direction of the road plane intersection.
The traffic statistics summary of the road plane intersection comprises vehicle types, marking area numbers and traffic data of corresponding vehicle types used for representing corresponding directions in the road plane intersection; the initial value of the traffic data is 0.
In this embodiment, the road plane intersection traffic volume statistics summary table is a four-dimensional data table, expressed as RESULT(R_i, S_j, T_y, N_ijy), wherein: the meanings of R_i and S_j are already described in S200 and are not repeated here; it should be noted that the total set of R_i and S_j is the set of all marking area numbers; T_y is the vehicle type, a subset of the vehicle information, as described in detail in S400 below; N_ijy is the traffic data;
it should be further noted that in N_ijy, i and j align with the numbers of the road plane intersection marking areas where the vehicle enters and exits, and y aligns with the vehicle type; for example, N_121 represents the traffic data of vehicles of type "Car" that enter the intersection from R_1 and exit from S_2.
S400, creating a marked area vehicle information table set; the marked area vehicle information table set comprises a plurality of marked area vehicle information tables for storing vehicle information of vehicles passing through the marked area of the road plane intersection; the vehicle information table of the marking area and the marking area of the road plane intersection are in one-to-one correspondence; the vehicle information includes a vehicle type and a vehicle number; the vehicle type contains the character strings "Car", "Bus" and "Truck"; the vehicle numbers are in one-to-one correspondence with the vehicles.
It should be noted that, the value of y corresponds to the vehicle type, in which: a value of y is 1 for a vehicle type Car, a value of y is 2 for a vehicle type Bus, and a value of y is 3 for a vehicle type Truck; the value of y is automatically marked by the program.
As shown in fig. 2, for a road plane intersection having m inlets and n outlets, the number of the marked area vehicle information tables in the marked area vehicle information table set is m+n, which corresponds to m+n road plane intersection marked areas.
The vehicle number is a unique number in the video sequence, and is automatically marked by the program.
It should be noted that, since the marking area vehicle information tables correspond one-to-one with the road plane intersection marking areas, for the convenience of subsequent users the naming rule of a marking area vehicle information table in this embodiment is "marking area name" + the string "vehicle information table". The name of a marking area vehicle information table can either be customized by the user, and modified and edited to aid memory and information transfer, or generated automatically by the system from the marking area name to reduce the user's workload; the two methods can be switched at any time, which is very flexible.
The tag area vehicle information table collection also includes an inlet attribute for characterizing an inlet tag area taken by the vehicle and an outlet attribute for characterizing an outlet tag area taken by the vehicle.
S500, performing new-entry vehicle identification and calibration operation on videos of road plane intersections, wherein the method specifically comprises the following steps of:
s510, dividing the video of the road plane intersection into a video frame stream of the road plane intersection at a manually preset acquisition frequency; the stream of road plane intersection video frames contains video frames arranged in time succession.
In this embodiment, a pre-constructed convolutional neural network is adopted to segment the video of the road plane intersection into a video frame stream of the road plane intersection at a manually preset acquisition frequency.
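The segmentation step reduces to sampling frames at the preset acquisition frequency; a minimal sketch (ignoring the CNN pipeline this embodiment uses, with illustrative names) could be:

```python
def sample_frame_indices(total_frames, video_fps, acquisition_hz):
    # Keep every `step`-th frame so the resulting frame stream approximates
    # the manually preset acquisition frequency.
    step = max(1, round(video_fps / acquisition_hz))
    return list(range(0, total_frames, step))

# A 25 fps video sampled at 5 Hz keeps every 5th frame.
indices = sample_frame_indices(total_frames=100, video_fps=25, acquisition_hz=5)
```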
S520, creating a current check frame pointer; the current check frame pointer points to the storage address of a video frame, and the initial value of the offset of its address value is 0.
S530, checking the video frames pointed by the current checking frame pointer, and marking the video frames as an initial frame, a new target appearance frame and a common frame one by one according to the following standard:
the video frame pointed to when the initial value of the offset of the address value of the current check frame pointer is 0 is marked as the initial frame.
A video frame in which a newly entered vehicle appears is marked as a new target appearance frame; a newly entered vehicle is a vehicle that does not appear in the video frame immediately preceding the video frame pointed to by the current check frame pointer.
Video frames that do not meet the criteria of being marked as initial frames and that do not meet the criteria of being marked as new target occurrence frames are marked as normal frames.
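The three calibration rules of S530 can be sketched as a simple classifier; vehicle ids here stand in for the vehicles detected in each frame (names are illustrative):

```python
def classify_frame(offset, current_vehicle_ids, previous_vehicle_ids):
    """Classify a frame per the three rules of S530."""
    if offset == 0:
        return "initial"      # the pointer offset is still at its initial value 0
    if set(current_vehicle_ids) - set(previous_vehicle_ids):
        return "new_target"   # a vehicle absent from the immediately preceding frame
    return "normal"           # neither rule applies
```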
S540, performing the following operations according to the calibration result:
if the video frame pointed to by the current check frame pointer is a normal frame, S600 is directly performed.
If the video frame pointed to by the current check frame pointer is an initial frame or a new target appearance frame, a vehicle boundary frame is defined for each newly entered vehicle in that video frame; the vehicle boundary frame is a rectangle drawn along the outer edge of the vehicle and moves synchronously with the vehicle. Then, for each newly entered vehicle, a vehicle type identification operation (calibrating a vehicle type for the vehicle) and a vehicle calibration operation (assigning a vehicle number to the vehicle) are each performed once;
in this embodiment, a vehicle type identification operation for calibrating one vehicle type for each vehicle and a vehicle calibration operation for assigning one vehicle number for each vehicle are performed once for each newly entered vehicle using a convolutional neural network constructed in advance.
In this embodiment, for vehicles already present in the initial frame, vehicle detection and vehicle type recognition are completed in the initial frame; for a new target vehicle appearing in a subsequent frame, vehicle detection and vehicle type recognition are completed in the first video frame in which that vehicle appears.
It should be further noted that the method of the present invention classifies vehicles into three categories: Car (passenger car), Bus, and Truck; the subsequent per-vehicle-type traffic statistics are based on this classification rule.
S600, performing in-out inspection operation on videos of road plane intersections, wherein the method specifically comprises the following steps of:
S610, tracking each vehicle that has undergone the vehicle calibration operation and successfully obtained a vehicle number.
Then, for each video frame pointed to by the current check frame pointer, checking whether the vehicle boundary frame of any vehicle that has undergone the vehicle calibration operation and successfully obtained a vehicle number overlaps a road plane intersection marking area, and performing the following operations according to the check result:
if no vehicle boundary box of the vehicle is overlapped with the road plane intersection marking area, adding 1 to the address value of the current check frame pointer; and then returns to and performs S530 again.
If the vehicle boundary frame of the vehicle is overlapped with the road plane intersection marking area, recording the vehicle number of the vehicle corresponding to the vehicle boundary frame overlapped with the road plane intersection marking area into a marking area vehicle information table corresponding to the road plane intersection marking area overlapped with the vehicle boundary frame; then S700 is performed.
In S610, it is checked whether the vehicle bounding box of each of the video frames pointed by the current check frame pointer, which has been subjected to the vehicle calibration operation and successfully obtained the vehicle number, overlaps the road plane intersection marking area, specifically including the steps of:
S611, calculating the frame overlapping degree of the vehicle boundary frame and the road plane intersection marking area; the frame overlapping degree is expressed by formula (1):
wherein: O_x^(vehicle_id) is the frame overlapping degree; vehicle_id is the vehicle number; x is the sequence code representing the position of the video frame in the road plane intersection video frame stream, with an initial value of 1;
[r1, c1], [r2, c2], [r3, c3], [r4, c4] respectively represent the coordinates of the 4 vertices of the vehicle boundary frame in a rectangular coordinate system, where r represents the row coordinate and c the column coordinate; [rw1, cl1], [rw2, cl2] respectively represent the coordinates of the vertices of the road plane intersection marking area in a rectangular coordinate system, where rw represents the row coordinate and cl the column coordinate.
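Formula (1) is rendered as an image in the original and is not reproduced in this text. One plausible reading, assuming the frame overlapping degree is the intersection area of the axis-aligned vehicle boundary frame and a marking area given by two opposite corners, can be sketched as follows (this is an assumption, not the patent's exact formula):

```python
def frame_overlap_degree(box, area):
    """Intersection area of two axis-aligned rectangles, each given as
    (row_min, col_min, row_max, col_max). A plausible reading of formula (1);
    any normalisation the patent applies is not reproduced here."""
    r1, c1, r2, c2 = box       # vehicle boundary frame
    rw1, cl1, rw2, cl2 = area  # road plane intersection marking area
    overlap_rows = min(r2, rw2) - max(r1, rw1)
    overlap_cols = min(c2, cl2) - max(c1, cl1)
    return max(0, overlap_rows) * max(0, overlap_cols)  # 0 when disjoint
```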
S612, judging one by one whether the relation between the frame overlapping degrees of 2 adjacent video frames simultaneously satisfies the conditions of formulas (2) and (3):
then, according to the result of the determination, the following operations are made:
if so, writing the vehicle information into a vehicle information table of a marking area corresponding to the marking area of the road plane intersection.
If not, judging whether the relation between the frame overlapping degrees of two adjacent video frames meets the condition of the formula (4) or not:
then, according to the result of the determination, the following operations are made:
if so, writing the vehicle information into a vehicle information table of a marking area corresponding to the marking area of the road plane intersection.
If not, the vehicle information is not written into the marking area vehicle information table corresponding to the road plane intersection marking area; then 1 is added to the value of x.
In this embodiment, the Deep Sort algorithm is used to track each vehicle that has undergone a vehicle calibration operation and successfully obtained the vehicle number in each video frame.
It should be noted that, except in the initial frame (i.e., x = 1), the operation of writing the vehicle information into the corresponding marking area vehicle information table is performed only when the frame overlapping degree between the vehicle boundary frame and the border of the vehicle detection area jumps from 0 to a non-zero value; no writing operation is performed when the value changes between non-zero values or from a non-zero value back to 0. In the initial frame, if the frame overlapping degree between the vehicle boundary frame and the vehicle detection area is greater than 0, the vehicle information is written directly into the corresponding marking area vehicle information table.
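The edge-triggered write rule above can be sketched as a small predicate (names are illustrative):

```python
def should_write(prev_overlap, curr_overlap, is_initial_frame):
    """Write rule reconstructed from the note above: in the initial frame
    (x = 1), write whenever the overlap is positive; otherwise write only on
    a jump from 0 to non-zero. Changes between non-zero values, or from
    non-zero back to 0, trigger no write."""
    if is_initial_frame:
        return curr_overlap > 0
    return prev_overlap == 0 and curr_overlap > 0
```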
S700, performing a screening operation on the marking area vehicle information tables to screen out the vehicle information of vehicles recorded both in a marking area vehicle information table corresponding to an entrance marking area and in a marking area vehicle information table corresponding to an exit marking area.
The screening operation on the marking area vehicle information tables specifically comprises the following steps:
S710, establishing an import vehicle information traversing pointer; the import vehicle information traversing pointer points to the storage address of a marking area vehicle information table corresponding to an entrance marking area, and the initial value of the offset of its address value is 0.
S720, taking out the vehicle number in the vehicle information in the storage space of the marking area vehicle information table corresponding to the most recently updated exit marking area.
S730, taking the vehicle number as a search key, traversing an imported vehicle information traversing pointer to point to a marked area vehicle information table, and then performing the following operations according to a search result:
If the vehicle number can be retrieved from the marking area vehicle information table pointed to by the import vehicle information traversing pointer, the exit marking area number corresponding to the most recently updated exit marking area and the entrance marking area number corresponding to the entrance marking area of that table are taken out; then 1 is added to the address value of the import vehicle information traversing pointer; and then S740 is performed.
If the vehicle number cannot be retrieved in the vehicle information table of the marked area pointed by the import vehicle information traversing pointer, adding 1 to the address value of the import vehicle information traversing pointer; and then S740 is performed.
S740, checking whether the address value of the imported vehicle information traversing pointer is larger than the address value of the last marked area vehicle information table, and performing the following operations according to the checking result:
if the address value of the import vehicle information traversal pointer is not greater than the address value of the last tag area vehicle information table, S730 is returned to and executed again.
If the address value of the import vehicle information traversal pointer is greater than the address value of the last marking area vehicle information table, S800 is performed.
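The screening loop of S710-S740 can be sketched as follows: the vehicle number from the most recently updated exit table is looked up by traversing the entrance tables, and each hit yields an (entrance area, exit area) pair. All names are illustrative:

```python
def screen_vehicle(entrance_tables, exit_area_no, vehicle_number):
    """Sketch of S700. `entrance_tables` maps entrance marking area number
    -> set of vehicle numbers recorded in that table."""
    matches = []
    for entrance_no, numbers in entrance_tables.items():  # traversal pointer
        if vehicle_number in numbers:                     # retrieval by key
            matches.append((entrance_no, exit_area_no))
    return matches
```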
S800, filling the traffic statistics summary of the intersection of the road plane with the vehicle information of each vehicle, the entrance mark area number corresponding to the entrance mark area with the record and the exit mark area number corresponding to the exit mark area with the record, which are obtained by screening in S700.
When the vehicle information is written into the road plane intersection traffic volume statistics summary table, the vehicle has passed in succession through an entrance marking area and an exit marking area of the road plane intersection, completing its passage through the intersection; on this basis, extracting the entrance marking area number and the exit marking area number of the vehicle yields its driving direction.
In this embodiment, the method further includes the following steps:
s810, after vehicle information of a vehicle and marking area numbers of marking areas of all road plane intersections are successfully filled into a traffic volume statistics summary table of the road plane intersections, adding 1 to the value of traffic volume data.
In S810, the R_i, S_j, and T_y obtained in S700, together with the vehicle number, are associated and matched against RESULT(R_i, S_j, T_y, N_ijy); the matching rule is that the association succeeds when the R_i, S_j, and T_y obtained in S700 are simultaneously identical to the R_i, S_j, and T_y in RESULT(R_i, S_j, T_y, N_ijy); on the basis of a successful association, 1 is added to the matched N_ijy.
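The association-and-increment step can be sketched by modelling RESULT(R_i, S_j, T_y, N_ijy) as a counter keyed by (entrance area number, exit area number, vehicle type y); names are illustrative:

```python
from collections import Counter

# RESULT(R_i, S_j, T_y, N_ijy) modelled as a counter; a successful
# association adds 1 to the matched N_ijy.
summary = Counter()

def fill_summary(r_i, s_j, t_y):
    summary[(r_i, s_j, t_y)] += 1   # N_ijy += 1 on a successful match

fill_summary("R1", "S2", 1)         # a Car driving from R1 to S2
fill_summary("R1", "S2", 1)
```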
S900, outputting a road plane intersection traffic volume statistics summary table processed by the S800 in real time, and adding 1 to the address value of the current check frame pointer; and then returns to and performs S530 again.
The real-time output traffic statistics summary of the road plane intersection is the final result obtained by the method.
After all the road plane intersection video data are processed by the method, traffic volume statistics by driving direction and by vehicle type at the road plane intersection are obtained, which conveniently provides data support for various statistical breakdowns.
When the method is adopted, synchronous playback of the panoramic video of the road plane intersection and traffic volume counting can be realized, so that investigators can check the rationality and reliability of the statistical results in real time. Meanwhile, since no whole-course vehicle track extraction is involved, the more complex the geometry and traffic flow of the surveyed road plane intersection, the more obvious the advantages of the method are compared with existing video-analysis-based traffic volume statistics schemes.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "includes" is intended to be inclusive in a manner similar to the term "comprising" as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a "non-exclusive or".
The foregoing description of the embodiments has been provided for the purpose of illustrating the general principles of the invention, and is not meant to limit the scope of the invention, but to limit the invention to the particular embodiments, and any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (6)

1. A road plane intersection traffic volume statistical method based on video multi-region marking, characterized by comprising the following steps:
s100, collecting a road plane intersection video of a road plane intersection to be counted; the road plane intersection video comprises an unmanned aerial vehicle aerial video and a road plane intersection monitoring bayonet video;
s200, independently calibrating a unique marking area number for each inlet and outlet of each driving direction of the road plane intersection in the road plane intersection video to obtain a road plane intersection marking area; the road plane intersection marking area comprises an inlet marking area and an outlet marking area; the marking area number includes an inlet marking area number for characterizing a position of the inlet marking area and an outlet marking area number for characterizing a position of the outlet marking area;
then, a vehicle detection area for vehicle detection is arranged at each inlet and outlet of each driving direction; two adjacent vehicle detection areas are mutually independent and isolated; the vehicle detection area is a rectangular frame arranged at the road plane intersection for each driving direction; one side of the vehicle detection area coincides with the stop line, and the opposite side is located on the side of the stop line away from the road plane intersection; the side lengths of the two sides of the vehicle detection area adjacent to the stop line are manually preset and respectively coincide with the leftmost edge of the first lane on the left and the rightmost edge of the first lane on the right in the same direction; each vehicle detection area covers all traffic lanes in the same direction within the area;
S300, creating a road plane intersection traffic volume statistics summary table for recording traffic volumes of different vehicle types in each driving direction of the road plane intersection;
s400, creating a marked area vehicle information table set; the marked area vehicle information table set comprises a plurality of marked area vehicle information tables for storing vehicle information of vehicles passing through the marked area of the road plane intersection; the marking area vehicle information table and the road plane intersection marking area are in one-to-one correspondence; the vehicle information includes a vehicle type and a vehicle number; the vehicle type contains the character strings "Car", "Bus" and "Truck"; the vehicle numbers and the vehicles are in one-to-one correspondence;
s500, performing new-entry vehicle identification and calibration operation on the road plane intersection video, wherein the method specifically comprises the following steps of:
s510, segmenting the video of the road plane intersection into a video frame stream of the road plane intersection at a manually preset acquisition frequency; the road plane intersection video frame stream comprises video frames which are arranged in a time sequence;
s520, creating a current check frame pointer; the current check frame pointer points to the storage address of the video frame, and the initial value of the offset of the address value of the current check frame pointer is 0;
S530, checking the video frames pointed by the current checking frame pointer, and calibrating the video frames into an initial frame, a new target occurrence frame and a common frame one by one according to the following standard:
the video frame pointed when the initial value of the offset of the address value of the current check frame pointer is 0 is marked as the initial frame;
the video frame in which a newly entered vehicle appears is marked as the new target appearance frame; the newly entered vehicle is a vehicle that does not appear in the video frame immediately preceding the video frame pointed to by the current check frame pointer;
the video frames which do not meet the requirement of being marked as the initial frames and do not meet the requirement of being marked as the new target occurrence frames are marked as the common frames;
s540, performing the following operations according to the calibration result:
if the video frame pointed by the current check frame pointer is the normal frame, directly executing S600;
if the video frame pointed by the current check frame pointer is the initial frame or the new target occurrence frame, respectively defining a vehicle boundary frame for each new entering vehicle in the video frame pointed by the current check frame pointer; the vehicle boundary frame is a rectangle framed along the outer edge of the vehicle and synchronously moves along with the vehicle; then performing a vehicle type identification operation for calibrating one of the vehicle types for each of the vehicles and a vehicle calibration operation for assigning one of the vehicle numbers for each of the vehicles once for each of the newly entered vehicles;
S600, performing in-out inspection operation on the video of the road plane intersection, wherein the method specifically comprises the following steps:
s610, tracking each vehicle which has undergone the vehicle calibration operation and successfully obtains the vehicle number;
then checking whether the vehicle bounding box of the vehicle, which has been subjected to the vehicle marking operation and successfully obtained the vehicle number, and the road plane intersection marking area overlap for each of the video frames pointed to by the current check frame pointer, and making the following operations according to the check result:
if no vehicle boundary box of the vehicle overlaps the road plane intersection marking area, adding 1 to the address value of the current check frame pointer; then return to and execute again S530;
if the vehicle boundary frame of the vehicle is overlapped with the road plane intersection marking area, recording the vehicle number of the vehicle corresponding to the vehicle boundary frame overlapped with the road plane intersection marking area into the marking area vehicle information table corresponding to the road plane intersection marking area overlapped with the vehicle boundary frame; then S700 is performed;
s700, screening the marked area vehicle information table, and screening out the vehicle information of the vehicle which is recorded in the marked area vehicle information table corresponding to the entrance marked area and the marked area vehicle information table corresponding to the exit marked area;
S800, filling the vehicle information of each vehicle obtained by screening in S700, together with the entrance marking area number corresponding to the entrance marking area in which the vehicle has a record and the exit marking area number corresponding to the exit marking area in which the vehicle has a record, into the road plane intersection traffic volume statistics summary table;
s900, outputting a traffic volume statistics summary of the road plane intersections processed by the S800 in real time, and adding 1 to the address value of the current check frame pointer; then return to and execute again S530;
the real-time output traffic statistics summary of the road plane intersection is the final result obtained by the method;
the step S200 of setting a vehicle detection area for vehicle detection at each inlet and outlet of each driving direction comprises the following steps:
s210, counting the number of inlets in the road plane intersection, and recording the number as the total number of inlet marking areas; counting the number of outlets in the road plane intersection, and recording the number as the total number of outlet marking areas;
s220, marking each of the inlet marking area and the outlet marking area, and setting the vehicle detection area; the vehicle detection areas cover all traffic lanes in the corresponding area, and two adjacent vehicle detection areas are mutually independent and isolated;
S230, adjusting the frames of the vehicle detection area to enable the frames to have intervals with manual preset widths;
the road plane intersection traffic statistics summary table comprises the vehicle type, the marking area number and traffic data of corresponding vehicle types used for representing corresponding directions in the road plane intersection; the initial value of the traffic data is 0;
the tag region vehicle information table set further includes an inlet attribute for characterizing the inlet tag region traversed by the vehicle and an outlet attribute for characterizing the outlet tag region traversed by the vehicle;
in S610, the step of checking whether the vehicle bounding box of the vehicle that has been subjected to the vehicle calibration operation and successfully obtained the vehicle number in each of the video frames pointed to by the current check frame pointer overlaps with the road plane intersection marking area specifically includes the steps of:
S611, calculating the frame overlapping degree of the vehicle boundary frame and the road plane intersection marking area; the frame overlapping degree is expressed as formula (1), wherein: O_x^(vehicle_id) is the frame overlapping degree; vehicle_id is the vehicle number; x is the sequence code representing the position of the video frame in the road plane intersection video frame stream, with an initial value of 1; [r1, c1], [r2, c2], [r3, c3], [r4, c4] respectively represent the coordinates of the 4 vertices of the vehicle boundary frame in a rectangular coordinate system, wherein r represents the row coordinate and c represents the column coordinate; [rw1, cl1], [rw2, cl2] respectively represent the coordinates of the vertices of the road plane intersection marking area in a rectangular coordinate system, wherein rw represents the row coordinate and cl represents the column coordinate;
S612, judging one by one whether the relation between the frame overlapping degrees of 2 adjacent video frames satisfies the following formula:
then, according to the result of the determination, the following operations are made:
if yes, writing the vehicle information into a vehicle information table of the marking area corresponding to the marking area of the road plane intersection;
if not, judging whether the relation between the frame overlapping degrees of two adjacent video frames satisfies the following formula or not:
then, according to the result of the determination, the following operations are made:
if yes, writing the vehicle information into a vehicle information table of the marking area corresponding to the marking area of the road plane intersection;
if not, the vehicle information is not written into the marking area vehicle information table corresponding to the road plane intersection marking area; the value of x is then increased by 1.
2. The road plane intersection traffic volume statistical method based on video multi-region marking of claim 1, wherein the screening operation on the marked area vehicle information tables in S700 specifically comprises the following steps:
S710, establishing an entry vehicle information traversal pointer; the entry vehicle information traversal pointer points to the storage address of the marked area vehicle information table corresponding to an entry marked area, and the initial offset of its address value is 0;
S720, taking out the vehicle number from the vehicle information in the storage space of the marked area vehicle information table corresponding to the exit marked area in which a record update last occurred;
S730, using that vehicle number as the retrieval key, searching the marked area vehicle information table pointed to by the entry vehicle information traversal pointer, and then performing the following operations according to the retrieval result:
if the vehicle number is found in the marked area vehicle information table pointed to by the entry vehicle information traversal pointer, taking out the exit marked area number corresponding to the exit marked area in which the record update last occurred, and the entry marked area number corresponding to the entry marked area of the marked area vehicle information table pointed to by the entry vehicle information traversal pointer; then adding 1 to the address value of the entry vehicle information traversal pointer; then performing S740;
if the vehicle number is not found in the marked area vehicle information table pointed to by the entry vehicle information traversal pointer, adding 1 to the address value of the entry vehicle information traversal pointer; then performing S740;
S740, checking whether the address value of the entry vehicle information traversal pointer is greater than the address value of the last marked area vehicle information table, and performing the following operations according to the checking result:
if the address value of the entry vehicle information traversal pointer is not greater than the address value of the last marked area vehicle information table, returning to and executing S730 again;
if the address value of the entry vehicle information traversal pointer is greater than the address value of the last marked area vehicle information table, performing S800.
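The S710–S740 traversal can be sketched as a lookup of the vehicle number, taken from the most recently updated exit-area table, across all entry (import) marked-area tables; a match yields the (entry area, exit area) pair that defines the vehicle's movement through the intersection. Table layout and field names below are assumptions for illustration, not taken from the patent text.

```python
def screen_movement(vehicle_no, exit_area_no, entry_tables):
    """Return (entry_area_no, exit_area_no) if the vehicle number appears in
    any entry-area vehicle information table, else None."""
    # Iterating over the tables plays the role of the traversal pointer.
    for entry_area_no, table in entry_tables.items():
        if any(rec["vehicle_no"] == vehicle_no for rec in table):
            return (entry_area_no, exit_area_no)
    return None
```

The matched pair, together with the vehicle's record, would then be appended to the traffic volume summary table, after which the traffic volume counter is incremented (claim 3, S910).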
3. The road plane intersection traffic volume statistical method based on video multi-region marking of claim 2, wherein in S800, the vehicle information of each vehicle and the marked area numbers of all marked areas of the road plane intersection obtained in S700 are filled into the road plane intersection traffic volume statistics summary table, and the method further comprises the following step:
S910, after the vehicle information of one vehicle and the marked area numbers of all marked areas of the road plane intersection have been successfully filled into the road plane intersection traffic volume statistics summary table, incrementing the value of the traffic volume data by 1.
4. The road plane intersection traffic volume statistical method based on video multi-region marking of claim 3, wherein in S510, the video of the road plane intersection is split into a road plane intersection video frame stream, at a manually preset acquisition frequency, using a convolutional neural network constructed in advance.
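The claim performs the splitting with a pre-built convolutional neural network; the sketch below illustrates only the frame-selection arithmetic implied by "a manually preset acquisition frequency", independent of any decoder or model, with all names being illustrative assumptions.

```python
def sampled_frame_indices(total_frames, video_fps, acquisition_hz):
    """Indices of the frames kept when a stream recorded at video_fps
    is sampled at acquisition_hz frames per second."""
    step = video_fps / acquisition_hz  # source frames between two kept frames
    return [round(i * step) for i in range(int(total_frames / step))]
```

For example, a 25 fps recording sampled at 5 Hz keeps every fifth frame.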
5. The road plane intersection traffic volume statistical method based on video multi-region marking of claim 4, wherein in S540, using a convolutional neural network constructed in advance, a vehicle type identification operation that calibrates one vehicle type for each vehicle and a vehicle calibration operation that assigns one vehicle number to each vehicle are performed once for each newly entered vehicle.
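The calibration step can be sketched as follows: each vehicle newly entering the detection area receives a type label and a unique, monotonically increasing vehicle number, exactly once. Here `classify` is a stand-in for the pre-built CNN of the claim; the class and field names are assumptions for illustration.

```python
class VehicleCalibrator:
    def __init__(self, classify):
        self.classify = classify   # stand-in for the CNN type classifier
        self.next_no = 1           # vehicle numbers are assigned incrementally
        self.known = {}            # track key -> calibrated vehicle record

    def calibrate(self, track_key, crop):
        """Assign a vehicle number and vehicle type once per vehicle;
        repeated calls for the same vehicle return the same record."""
        if track_key not in self.known:
            self.known[track_key] = {
                "vehicle_no": self.next_no,
                "vehicle_type": self.classify(crop),
            }
            self.next_no += 1
        return self.known[track_key]
```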
6. The road plane intersection traffic volume statistical method based on video multi-region marking of claim 5, wherein in S610, the DeepSORT algorithm is used to track, in each of the video frames, each vehicle that has undergone the vehicle calibration operation and successfully obtained a vehicle number.
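DeepSORT itself fuses Kalman-filter motion prediction with learned appearance embeddings; the greedy IoU association below is only a simplified stand-in showing how detections in consecutive frames are linked back to persistent vehicle numbers, with all names being illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match each track's last box to its best-overlapping detection.
    Returns (matches: track id -> detection index, unmatched detection indices)."""
    matches, unmatched = {}, list(range(len(detections)))
    for tid, box in tracks.items():
        best, best_iou = None, threshold
        for j in unmatched:
            score = iou(box, detections[j])
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[tid] = best
            unmatched.remove(best)
    return matches, unmatched
```

Unmatched detections would be treated as newly entered vehicles and handed to the calibration step of claim 5.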
CN202111446985.7A 2021-11-30 2021-11-30 Road plane intersection traffic volume statistical method based on video multi-region marking Active CN114333356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111446985.7A CN114333356B (en) 2021-11-30 2021-11-30 Road plane intersection traffic volume statistical method based on video multi-region marking

Publications (2)

Publication Number Publication Date
CN114333356A CN114333356A (en) 2022-04-12
CN114333356B true CN114333356B (en) 2023-12-15

Family

ID=81049556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111446985.7A Active CN114333356B (en) 2021-11-30 2021-11-30 Road plane intersection traffic volume statistical method based on video multi-region marking

Country Status (1)

Country Link
CN (1) CN114333356B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014059655A (en) * 2012-09-14 2014-04-03 Toshiba Corp Road situation-monitoring device, and road situation-monitoring method
CN103733237A (en) * 2011-07-05 2014-04-16 高通股份有限公司 Road-traffic-based group, identifier, and resource selection in vehicular peer-to-peer networks
CN103730015A (en) * 2013-12-27 2014-04-16 株洲南车时代电气股份有限公司 Method and device for detecting traffic flow at intersection
CN104794907A (en) * 2015-05-05 2015-07-22 江苏大为科技股份有限公司 Traffic volume detection method using lane splitting and combining
CN105069407A (en) * 2015-07-23 2015-11-18 电子科技大学 Video-based traffic flow acquisition method
CN106128103A (en) * 2016-07-26 2016-11-16 北京市市政工程设计研究总院有限公司 A kind of intersection Turning movement distribution method based on recursion control step by step and device
KR101696881B1 (en) * 2016-01-06 2017-01-17 주식회사 씨티아이에스 Method and apparatus for analyzing traffic information
CN107545742A (en) * 2017-09-14 2018-01-05 四川闫新江信息科技有限公司 Monitor the traffic controller of the adjust automatically of vehicle flowrate
CN110033620A (en) * 2019-05-17 2019-07-19 东南大学 A kind of intersection flux and flow direction projectional technique based on Traffic monitoring data
CN110111575A (en) * 2019-05-16 2019-08-09 北京航空航天大学 A kind of Forecast of Urban Traffic Flow network analysis method based on Complex Networks Theory
WO2020189475A1 (en) * 2019-03-19 2020-09-24 株式会社Ihi Moving body monitoring system, control server for moving body monitoring system, and moving body monitoring method
CN112802348A (en) * 2021-02-24 2021-05-14 辽宁石化职业技术学院 Traffic flow counting method based on mixed Gaussian model
CN112907978A (en) * 2021-03-02 2021-06-04 江苏集萃深度感知技术研究所有限公司 Traffic flow monitoring method based on monitoring video
CN113192336A (en) * 2021-05-28 2021-07-30 三峡大学 Road congestion condition detection method taking robust vehicle target detection as core
CN113257005A (en) * 2021-06-25 2021-08-13 之江实验室 Traffic flow statistical method based on correlation measurement
CN113269768A (en) * 2021-06-08 2021-08-17 中移智行网络科技有限公司 Traffic congestion analysis method, device and analysis equipment
KR102323437B1 (en) * 2021-06-01 2021-11-09 시티아이랩 주식회사 Method, System for Traffic Monitoring Using Deep Learning
CN113688717A (en) * 2021-08-20 2021-11-23 云往(上海)智能科技有限公司 Image recognition method and device and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant