CN111818313A - Vehicle real-time tracking method and device based on monitoring video - Google Patents


Info

Publication number
CN111818313A
CN111818313A (Application CN202010885136.0A)
Authority
CN
China
Prior art keywords
video
points
network
target vehicle
reachable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010885136.0A
Other languages
Chinese (zh)
Other versions
CN111818313B (en)
Inventor
张晓春
林涛
冯国臣
丘建栋
王宇
徐腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Urban Transport Planning Center Co Ltd
Original Assignee
Shenzhen Urban Transport Planning Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Urban Transport Planning Center Co Ltd filed Critical Shenzhen Urban Transport Planning Center Co Ltd
Priority to CN202010885136.0A priority Critical patent/CN111818313B/en
Publication of CN111818313A publication Critical patent/CN111818313A/en
Application granted granted Critical
Publication of CN111818313B publication Critical patent/CN111818313B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle real-time tracking method and device based on a monitoring video, relating to the technical field of intelligent traffic. The method comprises the following steps: constructing a video network according to a plurality of video points; determining a reachable network of each video point according to the video network; determining a travel time interval of the target vehicle according to the reachable network; within the travel time interval, performing feature extraction of the target vehicle on the monitoring videos corresponding to a plurality of video points determined according to the reachable network; and when the features of the target vehicle are extracted, switching to the corresponding monitoring video for display. By constructing a video network, the method forms a topological relation graph among different video points, determines the travel time interval of the target vehicle according to the video network, and starts subsequent video points in time to identify the target vehicle, completing rapid positioning of the vehicle. Combining basic data with video AI technology, the method achieves rapid and stable tracking of the real-time track of the vehicle and has wide application scenarios in city management.

Description

Vehicle real-time tracking method and device based on monitoring video
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a vehicle real-time tracking method and device based on a monitoring video.
Background
In recent years, with the rapid economic development of China and the rising living standard of residents, the automobile has become an increasingly important means of transportation in daily life. As car ownership grows, urban road traffic problems such as illegal driving and disorderly parking are increasing day by day and becoming more serious, so efficiently supervising vehicles running on the roads is increasingly important.
Along with the gradual improvement of China's infrastructure, relevant departments have installed a large number of surveillance cameras in road traffic systems at intersections, expressway entrances and exits, key road sections and other areas in order to better monitor road conditions, achieving comprehensive coverage of urban roads. When a target vehicle needs to be tracked in traffic video streams, the traditional manual monitoring method is time-consuming and labor-intensive, easily misses key video information, and is inefficient. With the continuous development of computer technology, intelligent video surveillance systems have become a research focus in both academia and industry. Based on image processing, deep learning and other related technologies, such systems use a computer to automatically detect, track and identify vehicles in surveillance video, saving manpower and improving the utilization of monitoring equipment.
At present, research on target detection and identification mainly focuses on single-camera scenarios. In a real road environment, however, a single camera has too small a monitoring range and provides insufficient information, whereas multiple cameras provide richer information and a larger monitoring range; since multiple cameras are generally present in traffic scenes such as intersections, a multi-camera vehicle tracking and identification method is better suited to real road traffic. However, existing cross-camera vehicle tracking generally performs poorly: most systems rely on manually clicking to switch videos; some rely on real-time vehicle GPS, which has limited applicability; and pure video AI (Artificial Intelligence) recognition also exists, but the video jumps heavily and works poorly in practice.
Disclosure of Invention
To achieve the above object, a first aspect of the present invention provides a real-time vehicle tracking method based on a surveillance video, which includes:
constructing a video network according to the plurality of video points;
determining a reachable network of each video point according to the video network;
determining a travel time interval of the target vehicle according to the reachable network;
determining monitoring videos corresponding to a plurality of video points according to the reachable network within the travel time interval to extract the characteristics of the target vehicle;
and when the characteristics of the target vehicle are extracted, switching to the corresponding monitoring video for displaying.
Further, the constructing the video network according to the plurality of video points comprises:
respectively acquiring a plurality of preset positions of each camera on a road;
determining a video point according to each preset position;
and constructing the video network according to the video points and the road network information.
Further, the constructing the video network according to the plurality of video points and the road network information includes:
constructing a node set according to the video points and constructing an edge set file according to a plurality of paths in the road network information, wherein each video point forms a node in the node set;
and loading the node set to the edge set file, so that all the nodes in the node set fall on the edge set file to form the video network.
Further, the determining the reachable network of each video point according to the video network includes:
and in the video network, determining adjacent video points which are directly reached by each video point as next layer reachable video points of the video points to form the reachable network of each video point.
Further, the determining a travel time interval of the target vehicle according to the reachable network comprises:
determining a set of alternative paths for the target vehicle according to the reachable network;
and respectively determining the travel time lower limit and the travel time upper limit of the target vehicle according to the alternative path set to form the travel time interval.
Further, the determining the set of alternative paths for the target vehicle from the reachable network comprises:
and determining, according to the reachable network, a plurality of paths from each video point to its next-layer reachable video points as the alternative path set, wherein the next-layer reachable video points are the adjacent video points directly reached from each video point.
Further, the determining the lower travel time limit and the upper travel time limit of the target vehicle according to the candidate path set respectively includes:
respectively determining a lower path passing time limit and an upper path passing time limit of each path in the alternative path set;
determining the travel time lower limit and the travel time upper limit of the target vehicle according to the route passing time lower limit and the route passing time upper limit of each route.
Further, the determining, according to the reachable network, the surveillance videos corresponding to the plurality of video points for performing the feature extraction of the target vehicle in the travel time interval includes:
extracting the characteristics of the target vehicle in the monitoring video of the current video point;
determining the driving-away time when the target vehicle leaves the current video point;
determining a plurality of next-layer reachable video points of the current video point according to the reachable network, wherein the next-layer reachable video points are the adjacent video points directly reached from each video point;
and performing feature extraction on the target vehicle on the monitoring videos corresponding to the plurality of next-layer reachable video points in the travel time interval according to the driving-away time.
Further, still include:
when one video point does not extract the features of the target vehicle within the travel time interval, and the features of the target vehicle have not been extracted by the other video points of the same layer either, continuing to perform feature extraction of the target vehicle at the other same-layer video points and at their next-layer reachable video points, wherein the other same-layer video points and the next-layer reachable video points are determined according to the reachable network.
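The timed hand-off between video points described above can be sketched in Python. This is a minimal illustration, not the patent's implementation; the function and parameter names (`watch_windows`, `depart_time`, `travel_bounds`) are assumptions introduced here for clarity:

```python
def watch_windows(depart_time, next_layer_points, travel_bounds):
    """For each next-layer reachable video point, compute the time window
    [depart_time + t_lower, depart_time + t_upper] during which its
    surveillance video should be searched for the target vehicle.

    travel_bounds maps video-point id -> (t_lower, t_upper) in seconds.
    """
    return {
        point: (depart_time + travel_bounds[point][0],
                depart_time + travel_bounds[point][1])
        for point in next_layer_points
    }

# The vehicle leaves the current video point at t = 100 s; id02 and id03
# are its next-layer reachable video points (illustrative bounds).
windows = watch_windows(
    depart_time=100.0,
    next_layer_points=["id02", "id03"],
    travel_bounds={"id02": (30.0, 90.0), "id03": (45.0, 120.0)},
)
```

Here `windows["id02"]` is `(130.0, 190.0)`: the video at id02 only needs to be searched within that window, which is what bounds the computation compared with always-on AI retrieval.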
In order to achieve the above object, in a second aspect, the present invention provides a real-time vehicle tracking device based on surveillance video, including:
the construction module is used for constructing a video network according to the plurality of video points;
the processing module is used for determining the reachable network of each video point according to the video network; the system is also used for determining the travel time interval of the target vehicle according to the reachable network; the system is also used for determining monitoring videos corresponding to a plurality of video points according to the reachable network within the travel time interval to extract the characteristics of the target vehicle;
and the tracking module is used for switching to the corresponding monitoring video to display when the characteristics of the target vehicle are extracted.
By using the real-time vehicle tracking method or device based on the monitoring video, the video network is constructed by loading the configured video points onto the basic GIS road network, forming a topological relation graph among different video points. When the vehicle leaves the current video point, the travel time interval of the target vehicle is determined according to the subsequent reachable video points in the video network, and the subsequent video points are started within that interval to search for and identify the features of the target vehicle, quickly positioning the vehicle. Once the target vehicle is detected and positioned at a certain video point, the video picture is switched to that point for display, so that the video switches automatically along the vehicle's running track and the target vehicle is tracked in real time within the field of view of each video. The video network constructed by the invention is simple to build and easy to maintain; combining basic data with video AI technology achieves rapid and stable tracking of the vehicle's real-time track and reuses video equipment to the maximum extent. Compared with current industry methods, it has higher stability and a wider application range: it is suitable for tracking various vehicles, is not strictly limited by the distribution density of videos, and realizes automatic tracking, greatly reducing the cost of manually operating videos, with wide application scenarios in city management.
To achieve the above object, in a third aspect, the present invention provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the surveillance video-based real-time vehicle tracking method according to the first aspect of the present invention.
In order to achieve the above object, in a fourth aspect, the present invention provides a computing device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the surveillance video-based real-time vehicle tracking method according to the first aspect of the present invention.
The non-transitory computer-readable storage medium and the computing device according to the present invention have similar beneficial effects to those of the surveillance video based vehicle real-time tracking method according to the first aspect of the present invention, and are not described in detail herein.
Drawings
FIG. 1 is a schematic flow chart of a surveillance video-based method for real-time tracking of a vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of constructing a video network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a video point configuration according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a process of constructing a video network according to video points and road network information according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a video network according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a reachable network of video points according to an embodiment of the invention;
FIG. 7 is a flowchart illustrating a process of determining a travel time interval according to an embodiment of the invention;
FIG. 8 is a schematic flow chart illustrating a process for determining a lower travel time limit and an upper travel time limit for a target vehicle in accordance with an embodiment of the present invention;
FIG. 9 is a schematic flow chart illustrating the extraction of target vehicle features during a trip time interval according to an embodiment of the present invention;
FIG. 10 is a schematic flow chart illustrating feature extraction for a target vehicle according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a surveillance video point expansion according to an embodiment of the present invention;
FIG. 12 is a diagram illustrating two video points belonging to the same camera, according to an embodiment of the invention;
FIG. 13 is a schematic diagram of video point switching according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a real-time vehicle tracking device based on surveillance video according to an embodiment of the present invention.
Detailed Description
Embodiments in accordance with the present invention will now be described in detail with reference to the drawings, wherein like reference numerals refer to the same or similar elements throughout the different views unless otherwise specified. It should be noted that the described embodiments do not represent all embodiments of the present invention; they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the claims, and the scope of the present disclosure is not limited in these respects. Features of the various embodiments of the invention may be combined with each other without departing from the scope of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In recent years, in the wave of urban informatization, video equipment, as the most important information acquisition infrastructure, has been vigorously deployed by every department. Not only have equipment coverage and functionality advanced, but methods for extracting structured data from video streams, including moving vehicles and their typical features, have matured day by day and greatly improved in precision.
In the fields of public safety and traffic enforcement, real-time monitored tracking of specific vehicles is often required to ensure that the target vehicle stays visible and closely tracked within the video field of view. Realizing real-time tracking of a target vehicle based on the various roadside monitoring videos embodies big-data and intelligent capability, and is especially significant for tracking illegal vehicles, key vehicles and the like.
At present, vehicle monitoring and real-time tracking methods fall into two main categories:
one category is tracking for specific vehicles, particularly service vehicles. As the service route is generally preset, specific video configuration preset positions can be selected at two sides of the service route, and videos are switched in real time according to the running path of the service vehicle, so that the target vehicle is tracked. The method has the advantages of good tracking effect and smooth video switching, but the defects are also obvious: a great deal of configuration work is required, the application range of the target vehicle is narrow, the target vehicle is limited to service vehicles, and the target vehicle can only be tracked on a specific path.
The second category relies purely on video AI identification technology to extract the typical features of the target vehicle from each video stream. When tracking, supported by massive computing power, all vehicle features in every stream are retrieved and matched against the target vehicle to realize tracking and switching of the video. This method consumes a large amount of computing resources for real-time retrieval (a whole city can have tens of thousands of video streams) and, crucially, lacks vehicle path analysis, so a large number of vehicles with the same features are retrieved, the tracking effect is extremely unstable, and the video switching effect is poor.
The method of the invention is based on a basic GIS road network and completes the construction of the video network by loading the configured video points, forming a topological relation graph among different video points. When the vehicle leaves the current video point, the travel time interval of the target vehicle is determined according to the subsequent reachable video points in the video network, the subsequent video points are started within that interval to search for and identify the features of the target vehicle, the vehicle is quickly positioned, and the video picture is switched to that video point for display; the video thus switches automatically along the vehicle's running track, and the target vehicle can be tracked in real time within the field of view of each video.
Fig. 1 is a flowchart illustrating a surveillance video-based real-time vehicle tracking method according to an embodiment of the present invention, including steps S1 to S5.
In step S1, a video network is constructed from a plurality of video points. In the embodiment of the invention, based on the view of each preset position of each camera, the video points (different preset positions of different videos) are marked on a GIS (Geographic Information System) map to form a video network model based on a road network, and the different video points can form mutually communicated paths through the constructed video network. Fig. 2 is a schematic flowchart of constructing a video network according to an embodiment of the present invention, which includes steps S11 to S13.
In step S11, a plurality of preset positions of each camera on the road are acquired respectively. In the embodiment of the invention, in order to improve the reusability of the video, each camera can be configured with a plurality of preset positions. It can be understood that existing cameras support pan-tilt (PTZ) operation, i.e. changing parameters such as direction, focal length and aperture; by adjusting these parameters, the same camera can capture different fields of view, each corresponding field of view being a different preset position.
In step S12, a video point is determined according to each preset position. In the embodiment of the invention, a point on the road covered by the field of view of each preset position (such as the road center point) is selected as the video point of that field of view; the video point is a longitude-latitude point of each preset position of each camera on a road section (any point on the road within the field-of-view coverage can be taken). Fig. 3 is a schematic view illustrating the configuration of video points according to an embodiment of the present invention. As shown in fig. 3, if the traffic flow direction detected by one of the camera's preset positions is from west to east, a longitude-latitude mark is placed at the corresponding position (the west-to-east road section) on the GIS map as the video point of that preset position. It can be understood that a mapping between camera preset positions and video points is formed once the video points are selected, and each camera can form a plurality of such correspondences according to the actual situation.
In step S13, the video network is constructed according to the video points and the road network information. Fig. 4 is a schematic flowchart illustrating a process of constructing a video network according to video points and road network information according to an embodiment of the present invention, which includes steps S131 to S132.
In step S131, a node set is constructed according to a plurality of video points, and an edge set file is constructed according to a plurality of paths in the road network information, where each video point constitutes a node in the node set. In the embodiment of the present invention, two files are defined when the video network is constructed, which are a node set (video points) and an edge set file (road segments, which can be obtained according to road network information). Tables 1 and 2 below respectively show a specific example of the file formats of the node set and the edge set, but the present invention is not limited thereto.
TABLE 1
| Name | Type | Description | Remarks |
| ID* | string | Video point ID | Unique ID |
| CameraID* | string | Video device ID | Unique ID of each video device |
| CameraPosition* | string | Camera preset position | Preset position corresponding to the video point |
| Type | string | Video type | Bullet camera, dome camera, high point, other |
| longitude* | string | Longitude | |
| latitude* | string | Latitude | |
| SectionID* | string | Road link ID | |
| machinename | string | Video name | |
| sectionname | string | Road section name | |
| Flowlane | float | Number of lanes covered by the video point | |
| ...... | ...... | | |
TABLE 2
| Name | Type | Description | Remarks |
| ID* | string | Edge ID | |
| Source* | List | Start-point longitude and latitude | From the basic road network |
| Target* | List | End-point longitude and latitude | From the basic road network |
| Name | string | Road section name | |
| Label | string | Road section description | |
| Directed* | int | Directed-network flag | 0: forward; 1: reverse; 2: mixed |
| Location* | List | Longitude-latitude string between start and end points | |
| Length* | string | Road section length | |
| ...... | ...... | | |
It is understood that the storage formats of the two files are not fixed; the fields that must be explicitly given are marked with an asterisk (*), and the remaining fields can be added as needed according to the actual data and application requirements, such as video type and number of covered lanes.
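The two files can be sketched as plain CSV, keeping only the starred fields of Tables 1 and 2; all concrete values below are invented for illustration:

```python
import csv
import io

# Minimal node-set file (video points): starred fields of Table 1 only.
NODES_CSV = """ID,CameraID,CameraPosition,longitude,latitude,SectionID
id01,cam01,preset1,114.0571,22.5431,s01
id02,cam01,preset2,114.0589,22.5433,s02
"""

# Minimal edge-set file (road links): starred fields of Table 2 only.
EDGES_CSV = """ID,Source,Target,Directed,Location,Length
e01,"114.0571,22.5431","114.0589,22.5433",0,"114.0571,22.5431;114.0589,22.5433",210
"""

nodes = list(csv.DictReader(io.StringIO(NODES_CSV)))
edges = list(csv.DictReader(io.StringIO(EDGES_CSV)))
```

The point of separating the two files is that the node set can be regenerated whenever camera presets change, while the edge set stays tied to the underlying GIS road network.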
In step S132, the node set is loaded onto the edge set file so that all nodes in the node set fall on the edge set, forming the video network. In the embodiment of the invention, whether a node in the node set falls on the edge set can be confirmed through its longitude and latitude. If a node does not fall on the edge set, it is not deleted; instead it is matched by nearest distance: the nearest edge is taken, and the foot of the perpendicular from the node to that edge is used as the corrected position of the point, so that all video points are retained.
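The nearest-edge correction just described (take the closest edge and the foot of the perpendicular) can be sketched in Python. Planar coordinates stand in for longitude/latitude for simplicity, and all names are illustrative:

```python
def project_point_to_segment(p, a, b):
    """Return the foot of the perpendicular from point p onto segment a-b
    (clamped to the segment ends), plus the squared distance to it.
    Points are (x, y) tuples in a planar coordinate system."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:          # degenerate edge: a == b
        t = 0.0
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    fx, fy = ax + t * dx, ay + t * dy
    return (fx, fy), (px - fx) ** 2 + (py - fy) ** 2

def snap_to_nearest_edge(point, segments):
    """Match a video point to the closest edge and return the corrected
    position, so that every video point is kept on the network."""
    return min((project_point_to_segment(point, a, b) for a, b in segments),
               key=lambda result: result[1])[0]
```

For example, a point at (1, 1) between a road at y = 0 and a road at y = 3 snaps to (1.0, 0.0), the perpendicular foot on the nearer edge.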
Fig. 5 is a schematic diagram of a video network according to an embodiment of the present invention, in which each point is a video point and each line is a path. It can be understood that when the video network is constructed, other related attribute values can be loaded at the same time, reducing subsequent calculation time and improving overall processing efficiency. By this method of constructing a video network based on a GIS map and video points, city-wide video equipment is reused to the maximum extent and the workload of manual participation is reduced. Moreover, each video device can be configured with a plurality of preset positions; the video network, built on the road traffic network, establishes interconnection among different video points, can be rapidly expanded, operated and maintained, and makes full use of the video equipment.
In step S2, a reachable network of each video point is determined according to the video network. In the embodiment of the invention, the adjacent video points directly reached from each video point are determined according to the video network as the next-layer reachable video points of that video point, forming the reachable network of each video point. To reduce the amount of calculation, on the basis of the constructed video network, a breadth-first algorithm is used to search the directly reachable adjacent video points of each video point according to the directed network identified by the Directed field in the edge set file; the results are stored in advance so that no real-time calculation is needed later.
Fig. 6 is a schematic diagram of the reachable network of a video point according to an embodiment of the present invention. A breadth-first search algorithm is used to determine the reachable network of each video point. Starting from video point id01, all video points adjacent to id01 are searched first, i.e. all adjacent video points directly reachable from id01, such as video points id02, id03 and id04 in fig. 6; these are taken as the first-layer reachable video points of id01. Then, starting from id02, id03 and id04, the next-layer reachable video points of id01, all other video points adjacent to them are searched and taken as the second-layer reachable video points of id01, forming the tree-shaped data structure shown in fig. 6. It can be understood that, depending on the density of camera deployment, the reachable network of each video point can be searched 2 or 3 layers in advance; the sparser the cameras, the more layers need to be searched.
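The layered breadth-first search above can be sketched as follows, with an adjacency map mirroring the tree of fig. 6; the names are illustrative:

```python
def reachable_layers(adjacency, start, depth):
    """Breadth-first search that groups the video points reachable from
    `start` into layers: layer 1 holds the directly reachable adjacent
    points, layer 2 their unvisited neighbours, and so on, up to `depth`
    layers. `adjacency` maps video-point id -> list of adjacent ids."""
    layers = []
    seen = {start}
    frontier = [start]
    for _ in range(depth):
        next_layer = []
        for node in frontier:
            for neighbour in adjacency.get(node, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    next_layer.append(neighbour)
        if not next_layer:
            break
        layers.append(next_layer)
        frontier = next_layer
    return layers

# Adjacency sketched after fig. 6: id01 directly reaches id02, id03, id04.
adjacency = {
    "id01": ["id02", "id03", "id04"],
    "id02": ["id05"],
    "id03": ["id06", "id01"],   # back-edge to id01 is ignored via `seen`
    "id04": [],
}
layers = reachable_layers(adjacency, "id01", depth=2)
```

Precomputing `layers` for every video point is what lets the tracker avoid any graph search at runtime, as the text notes.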
In the embodiment of the invention, a directly reachable adjacent video point is one that can be reached along a route without passing through any other video point. It can be understood that the current video point and its next-layer reachable video points are not necessarily on the same contiguous road segment in physical space; depending on the actual situation, they may be connected directly across multiple road segments.
In step S3, a travel time interval of the target vehicle is determined according to the reachable network. Fig. 7 is a flowchart illustrating a process of determining a travel time interval according to an embodiment of the invention, including steps S31 to S32.
In step S31, a set of alternative paths for the target vehicle is determined from the reachable network. In the embodiment of the invention, a plurality of paths from each video point to its next-layer reachable video points are determined as the alternative path set according to the reachable network. For example, a KSP algorithm (K-shortest-path algorithm) is adopted to calculate, from the reachable network, the K shortest path set $P = \{p_1, p_2, \dots, p_K\}$ from each video point to every video point in its next layer of reachable video points, as the set of alternative paths.
Taking fig. 6 as an example, the K shortest paths from video point id01 to video point id02, video point id03 and video point id04 are calculated:

id01 → id02: path set $P_{01,02} = \{p_1, p_2, \dots, p_K\}$;

id01 → id03: path set $P_{01,03} = \{p_1, p_2, \dots, p_K\}$;

id01 → id04: path set $P_{01,04} = \{p_1, p_2, \dots, p_K\}$.

It can be understood that the value of the parameter K can be set according to actual requirements, and is generally within 10.
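As a standard-library sketch of computing a K-shortest-path set between two video points: a best-first enumeration of simple paths is used here in place of a full Yen's-algorithm implementation (the classic KSP algorithm the text refers to), and the small road graph with intermediate nodes "a" and "b" is hypothetical:

```python
import heapq

def k_shortest_paths(graph, source, target, k):
    """Enumerate up to k loop-free paths in nondecreasing total length.

    graph: dict {node: {neighbor: edge_length}}. Best-first expansion
    with nonnegative edge lengths pops target paths in sorted order;
    production code would typically use Yen's algorithm instead.
    """
    heap = [(0.0, [source])]
    paths = []
    while heap and len(paths) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == target:
            paths.append((cost, path))
            continue
        for neighbor, length in graph.get(node, {}).items():
            if neighbor not in path:  # keep paths simple (no loops)
                heapq.heappush(heap, (cost + length, path + [neighbor]))
    return paths

# Hypothetical road network between video points id01 and id02:
graph = {
    "id01": {"a": 1.0, "b": 2.5},
    "a": {"id02": 1.0},
    "b": {"id02": 0.5},
}
result = k_shortest_paths(graph, "id01", "id02", k=3)
```

With K generally within 10, this exhaustive enumeration stays cheap on road-network-sized graphs between adjacent video points.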
In step S32, a lower travel time limit and an upper travel time limit of the target vehicle are determined respectively according to the alternative path set, and together form the travel time interval. Fig. 8 is a flowchart illustrating the process of determining the lower and upper travel time limits of the target vehicle according to the embodiment of the present invention, including steps S321 to S322.
In step S321, a lower path transit time limit and an upper path transit time limit of each path in the alternative path set are determined respectively. In the embodiment of the invention, in combination with the road conditions, the lower and upper path transit time limits are calculated for each of the K paths between any two video points (taking video point id01 to video point id02 in fig. 6 as an example). For each path $p_k$ in the alternative path set $P$, the lower path transit time limit $T_k^{low}$ and the upper path transit time limit $T_k^{up}$ are calculated as shown in the following formulas:

$$T_k^{low} = \sum_{i \in p_k} \frac{L_i}{v_i}$$

$$T_k^{up} = \sum_{i \in p_k} t_i^{max}(\tau)$$
where $L_i$ denotes the road-section length of each sub-segment $i$ in the path, $v_i$ denotes the design speed of sub-segment $i$, which can default to 60 km/h, and $t_i^{max}(\tau)$ denotes the maximum travel time of road section $i$ in the sub-period $\tau$. The 24 hours of a day can be divided into a number of sub-periods, for example 5 minutes each, giving 288 sub-periods; the travel time of each sub-period can be obtained from internet data or electronic-police data, and its maximum value is taken as $t_i^{max}(\tau)$. It will be appreciated that the lower path transit time limit represents the minimum time required for a vehicle to traverse the path, and the upper path transit time limit represents the maximum time required.
In step S322, the lower travel time limit and the upper travel time limit of the target vehicle are determined according to the lower and upper path transit time limits of each path. In the embodiment of the invention, after the lower path transit time limit $T_k^{low}$ and the upper path transit time limit $T_k^{up}$ of each path in the alternative path set are determined, the time range interval $[T_k^{low}, T_k^{up}]$ in which the vehicle may traverse each path in each sub-period $\tau$ can be determined. The upper and lower transit time limits of all K paths from video point id01 to video point id02 are extracted, and the 75% quantile $Q_{75}$, the 25% quantile $Q_{25}$ and the interquartile range $IQR$ of these data are calculated, from which the upper travel time limit $T^{up}$ and the lower travel time limit $T^{low}$ are determined, as shown in the following formulas:

$$IQR = Q_{75} - Q_{25}$$

$$T^{up} = Q_{75} + 1.5 \cdot IQR$$

$$T^{low} = Q_{25} - 1.5 \cdot IQR$$

where $IQR$ denotes the interquartile range (the difference between the upper and lower quartiles), $T^{up}$ denotes the upper travel time limit, and $T^{low}$ denotes the lower travel time limit. Thus, the travel time interval of the target vehicle between any two video points in the sub-period $\tau$ is $[T^{low}, T^{up}]$.
In the embodiment of the invention, if the number of path transit time data points in the current sub-period $\tau$ is less than 10 and the previous sub-period has data, the upper and lower travel time limits are adjusted using the 95% quantile and the 5% quantile of the previous sub-period's data. It is to be understood that the above specific calculation formulas for determining the upper and lower travel time limits are exemplary only, and the present invention is not limited thereto.
In the embodiment of the invention, after the vehicle leaves from the current video point, when the vehicle reaches the next layer of reachable video points, the estimated time interval of the current video point reaching each video point in the next layer of reachable video points is calculated according to the reachable network of the video points, and each video point in the next layer is searched for the vehicle characteristics in the estimated time interval, so that the GPU computing resource is effectively saved.
In step S4, feature extraction of the target vehicle is performed on the surveillance videos corresponding to the plurality of video points determined according to the reachable network within the travel time interval. In the embodiment of the invention, according to the prediction of the vehicle travel time interval, the video analysis algorithm of each subsequent video point is started in time to search the target vehicle. Fig. 9 is a schematic flowchart illustrating the target vehicle feature extraction in the travel time interval according to the embodiment of the invention, including steps S41 to S44.
In step S41, feature extraction of the target vehicle is performed in the surveillance video of the current video point. In the embodiment of the invention, the feature extraction of the target vehicle is to extract the vehicle features of the target vehicle, such as license plate, vehicle type, color, vehicle length and the like, from the real-time traffic video stream, so that the target vehicle can be uniformly identified in different video points. In the embodiment of the invention, the vehicle characteristics can be extracted by adopting a currently mature and widely applied HOG (Histogram of oriented gradients) + SVM (Support Vector Machine) algorithm. Fig. 10 is a schematic flowchart illustrating a feature extraction process performed on a target vehicle according to an embodiment of the present invention, including steps S411 to S415.
In step S411, the HOG features of the labeled image training set are extracted, and a linear support vector machine classifier is trained. In the embodiment of the invention, the linear support vector machine classifier is trained according to the HOG characteristics of the training set, so that the effective extraction of various characteristics of the vehicle can be realized.
In step S412, the vehicle in the image is searched using the trained SVM classifier based on the sliding window technique. In the embodiment of the present invention, the size of the sliding window may be set according to actual requirements, which is not limited in the present invention.
In step S413, the above process is run on the traffic video stream, and a heat map of repeated detections is created frame by frame to remove outliers and track detected vehicles. In the embodiment of the invention, the video images are detected frame by frame and outliers are deleted, improving the accuracy of target vehicle detection.
In step S414, the detected frame (bounding box) of the vehicle is estimated. In the embodiment of the present invention, the frame of the vehicle may be detected by using each image processing method in the prior art, which is not limited by the present invention.
In step S415, the vehicle frame is detected from the traffic video stream, the feature vectors of the vehicle frame are extracted, and the extracted feature vectors are sent to a classifier for multi-classification, so as to obtain vehicle attribute information. In the embodiment of the invention, after the frame of the vehicle is detected, the characteristic vectors are extracted, and the extracted characteristic vectors are sent to the classifier for classification, so that various information of the target vehicle is determined, and the accuracy of extracting the characteristics of the target vehicle is improved.
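As an illustration of the HOG idea behind steps S411–S415 only (a production pipeline would typically use an off-the-shelf HOG extractor and a linear SVM, e.g. from scikit-image and scikit-learn), a single-cell orientation histogram can be computed as follows; the 4×4 test patch is made up, and a real HOG descriptor computes one histogram per cell and block-normalizes them:

```python
import math

def orientation_histogram(image, bins=9):
    """One gradient-orientation histogram for a grayscale patch.

    image: 2-D list of pixel intensities. Gradients are taken with
    centered differences, orientations folded into [0, 180) degrees
    (unsigned), and each pixel votes with its gradient magnitude.
    """
    h, w = len(image), len(image[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle / 180.0 * bins) % bins] += magnitude
    return hist

# A vertical edge: all gradient energy lands in the 0-degree bin.
hist = orientation_histogram([[0, 0, 10, 10]] * 4)
```

Concatenated per-cell histograms like this one form the feature vector that the linear SVM classifier is trained on.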
In the embodiment of the invention, by taking the example that the target vehicle is currently located at the position of the video point id01 shown in fig. 6, the vehicle feature extraction is performed on the monitoring video in the video point id01 view field as described above, so that the detection of the target vehicle in the video point id01 view field is realized.
In step S42, the driving-away time at which the target vehicle departs from the current video point is determined. In the embodiment of the invention, when the features of the target vehicle can no longer be continuously monitored within the visual field of video point id01, it is determined that the target vehicle has driven away from the current video point id01, and the current driving-away time is recorded as $t_{leave}$.
In step S43, a plurality of next-layer reachable video points of the current video point are determined according to the reachable network, where the next-layer reachable video points are the adjacent video points that the current video point directly reaches. In the embodiment of the present invention, as shown in fig. 6, the reachable video points of the current video point id01 are determined to be video point id02, video point id03 and video point id04.
In step S44, feature extraction of the target vehicle is performed on the surveillance videos corresponding to the plurality of next-layer reachable video points within the travel time interval according to the driving-away time. In the embodiment of the invention, the time for extracting the characteristics of the target vehicle at each next layer of reachable video points is respectively determined according to the determined travel time interval between the video points.
The time window for extracting the target vehicle features from the monitoring video of video point id02 is $[t_{leave} + T^{low}_{01,02},\ t_{leave} + T^{up}_{01,02}]$, where $t_{leave}$ is the driving-away time and $[T^{low}_{01,02}, T^{up}_{01,02}]$ is the travel time interval from video point id01 to video point id02; similarly, the time window for video point id03 is $[t_{leave} + T^{low}_{01,03},\ t_{leave} + T^{up}_{01,03}]$, and the time window for video point id04 is $[t_{leave} + T^{low}_{01,04},\ t_{leave} + T^{up}_{01,04}]$.
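The per-point search windows reduce to one offset computation over the travel time intervals; a sketch with hypothetical interval values in seconds:

```python
def search_windows(t_depart, travel_intervals):
    """Time window in which to run feature extraction at each
    next-layer reachable video point, offset from the time the
    target vehicle drove away from the current video point.

    travel_intervals: {video_point: (t_lower_s, t_upper_s)}, the
    travel time interval of step S32 for the current sub-period.
    """
    return {
        point: (t_depart + lo, t_depart + hi)
        for point, (lo, hi) in travel_intervals.items()
    }

# Departure at t = 1000 s; hypothetical intervals to id02/id03/id04:
windows = search_windows(1000.0, {
    "id02": (60.0, 150.0),
    "id03": (90.0, 210.0),
    "id04": (45.0, 120.0),
})
```

Only during its window does a video point run the (GPU-heavy) feature extraction, which is where the computational saving comes from.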
In the embodiment of the invention, after the target vehicle is confirmed to have driven away from the current video point, it can only reach the next video point after driving for a period of time, so the time for feature extraction is determined according to the travel time interval between the two video points. Based on the constructed video network, short-term prediction of the arrival time interval at the video points the vehicle passes is realized; at the same time, based on the topological relation among the video points in the constructed video network, the scope of video analysis is reduced in terms of both the number of videos and the analysis time, so that the computational resources required for video feature extraction are greatly reduced.
In step S5, when the feature of the target vehicle is extracted, the corresponding surveillance video is switched to be displayed. In the embodiment of the present invention, if a certain video point among the video point id02, the video point id03, and the video point id04 extracts a feature of the target vehicle in a corresponding time interval, indicating that the target vehicle is tracked at the video point, the video point is switched to a corresponding video for display, for example, when the video point id03 detects the target vehicle in the corresponding time interval, indicating that the target vehicle is driven away from the video point id01 and then driven into the visual range of the video point id03, at this time, the tracking video is switched from id01 → id 03.
In the embodiment of the invention, when one video point does not extract the features of the target vehicle within its travel time interval, and other video points in the same hierarchy have not yet finished their feature extraction, feature extraction of the target vehicle is started at the next layer of reachable video points of the video point whose extraction window has ended. FIG. 11 is a schematic diagram illustrating the expansion of surveillance video points, where darker video points represent video points with the vehicle feature extraction algorithm enabled. In the embodiment of the invention, if a certain video point (for example, video point id02) does not monitor the target vehicle within its determined travel time interval, while the other video points at the same level (video point id03 and video point id04) have not finished monitoring, the feature extraction algorithms of the next-level reachable video points of that video point (video point id05, video point id06 and video point id07) are used to detect and track the target vehicle. This saves overall tracking time, avoids missed detections of the target vehicle caused by travel time differences between paths, and improves the reliability of real-time tracking. Based on the video network, when the vehicle drives out of the visual field of the current video point, target vehicle feature searching and identification can be carried out at the subsequent reachable video points in the topological relation, the search hierarchy can be rapidly expanded, and the efficiency of detecting and positioning the target vehicle is improved.
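The expansion rule of Fig. 11 can be sketched as one update step over the active search set; the point ids, windows, and adjacency map below are hypothetical:

```python
def expand_frontier(frontier, adjacency, now, windows, detected):
    """One update of the active search set (the Fig. 11 expansion).

    A video point whose search window has ended without a detection
    is replaced by its own next-layer reachable points, while
    sibling points still inside their windows keep searching.
    """
    new_frontier = []
    for point in frontier:
        if point in detected:
            continue  # handled by the video-switching step (S5)
        _, window_end = windows[point]
        if now > window_end:
            # window expired with no detection: expand to next layer
            new_frontier.extend(adjacency.get(point, []))
        else:
            new_frontier.append(point)
    return new_frontier

# id02's window has ended without a detection; id03/id04 still active:
frontier = expand_frontier(
    ["id02", "id03", "id04"],
    {"id02": ["id05", "id06", "id07"]},
    now=1200.0,
    windows={"id02": (1060.0, 1150.0),
             "id03": (1090.0, 1400.0),
             "id04": (1045.0, 1350.0)},
    detected=set(),
)
```

Repeating this step keeps the search moving outward along the topology until some active video point detects the target vehicle.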
Fig. 12 (a) and 12 (b) are schematic diagrams illustrating two video points belonging to the same camera according to an embodiment of the present invention. In the embodiment of the invention, if the current video point and one of its next-layer reachable video points belong to the same camera (i.e. different preset positions of the same physical device), there are two situations. If the two video points of the same camera are located on the same road segment, as shown in part (a) of fig. 12, the display switches directly to the next-layer video point of the same camera, e.g. from video point id01 to video point id02. If the two video points of the same camera are not located on the same road segment, as shown in part (b) of fig. 12, the video point of the same camera is set as a virtual video point and the video points of the next level are searched directly. For example, the video point id02 in part (b) of fig. 12 is set as a virtual video point, and video point id05, video point id06 and video point id07 are searched directly.
Fig. 13 is a schematic view illustrating video point switching according to an embodiment of the present invention, in which darker video points represent video points at which a target vehicle is detected and video switching is performed. In the embodiment of the invention, in the tracking process of the target vehicle, the steps are continuously repeated, namely after the target vehicle drives away from the video point id01, if the target vehicle is detected to appear at the video point id02 in a travel time interval, the monitoring video picture of the video point id02 is switched to, and when the target vehicle is continuously detected to appear at the video point id07 by the method, the monitoring video picture of the video point id07 is switched to, so that the automatic switching of the video point along with the real-time running track of the vehicle can be realized. It can be understood that under the constraint of relevance between a current video and a subsequent video in a video network, searching is performed according to vehicle multivariate features (independent of recognition of a high-definition video on a license plate), real-time track tracking and video switching after a vehicle is quickly locked can be realized, the video stability during vehicle tracking is improved, and video jumping confusion is avoided. And after basic data and video AI technology are fused, the tracking effect can be more stable and reliable.
By adopting the real-time vehicle tracking method based on the monitoring video, the construction of a video network is completed by loading the configured video points onto the basic GIS road network, forming a topological relation graph among different video points. When the vehicle leaves the current video point, the travel time interval of the target vehicle is determined according to the subsequent reachable video points in the video network, the subsequent video points are started to search for and identify the features of the target vehicle within the travel time interval, and the vehicle is quickly located. After the target vehicle is detected and located at a certain video point, the video picture is switched to that video point for display, so that the video is automatically switched according to the running track of the vehicle, the target vehicle is tracked in real time, and the target vehicle can be followed in the visual field of each video. The video network constructed by the invention is simple to build and highly maintainable, and combines basic data with video AI technology to realize fast and stable tracking of the real-time track of the vehicle while reusing video equipment to the maximum extent. Compared with current industry methods, it has higher stability and a wider application range, is suitable for tracking various vehicles, is not strictly limited by the distribution density of videos, can realize automatic tracking, greatly reduces the cost of manually operating videos, and has wide application scenarios in city management.
The embodiment of the second aspect of the invention also provides a vehicle real-time tracking device based on the monitoring video. Fig. 14 is a schematic structural diagram of a real-time vehicle tracking apparatus 1400 based on surveillance video according to an embodiment of the present invention, which includes a building module 1401, a processing module 1402, and a tracking module 1403.
The construction module 1401 is used to construct a video network from a plurality of video points.
The processing module 1402 is configured to determine, according to the video network, a reachable network of each video point; the system is also used for determining the travel time interval of the target vehicle according to the reachable network; and the system is also used for determining monitoring videos corresponding to a plurality of video points according to the reachable network within the travel time interval to extract the characteristics of the target vehicle.
The tracking module 1403 is configured to switch to a corresponding monitoring video for displaying when the feature of the target vehicle is extracted.
In this embodiment of the present invention, the processing module 1402 is further configured to respectively obtain a plurality of preset positions of each camera on the road; determine the video points according to each preset position; and construct the video network according to the video points and the road network information.
In this embodiment of the present invention, the processing module 1402 is further configured to perform feature extraction of the target vehicle in the surveillance video of the current video point; determine the driving-away time when the target vehicle leaves the current video point; determine a plurality of next-layer reachable video points of the current video point according to the reachable network, where the next-layer reachable video points are the adjacent video points that each video point directly reaches; and perform feature extraction of the target vehicle on the monitoring videos corresponding to the plurality of next-layer reachable video points within the travel time interval according to the driving-away time.
For a more detailed implementation of each module of the monitoring video-based vehicle real-time tracking apparatus 1400, reference may be made to the description of the monitoring video-based vehicle real-time tracking method of the present invention, and similar beneficial effects are obtained, and details are not repeated herein.
An embodiment of the third aspect of the invention proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the surveillance video-based real-time vehicle tracking method according to the first aspect of the invention.
Generally, computer instructions for carrying out the methods of the present invention may be carried using any combination of one or more computer-readable storage media. Non-transitory computer readable storage media may include any computer readable medium except for a transitory propagating signal itself.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages, and in particular may employ the Python language suitable for neural network computing together with platform frameworks based on TensorFlow or PyTorch. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
An embodiment of a fourth aspect of the present invention provides a computing device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the surveillance video based real-time vehicle tracking method according to the first aspect of the present invention. It is to be understood that the computing device of the present invention may be a server or a computationally limited terminal device.
The non-transitory computer-readable storage medium and the computing device according to the third and fourth aspects of the present invention may be implemented with reference to the content specifically described in the embodiment of the first aspect of the present invention, and have similar beneficial effects to the real-time vehicle tracking method based on surveillance video according to the embodiment of the first aspect of the present invention, and are not described herein again.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are illustrative and not to be construed as limiting the present invention, and that changes, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (12)

1. A vehicle real-time tracking method based on surveillance videos is characterized by comprising the following steps:
constructing a video network according to the plurality of video points;
determining a reachable network of each video point according to the video network;
determining a travel time interval of the target vehicle according to the reachable network;
determining monitoring videos corresponding to a plurality of video points according to the reachable network within the travel time interval to extract the characteristics of the target vehicle;
and when the characteristics of the target vehicle are extracted, switching to the corresponding monitoring video for displaying.
2. The surveillance video-based real-time vehicle tracking method of claim 1, wherein the constructing a video network from a plurality of video points comprises:
respectively acquiring a plurality of preset positions of each camera on a road;
determining the video points according to each preset position;
and constructing the video network according to the video points and the road network information.
3. The real-time vehicle tracking method based on the surveillance video as claimed in claim 2, wherein the constructing the video network according to the video points and the road network information comprises:
constructing a node set according to the video points and constructing an edge set file according to a plurality of paths in the road network information, wherein each video point forms a node in the node set;
and loading the node set to the edge set file, so that all the nodes in the node set fall on the edge set file to form the video network.
4. The real-time vehicle tracking method based on surveillance video as claimed in claim 1, wherein the determining the reachable network of each video point according to the video network comprises:
and in the video network, determining adjacent video points which are directly reached by each video point as next layer reachable video points of the video points to form the reachable network of each video point.
5. The surveillance video-based real-time vehicle tracking method of claim 1, wherein the determining a travel time interval of a target vehicle from the reachable network comprises:
determining a set of alternative paths for the target vehicle according to the reachable network;
and respectively determining the travel time lower limit and the travel time upper limit of the target vehicle according to the alternative path set to form the travel time interval.
6. The surveillance video-based real-time vehicle tracking method of claim 5, wherein the determining the set of alternate paths for the target vehicle from the reachable network comprises:
and determining a plurality of paths from each video point to the next layer of reachable video points as the alternative path set according to the reachable network, wherein the next layer of reachable video points are the adjacent video points directly reached by each video point.
7. The surveillance video-based real-time vehicle tracking method according to claim 6, wherein the determining the lower and upper travel time limits of the target vehicle respectively according to the set of alternative paths comprises:
respectively determining a lower path passing time limit and an upper path passing time limit of each path in the alternative path set;
determining the travel time lower limit and the travel time upper limit of the target vehicle according to the route passing time lower limit and the route passing time upper limit of each route.
8. The real-time vehicle tracking method based on surveillance video as claimed in claim 1, wherein the determining surveillance videos corresponding to a plurality of video points according to the reachable network within the travel time interval for feature extraction of the target vehicle comprises:
extracting the characteristics of the target vehicle in the monitoring video of the current video point;
determining the driving-away time when the target vehicle leaves the current video point;
determining a plurality of next-layer reachable video points of the current video point according to the reachable network, wherein the next-layer reachable video points are the adjacent video points directly reached by each video point;
and performing feature extraction on the target vehicle on the monitoring videos corresponding to the plurality of next-layer reachable video points in the travel time interval according to the driving-away time.
9. The surveillance video-based real-time vehicle tracking method according to any one of claims 1-8, further comprising:
when one video point does not extract the features of the target vehicle within the travel time interval, if other video points in the same level have not finished extracting the features of the target vehicle at that moment, starting to extract the features of the target vehicle at the next-layer reachable video points of the video point whose feature extraction has finished, wherein the other video points in the same level and the next-layer reachable video points are determined according to the reachable network.
10. A real-time vehicle tracking device based on surveillance video, comprising:
a construction module, configured to construct a video network from a plurality of video points;
a processing module, configured to determine a reachable network for each video point according to the video network; to determine a travel time interval of the target vehicle according to the reachable network; and to determine, within the travel time interval and according to the reachable network, the surveillance videos corresponding to a plurality of video points on which feature extraction is performed for the target vehicle;
and a tracking module, configured to switch the display to the corresponding surveillance video when the features of the target vehicle are extracted.
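The three claimed modules can be read as a small pipeline: construct the network, derive reachable networks and time windows, and switch the displayed video on detection. A minimal sketch, with all names (`VehicleTracker`, `on_feature_extracted`) assumed for illustration:

```python
class VehicleTracker:
    """Hypothetical composition of the claimed device's three modules."""

    def __init__(self, video_points, reachable):
        self.video_network = set(video_points)   # construction module output
        self.reachable = reachable               # processing module state
        self.active_video = None                 # tracking module state

    def on_feature_extracted(self, video_point):
        # Tracking module: when the target vehicle's features are extracted
        # at a known video point, switch the display to that point's
        # surveillance video; otherwise keep the current display.
        if video_point in self.video_network:
            self.active_video = video_point
        return self.active_video
```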
11. A non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the surveillance video-based real-time vehicle tracking method according to any one of claims 1-9.
12. A computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the surveillance video-based real-time vehicle tracking method according to any one of claims 1-9.
CN202010885136.0A 2020-08-28 2020-08-28 Vehicle real-time tracking method and device based on monitoring video Active CN111818313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010885136.0A CN111818313B (en) 2020-08-28 2020-08-28 Vehicle real-time tracking method and device based on monitoring video


Publications (2)

Publication Number Publication Date
CN111818313A true CN111818313A (en) 2020-10-23
CN111818313B CN111818313B (en) 2021-02-02

Family

ID=72859758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010885136.0A Active CN111818313B (en) 2020-08-28 2020-08-28 Vehicle real-time tracking method and device based on monitoring video

Country Status (1)

Country Link
CN (1) CN111818313B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1472709A (en) * 2003-05-14 2004-02-04 大连市公安局交通警察支队 Method for rapid calling traffic TV monitoring image within required range
CN102509310A (en) * 2011-11-18 2012-06-20 上海电机学院 Video tracking analysis method and system combined with geographic information
CN102693634A (en) * 2011-03-24 2012-09-26 大连航天金穗科技有限公司 Television monitoring system for tracking set vehicle and television monitoring method thereof
CN105245853A (en) * 2015-10-27 2016-01-13 太原市公安局 Video monitoring method
CN105719313A (en) * 2016-01-18 2016-06-29 中国石油大学(华东) Moving object tracking method based on intelligent real-time video cloud
CN110858400A (en) * 2018-08-22 2020-03-03 北京航天长峰科技工业集团有限公司 Method for tracking moving target through monitoring video
CN111340856A (en) * 2018-12-19 2020-06-26 杭州海康威视***技术有限公司 Vehicle tracking method, device, equipment and storage medium
CN111523362A (en) * 2019-12-26 2020-08-11 珠海大横琴科技发展有限公司 Data analysis method and device based on electronic purse net and electronic equipment


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257683A (en) * 2020-12-07 2021-01-22 之江实验室 Cross-mirror tracking method for vehicle running track monitoring
CN113002595A (en) * 2021-03-19 2021-06-22 通号通信信息集团有限公司 Train tracking method and system
CN113002595B (en) * 2021-03-19 2023-09-08 通号通信信息集团有限公司 Train tracking method and system
CN113139721A (en) * 2021-04-16 2021-07-20 深圳市艾赛克科技有限公司 Aggregate storage yard management system and method
CN113139721B (en) * 2021-04-16 2023-12-19 深圳市艾赛克科技有限公司 Aggregate storage yard management system and method
CN113660462A (en) * 2021-08-09 2021-11-16 苏州工业园区测绘地理信息有限公司 Surrounding ring type mobile vehicle video tracking method based on fusion multi-source data analysis
CN113660462B (en) * 2021-08-09 2023-12-29 园测信息科技股份有限公司 Surrounding ring type moving vehicle video tracking method based on fusion multi-source data analysis
WO2023024787A1 (en) * 2021-08-23 2023-03-02 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN113705417A (en) * 2021-08-23 2021-11-26 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN113706586A (en) * 2021-10-29 2021-11-26 深圳市城市交通规划设计研究中心股份有限公司 Target tracking method and device based on multi-point position perception and storage medium
CN113724298A (en) * 2021-11-01 2021-11-30 深圳市城市交通规划设计研究中心股份有限公司 Multipoint perception fusion method and device and computer readable storage medium
CN113724298B (en) * 2021-11-01 2022-03-18 深圳市城市交通规划设计研究中心股份有限公司 Multipoint perception fusion method and device and computer readable storage medium
CN114615473A (en) * 2022-03-23 2022-06-10 云火科技(盐城)有限公司 Intelligent monitoring method based on target tracking
CN114860976A (en) * 2022-04-29 2022-08-05 南通智慧交通科技有限公司 Image data query method and system based on big data

Also Published As

Publication number Publication date
CN111818313B (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN111818313B (en) Vehicle real-time tracking method and device based on monitoring video
JP7351487B2 (en) Intelligent navigation method and system based on topology map
CN106197458B (en) A kind of mobile phone user's trip mode recognition methods based on mobile phone signaling data and navigation route data
WO2020052530A1 (en) Image processing method and device and related apparatus
Yang et al. Generating lane-based intersection maps from crowdsourcing big trace data
KR20210087005A (en) Method and apparatus of estimating road condition, and method and apparatus of establishing road condition estimation model
CN116824859B (en) Intelligent traffic big data analysis system based on Internet of things
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
CN109741628A (en) A kind of cell intelligent parking system and the intelligent parking route planning method using it
CN114419924B (en) AI application control management system based on wisdom city
CN112418081B (en) Method and system for quickly surveying traffic accidents by air-ground combination
CN112132071A (en) Processing method, device and equipment for identifying traffic jam and storage medium
CN115393745A (en) Automatic bridge image progress identification method based on unmanned aerial vehicle and deep learning
Minnikhanov et al. Detection of traffic anomalies for a safety system of smart city
CN113313006A (en) Urban illegal construction supervision method and system based on unmanned aerial vehicle and storage medium
CN114003672B (en) Method, device, equipment and medium for processing road dynamic event
CN114078319A (en) Method and device for detecting potential hazard site of traffic accident
Pi et al. Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks
Gupta et al. A computer vision based approach for automated traffic management as a smart city solution
CN112562315B (en) Method, terminal and storage medium for acquiring traffic flow information
Yuliandoko et al. Automatic vehicle counting using Raspberry pi and background subtractions method in the sidoarjo toll road
WO2021138372A1 (en) Feature coverage analysis
CN116311166A (en) Traffic obstacle recognition method and device and electronic equipment
CN104121917A (en) Method and device for automatically discovering new bridge
CN115394089A (en) Vehicle information fusion display method, sensorless passing system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant