CN115294169A - Vehicle tracking method and device, electronic equipment and storage medium - Google Patents

Vehicle tracking method and device, electronic equipment and storage medium

Info

Publication number
CN115294169A
CN115294169A (application CN202210848108.0A)
Authority
CN
China
Prior art keywords
target vehicle
vehicle
track point
latitude
longitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210848108.0A
Other languages
Chinese (zh)
Inventor
李响
陈硕
张渊佳
孟祥松
陈金
徐洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202210848108.0A priority Critical patent/CN115294169A/en
Publication of CN115294169A publication Critical patent/CN115294169A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 7/00 Image analysis > G06T 7/20 Analysis of motion)
    • G06T 7/292 Multi-camera tracking (same hierarchy as above)
    • G06T 2207/10004 Still image; photographic image (G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality)
    • G06T 2207/30252 Vehicle exterior; vicinity of vehicle (G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/30 Subject of image; context of image processing > G06T 2207/30248 Vehicle exterior or interior)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a vehicle tracking method and device, an electronic device and a storage medium, belonging to the technical field of intelligent traffic. The method comprises: when a target vehicle moves from the first image acquisition area of a first camera into the image acquisition blind area between the first camera and a second camera, acquiring characteristic parameters for representing the motion characteristics of the target vehicle; predicting, based on the characteristic parameters, track points of the target vehicle in the image acquisition blind area according to the rule that the target vehicle keeps the motion characteristics; and if the target vehicle is determined to have moved to the second image acquisition area of the second camera, stopping predicting the track points. Thus, track points of the target vehicle can be predicted even when the image acquisition areas of the first camera and the second camera do not overlap, so the track points of the target vehicle are neither discontinuous nor missing, which provides a scheme for tracking a vehicle across cameras whose image acquisition areas do not overlap.

Description

Vehicle tracking method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of intelligent transportation technologies, and in particular, to a vehicle tracking method and apparatus, an electronic device, and a storage medium.
Background
Currently, in the field of intelligent transportation technology, cross-camera tracking of vehicles is an important capability of highway toll collection systems.
In the related art, cross-camera vehicle tracking relies on overlapping image acquisition areas between cameras. In a highway scene, however, factors such as high camera installation cost and uncontrollable installation positions often leave the image acquisition areas of two adjacent cameras without overlap, so the vehicle cannot be tracked across the cameras.
Therefore, how to track a vehicle across cameras when the cameras have no overlapping image acquisition areas is an urgent problem to be solved.
Disclosure of Invention
The embodiments of the application provide a vehicle tracking method and device, an electronic device and a storage medium, so as to enable vehicle tracking across cameras when no overlapping image acquisition area exists between the cameras.
In a first aspect, an embodiment of the present application provides a vehicle tracking method, including:
when a target vehicle moves from a first image acquisition area of a first camera to an image acquisition blind area between the first camera and a second camera, acquiring characteristic parameters for representing the movement characteristics of the target vehicle;
predicting track points of the target vehicle in the image acquisition blind area according to the characteristic parameters and the rule that the target vehicle keeps the motion characteristics;
and if the target vehicle is determined to move to the second image acquisition area of the second camera, stopping predicting the track point of the target vehicle.
In some embodiments, predicting track points of the target vehicle in the image acquisition blind area based on the characteristic parameters according to the rule that the target vehicle keeps the motion characteristics includes:
determining the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period based on the characteristic parameters;
and each time a track point generation period is reached, performing offset processing on the longitude and latitude of the previous track point according to the longitude motion amplitude and the latitude motion amplitude to obtain the longitude and latitude of the current track point, wherein, initially, the previous track point is the last track point of the target vehicle determined in the first image acquisition area.
In some embodiments, the characteristic parameter comprises a velocity vector of the target vehicle when passing the last trajectory point;
determining the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period based on the characteristic parameters, wherein the method comprises the following steps:
determining, based on the velocity vector, the velocity of the target vehicle in the longitude direction and the velocity in the latitude direction;
and determining the longitude movement amplitude based on the longitude speed and the track point generation period, and determining the latitude movement amplitude based on the latitude speed and the track point generation period.
In some embodiments, the characteristic parameters include latitude and longitude of the last two trajectory points of the target vehicle determined in the first image acquisition area;
based on the characteristic parameters, determining the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period, wherein the method comprises the following steps:
and determining the longitude difference between the last two track points as the longitude movement amplitude, and determining the latitude difference between the last two track points as the latitude movement amplitude.
In some embodiments, after obtaining the longitude and latitude of the current trace point, the method further includes:
and if the longitude and latitude of the current track point are determined not to be located in the image acquisition blind area, stopping predicting the track point of the target vehicle.
In some embodiments, determining that the target vehicle moves to the second image capture area is based on:
carrying out vehicle identification on the acquired image acquired by the second camera;
for each recognized vehicle, if the license plate of the vehicle is recognized from the image, matching the license plate of the vehicle with the license plate of the target vehicle;
and when the preset matching condition is determined to be met, determining that the target vehicle moves to the second image acquisition area.
In some embodiments, the preset matching condition is that the license plate of the vehicle and the license plate of the target vehicle contain M identical characters, where M is not less than a preset value and less than the total number of characters contained in the license plate.
In some embodiments, further comprising:
if no license plate recognized from the image matches the license plate of the target vehicle, determining, for each vehicle in the image whose license plate is not recognized, the similarity between the extracted image features of that vehicle and the stored image features of the target vehicle;
and if the similarity is higher than the set value, determining that the target vehicle moves to the second image acquisition area.
In a second aspect, an embodiment of the present application provides a vehicle tracking device, including:
an acquisition module, configured to acquire characteristic parameters for representing the motion characteristics of a target vehicle when the target vehicle moves from a first image acquisition area of a first camera to an image acquisition blind area between the first camera and a second camera;
the prediction module is used for predicting track points of the target vehicle in the image acquisition blind area according to the characteristic parameters and the rule that the target vehicle keeps the motion characteristics;
and the stopping module is used for stopping predicting the track point of the target vehicle if the target vehicle is determined to move to the second image acquisition area of the second camera.
In some embodiments, the prediction module is specifically configured to:
determining the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period based on the characteristic parameters;
and each time a track point generation period is reached, performing offset processing on the longitude and latitude of the previous track point according to the longitude motion amplitude and the latitude motion amplitude to obtain the longitude and latitude of the current track point, wherein, initially, the previous track point is the last track point of the target vehicle determined in the first image acquisition area.
In some embodiments, the characteristic parameter comprises a velocity vector of the target vehicle when passing the last trajectory point; the prediction module is specifically configured to:
determining, based on the velocity vector, the velocity of the target vehicle in the longitude direction and the velocity in the latitude direction;
and determining the longitude movement amplitude based on the longitude speed and the track point generation period, and determining the latitude movement amplitude based on the latitude speed and the track point generation period.
In some embodiments, the characteristic parameters include latitude and longitude of the last two trajectory points of the target vehicle determined in the first image acquisition area; the prediction module is specifically configured to:
and determining the longitude difference between the last two track points as the longitude movement amplitude, and determining the latitude difference between the last two track points as the latitude movement amplitude.
In some embodiments, the stopping module is further to:
and after the longitude and latitude of the current track point are obtained, if the longitude and latitude of the current track point are determined not to be located in the image acquisition blind area, stopping predicting the track point of the target vehicle.
In some embodiments, the stopping module is specifically configured to determine that the target vehicle moves to the second image capture area according to:
carrying out vehicle identification on the acquired image acquired by the second camera;
for each recognized vehicle, if the license plate of the vehicle is recognized from the image, matching the license plate of the vehicle with the license plate of the target vehicle;
and when the preset matching condition is determined to be met, determining that the target vehicle moves to the second image acquisition area.
In some embodiments, the preset matching condition is that the license plate of the vehicle and the license plate of the target vehicle contain M identical characters, where M is not less than a preset value and less than the total number of characters contained in the license plate.
In some embodiments, the stop module is further to:
if no license plate recognized from the image matches the license plate of the target vehicle, determining, for each vehicle in the image whose license plate is not recognized, the similarity between the extracted image features of that vehicle and the stored image features of the target vehicle;
and if the similarity is higher than a set value, determining that the target vehicle moves to the second image acquisition area.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform any of the vehicle tracking methods described above.
In a fourth aspect, the embodiments of the present application provide a storage medium storing a computer program which, when executed by a processor of an electronic device, enables the electronic device to execute any one of the vehicle tracking methods described above.
In the embodiments of the application, when a target vehicle moves from the first image acquisition area of a first camera to the image acquisition blind area between the first camera and a second camera, characteristic parameters for representing the motion characteristics of the target vehicle are acquired; based on the characteristic parameters, track points of the target vehicle in the image acquisition blind area are predicted according to the rule that the target vehicle keeps the motion characteristics; and if the target vehicle is determined to have moved to the second image acquisition area of the second camera, prediction of the track points stops. Thus, when an image acquisition blind area (i.e., no overlapping image acquisition area) exists between the first camera and the second camera, the track points of the target vehicle can still be predicted, and the track of the target vehicle is neither discontinuous nor missing, which provides a scheme for tracking a vehicle across cameras whose image acquisition areas do not overlap.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic view of an application scenario of a vehicle tracking method according to an embodiment of the present application;
FIG. 2 is a flow chart of a vehicle tracking method according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a vehicle tracking process provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of another vehicle tracking process provided by the embodiment of the present application;
fig. 5 is a schematic structural diagram of a vehicle tracking device according to an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an electronic device for implementing a vehicle tracking method according to an embodiment of the present application.
Detailed Description
In order to solve the problem of tracking a vehicle across cameras when there is no overlapped image acquisition area between the cameras, embodiments of the present application provide a vehicle tracking method and apparatus, an electronic device, and a storage medium.
The preferred embodiments of the present application are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are merely intended to illustrate and explain the present application, not to limit it, and that the embodiments and the features of the embodiments in the present application may be combined with each other without conflict.
Fig. 1 is a schematic diagram of an application scenario of the vehicle tracking method provided in an embodiment of the present application. The scenario includes a plurality of cameras, which may be installed on both sides or on one side of a road to capture images of vehicles travelling on the road. Each camera may send the captured images to an image analysis device (not shown), such as a computer or a server, and the image analysis device generates track points for each vehicle based on the received images.
There is no overlapping image acquisition area between two adjacent cameras; that is, an image acquisition blind area exists between two adjacent cameras. Generally, the extent of the image acquisition blind area can be determined by a technician from the working parameters of the cameras, such as capture direction and capture angle, and from road parameters, such as road width and curve conditions.
After introducing the application scenarios of the embodiments of the present application, the following describes a vehicle tracking method proposed in the present application with specific embodiments. Fig. 2 is a flowchart of a vehicle tracking method according to an embodiment of the present application, where an execution subject of the method is an image analysis apparatus, and the method includes the following steps.
In step 21, when the target vehicle moves from the first image capturing area of the first camera to the image capturing blind area between the first camera and the second camera, the characteristic parameters for characterizing the movement characteristics of the target vehicle are acquired.
In practical application, a target vehicle firstly enters a first image acquisition area of a first camera, the first camera can periodically acquire images of the target vehicle and send the images to an image analysis device, and the image analysis device can generate track points of the target vehicle in the first image acquisition area based on the images, namely, the target vehicle is tracked in the first image acquisition area.
And when the image analysis device cannot generate the track point of the target vehicle based on the image sent by the first camera, that is, the target vehicle cannot be tracked in the first image acquisition area, it is indicated that the target vehicle moves from the first image acquisition area to an image acquisition blind area between the first camera and the second camera, and at this time, the characteristic parameters for representing the motion characteristics of the target vehicle can be obtained.
In step 22, track points of the target vehicle in the image acquisition blind area are predicted according to the rule that the target vehicle keeps the motion characteristics based on the characteristic parameters.
First, the latitude and longitude movement amplitudes of the target vehicle in one track point generation period may be determined based on the characteristic parameters.
In the first case, the characteristic parameter includes the velocity vector of the target vehicle when passing the last track point determined in the first image acquisition area. In this case, the velocity of the target vehicle in the longitude direction and the velocity in the latitude direction may be determined from the velocity vector; the longitude motion amplitude is then determined from the longitude velocity and the track point generation period, which may be the time interval between two adjacent image frames, and the latitude motion amplitude from the latitude velocity and the track point generation period.
For example: longitude motion amplitude = longitude velocity × track point generation period; latitude motion amplitude = latitude velocity × track point generation period.
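As an illustrative sketch only (the patent does not prescribe an implementation), the first case might be computed as follows; it assumes the velocity vector is given as a speed plus a heading measured clockwise from due north, with the speed already expressed in degrees of arc per second to keep units consistent with longitude/latitude:

```python
import math

def motion_amplitudes(speed_deg_s: float, heading_rad: float,
                      period_s: float) -> tuple[float, float]:
    """Decompose the velocity vector at the last real track point into
    longitude/latitude components and scale by the track point generation
    period (e.g. the inter-frame interval)."""
    v_lon = speed_deg_s * math.sin(heading_rad)  # longitude-direction velocity
    v_lat = speed_deg_s * math.cos(heading_rad)  # latitude-direction velocity
    d_lon = v_lon * period_s                     # longitude motion amplitude
    d_lat = v_lat * period_s                     # latitude motion amplitude
    return d_lon, d_lat
```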
In a second case, the characteristic parameters include the latitude and longitude of the last two trajectory points of the target vehicle determined in the first image capturing area. At this time, the longitude difference between the last two trace points may be determined as the longitude movement amplitude, and the latitude difference between the last two trace points may be determined as the latitude movement amplitude.
Taking the last two track points as track point 1 and track point 2, where track point 2 is generated later than track point 1: longitude motion amplitude = longitude of track point 2 − longitude of track point 1, and latitude motion amplitude = latitude of track point 2 − latitude of track point 1.
Then, each time a track point generation period is reached, the longitude and latitude of the previous track point are offset by the longitude motion amplitude and the latitude motion amplitude to obtain the longitude and latitude of the current track point; initially, the previous track point is the last track point of the target vehicle determined in the first image acquisition area.
For example, if the longitude and latitude of the previous track point are (a, B), the longitude movement amplitude is Δ a, and the latitude movement amplitude is Δ B, the longitude and latitude of the current track point are (a + Δ a, B + Δ B).
In addition, after the longitude and latitude of the current track point are obtained, it can be judged whether they lie within the image acquisition blind area. If not, the target vehicle is no longer in the blind area and the predicted track point is inaccurate, so prediction of the track points of the target vehicle may be stopped; if so, prediction of the track points may continue.
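Putting the two preceding paragraphs together, a minimal sketch of the blind-area prediction loop could look like this; `blind_area_contains` is a hypothetical containment predicate (for example the point-in-polygon test sketched later in this description):

```python
def predict_blind_area_points(last_point, d_lon, d_lat, blind_area_contains):
    """Dead-reckon track points inside the image acquisition blind area.

    last_point: (lon, lat) of the last real track point from camera 1.
    d_lon, d_lat: per-period longitude/latitude motion amplitudes.
    blind_area_contains: predicate (lon, lat) -> bool.
    Yields one predicted point per generation period; stops as soon as a
    predicted point leaves the blind area.
    """
    lon, lat = last_point
    while True:
        lon, lat = lon + d_lon, lat + d_lat    # offset the previous point
        if not blind_area_contains(lon, lat):  # outside the blind area
            return                             # -> stop predicting
        yield (lon, lat)
```

In practice the loop would also be broken externally once the target vehicle is re-identified in the second image acquisition area, as described in step 23 below.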
It should be noted that the image analysis device may determine the position of the target vehicle in the image based on the image captured by the first camera, and then may convert the position of the target vehicle in the image into the latitude and longitude according to a conversion relationship between the position and the latitude and longitude in the image, which is established in advance. Based on the longitude and latitude, the longitude and latitude of the last track point or the last two track points of the target vehicle determined in the first image acquisition area can be obtained.
In step 23, if it is determined that the target vehicle moves to the second image capturing area of the second camera, prediction of the track point of the target vehicle is stopped.
In particular implementation, the movement of the target vehicle to the second image capturing area may be determined according to the following steps:
Vehicle recognition is performed on the image captured by the second camera. For each recognized vehicle, if the vehicle's license plate is recognized from the image, it is matched against the license plate of the target vehicle; when the two plates satisfy a preset matching condition, the target vehicle has appeared in the image captured by the second camera, so it can be determined that the target vehicle has moved to the second image acquisition area.
Considering that two vehicles with similar license plate numbers rarely travel together, the matching may only require that the vehicle's license plate and the target vehicle's license plate contain M identical characters, where M is not less than a preset value and less than the total number of characters in the plate, the preset value being, for example, 4, 5 or 6. This improves the speed of cross-camera identification of the target vehicle.
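A sketch of this fuzzy plate comparison, assuming position-by-position comparison of equal-length plate strings (the patent only requires M identical characters):

```python
def plates_match(plate: str, target_plate: str, preset: int = 5) -> bool:
    """Fuzzy license plate match: at least `preset` identical characters.

    M (the number of identical characters) may be smaller than the plate
    length, so partially occluded or misread plates can still match.
    """
    if len(plate) != len(target_plate):
        return False
    m = sum(a == b for a, b in zip(plate, target_plate))
    return m >= preset
```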
In addition, if the license plate recognized from the image is not matched with the license plate of the target vehicle, the vehicles without the license plate recognized in the image can be matched with the target vehicle based on the image characteristics.
Specifically, for each vehicle of which the license plate is not recognized in the image, the similarity between the image features of the extracted vehicle and the stored image features of the target vehicle can be calculated, and if the similarity is higher than a set value, for example, 80%, it is determined that the target vehicle appears in the image acquired by the second camera, and it is determined that the target vehicle moves to the second image acquisition area.
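A sketch of this appearance fallback; cosine similarity with a 0.8 threshold is assumed here, since the text names a similarity and a set value but not the metric:

```python
import numpy as np

def is_target_by_appearance(feat: np.ndarray, target_feat: np.ndarray,
                            threshold: float = 0.8) -> bool:
    """Compare a plate-less candidate vehicle to the stored target feature."""
    sim = float(np.dot(feat, target_feat)
                / (np.linalg.norm(feat) * np.linalg.norm(target_feat)))
    return sim > threshold
```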
Subsequently, the image analysis device can generate track points of the target vehicle in the second image acquisition area based on the image of the target vehicle acquired by the second camera, namely, the target vehicle is tracked in the second image acquisition area.
In addition, it should be noted that when prediction stops because a predicted track point falls outside the image acquisition blind area, the predicted track points and the track points of the target vehicle in the second image acquisition area may be discontinuous or have missing points. In this case, the longitude and latitude of the last predicted track point are taken as the start point and the velocity vector of the target vehicle at that point as the initial motion direction, while the longitude and latitude of the first track point determined in the second image acquisition area are taken as the end point and the velocity vector at that point as the final motion direction, and the missing track points between the last predicted track point and that first track point are generated.
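One concrete way to generate these missing points is cubic Hermite interpolation with the velocity vectors acting as endpoint tangents; this is a sketch under that assumption, since the text fixes only the two endpoints and the two endpoint motion directions:

```python
import numpy as np

def fill_track_gap(p0, v0, p1, v1, n_points: int) -> list[tuple[float, float]]:
    """Generate n_points track points between the last predicted point p0
    and the first real point p1 found in the second image acquisition area.

    p0, p1: (lon, lat) endpoints; v0, v1: velocity vectors at the endpoints,
    scaled to degrees per interpolation span. Standard Hermite basis.
    """
    p0, v0, p1, v1 = (np.asarray(a, dtype=float) for a in (p0, v0, p1, v1))
    out = []
    for i in range(1, n_points + 1):
        t = i / (n_points + 1)
        h00 = 2 * t**3 - 3 * t**2 + 1   # blends start position
        h10 = t**3 - 2 * t**2 + t       # blends start tangent
        h01 = -2 * t**3 + 3 * t**2      # blends end position
        h11 = t**3 - t**2               # blends end tangent
        out.append(tuple(h00 * p0 + h10 * v0 + h01 * p1 + h11 * v1))
    return out
```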
In practical application, the track of the target vehicle can be displayed in real time, so that even if the target vehicle moves to the image acquisition blind area, a user can see track points of the vehicle, and the problems that the target vehicle disappears or the track points of the target vehicle are discontinuous can be avoided.
The embodiments of the present application will be described with reference to specific embodiments.
According to the embodiments of the present application, vehicle detection and license plate recognition are performed on the images captured by the cameras, the license plates, vehicle attributes, etc., are stored in a database, and hierarchical matching on these items is performed during cross-camera tracking, so that the vehicle is tracked across cameras quickly and accurately.
Referring to fig. 3, at time t-3 the vehicle has just entered the image acquisition area of camera 1; since only a small part of the vehicle body can be detected in the image captured by camera 1, the vehicle cannot be successfully detected and no initialization is performed. At time t-2, a vehicle can be detected in the image captured by camera 1. Since it is detected for the first time, the vehicle is initialized: it is assigned the number ID = 1, license plate recognition is performed, and vehicle attributes such as vehicle type, color and vehicle brand are detected; matching is then attempted in Redis based on the license plate, vehicle type, color, brand, etc., and if no match succeeds, these items are stored in Redis. At time t-1, the vehicle is tracked within the single camera, that is, the vehicle with ID = 1 is tracked again in the images captured by camera 1. If the vehicle is tracked, its ID remains unchanged, no initialization is needed, and the previously obtained license plate, vehicle type, color, brand, etc., continue to be used. If it is not tracked, a new ID is assigned to the current vehicle, initialization is performed to obtain the license plate, vehicle type, color, brand, etc., matching is first attempted in Redis, and if no match succeeds these items are stored in Redis. The processing at time t is similar to that at time t-1 and is not repeated here. At time t+10 the vehicle enters the image acquisition area of camera 2. When the vehicle is detected in the image captured by camera 2, initialization is performed and information such as license plate, vehicle type, color and brand is detected; if no license plate is detected, the license plate is set to unknown. Matching is then attempted in Redis based on vehicle type, color and brand, and if no match succeeds, the information is stored in Redis.
Each initialization indicates that a new vehicle may have appeared (it may be a vehicle that lost tracking, or a genuinely new one); by comparing the initialized information with the information in Redis, it can finally be determined whether a new vehicle has actually appeared.
In a specific implementation, the cross-camera vehicle tracking process comprises the following steps:
1. After the cameras are installed, the shooting angles are unified toward the vehicle tail direction.
2. Calibrate the vehicle detection area and screen vehicles.
A vehicle detection area is set, and a detected vehicle is considered valid only when its center point has entered the detection area; this avoids detections in which the vehicle body or the license plate is incomplete. Referring to fig. 3, the vehicle has not entered the detection area at time t-3, so the detected vehicle is not initialized at that time, i.e., it is filtered out.
3. Vehicle detection and initialization.
When a vehicle is detected for the first time (within the detection area), its position and feature vector can be obtained. As shown in fig. 3, when the vehicle is first detected at time t-2 (ID = 1), initialization starts with the following steps:
i. crop the detected vehicle from the frame to obtain a vehicle thumbnail image;
ii. detect the license plate in the vehicle thumbnail and recognize the plate;
iii. perform vehicle attribute recognition on the vehicle thumbnail, e.g., vehicle type, color, vehicle brand, etc.
4. Perform hierarchical matching of the detected vehicle against the Redis database.
4.1 If license plate recognition succeeds, perform two-stage matching, specifically as follows:
First, perform first-stage matching: fuzzy matching using the license plate. If the complete license plate has n characters, the vehicle can be regarded as the same vehicle as long as n−m characters match, where m is usually a positive integer not greater than 3.
Second, perform second-stage matching: if first-stage matching does not succeed, match using the feature vectors of the remaining vehicles (i.e., the vehicles whose license plates were not recognized), specifically as follows:
i. calculating the total characteristic of the current vehicle:
total feature = a × vehicle feature vector + b × vehicle type vector + c × vehicle color vector + d × vehicle brand vector;
wherein a, b, c and d are weighting coefficients, and the vehicle feature vector represents the overall appearance of the vehicle and can uniquely identify a vehicle.
ii. Compute the distances to the total features in the Redis database and index the vehicle ID with the minimum distance; if the minimum distance is smaller than a given threshold, the match succeeds, otherwise it fails.
Third, if neither of the previous two steps produces a match, store the current vehicle's ID, license plate (if detected), attributes and total feature in Redis. If a match is found, overwrite the current vehicle's ID, license plate, etc., with the vehicle ID, license plate, etc., matched in Redis.
4.2 If license plate recognition fails, the license plate is set to unknown, and the second-stage matching in the second step of 4.1 is performed directly.
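The two-stage matching of steps 4.1 and 4.2 can be sketched as below. This is illustrative only: an in-memory dict stands in for the Redis database, the weights a to d are placeholder values, and all attribute vectors are assumed to share one dimensionality so that the weighted sum is well defined:

```python
import numpy as np

# Stand-in for Redis: vehicle_id -> {"plate": str | None, "total": np.ndarray}
vehicle_db: dict[int, dict] = {}

def total_feature(feat, type_vec, color_vec, brand_vec,
                  a=1.0, b=0.5, c=0.5, d=0.5) -> np.ndarray:
    """Total feature = a*feature + b*type + c*color + d*brand."""
    return (a * np.asarray(feat) + b * np.asarray(type_vec)
            + c * np.asarray(color_vec) + d * np.asarray(brand_vec))

def hierarchical_match(plate, total, threshold=0.5, preset=5):
    """Stage 1: fuzzy plate match (skipped when the plate is unknown).
    Stage 2: nearest total-feature distance under a given threshold.
    Returns the matched vehicle id, or None if both stages fail."""
    if plate is not None:
        for vid, rec in vehicle_db.items():
            stored = rec["plate"]
            if (stored and len(stored) == len(plate)
                    and sum(a == b for a, b in zip(stored, plate)) >= preset):
                return vid
    best_vid, best_dist = None, float("inf")
    for vid, rec in vehicle_db.items():
        dist = float(np.linalg.norm(rec["total"] - total))
        if dist < best_dist:
            best_vid, best_dist = vid, dist
    return best_vid if best_dist < threshold else None
```

If `hierarchical_match` returns None, the caller stores the new vehicle's ID, plate (if any), attributes and total feature in the database, mirroring the third step of 4.1.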
5. During single-camera tracking, tracking may fail in a certain frame; once it fails, initialization is performed again, i.e., steps 3-4 are repeated.
When a vehicle enters an image acquisition blind area between adjacent cameras, its track can be predicted from the longitude and latitude, velocity vector, etc., corresponding to its track points, completing the generation of track points in the blind area. The generation logic of the track points in the image acquisition blind area is as follows:
Referring to fig. 4, on a road, areas A, C and E are the image acquisition areas of camera 1, camera 2 and camera 3, respectively, and areas B and D are compensation areas (i.e., image acquisition blind areas); the solid lines in areas A, C and E are the real tracks of vehicles captured by the cameras, and the dotted lines in areas B and D are the compensation tracks of the vehicles. After a vehicle is captured by camera 1, its real track in area A can be generated from the images captured by camera 1. After the vehicle leaves area A and enters area B, its compensation track in area B can be generated according to its motion pattern in area A. After the vehicle is captured by camera 2 in area C, its real track in area C can be generated from the images captured by camera 2, and so on. Thus, as the vehicle moves from camera 1 to camera 3, a track exists in every area from area A to area E, which solves the problems of the vehicle disappearing and the track being discontinuous in the blind areas between cameras.
The steps of the compensated trajectory generation are as follows:
1. Obtain the extent of the blind area as a region calibrated by the longitude and latitude of n points, where n is an integer;
2. converting the vehicle position coordinates in the detected image into longitude and latitude coordinates;
(lat, lon) = f(x, y)
wherein (x, y) are the coordinates of the vehicle position in the image, (lat, lon) are the latitude and longitude of the vehicle position, and the conversion relationship f between the position in the image and the longitude/latitude can be predetermined through calibration.
3. The vehicle position is predicted.
For example, the trajectory points of the vehicle are predicted according to the following ways:
lat_pred=lat_pre+vlat_pred×Δt;
lon_pred=lon_pre+vlon_pred×Δt;
wherein (lat_pre, lon_pre) is the longitude and latitude of the previous track point; initially, the previous track point is the last actually determined (not predicted) track point of the vehicle, i.e., taking the image acquisition blind area to be area B, the last track point of the vehicle determined in area A; (vlat_pred, vlon_pred) are the speeds of the vehicle in the latitude and longitude directions, respectively, when passing that track point; and Δt is the frame interval time.
As another example, the trajectory points of the vehicle are predicted according to the following manner:
lat_pred=lat_pre+Δlat;
lon_pred=lon_pre+Δlon;
wherein (lat_pre, lon_pre) is the longitude and latitude of the previous track point; initially, the previous track point is the last actually determined (not predicted) track point of the vehicle, i.e., taking the image acquisition blind area to be area B, the last track point of the vehicle determined in area A; and (Δlat, Δlon) are the differences, in the latitude and longitude directions respectively, between the last two actually determined (not predicted) track points of the vehicle, i.e., taking the blind area to be area B, between the last track point determined in area A and the track point before it.
4. Judge from the longitude and latitude coordinates whether the predicted track point is within the compensation area, and stop track prediction once it is outside the compensation area.
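A sketch of this containment test, assuming the blind (compensation) area from step 1 is stored as a polygon of n longitude/latitude vertices; the standard ray-casting test applies:

```python
def in_compensation_area(lon: float, lat: float,
                         polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting point-in-polygon test over (lon, lat) vertices.
    Prediction stops as soon as this returns False for a predicted point."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) straddle the horizontal ray at lat?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside
```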
The hierarchical matching method provided by the embodiments of the application exploits the fact that a license plate is relatively unique and not easily disturbed by the environment: license plate matching is performed first as the higher-priority matching strategy, and other information is used for matching only when license plate matching fails, so matching of the same vehicle under different cameras can be completed more accurately and quickly. In addition, the track connection problem between compensation and non-compensation areas is solved by predicting the vehicle position from the vehicle speed and constraining the longitude and latitude range, which avoids the vehicle disappearing in blind areas and the track being discontinuous.
Based on the same technical concept, the embodiments of the present application further provide a vehicle tracking device. Since the principle by which the device solves the problem is similar to that of the vehicle tracking method, the implementation of the device can refer to the implementation of the method, and repeated details are omitted.
Fig. 5 is a schematic structural diagram of a vehicle tracking device according to an embodiment of the present application, and the vehicle tracking device includes an obtaining module 501, a predicting module 502, and a stopping module 503.
The acquiring module 501 is configured to acquire a characteristic parameter for characterizing a motion characteristic of a target vehicle when the target vehicle moves from a first image acquisition area of a first camera to an image acquisition blind area between the first camera and a second camera;
the prediction module 502 is configured to predict track points of the target vehicle in the image acquisition blind area according to a rule that the target vehicle keeps a motion characteristic based on the characteristic parameter;
and a stopping module 503, configured to stop predicting the track point of the target vehicle if it is determined that the target vehicle moves to the second image acquisition area of the second camera.
In some embodiments, the prediction module 502 is specifically configured to:
determining the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period based on the characteristic parameters;
and each time a track point generation period is reached, performing offset processing on the longitude and latitude of the previous track point according to the longitude motion amplitude and the latitude motion amplitude to obtain the longitude and latitude of the current track point, wherein, initially, the previous track point is the last track point of the target vehicle determined in the first image acquisition area.
In some embodiments, the characteristic parameter comprises a velocity vector of the target vehicle as it passes the last trajectory point; the prediction module 502 is specifically configured to:
determining, based on the velocity vector, the velocity of the target vehicle in the longitude direction and the velocity in the latitude direction;
and determining the longitude movement amplitude based on the longitude speed and the track point generation period, and determining the latitude movement amplitude based on the latitude speed and the track point generation period.
In some embodiments, the characteristic parameters include longitude and latitude of the last two trajectory points of the target vehicle determined in the first image acquisition region; the prediction module 502 is specifically configured to:
and determining the longitude difference between the last two track points as the longitude movement amplitude, and determining the latitude difference between the last two track points as the latitude movement amplitude.
In some embodiments, the stopping module 503 is further configured to:
and after the longitude and latitude of the current track point are obtained, if the longitude and latitude of the current track point are determined not to be located in the image acquisition blind area, stopping predicting the track point of the target vehicle.
In some embodiments, the stopping module 503 is specifically configured to determine that the target vehicle moves to the second image capturing area according to the following steps:
carrying out vehicle identification on the acquired image acquired by the second camera;
for each recognized vehicle, if the license plate of the vehicle is recognized from the image, matching the license plate of the vehicle with the license plate of the target vehicle;
and when the preset matching condition is determined to be met, determining that the target vehicle moves to the second image acquisition area.
In some embodiments, the preset matching condition is that the license plate of the vehicle and the license plate of the target vehicle contain M identical characters, where M is not less than a preset value and less than the total number of characters contained in the license plate.
In some embodiments, the stopping module 503 is further configured to:
if no license plate recognized from the image matches the license plate of the target vehicle, determining, for each vehicle in the image whose license plate is not recognized, the similarity between the extracted image features of that vehicle and the stored image features of the target vehicle;
and if the similarity is higher than the set value, determining that the target vehicle moves to the second image acquisition area.
The division of the modules in the embodiments of the present application is schematic and is only a division by logical function; in actual implementation, other division manners are possible. In addition, the functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module. The coupling between the modules may be through interfaces, which are typically electrical communication interfaces, although mechanical or other forms of interface are not excluded. Accordingly, modules illustrated as separate components may or may not be physically separate, and may be located in one place or distributed in different locations on the same or different devices. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Having described the vehicle tracking method and apparatus of the exemplary embodiments of the present application, next, an electronic device according to another exemplary embodiment of the present application is described.
An electronic device 130 implemented according to this embodiment of the present application is described below with reference to fig. 6. The electronic device 130 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the electronic device 130 is represented in the form of a general electronic device. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that couples various system components including the memory 132 and the processor 131.
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 130, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the electronic device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 136. As shown, the network adapter 136 communicates with other modules for the electronic device 130 over the bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment, there is also provided a storage medium in which a computer program is stored, the computer program being executable by a processor of an electronic device to perform the vehicle tracking method described above. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, the electronic device of the present application may include at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, may cause the at least one processor to perform the steps of any of the vehicle tracking methods provided by the embodiments of the present application.
In an exemplary embodiment, a computer program product is also provided, which, when executed by an electronic device, enables the electronic device to implement any of the exemplary methods provided herein.
Also, a computer program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for vehicle tracking in the embodiments of the present application may be a CD-ROM and include program code and may be run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing device may be connected to the user's computing device over any kind of network, such as a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although in the above detailed description several units or sub-units of the apparatus are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A vehicle tracking method, comprising:
when a target vehicle moves from a first image acquisition area of a first camera to an image acquisition blind area between the first camera and a second camera, acquiring characteristic parameters for representing the movement characteristics of the target vehicle;
predicting track points of the target vehicle in the image acquisition blind area according to the characteristic parameters and the rule that the target vehicle keeps the motion characteristics;
and if it is determined that the target vehicle has moved to the second image acquisition area of the second camera, stopping predicting the track point of the target vehicle.
2. The method according to claim 1, wherein predicting the track point of the target vehicle in the image acquisition blind area according to the characteristic parameters and the rule that the target vehicle keeps the motion characteristics comprises:
determining the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period based on the characteristic parameters;
and each time a track point generation period is reached, performing offset processing on the longitude and latitude of the previous track point according to the longitude motion amplitude and the latitude motion amplitude to obtain the longitude and latitude of the current track point, wherein the previous track point is initially the last track point of the target vehicle determined in the first image acquisition area.
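By way of illustration only, the offset process of claim 2 can be sketched in a few lines of Python. The function name predict_blind_area_track, the in_blind_area callback, and the max_points cap are assumptions of this sketch, not elements of the claims; a live system would drive each iteration from a timer firing once per track point generation period.

```python
def predict_blind_area_track(last_point, amplitude, in_blind_area, max_points=1000):
    """Sketch of claim 2: offset the previous track point by one period's
    motion amplitude, once per generation period, until the prediction
    leaves the blind area. Points are (longitude, latitude) tuples."""
    points = []
    point = last_point  # last track point determined in the first capture area
    for _ in range(max_points):
        # Offset the previous point by the per-period motion amplitude.
        point = (point[0] + amplitude[0], point[1] + amplitude[1])
        if not in_blind_area(point):
            break  # prediction has left the blind area (compare claim 5)
        points.append(point)
    return points
```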
3. The method according to claim 2, wherein the characteristic parameters comprise a velocity vector of the target vehicle when passing the last track point;
the determining, based on the characteristic parameters, the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period comprises:
determining, based on the velocity vector, a longitude speed of the target vehicle in the longitude direction and a latitude speed of the target vehicle in the latitude direction;
and determining the longitude motion amplitude based on the longitude speed and the track point generation period, and determining the latitude motion amplitude based on the latitude speed and the track point generation period.
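As a hedged sketch of claim 3, assuming the velocity vector at the last track point is available as a ground speed plus a compass heading, the longitude and latitude speeds follow by decomposition, and the per-period amplitudes by converting metres to degrees. The equirectangular conversion and the constant EARTH_RADIUS_M are assumptions of this illustration, not something the claim prescribes.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, an assumed constant

def amplitude_from_velocity(speed_mps, heading_deg, lat_deg, period_s):
    """Per-period longitude/latitude motion amplitude (in degrees) from a
    ground speed (m/s) and heading (degrees clockwise from north)."""
    # Decompose the velocity vector into east (longitude) and north
    # (latitude) components.
    v_lon = speed_mps * math.sin(math.radians(heading_deg))
    v_lat = speed_mps * math.cos(math.radians(heading_deg))
    # Distance covered in one generation period, converted from metres to
    # degrees; one degree of longitude shrinks with the cosine of latitude.
    dlat = math.degrees(v_lat * period_s / EARTH_RADIUS_M)
    dlon = math.degrees(v_lon * period_s /
                        (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return dlon, dlat
```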
4. The method according to claim 2, wherein the characteristic parameters comprise the longitude and latitude of the last two track points of the target vehicle determined in the first image acquisition area;
the determining, based on the characteristic parameters, the longitude motion amplitude and the latitude motion amplitude of the target vehicle in a track point generation period comprises:
determining the longitude difference between the last two track points as the longitude motion amplitude, and determining the latitude difference between the last two track points as the latitude motion amplitude.
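The variant of claim 4 needs no speed estimate at all: the per-period amplitude is simply the coordinate difference of the last two observed track points. A minimal sketch, with hypothetical helper names:

```python
def amplitude_from_last_two(second_last, last):
    """second_last, last: (longitude, latitude) of the last two track
    points determined in the first image acquisition area, in time order.
    Returns the per-period (longitude, latitude) motion amplitude."""
    return (last[0] - second_last[0],   # longitude motion amplitude
            last[1] - second_last[1])   # latitude motion amplitude
```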
5. The method according to any one of claims 2-4, further comprising, after obtaining the longitude and latitude of the current track point:
if it is determined that the longitude and latitude of the current track point are not located in the image acquisition blind area, stopping predicting the track point of the target vehicle.
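Claim 5 presupposes a test of whether a predicted point still lies in the blind area. Assuming the blind area is modelled as a longitude/latitude polygon, which the claims do not prescribe, a standard ray-casting test is one option:

```python
def in_blind_area(point, polygon):
    """Ray-casting point-in-polygon test. point is (longitude, latitude);
    polygon is a list of (longitude, latitude) vertices describing the
    blind area. Counts how many edges a horizontal ray from the point
    crosses; an odd count means the point is inside."""
    lon, lat = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Edge straddles the point's latitude, and the crossing lies east
        # of the point (short-circuit avoids division by zero).
        if (yi > lat) != (yj > lat) and \
           lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside
```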
6. The method according to claim 1, wherein whether the target vehicle has moved to the second image acquisition area is determined according to the following steps:
performing vehicle recognition on the image acquired by the second camera;
for each recognized vehicle, if the license plate of the vehicle is recognized from the image, matching the license plate of the vehicle with the license plate of the target vehicle;
and when the preset matching condition is determined to be met, determining that the target vehicle moves to the second image acquisition area.
7. The method according to claim 6, wherein the preset matching condition is that the license plate of the vehicle and the license plate of the target vehicle contain M identical characters, wherein M is not less than a preset value and is less than the total number of characters contained in the license plate.
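One possible reading of claim 7 is a positional character comparison that tolerates a small number of OCR misreads. In the sketch below, the preset value of 5 and the assumption of equal-length plates are illustrative choices, not requirements of the claim:

```python
def plates_match(candidate, target, preset=5):
    """Count positions at which the two plate strings carry the same
    character and test claim 7's condition: preset <= M < total length.
    An exact match (M equal to the total) is assumed to be handled by the
    ordinary comparison step of claim 6."""
    if len(candidate) != len(target):
        return False
    m = sum(a == b for a, b in zip(candidate, target))
    return preset <= m < len(target)
```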
8. The method according to claim 6 or 7, further comprising:
if no license plate recognized from the image matches the license plate of the target vehicle, determining, for each vehicle whose license plate is not recognized in the image, the similarity between the image features extracted for the vehicle and the stored image features of the target vehicle;
and if the similarity is higher than the set value, determining that the target vehicle moves to the second image acquisition area.
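For the appearance-based fallback of claim 8, cosine similarity between re-identification feature vectors is a common choice; the set value of 0.85 below is an assumed threshold, and the feature extractor itself is outside the scope of this sketch:

```python
import numpy as np

def is_target_vehicle(candidate_feat, target_feat, set_value=0.85):
    """candidate_feat: features extracted for a vehicle whose plate was
    not recognized; target_feat: stored features of the target vehicle.
    Returns True when the cosine similarity exceeds the set value."""
    a = np.asarray(candidate_feat, dtype=np.float64)
    b = np.asarray(target_feat, dtype=np.float64)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity > set_value
```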
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
10. A storage medium storing a computer program which, when executed by a processor of an electronic device, enables the electronic device to perform the method according to any one of claims 1-8.
CN202210848108.0A 2022-07-19 2022-07-19 Vehicle tracking method and device, electronic equipment and storage medium Pending CN115294169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210848108.0A CN115294169A (en) 2022-07-19 2022-07-19 Vehicle tracking method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210848108.0A CN115294169A (en) 2022-07-19 2022-07-19 Vehicle tracking method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115294169A (en) 2022-11-04

Family

ID=83823301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210848108.0A Pending CN115294169A (en) 2022-07-19 2022-07-19 Vehicle tracking method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115294169A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315574A (en) * 2023-09-20 2023-12-29 北京卓视智通科技有限责任公司 Blind area track completion method, blind area track completion system, computer equipment and storage medium
CN117315574B (en) * 2023-09-20 2024-06-07 北京卓视智通科技有限责任公司 Blind area track completion method, blind area track completion system, computer equipment and storage medium
CN117292337A (en) * 2023-11-24 2023-12-26 中国科学院空天信息创新研究院 Remote sensing image target detection method

Similar Documents

Publication Publication Date Title
CN115294169A (en) Vehicle tracking method and device, electronic equipment and storage medium
CN102902955B Intelligent analysis method and system for vehicle behavior
Bui et al. A vehicle counts by class framework using distinguished regions tracking at multiple intersections
CN112836683B (en) License plate recognition method, device, equipment and medium for portable camera equipment
CN113537362A (en) Perception fusion method, device, equipment and medium based on vehicle-road cooperation
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN112085953A (en) Traffic command method, device and equipment
Bui et al. Video-based traffic flow analysis for turning volume estimation at signalized intersections
CN115063454A (en) Multi-target tracking matching method, device, terminal and storage medium
Wang et al. A semi-automatic video labeling tool for autonomous driving based on multi-object detector and tracker
CN114360261B (en) Vehicle reverse running identification method and device, big data analysis platform and medium
Salma et al. Smart parking guidance system using 360o camera and haar-cascade classifier on iot system
CN114627409A (en) Method and device for detecting abnormal lane change of vehicle
Jiang et al. Surveillance from above: A detection-and-prediction based multiple target tracking method on aerial videos
US11417114B2 (en) Method and apparatus for processing information
Espino et al. Rail and turnout detection using gradient information and template matching
CN112541457B (en) Searching method and related device for monitoring node
CN117523914A (en) Collision early warning method, device, equipment, readable storage medium and program product
CN113744302B (en) Dynamic target behavior prediction method and system
Glasl et al. Video based traffic congestion prediction on an embedded system
CN109740518B (en) Method and device for determining object in video
Abbas et al. Vision based intelligent traffic light management system using Faster R-CNN
CN116543356B (en) Track determination method, track determination equipment and track determination medium
CN117494029B (en) Road casting event identification method and device
Singh et al. Improved YOLOv5l for vehicle detection: an application to estimating traffic density and identifying over speeding vehicles on highway scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination