CN117011341A - Vehicle track detection method and system based on target tracking - Google Patents


Info

Publication number
CN117011341A
Authority
CN
China
Prior art keywords
target
vehicle
frame
sequence
kth
Prior art date
Legal status
Pending
Application number
CN202310999420.4A
Other languages
Chinese (zh)
Inventor
闫军
霍建杰
Current Assignee
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd filed Critical Smart Intercommunication Technology Co ltd
Priority to CN202310999420.4A priority Critical patent/CN117011341A/en
Publication of CN117011341A publication Critical patent/CN117011341A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a vehicle track detection method and system based on target tracking, relating to the technical field of track detection. The method comprises the following steps: acquiring real-time images of a target moving vehicle to obtain a target image sequence; performing target detection to obtain a vehicle target sequence; performing key point matching on the vehicle target sequence with the first-frame vehicle target as a reference to generate a target tracker; applying the target tracker to the vehicle target sequence, predicting and updating the target position and state, and obtaining a target tracking result; predicting the motion state of the vehicle target by using historical information and the observation result of the kth frame to obtain vehicle position information; and connecting the vehicle position information to generate a vehicle track. The application addresses the technical problems that traditional vehicle detection methods can generally only detect a vehicle in a single image and cannot provide its motion information, so that temporally continuous positioning and motion-state analysis of the vehicle perform poorly and the appearance and position of a target are difficult to keep consistent.

Description

Vehicle track detection method and system based on target tracking
Technical Field
The application relates to the technical field of track detection, in particular to a vehicle track detection method and system based on target tracking.
Background
In the past, traditional computer-vision methods have been widely used for vehicle track detection. These methods generally rely on manually designed features or simple motion models, such as the frame-difference method, the optical-flow method, and background modeling. However, such methods are often not robust to occlusion, illumination change, target deformation, and other factors in complex scenes.
Traditional vehicle detection methods can only detect a vehicle in a single image and cannot provide its motion information. Moreover, multiple targets may occlude one another, deform, or change scale, making it difficult to keep a target's appearance and position consistent, so temporally continuous positioning and motion-state analysis of the vehicle perform poorly. There is therefore room for improvement in vehicle track detection.
Disclosure of Invention
The application provides a vehicle track detection method and system based on target tracking, aiming to solve the technical problems that traditional vehicle detection methods can only detect a vehicle in a single image and cannot provide its motion information, and that multiple targets may occlude one another, deform, or change scale, making it difficult to keep a target's appearance and position consistent, so that temporally continuous positioning and motion-state analysis of the vehicle perform poorly.
In view of the above problems, the present application provides a method and a system for detecting a vehicle track based on target tracking.
In a first aspect of the disclosure, a vehicle track detection method based on target tracking is provided, the method comprising: acquiring real-time images of a target moving vehicle through an image acquisition device to obtain a target image sequence; performing target detection on the target image sequence, identifying the vehicle target in each frame of image, and obtaining a vehicle target sequence; extracting the first-frame vehicle target from the vehicle target sequence, performing key point matching on the vehicle target sequence with the first-frame vehicle target as a reference, and generating a target tracker, wherein the target tracker is associated with an identifier of the vehicle target sequence; applying the target tracker to the vehicle target sequence, and using a tracking algorithm to predict and update the target position and state according to the features of the kth frame and the tracking result of the (k-1)th frame, to obtain a target tracking result; based on the target tracking result, predicting the motion state of the vehicle target by using historical information and the observation result of the kth frame, to obtain vehicle position information; and connecting the vehicle position information of the vehicle target in consecutive frames according to the identifier to generate a vehicle track.
In another aspect of the disclosure, a vehicle track detection system based on target tracking is provided, the system being used in the above method and comprising: a real-time image acquisition module, configured to acquire real-time images of a target moving vehicle through an image acquisition device to obtain a target image sequence; a target detection module, configured to perform target detection on the target image sequence, identify the vehicle target in each frame of image, and obtain a vehicle target sequence; a key point matching module, configured to extract the first-frame vehicle target from the vehicle target sequence, perform key point matching on the vehicle target sequence with the first-frame vehicle target as a reference, and generate a target tracker, wherein the target tracker is associated with an identifier of the vehicle target sequence; a target position updating module, configured to apply the target tracker to the vehicle target sequence and use a tracking algorithm to predict and update the target position and state according to the features of the kth frame and the tracking result of the (k-1)th frame, to obtain a target tracking result; a motion state prediction module, configured to predict the motion state of the vehicle target by using historical information and the observation result of the kth frame based on the target tracking result, to obtain vehicle position information; and a vehicle track generation module, configured to connect the vehicle position information of the vehicle target in consecutive frames according to the identifier to generate a vehicle track.
One or more technical schemes provided by the application have at least the following technical effects or advantages:
the method acquires real-time images of a target moving vehicle to obtain a target image sequence; performs target detection to obtain a vehicle target sequence; performs key point matching on the vehicle target sequence with the first-frame vehicle target as a reference to generate a target tracker; applies the target tracker to the vehicle target sequence, predicting and updating the target position and state to obtain a target tracking result; predicts the motion state of the vehicle target by using historical information and the observation result of the kth frame to obtain vehicle position information; and connects the vehicle position information to generate a vehicle track. This solves the technical problems that traditional vehicle detection methods can generally only detect a vehicle in a single image and cannot provide its motion information, and that mutual occlusion, deformation, and scale change among multiple targets make it difficult to keep a target's appearance and position consistent, so that temporally continuous positioning and motion-state analysis of the vehicle perform poorly. The method achieves continuous tracking of targets, temporal and spatial consistency of associated targets, and handling of multiple targets, thereby improving the accuracy with which vehicle positions and motion tracks are described.
The foregoing is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the content of the specification, and in order to make the above and other objects, features, and advantages of the present application more readily apparent, detailed embodiments are set forth below.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting a vehicle track based on target tracking according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a possible flow for obtaining a vehicle target sequence in a vehicle track detection method based on target tracking according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a possible flow chart of generating a target tracker in a method for detecting a vehicle track based on target tracking according to an embodiment of the present application;
fig. 4 is a schematic diagram of a possible structure of a vehicle track detection system based on object tracking according to an embodiment of the present application.
Description of reference numerals: real-time image acquisition module 10; target detection module 20; key point matching module 30; target position updating module 40; motion state prediction module 50; vehicle track generation module 60.
Detailed Description
The embodiment of the application solves the technical problems that the traditional vehicle detection method can generally only detect a vehicle in a single image and cannot provide its motion information, and that mutual occlusion, deformation, and scale change among multiple targets make it difficult to keep a target's appearance and position consistent, so that temporally continuous positioning and motion-state analysis of the vehicle perform poorly. It achieves continuous tracking of targets, temporal and spatial association of targets, and handling of multiple targets, thereby improving the accuracy with which vehicle positions and motion trajectories are described.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present application provides a method for detecting a vehicle track based on target tracking, the method including:
step S100: the method comprises the steps that real-time image acquisition is carried out on a target moving vehicle through image acquisition equipment, and a target image sequence is obtained;
specifically, a proper image acquisition device, such as a camera or a video acquisition card, is selected according to requirements, and is installed at a proper position, such as a region near a traffic light, an intersection or a monitoring camera, so that the acquisition device can cover the movement track of a target vehicle. And starting an image acquisition device to acquire images of the target vehicle in real time, wherein the image acquisition device generates a series of continuous image frames every second to form a target image sequence. The acquired images are transmitted in real time to a storage device or computer for processing and analysis by corresponding transmission techniques, such as network transmission.
Step S200: performing target detection on the target image sequence, identifying a vehicle target in each frame of image, and obtaining a vehicle target sequence;
further, as shown in fig. 2, step S200 of the present application includes:
step S210: inputting the target image sequence into a target detection model, and outputting a candidate target frame and a confidence score;
step S220: for each candidate target frame, carrying out maximum value screening according to the confidence score, and reserving a target frame with high confidence score;
step S230: and acquiring a vehicle target corresponding to the target frame and position information thereof, and generating the vehicle target sequence.
Specifically, a target detection model is selected, such as Faster R-CNN (Faster Region-based Convolutional Neural Network, a classical two-stage target detection model consisting of a region proposal network and a region classification network). Trained on the basis of deep learning and neural networks, the model can effectively detect and locate multiple classes of targets, including vehicles. The target image is input into the target detection model, which performs forward inference on the image and generates candidate target frames with corresponding confidence scores; a higher confidence score means the model is more certain that the target frame contains a target. From the model output, the candidate target frames and their confidence scores in each image are obtained.
A predefined threshold is obtained; its setting is determined by the specific requirements and application scene. A higher threshold ensures the accuracy of the retained target frames, while a lower threshold increases their number. For each candidate target frame, the corresponding confidence score is compared with the threshold: if the score is greater than or equal to the threshold, the candidate is treated as a high-confidence target frame and retained; if the score is smaller than the threshold, the candidate is treated as a low-confidence target frame and discarded or ignored. These steps are repeated until all candidate target frames have been traversed and all high-confidence target frames have been screened out.
For each retained high-confidence target frame, the required information is extracted, such as the target category and position information. The target category can be identified as the vehicle category or another required category; the position information is extracted from the retained target frame, for example as bounding-box coordinates such as the upper-left and lower-right corners, which determine the position and size of the vehicle in the image. The extracted target categories and position information are sorted in time order to form the vehicle target sequence.
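As an illustration of steps S210–S230, the following dependency-free sketch combines the confidence-threshold filtering with a greedy non-maximum-suppression pass so that heavily overlapping boxes collapse to the highest-scoring one. The box format (x1, y1, x2, y2), the thresholds, and the function names are assumptions made for the example, not taken from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(boxes, scores, conf_thresh=0.5, iou_thresh=0.5):
    # 1) discard candidates below the confidence threshold
    kept = [(b, s) for b, s in zip(boxes, scores) if s >= conf_thresh]
    # 2) greedy NMS: repeatedly keep the best box, drop heavy overlaps
    kept.sort(key=lambda bs: bs[1], reverse=True)
    result = []
    while kept:
        best = kept.pop(0)
        result.append(best)
        kept = [(b, s) for b, s in kept if iou(best[0], b) < iou_thresh]
    return result
```

The retained `(box, score)` pairs, sorted by frame time, would then form the vehicle target sequence described above.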
Step S300: extracting a first frame of vehicle target according to the vehicle target sequence, and performing key point matching on the vehicle target sequence by taking the first frame of vehicle target as a reference to generate a target tracker, wherein the target tracker is associated with an identifier of the vehicle target sequence;
further, as shown in fig. 3, step S300 of the present application includes:
step S310: extracting features of the vehicle target sequence to obtain a plurality of key point descriptors;
step S320: taking the first frame key point descriptor as a reference, and carrying out ratio test on a plurality of key point descriptors in the subsequent frames;
step S330: distributing an identifier for each vehicle target, and sorting the ratio test results from large to small according to the identifiers to generate a target tracker;
step S340: and associating the generated target tracker with a corresponding vehicle target sequence.
Specifically, the vehicle target of the first frame is obtained from the vehicle target sequence. For the vehicle target in each frame, a feature extraction algorithm such as SIFT (scale-invariant feature transform) is applied: by detecting stable key points in the image, it extracts feature descriptors that are robust to scale, rotation, illumination, and other factors. Concretely, the algorithm detects positions in the image with strong structure or texture; these positions are the key points, and for each key point a feature descriptor of the surrounding region is extracted.
The key point descriptors in the first frame are taken as the reference and compared with the key point descriptors of the vehicle targets in subsequent frames. Different measures, such as Euclidean distance or a similarity metric, can be used for the comparison: the key point descriptors of each subsequent frame are compared with those of the first frame, and the distance or similarity score between them is calculated.
Each vehicle target is assigned a unique identifier, which may be represented by an integer or a character string, and the ratio test results are associated with the vehicle target identifiers to form a structure containing each identifier and its ratio test result. The generated target tracker is associated, via the identifiers, with the corresponding targets in the vehicle target sequence; the relationship between the target tracker and the vehicle target sequence can be stored in a data structure such as a dictionary, associative array, or list. The target tracker and the vehicle target sequence are matched and associated in time order, so that the target state is tracked and updated in the correct sequence.
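The ratio test of step S320 can be sketched as follows. Descriptors are modelled as plain numeric tuples compared by Euclidean distance, and a match is kept only when the nearest first-frame descriptor is clearly closer than the second nearest; the 0.75 ratio is a conventional choice (Lowe's ratio test), not a value specified by the patent.

```python
import math

def euclidean(d1, d2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def ratio_test_matches(ref_descriptors, frame_descriptors, ratio=0.75):
    """Match each later-frame descriptor to a first-frame descriptor.

    A match (frame_index, ref_index) is accepted only when the closest
    reference descriptor is clearly closer than the second closest.
    """
    matches = []
    for j, d in enumerate(frame_descriptors):
        dists = sorted((euclidean(d, r), i) for i, r in enumerate(ref_descriptors))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((j, dists[0][1]))
    return matches
```

Ambiguous descriptors, equally close to two reference points, are rejected, which is what keeps the first-frame reference reliable across the sequence.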
Step S400: applying the target tracker to the vehicle target sequence, and using a tracking algorithm to predict and update the target position and state according to the features of the kth frame and the tracking result of the (k-1)th frame, to obtain a target tracking result;
further, step S400 of the present application includes:
step S410: extracting features of each target detected in the kth frame image to obtain kth frame features;
step S420: using the tracking result of the k-1 frame to establish initial target association with the k frame characteristics;
step S430: based on the initial target association, predicting an untracked target in the kth frame according to the target state of the tracking result of the kth-1 frame to obtain an initial prediction result;
step S440: and carrying out similarity matching on the kth frame characteristic and the kth-1 frame characteristic, carrying out state updating on the initial prediction result according to the matching result, and taking the updated target state as a tracking result of the kth frame.
Specifically, for each target detected in the kth frame image, a feature extraction algorithm is used to extract the keypoints and their corresponding keypoint descriptors, and keypoints with rich texture or structure information are found in the target region.
Based on the tracking result of the (k-1)th frame, matching with the kth frame features establishes the initial target association. Similarity measures such as similarity scores or intersection-over-union are used to quantify the degree of matching between features: the features of each tracked target in the (k-1)th frame are compared with all features in the kth frame, and the best match is selected according to the similarity score. The resulting initial association serves as the starting point of the subsequent tracking algorithm for predicting and updating the position and state information of each target.
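A minimal sketch of the initial association in step S420, assuming bounding-box intersection-over-union as the similarity measure (one of the measures the text names). Greedy best-first assignment is used here as an illustrative stand-in for whatever association strategy an implementation actually adopts.

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(prev_boxes, curr_boxes, min_iou=0.3):
    """Return {prev_index: curr_index} for the best matches above min_iou."""
    pairs = sorted(
        ((box_iou(p, c), i, j)
         for i, p in enumerate(prev_boxes)
         for j, c in enumerate(curr_boxes)),
        reverse=True)
    used_prev, used_curr, links = set(), set(), {}
    for score, i, j in pairs:           # best-scoring pairs claimed first
        if score < min_iou:
            break
        if i not in used_prev and j not in used_curr:
            links[i] = j
            used_prev.add(i)
            used_curr.add(j)
    return links
```

Targets from frame k-1 left unmatched by `associate` are exactly the "untracked" targets that step S430 hands to the motion-model prediction.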
For each target that is not yet tracked in the kth frame, prediction is performed using the target state information of the (k-1)th frame: based on the position, velocity, acceleration, and other information of the target in the (k-1)th frame, its position and state are estimated with a motion model, for example Kalman filtering.
Similarity matching is then performed between the feature descriptors of the untracked targets in the kth frame and those of the tracked targets in the (k-1)th frame. According to the matching result, the initial prediction is updated, for example by correcting target states such as position, velocity, and size, and the updated target state is taken as the target tracking result of the kth frame. In this way, the tracking result of the (k-1)th frame and the feature information of the kth frame are combined to update the position and state of each target more accurately and complete the target tracking task.
Further, step S440 of the present application includes:
step S441: setting a similarity threshold, and taking a matching result meeting the similarity threshold as a matching success target;
step S442: obtaining a kth frame target observation value by utilizing the matching result;
step S443: comparing the kth frame target observation value with the initial prediction result, and updating the initial prediction result according to a Kalman gain;
step S444: and correcting the target position and speed successfully matched according to the updating result, and outputting the corrected target position and speed as a tracking result of the kth frame.
Specifically, a similarity threshold is set during similarity matching. The threshold screens out the matching results whose similarity meets the requirement: if the similarity of a matching result is above the threshold, it is regarded as a successfully matched target; results that do not meet the requirement are filtered out, and only matches with sufficiently high similarity are retained as successfully matched targets.
And extracting information such as feature points, areas or descriptors corresponding to the successfully matched targets from the features of the kth frame according to the successful result of the similarity matching, wherein the extracted features are regarded as observed values of the targets in the kth frame and can be used for further analyzing and updating the target states.
The target observation in the kth frame is compared with the earlier initial prediction result, and a filtering method such as Kalman filtering fuses or corrects the state estimate by weighing the accuracy of the observation against the reliability of the prediction. During the update, state parameters such as the position and velocity of each successfully matched target are corrected according to the update result obtained by Kalman filtering or a similar method, and the corrected target position and velocity are output as the tracking result of the kth frame for further analysis, application, or display.
These steps realize the key links of the target tracking algorithm: targets are matched accurately across frames, the corresponding target observations are obtained, and the initial prediction is updated and corrected according to the Kalman gain, so that states such as target position and velocity are estimated more accurately and the corrected values are output as the tracking result of the kth frame.
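The Kalman prediction and gain-based update of steps S441–S444 can be illustrated with a one-dimensional constant-velocity filter. The matrix algebra for state (position, velocity) and a position-only observation is written out by hand to keep the sketch dependency-free; the noise parameters are arbitrary illustrative values, not the patent's.

```python
class Kalman1D:
    """Constant-velocity Kalman filter; state x = [position, velocity]."""

    def __init__(self, pos, vel=0.0, p=1.0, q=0.01, r=1.0):
        self.x = [pos, vel]                 # state estimate
        self.P = [[p, 0.0], [0.0, p]]       # estimate covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        # x = F x  with F = [[1, dt], [0, 1]]
        x, v = self.x
        self.x = [x + v * dt, v]
        # P = F P F^T + Q  (Q = q * I)
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        # Kalman gain K for observation matrix H = [1, 0]
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        resid = z - self.x[0]               # innovation
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        # P = (I - K H) P
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        return self.x[0]
```

The gain `k0` is exactly the weighing described above: a large prediction covariance pulls the estimate toward the observation, and a large measurement noise `r` pulls it toward the prediction. In a tracker, one such filter would run per coordinate per target.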
Further, the step S400 of the present application further includes:
step S400-1: performing object detection in each frame to determine the appearing and disappearing objects;
step S400-2: for each detected object, extracting a corresponding feature and computing a feature descriptor;
step S400-3: based on the feature descriptor, matching a target in a j-th frame with a target in a j-1-th frame by using a data association method to obtain a matching result;
step S400-4: and correcting the initial prediction result according to the confidence coefficient of the matching through the matching result.
Specifically, for each frame of image, an object detection operation is performed: an object detection algorithm, such as edge detection, identifies and locates the objects, and from its output the newly appearing objects and the disappeared objects are determined.
For each detected target, capturing local information of the target through a feature extraction algorithm, extracting local features with differentiation from the image, and calculating feature descriptors, wherein the feature descriptors are a way of digitizing the features and can describe the shape, texture, edge and other information of the target.
Using the feature descriptors, the targets in the jth frame are matched with the targets in the (j-1)th frame through a data association method such as nearest-neighbour matching, so as to find the optimal target correspondence. The initial prediction result is then corrected using the confidence of the matching result, such as the matching score or distance: based on the target correspondences in the matching result, suitable weights and a correction strategy are defined to update the initially predicted state parameters, such as target position and velocity, thereby improving the accuracy and robustness of target tracking.
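Steps S400-3 and S400-4 might be sketched as below: nearest-neighbour matching on descriptors, followed by blending the predicted position toward the matched observation in proportion to a confidence derived from the descriptor distance. The exponential confidence mapping and the `scale` parameter are illustrative assumptions, not the patent's formula.

```python
import math

def nearest_neighbor(desc, candidates):
    """Return (index, distance) of the candidate descriptor closest to desc."""
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(candidates):
        d = math.dist(desc, c)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

def corrected_position(predicted, observed, distance, scale=10.0):
    """Blend prediction toward observation by match confidence in (0, 1]."""
    conf = math.exp(-distance / scale)   # distance 0 -> confidence 1
    return tuple(conf * o + (1 - conf) * p
                 for p, o in zip(predicted, observed))
```

A perfect descriptor match (distance 0) replaces the prediction with the observation outright; a poor match leaves the prediction almost untouched, which is the weighting behaviour the text describes.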
Step S500: based on the target tracking result, predicting the motion state of the vehicle target by utilizing the history information and the observation result of the kth frame to acquire vehicle position information;
specifically, the object tracking results and observation data including the first several frames, for example, the first n frames, are summarized, and a motion model of the vehicle object is derived as history information based on the history information using an appropriate modeling method such as linear motion, nonlinear motion, acceleration, and the like, and this model can be used to describe the position and state change of the object over time. Based on the target motion model and the previous target state, a filtering method such as Kalman filtering or particle filtering is utilized to predict the state, and a predicted value of the position and the state of the target in the kth frame is obtained according to the current observation result and the historical information. The position information of the vehicle, such as coordinates, directions, etc., is extracted from the state prediction result, and the accurate position of the vehicle is obtained by extracting appropriate parameter values from the state vector or retrieving the position information.
Step S600: and connecting the vehicle position information of the vehicle target in the continuous frames according to the identifier to generate a vehicle track.
Specifically, in each frame, the position information related to the vehicle target, such as its coordinates, is recorded together with the corresponding frame number, so that the positions of the vehicle target across frames are recorded numerically. Based on the unique identifier of each vehicle target, the position information in consecutive frames is connected: by matching identifiers, the positions of the same vehicle in different frames are associated and merged. From the linked position information, the trajectory of the vehicle is generated, a sequence describing the vehicle's motion path over time.
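Step S600 amounts to grouping the per-frame positions by identifier and ordering them by frame number, as in the sketch below; the `(identifier, frame, position)` record format is a hypothetical simplification of whatever an implementation stores.

```python
def build_tracks(records):
    """Group detections into per-vehicle trajectories.

    records: iterable of (identifier, frame_number, (x, y)).
    Returns {identifier: [position, ...]} ordered by frame number.
    """
    tracks = {}
    for ident, frame, pos in records:
        tracks.setdefault(ident, []).append((frame, pos))
    return {ident: [p for _, p in sorted(pts)]
            for ident, pts in tracks.items()}
```

Each value in the returned dictionary is the vehicle track for one identifier, ready for display or further analysis.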
Through the above steps, the position information of the vehicle target in consecutive frames is connected according to the unique identifier and the track of the vehicle is finally generated, so that the motion of the vehicle can be tracked effectively, and visualized track information is provided for analyzing, predicting, or displaying the behavior of the vehicle in the scene.
In summary, the vehicle track detection method and system based on target tracking provided by the embodiment of the application have the following technical effects:
the method comprises the steps of acquiring a real-time image of a target moving vehicle, acquiring a target image sequence, carrying out target detection, acquiring a vehicle target sequence, carrying out key point matching on the vehicle target sequence by taking a first frame of vehicle target as a reference, generating a target tracker, applying the target tracker to the vehicle target sequence, carrying out target position and state prediction and updating, acquiring a target tracking result, carrying out motion state prediction on the vehicle target by utilizing history information and an observation result of a kth frame, acquiring vehicle position information, connecting the vehicle position information, and generating a vehicle track.
The method solves the technical problems that a traditional vehicle detection method can generally detect vehicles only in a single image and cannot provide motion information of the vehicle, and that occlusion, deformation, and scale changes among multiple targets make it difficult to keep the appearance and position of each target consistent, so that continuous localization of the vehicle over time and analysis of its motion state are poor. It realizes continuous tracking of targets, temporal and spatial consistency of associated targets, and handling of multiple targets, thereby achieving the technical effect of improving the accuracy with which vehicle positions and motion tracks are described.
Example two
Based on the same inventive concept as the vehicle track detection method based on target tracking in the foregoing embodiments, as shown in fig. 4, the present application provides a vehicle track detection system based on target tracking, the system comprising:
the real-time image acquisition module 10 is used for acquiring real-time images of the target moving vehicle through the image acquisition equipment to acquire a target image sequence;
the target detection module 20 is used for performing target detection on the target image sequence, identifying a vehicle target in each frame of image and acquiring a vehicle target sequence;
the key point matching module 30 is configured to extract a first frame of vehicle target according to the vehicle target sequence, perform key point matching on the vehicle target sequence based on the first frame of vehicle target, and generate a target tracker, where the target tracker is associated with an identifier of the vehicle target sequence;
the target position updating module 40 is configured to apply the target tracker to the vehicle target sequence, and predict and update the target position and state according to the characteristics of the kth frame and the tracking result of the kth-1 frame by using a tracking algorithm, so as to obtain a target tracking result;
the motion state prediction module 50 is configured to predict a motion state of a vehicle target based on the target tracking result and using history information and an observation result of a kth frame, to obtain vehicle position information;
and a vehicle track generation module 60, wherein the vehicle track generation module 60 is used for connecting the vehicle position information of the vehicle target in continuous frames according to the identifier to generate a vehicle track.
Further, the system further comprises:
the candidate target frame acquisition module is used for inputting the target image sequence into a target detection model and outputting a candidate target frame and a confidence score;
the maximum value screening module is used for carrying out maximum value screening on each candidate target frame according to the confidence score, retaining the target frames with high confidence scores;
and the target sequence generation module is used for acquiring the vehicle target corresponding to the target frame and the position information thereof and generating the vehicle target sequence.
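The screening performed by the maximum value screening module can be sketched as follows, under the common assumption that it takes the form of non-maximum suppression (NMS) over overlapping candidate boxes; the box format (x1, y1, x2, y2) and the IoU threshold are illustrative assumptions, not stated in the text.

```python
# Sketch: confidence-based screening of candidate target frames, assuming
# standard non-maximum suppression. Values below are illustrative only.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop candidates that overlap it heavily."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the two distinct vehicles survive
```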
Further, the system further comprises:
the feature extraction module is used for extracting features of the vehicle target sequence to obtain a plurality of key point descriptors;
the ratio test module is used for carrying out ratio test on a plurality of key point descriptors in the subsequent frames by taking the key point descriptors of the first frame as a reference;
the sequencing module is used for assigning an identifier to each vehicle target, sorting the ratio test results in descending order according to the identifiers, and generating a target tracker;
and the association module is used for associating the generated target tracker with the corresponding vehicle target sequence.
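The ratio test applied by these modules follows the pattern of Lowe's ratio test: a first-frame key point descriptor is matched to a later frame only when its nearest neighbour is clearly closer than the second-nearest one. The sketch below is a minimal illustration assuming Euclidean descriptor distances and a 0.75 threshold, both of which are assumptions.

```python
import numpy as np

# Sketch of the ratio test on key point descriptors. The descriptor values
# and the 0.75 ratio threshold are illustrative assumptions.

def ratio_test_match(desc_ref, desc_next, ratio=0.75):
    """Return (ref_index, next_index) pairs that pass the ratio test."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_next - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:   # unambiguous match only
            matches.append((i, int(order[0])))
    return matches

desc_ref = np.array([[0.0, 0.0], [5.0, 5.0]])       # first-frame descriptors
desc_next = np.array([[0.1, 0.0], [5.1, 5.0], [9.0, 9.0]])  # later frame
print(ratio_test_match(desc_ref, desc_next))
```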
Further, the system further comprises:
the target feature extraction module is used for extracting features of each target detected in the kth frame image to obtain kth frame features;
the initial target association establishing module is used for establishing initial target association with the kth frame characteristic by using the tracking result of the kth-1 frame;
the prediction module is used for predicting an untracked target in the kth frame according to the target state of the tracking result of the kth-1 frame based on the initial target association, and obtaining an initial prediction result;
and the state updating module is used for carrying out similarity matching on the kth frame characteristic and the kth-1 frame characteristic, carrying out state updating on the initial prediction result according to the matching result, and taking the updated target state as a tracking result of the kth frame.
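The similarity matching between kth-frame and (k-1)th-frame features can be sketched as below. This is a minimal illustration assuming cosine similarity over appearance features, a 0.8 threshold, and greedy one-to-one assignment; none of these choices are specified by the text.

```python
import numpy as np

# Sketch: match (k-1)th-frame features to kth-frame features by cosine
# similarity, keeping only matches above a threshold. Values are illustrative.

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_features(feats_prev, feats_curr, threshold=0.8):
    """Greedily pair each previous-frame feature with its best current match."""
    pairs = []
    used = set()
    for i, fp in enumerate(feats_prev):
        sims = [(cosine_sim(fp, fc), j) for j, fc in enumerate(feats_curr)
                if j not in used]
        if not sims:
            continue
        best_sim, best_j = max(sims)
        if best_sim >= threshold:      # only confident matches survive
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs

prev = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
curr = [np.array([0.0, 0.9]), np.array([0.9, 0.1])]
print(match_features(prev, curr))
```

Matched targets would then have their predicted states corrected by the observed positions, as in the state updating module.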
Further, the system further comprises:
the matching success target acquisition module is used for setting a similarity threshold and taking a matching result meeting the similarity threshold as a matching success target;
the observation value acquisition module is used for acquiring a kth frame target observation value by utilizing the matching result;
the comparison module is used for comparing the kth frame target observed value with the initial prediction result and updating the initial prediction result according to the Kalman gain;
and the correction module is used for correcting the successfully matched target position and speed according to the updating result and outputting the corrected target position and speed as a tracking result of the kth frame.
Further, the system further comprises:
the target detection module is used for detecting targets in each frame and determining the targets which appear and disappear;
a feature descriptor acquisition module for extracting corresponding features for each detected object and calculating feature descriptors;
the target matching module is used for matching the target in the j frame with the target in the j-1 frame by using a data association method based on the feature descriptor to obtain a matching result;
and the prediction result correction module is used for correcting the initial prediction result according to the confidence level of the matching through the matching result.
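Determining which targets appear and disappear between frame j-1 and frame j can be sketched as descriptor-based association: matched identifiers continue, unmatched previous identifiers are treated as disappeared, and unmatched current detections as newly appeared. The scalar descriptors and distance threshold below are toy assumptions for illustration only.

```python
# Sketch: data association across frames j-1 and j with appearance and
# disappearance handling. Descriptor values and threshold are illustrative.

def associate(prev_desc, curr_desc, max_dist=1.0):
    """Match descriptors across frames; report continued/appeared/disappeared."""
    continued, used = [], set()
    for pid, pd in prev_desc.items():
        best = None
        for cid, cd in curr_desc.items():
            if cid in used:
                continue
            dist = abs(pd - cd)        # toy scalar descriptor distance
            if dist <= max_dist and (best is None or dist < best[0]):
                best = (dist, cid)
        if best is not None:
            continued.append((pid, best[1]))
            used.add(best[1])
    matched_prev = {p for p, _ in continued}
    disappeared = [pid for pid in prev_desc if pid not in matched_prev]
    appeared = [cid for cid in curr_desc if cid not in used]
    return continued, appeared, disappeared

prev_desc = {"t1": 0.2, "t2": 5.0}     # frame j-1 targets
curr_desc = {"d1": 0.3, "d2": 9.0}     # frame j detections
print(associate(prev_desc, curr_desc))
```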
The foregoing detailed description of the vehicle track detection method based on target tracking will enable those skilled in the art to clearly understand the vehicle track detection method and system of this embodiment. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and for the relevant points reference may be made to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A method for detecting a vehicle track based on target tracking, the method comprising:
the method comprises the steps that real-time image acquisition is carried out on a target moving vehicle through image acquisition equipment, and a target image sequence is obtained;
performing target detection on the target image sequence, identifying a vehicle target in each frame of image, and obtaining a vehicle target sequence;
extracting a first frame of vehicle target according to the vehicle target sequence, and performing key point matching on the vehicle target sequence by taking the first frame of vehicle target as a reference to generate a target tracker, wherein the target tracker is associated with an identifier of the vehicle target sequence;
the target tracker is applied to the vehicle target sequence, and a tracking algorithm is used for predicting and updating the target position and state according to the characteristics of the kth frame and the tracking result of the kth-1 frame, so that a target tracking result is obtained;
based on the target tracking result, predicting the motion state of the vehicle target by utilizing the history information and the observation result of the kth frame to acquire vehicle position information;
and connecting the vehicle position information of the vehicle target in the continuous frames according to the identifier to generate a vehicle track.
2. The method of claim 1, wherein performing object detection on the sequence of object images, identifying a vehicle object in each frame of images, and obtaining the sequence of vehicle objects comprises:
inputting the target image sequence into a target detection model, and outputting a candidate target frame and a confidence score;
for each candidate target frame, carrying out maximum value screening according to the confidence score, and retaining a target frame with a high confidence score;
and acquiring a vehicle target corresponding to the target frame and position information thereof, and generating the vehicle target sequence.
3. The method of claim 1, wherein performing a keypoint match on the sequence of vehicle targets with respect to the first frame of vehicle targets to generate a target tracker comprises:
extracting features of the vehicle target sequence to obtain a plurality of key point descriptors;
taking the first frame key point descriptor as a reference, and carrying out ratio test on a plurality of key point descriptors in the subsequent frames;
assigning an identifier to each vehicle target, and sorting the ratio test results in descending order according to the identifiers to generate a target tracker;
and associating the generated target tracker with a corresponding vehicle target sequence.
4. The method of claim 1, wherein predicting and updating the target location and state based on the characteristics of the kth frame and the tracking result of the kth-1 frame using a tracking algorithm, obtaining the target tracking result comprises:
extracting features of each target detected in the kth frame image to obtain kth frame features;
using the tracking result of the k-1 frame to establish initial target association with the k frame characteristics;
based on the initial target association, predicting an untracked target in the kth frame according to the target state of the tracking result of the kth-1 frame to obtain an initial prediction result;
and carrying out similarity matching on the kth frame characteristic and the kth-1 frame characteristic, carrying out state updating on the initial prediction result according to the matching result, and taking the updated target state as a tracking result of the kth frame.
5. The method of claim 4, wherein matching the kth frame feature to the kth-1 frame feature for similarity, and updating the initial prediction result based on the matching result, comprises:
setting a similarity threshold, and taking a matching result meeting the similarity threshold as a matching success target;
obtaining a kth frame target observation value by utilizing the matching result;
comparing the kth frame target observation value with the initial prediction result, and updating the initial prediction result according to a Kalman gain;
and correcting the target position and speed successfully matched according to the updating result, and outputting the corrected target position and speed as a tracking result of the kth frame.
6. The method of claim 4, wherein predicting and updating the target location and state based on the characteristics of the kth frame and the tracking result of the kth-1 frame using a tracking algorithm, obtaining a target tracking result, further comprising:
performing object detection in each frame to determine the appearing and disappearing objects;
for each detected object, extracting a corresponding feature and computing a feature descriptor;
based on the feature descriptor, matching a target in a j-th frame with a target in a j-1-th frame by using a data association method to obtain a matching result;
and correcting the initial prediction result according to the confidence coefficient of the matching through the matching result.
7. A target tracking based vehicle trajectory detection system for implementing the target tracking based vehicle trajectory detection method of any one of claims 1 to 6, comprising:
the real-time image acquisition module is used for acquiring real-time images of the target moving vehicle through the image acquisition equipment to acquire a target image sequence;
the target detection module is used for carrying out target detection on the target image sequence, identifying a vehicle target in each frame of image and obtaining a vehicle target sequence;
the key point matching module is used for extracting a first frame of vehicle target according to the vehicle target sequence, carrying out key point matching on the vehicle target sequence by taking the first frame of vehicle target as a reference, and generating a target tracker, wherein the target tracker is associated with an identifier of the vehicle target sequence;
the target position updating module is used for applying the target tracker to the vehicle target sequence, and predicting and updating the target position and state according to the characteristics of the kth frame and the tracking result of the kth-1 frame by using a tracking algorithm to acquire a target tracking result;
the motion state prediction module is used for predicting the motion state of a vehicle target by utilizing historical information and the observation result of a kth frame based on the target tracking result and acquiring vehicle position information;
and the vehicle track generation module is used for connecting the vehicle position information of the vehicle target in the continuous frames according to the identifier to generate a vehicle track.
CN202310999420.4A 2023-08-09 2023-08-09 Vehicle track detection method and system based on target tracking Pending CN117011341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310999420.4A CN117011341A (en) 2023-08-09 2023-08-09 Vehicle track detection method and system based on target tracking

Publications (1)

Publication Number Publication Date
CN117011341A true CN117011341A (en) 2023-11-07

Family

ID=88575872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310999420.4A Pending CN117011341A (en) 2023-08-09 2023-08-09 Vehicle track detection method and system based on target tracking

Country Status (1)

Country Link
CN (1) CN117011341A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557600A (en) * 2023-12-07 2024-02-13 深圳市昊瑞云技术有限公司 Vehicle-mounted image processing method and system


Similar Documents

Publication Publication Date Title
CN113674328B (en) Multi-target vehicle tracking method
CN113034548B (en) Multi-target tracking method and system suitable for embedded terminal
Teoh et al. Symmetry-based monocular vehicle detection system
CN110197502B (en) Multi-target tracking method and system based on identity re-identification
CN106934817B (en) Multi-attribute-based multi-target tracking method and device
CN110288627B (en) Online multi-target tracking method based on deep learning and data association
CN110751096B (en) Multi-target tracking method based on KCF track confidence
CN115240130A (en) Pedestrian multi-target tracking method and device and computer readable storage medium
CN111798487A (en) Target tracking method, device and computer readable storage medium
CN117011341A (en) Vehicle track detection method and system based on target tracking
CN115375736A (en) Image-based pedestrian trajectory tracking method and device
CN114926859A (en) Pedestrian multi-target tracking method in dense scene combined with head tracking
CN113379795B (en) Multi-target tracking and segmentation method based on conditional convolution and optical flow characteristics
CN114820765A (en) Image recognition method and device, electronic equipment and computer readable storage medium
CN117830356A (en) Target tracking method, device, equipment and medium
CN112949615B (en) Multi-target tracking system and method based on fusion detection technology
CN108346158B (en) Multi-target tracking method and system based on main block data association
CN114066933A (en) Multi-target tracking method, system and related equipment
CN113920168A (en) Image tracking method in audio and video control equipment
Sujatha et al. An innovative moving object detection and tracking system by using modified region growing algorithm
Wan et al. End-to-end multi-object tracking with global response map
Lee et al. Tracking multiple moving vehicles in low frame rate videos based on trajectory information
CN117152826B (en) Real-time cross-mirror tracking method based on target tracking and anomaly detection
Vaquero et al. Fast Multi-Object Tracking with Feature Pyramid and Region Proposal Networks
Ali et al. A fast approach for person detection and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination