CN112465866B - Multi-target track acquisition method, device, system and storage medium - Google Patents

Multi-target track acquisition method, device, system and storage medium

Info

Publication number
CN112465866B
CN112465866B (application CN202011364337.2A)
Authority
CN
China
Prior art keywords
track data
data
camera
track
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011364337.2A
Other languages
Chinese (zh)
Other versions
CN112465866A
Inventor
简春菲 (Jian Chunfei)
李林森 (Li Linsen)
莫致良 (Mo Zhiliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202011364337.2A
Publication of CN112465866A
Application granted
Publication of CN112465866B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a multi-target track acquisition method, device, system, and storage medium. A plurality of pieces of single-camera track data to be associated are acquired; spatio-temporal features between each piece of single-camera track data and historical associated track data are acquired; based on the acquired spatio-temporal features, a pre-trained classification model is used to obtain a classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target. The classification model is trained on a plurality of labeled samples, where each sample comprises two pieces of sample track data and the label indicates whether the two belong to the same sample target. Based on the classification results, the track data belonging to the same target among the single-camera track data and the historical associated track data are associated to obtain the target's motion track. This scheme expands the range of scenarios in which multi-target tracks can be acquired.

Description

Multi-target track acquisition method, device, system and storage medium
Technical Field
The present invention relates to the field of track acquisition technologies, and in particular to a multi-target track acquisition method, device, system, and storage medium.
Background
To manage multiple targets in scenarios such as security monitoring, drone management, and autonomous driving, the motion tracks of the targets must be acquired. Typically, a single camera captures images of the targets, and a single-camera tracking model derives each target's track from the captured images. Alternatively, multiple cameras with different monitoring ranges capture images of the targets across cameras; several pieces of track data are obtained for each target from the images collected by the cameras, and these pieces are associated to obtain each target's track over a larger area.
In practice, with multiple cameras, adjacent cameras generally need overlapping monitoring ranges: while a target inside the first camera's range triggers that camera to record its track data, entering the overlap region triggers the second camera to start recording as well. Each camera thus captures a segment of the target's track, and the segments are associated to reconstruct the target's full motion track.
However, requiring overlap between a single camera's monitoring range and that of its neighbor severely limits the applicable scenarios of this track acquisition approach — for example, venues with a large area to monitor, such as hotels and supermarkets, or sites where the number and mounting positions of cameras are constrained.
Disclosure of Invention
Embodiments of the invention aim to provide a multi-target track acquisition method, device, system, and storage medium that expand the scenarios in which multi-target tracks can be acquired. The specific technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides a multi-target track acquisition method, including:
acquiring a plurality of pieces of single-camera track data to be associated, where single-camera track data is track data of one target acquired by one camera;
acquiring spatio-temporal features between each piece of single-camera track data and historical associated track data, where the spatio-temporal features indicate the relationship, in the spatio-temporal dimension, between the tracks corresponding to each two pieces of track data, and the historical associated track data is track data that has already been associated;
based on the acquired spatio-temporal features, using a pre-trained classification model to obtain a classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target, where the classification model is trained on a plurality of labeled samples, each sample comprising two pieces of sample track data and a label indicating whether they belong to the same sample target; and
based on the classification results, associating the track data belonging to the same target among the plurality of pieces of single-camera track data and the historical associated track data, to obtain the target's motion track.
In a second aspect, an embodiment of the present invention provides a multi-target track acquisition device, including:
a track data acquisition module for acquiring a plurality of pieces of single-camera track data to be associated, where single-camera track data is track data of one target acquired by one camera;
a track feature extraction module for acquiring spatio-temporal features between each piece of single-camera track data and historical associated track data, where the spatio-temporal features indicate the relationship, in the spatio-temporal dimension, between the tracks corresponding to each two pieces of track data, and the historical associated track data is track data that has already been associated;
a track data classification module for obtaining, from the acquired spatio-temporal features and using a pre-trained classification model, a classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target, where the classification model is trained on a plurality of labeled samples, each sample comprising two pieces of sample track data and a label indicating whether they belong to the same sample target; and
a target track acquisition module for associating, based on the classification results, the track data belonging to the same target among the plurality of pieces of single-camera track data and the historical associated track data, to obtain the target's motion track.
In a third aspect, an embodiment of the present invention provides a multi-target track acquisition system, including at least one camera and a management background communicatively connected to the camera;
the camera is configured to acquire track data of a target as single-camera track data;
the management background is configured to: acquire a plurality of pieces of single-camera track data to be associated; acquire spatio-temporal features between each piece of single-camera track data and historical associated track data, where the spatio-temporal features indicate the relationship, in the spatio-temporal dimension, between the tracks corresponding to each two pieces of track data, and the historical associated track data is track data that has already been associated; based on the acquired spatio-temporal features, use a pre-trained classification model to obtain a classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target, where the classification model is trained on a plurality of labeled samples, each sample comprising two pieces of sample track data and a label indicating whether they belong to the same sample target; and, based on the classification results, associate the track data belonging to the same target among the plurality of pieces of single-camera track data and the historical associated track data, to obtain the target's motion track.
In a fourth aspect, an embodiment of the present invention provides an electronic device for use in a multi-target track acquisition system, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the steps of the multi-target track acquisition method provided in the first aspect when executing the program stored in the memory.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the multi-target track acquisition method provided in the first aspect.
The embodiments of the invention have the following beneficial effects:
In the scheme provided by the embodiments of the invention, a classification model is trained in advance on a plurality of labeled samples, each comprising two pieces of sample track data and a label indicating whether they belong to the same sample target. The model can therefore classify whether different pieces of track data belong to the same target. Since the historical associated track data is data that has already been associated, obtaining the classification result of whether each piece of single-camera track data belongs to the same target as the historical associated track data is equivalent to determining which target each piece of single-camera track data belongs to. On this basis, the track data belonging to the same target among the single-camera track data and the historical associated track data can be associated to obtain that target's motion track. Thus, when multiple cameras capture images of multiple targets across cameras to acquire each target's motion track, the method is not constrained by a single camera's monitoring range or by the requirement that adjacent cameras' monitoring ranges overlap. It can be applied in scenarios with a large area to monitor, or with a limited number of cameras and constrained mounting positions, thereby expanding the applicable scenarios of multi-target track acquisition.
Of course, any single product or method embodying the invention need not simultaneously achieve all of the advantages described above.
Drawings
To illustrate the embodiments of the invention or the prior-art solutions more clearly, the drawings needed to describe them are briefly introduced below. Evidently, the drawings described below show only some embodiments of the invention; a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of a multi-target track acquisition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a classification model acquisition flow in a multi-target trajectory acquisition method according to another embodiment of the present invention;
FIG. 3 is a flowchart illustrating a multi-target trajectory acquisition method according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of a multi-target track acquisition device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a multi-target track acquisition system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a management background in the multi-target track acquisition system according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a camera in the multi-target track acquisition system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the protection scope of the invention.
When acquiring tracks of multiple targets with adjacent first and second cameras whose monitoring ranges overlap, a global camera whose monitoring range covers the ranges of all cameras may additionally be used to improve the accuracy with which the second camera acquires each target's track data. Specifically, when a target entering the overlap region triggers the second camera, the second camera can receive the target's position information from the global camera, use it to capture the target's image data accurately, and derive the target's track data from those images. However, the requirement that the global camera's monitoring range cover all other cameras' ranges again limits the applicable scenarios of this multi-target track acquisition approach — a supermarket with a large area to monitor, for instance, cannot be covered — and it increases camera cost.
To expand the applicable scenarios of multi-target track acquisition, an embodiment of the present invention provides a multi-target track acquisition method. The method may run in a management background communicatively connected to a plurality of cameras, or in any one of the cameras. Both can take many forms: the management background may be a computer, a smart mobile terminal, a wearable terminal, a server, and so on; the camera may be a network camera, an infrared-illuminated camera, a dome camera, or the like. Moreover, the monitoring ranges of two adjacent cameras among the plurality of cameras need not overlap.
The following describes a multi-target track acquisition method provided by the embodiment of the invention.
As shown in fig. 1, a flow of a multi-target track acquisition method according to an embodiment of the present invention may include:
s101, acquiring a plurality of single-camera track data to be associated.
The single-camera track data is track data of a target acquired by a camera.
In a specific application, when a target enters a camera's monitoring range, the camera tracks it in real time, forming the target's motion track within that camera's range; the data describing this track is called single-camera track data. For multi-target track acquisition there are multiple targets, so the plurality of single-camera track data may be tracks of several targets acquired by one camera, or tracks of several targets acquired by several cameras.
For example, acquiring the plurality of single-camera track data to be associated may include: receiving the single-camera track data that each camera actively sends at a preset period, or actively requesting, at the preset period, the single-camera track data each camera has acquired. The preset period is the period at which multi-target tracks are acquired — for example 5 seconds, 10 seconds, 1 minute, or 1 day. A long period, such as 1 day, amounts to non-real-time, i.e. offline, multi-target track acquisition; a short period, such as 5 or 10 seconds, amounts to real-time acquisition. Users can therefore set the period to match their specific multi-target track acquisition requirements.
In addition, acquiring a target's track data with a camera may specifically include: when a target enters the camera's monitoring range, the camera captures image data of the target and generates the target's single-camera track data from the captured images using a single-camera tracking model. The single-camera tracking model can take many forms, for example a Detection Based Tracking (DBT) model or a Detection Free Tracking (DFT) model. A DBT model first detects the target in each frame and then links the detections into an existing track; a DFT model has the target manually annotated in the first frame and then tracks it while detecting. Any method of acquiring single-camera track data may be used; this embodiment does not limit it.
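As an illustrative sketch of the single-camera tracking step above — the data layout and the greedy nearest-neighbour linking are assumptions for illustration, not the DBT/DFT implementation of any embodiment — single-camera track data might be represented and grown as follows:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SingleCameraTrack:
    """One target's track as recorded by one camera (illustrative layout)."""
    camera_id: str
    points: List[Tuple[float, float, float]] = field(default_factory=list)  # (t, x, y)

    @property
    def end(self) -> Tuple[float, float, float]:
        return self.points[-1]  # latest (t, x, y) point

def link_detections(tracks, detections, t, camera_id="cam-1", max_dist=50.0):
    """Greedy DBT-style step: attach each detection (x, y) at time t to the
    nearest open track whose last point is within max_dist, otherwise start
    a new track. Real trackers also use appearance and motion models."""
    for x, y in detections:
        best, best_d = None, max_dist
        for tr in tracks:
            _, px, py = tr.end
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = tr, d
        if best is not None:
            best.points.append((t, x, y))
        else:
            tracks.append(SingleCameraTrack(camera_id, [(t, x, y)]))
    return tracks
```

Each call processes one frame's detections; a nearby detection extends an existing track, while a distant one opens a new track.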
S102: acquire the spatio-temporal features between each piece of single-camera track data and the historical associated track data.
The spatio-temporal features indicate the relationship, in the spatio-temporal dimension, between the tracks corresponding to each two pieces of track data; the historical associated track data is track data that has already been associated.
In a specific application, track data of the same target is likely to be contiguous in formation time. To improve association efficiency, the spatio-temporal features between each piece of single-camera track data and the historical associated track data may therefore be acquired in order of the formation times of the pieces of single-camera track data to be associated. The spatio-temporal features can take many forms; for a reasonable layout and ease of understanding, they are described in detail below in alternative embodiments.
When no track data has been associated yet, the historical associated track data may be the earliest-formed piece among the single-camera track data to be associated. Since the spatio-temporal features describe the relationship between the tracks of each two pieces of track data, the features between each piece of single-camera track data and the historical associated track data comprise, for each piece of single-camera track data, the spatio-temporal features between that piece and one piece of the historical associated track data.
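A minimal sketch of spatio-temporal feature extraction between two tracks, assuming each track is a time-sorted list of (t, x, y) points; the concrete feature set (time gap, spatial gap, implied transfer speed) is an assumption, since the embodiment defers the feature definitions to later sections:

```python
def spatiotemporal_features(track_a, track_b):
    """Illustrative spatio-temporal features between two tracks, each a
    time-sorted list of (t, x, y) points."""
    # Order the pair so `first` is the track that ends before the other begins.
    first, second = (
        (track_a, track_b) if track_a[-1][0] <= track_b[0][0] else (track_b, track_a)
    )
    t1, x1, y1 = first[-1]    # where/when the earlier track ends
    t2, x2, y2 = second[0]    # where/when the later track begins
    time_gap = t2 - t1
    space_gap = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    speed = space_gap / time_gap if time_gap > 0 else float("inf")
    return [time_gap, space_gap, speed]
```

In practice these values would be normalized and extended with, for example, camera-topology features before being fed to the classifier of step S103.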
S103: based on the acquired spatio-temporal features, use a pre-trained classification model to obtain a classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target.
The classification model is trained on a plurality of labeled samples, where each sample comprises two pieces of sample track data and the label indicates whether they belong to the same sample target.
To ensure that the classification model can produce the above classification result from the acquired spatio-temporal features, the spatio-temporal features between the two pieces of sample track data in each sample are acquired when the model is trained, in the same manner as in step S102, only on different track data. The classification result for whether a piece of single-camera track data and the historical associated track data belong to the same target can take various forms — for example, a confidence value or a result label. Any classification result that indicates whether each piece of single-camera track data and the historical associated track data belong to the same target may be used; this embodiment does not limit it.
In addition, to improve the accuracy of this classification, the image data corresponding to the single-camera track data may be recorded, so that before step S103 the image-class features between each piece of single-camera track data and the historical associated track data are also acquired, and the classification result is obtained from both the image-class and spatio-temporal features. This is described below in alternative embodiments for ease of understanding and a reasonable layout.
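The pre-trained classification model could be any binary classifier over such feature vectors. As a self-contained stand-in — logistic regression trained by stochastic gradient descent, not the model of any specific embodiment — it might look like:

```python
import math

def _sigmoid(z):
    z = max(-30.0, min(30.0, z))     # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_pair_classifier(samples, labels, lr=0.1, epochs=500):
    """Train a tiny logistic-regression classifier on labeled feature vectors,
    where label 1 means the two tracks belong to the same target. A stand-in
    for whatever classification model an embodiment actually uses."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y                              # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    def predict(x):
        """Confidence that the pair belongs to the same target."""
        return _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return predict
```

Returning a confidence in [0, 1] matches the embodiment's option of using a confidence value as the classification result.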
S104: based on the classification results, associate the track data belonging to the same target among the plurality of single-camera track data and the historical associated track data, to obtain the target's motion track.
In a specific application, this association can be performed in several ways. For example, the classification result may indicate a group of single-camera track data belonging to the same target as the historical associated track data; associating that group with the historical data yields the target's motion track. Alternatively, only associated track data without association anomalies is taken as the target's motion track. Or, when an association anomaly occurs, the anomalous associated track data may be repaired, and the associated data containing the repaired track taken as the corresponding target's motion track. Or, for any of the above, with confidence values as the classification result, a preset number of association layers may be obtained, and that many pieces of single-camera track data are associated with the historical associated track data in descending order of confidence, yielding the track of the target to which the associated data corresponds.
For ease of understanding and a reasonable layout, the determination and repair of association anomalies, and the example of associating according to a preset number of association layers, are described in detail below in alternative embodiments.
Moreover, any method that can associate the track data belonging to the same target among the single-camera track data and the historical associated track data based on the classification results may be used; this embodiment does not limit it.
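The confidence-ordered association with a preset number of association layers might be sketched as follows; the data shapes and the greedy top-k attachment are illustrative assumptions:

```python
def associate_tracks(history, candidates, classify, max_layers=2, threshold=0.5):
    """Attach up to `max_layers` candidate track segments to one target's
    historical associated track, in descending order of the confidence that
    `classify(a, b)` returns for a segment pair. `history` is the list of
    segments already associated; `candidates` maps a track id to its segment.
    An illustrative reading of the "preset association layer number"."""
    scored = []
    for tid, seg in candidates.items():
        conf = classify(history[-1], seg)   # compare against the newest segment
        if conf >= threshold:
            scored.append((conf, tid, seg))
    scored.sort(key=lambda s: s[0], reverse=True)   # highest confidence first
    associated = []
    for _, tid, seg in scored[:max_layers]:
        history.append(seg)
        associated.append(tid)
    return history, associated
```

Candidates below the threshold are left unassociated and can be reconsidered in a later period, which is one plausible way to handle the anomaly cases mentioned above.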
In the scheme provided by this embodiment, obtaining the classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target is equivalent to determining, among the pieces of single-camera track data, which target each piece belongs to. On this basis, the track data belonging to the same target among the single-camera track data and the historical associated track data can be associated to obtain that target's motion track. Thus, when multiple cameras capture images of multiple targets across cameras to acquire each target's motion track, the method is not constrained by a single camera's monitoring range or by the requirement that adjacent cameras' monitoring ranges overlap. It suits scenarios with a large area to monitor, or with a limited number of cameras and constrained mounting positions, thereby expanding the applicable scenarios of multi-target track acquisition.
In an alternative embodiment, the labeled samples may further include image data corresponding to the sample track data.
Correspondingly, before using the pre-trained classification model to obtain the classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target, the multi-target track acquisition method may further include:
acquiring the image-class features between each piece of single-camera track data and the historical associated track data, where the image-class features indicate the relationship, in the image dimension, between the targets in the image data corresponding to each two pieces of track data.
Correspondingly, obtaining the classification result from the acquired spatio-temporal features using the pre-trained classification model then specifically includes:
inputting both the acquired spatio-temporal features and the acquired image-class features into the pre-trained classification model to obtain the classification result of whether each piece of single-camera track data and the historical associated track data belong to the same target.
To improve the accuracy of this classification, the labeled samples may thus further include image data corresponding to the sample track data; the classification model is then a model trained with the spatio-temporal features, the image-class features, and the labels of the samples. Correspondingly, the method may store the image data corresponding to the single-camera track data locally, or obtain it from the camera, so that when the classification result is acquired, the spatio-temporal features are derived from the track data and the image-class features from the image data. On this basis, single-camera track data and historical associated track data can be compared more comprehensively, improving the accuracy of the classification result. The spatio-temporal and image-class features can take many forms and are described in detail below in alternative embodiments.
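Combining image-class features with the spatio-temporal features could be as simple as appending an appearance-similarity score to the feature vector before classification; the cosine-similarity choice and the embedding source (e.g. a re-identification network) are assumptions for illustration:

```python
def cosine_similarity(u, v):
    """Image-class feature sketch: cosine similarity between the appearance
    embeddings of the targets in two tracks' image data."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def combined_features(st_features, emb_a, emb_b):
    """Concatenate the spatio-temporal features with the image-class feature
    so a single classifier can weigh both, as the embodiment describes."""
    return list(st_features) + [cosine_similarity(emb_a, emb_b)]
```

The combined vector then replaces the purely spatio-temporal input to the classification model of step S103.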
In an alternative embodiment, the above-mentioned spatiotemporal class features may include: spatial class features and temporal class features;
the spatial class features include at least one of the following features: features for indicating whether each two pieces of trajectory data respectively contain a first designated position and a second designated position, features for indicating a topological relationship between each two cameras among the plurality of cameras, and features for indicating information of an object within a reference range; the first designated position and the second designated position are positions which enable the track of the target motion to be associated; the reference ranges include: a range of a preset distance from the track corresponding to each two track data;
the temporal class features include at least one of the following features: a feature indicating the similarity between the formation times of the tracks corresponding to each two track data, a feature indicating the connection relationship of the tracks corresponding to each two track data, a feature indicating the relationship between the motion trends of the tracks corresponding to each two track data, and a feature indicating information of the target within the reference range.
In a specific application, the spatial class features are described in the following three points:
The first point: the first designated position and the second designated position that associate tracks of target motion may be various. For example, in a scenario with exit position information and entrance position information, the first designated position and the second designated position may be the exit position information and the entrance position information, respectively; in a scenario with movement-direction identification position information, the first designated position and the second designated position may be two consecutive pieces of movement-direction identification position information. The exit position information, the entrance position information, and the movement-direction identification position information may be stored in advance according to the specific scene. Acquiring the feature indicating whether the two track data respectively contain the first designated position and the second designated position may include: determining whether the first designated position and the second designated position are respectively contained in the two track data, and taking the determined result as a space-time class feature.
The second point: because the track of a target's motion bears a certain spatial relationship to the positions of the cameras, the probability that single-camera track data acquired by two cameras belong to the same target differs when the topological relation between the two cameras differs; therefore, a feature indicating the topological relation between each two of the plurality of cameras can be used as a space-time class feature. Acquiring this feature may include: acquiring the device identifier of each camera, and combining the obtained device identifiers in pairs to obtain a plurality of combinations each containing two device identifiers, as spatial class features.
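For illustration only (this sketch is not part of the patent; the function name and data shapes are assumptions), the pairwise combination of camera device identifiers described above might look like:

```python
from itertools import combinations

def camera_pair_features(device_ids):
    # One spatial-class feature per camera pair: the (order-independent)
    # pair of device identifiers stands in for the topological relation
    # between the two cameras.
    return [tuple(sorted(pair)) for pair in combinations(device_ids, 2)]

pairs = camera_pair_features(["cam01", "cam02", "cam03"])
```

Three cameras yield three pair combinations; in practice each pair would then be mapped to a learned or configured topological relation.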
The third point: to facilitate understanding, the reference range, i.e., the range within a preset distance of the track corresponding to each of the two track data, is described by example. Suppose the preset distance is 50 cm; the reference range may then include: a range R1 within 50 cm of the track T1 corresponding to track data D1, and a range R2 within 50 cm of the track T2 corresponding to track data D2. The range R1 covers the area within 50 cm around track T1 and may take various shapes, such as square or circular; R2 is similar, except that its corresponding track is T2. The information of the target within the reference range is thus the information of targets within the preset distance around the tracks corresponding to the two track data. If the information of the targets within the reference range is the same, the targets around the tracks corresponding to the two track data are the same, and the probability that the two track data belong to the same target increases. Therefore, a feature indicating the information of the target within the reference range can be used as a spatial class feature. Acquiring this feature may include: for each of the two track data, taking the range within the preset distance of the track, in the image coordinate system of the image acquired by the camera, as the reference range; and acquiring the information of the targets within the reference range as a spatial class feature. The information of a target may include: the identifier of the target, its image data, its track data, and so on.
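A minimal sketch of collecting targets inside the reference range of a track (illustrative only; the names and the point-sampled track representation are assumptions, with coordinates in meters so 50 cm = 0.5):

```python
def min_distance_to_track(point, track_points):
    # Minimum Euclidean distance from a point to the sampled track positions.
    return min(((point[0] - x) ** 2 + (point[1] - y) ** 2) ** 0.5
               for x, y in track_points)

def targets_in_reference_range(track_points, candidates, preset_distance=0.5):
    # Return identifiers of candidate targets whose position lies within
    # the preset distance (e.g. 50 cm) of the track.
    return [tid for tid, pos in candidates.items()
            if min_distance_to_track(pos, track_points) <= preset_distance]

track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
near = targets_in_reference_range(track, {"t1": (1.0, 0.3), "t2": (1.0, 2.0)})
```

Here only target "t1" (0.3 m from the track) falls inside the reference range.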
In a specific application, the temporal class features are described in the following four points:
The first point: when the formation times of the tracks corresponding to two track data overlap, the two track data are likely to be tracks of the same target acquired by different cameras; therefore, a feature indicating the similarity between the formation times of the tracks corresponding to the two track data can be used as a temporal class feature. Acquiring this feature may include: calculating, over the overlapping portion of the formation times of the tracks corresponding to the two track data, the average distance, the maximum distance, the minimum distance, and the overlap duration, and taking these as temporal class features. The distance indicating the similarity may be any of the Euclidean distance, the Manhattan distance, and the like. Correspondingly, the average Euclidean distance, the maximum Euclidean distance, the minimum Euclidean distance, and the overlap duration over the overlapping formation times of the two tracks may be calculated as temporal class features.
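The overlap-based temporal features above can be sketched as follows (an illustrative sketch; the timestamp-keyed track representation and function name are assumptions):

```python
def temporal_overlap_features(track_a, track_b):
    # track_a / track_b: dict mapping timestamp -> (x, y) position.
    # Returns (avg, max, min) Euclidean distance over the overlapping
    # timestamps plus the overlap duration, or None when there is no overlap.
    overlap = sorted(set(track_a) & set(track_b))
    if not overlap:
        return None
    dists = [((track_a[t][0] - track_b[t][0]) ** 2 +
              (track_a[t][1] - track_b[t][1]) ** 2) ** 0.5 for t in overlap]
    return (sum(dists) / len(dists), max(dists), min(dists),
            overlap[-1] - overlap[0])

a = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
b = {1: (1.0, 1.0), 2: (2.0, 2.0), 3: (3.0, 3.0)}
feats = temporal_overlap_features(a, b)
```

For these two toy tracks the overlapping timestamps are 1 and 2, giving distances 1.0 and 2.0.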
The second point: when the formation times of the tracks corresponding to two track data do not overlap, they may be track data of tracks formed by the movement of the same target at different times. If the speed implied by moving from the track corresponding to one track data to the track corresponding to the other conforms to a reasonable movement speed of the target, the probability that the two tracks belong to the same target increases. Therefore, a feature indicating the connection relationship of the tracks corresponding to the two track data can be used as a temporal class feature. Acquiring this feature may include: calculating the distance between the end position of the earlier-formed track and the start position of the other track, and the time difference between the formation times of that end position and that start position; calculating the movement speed of the target from this distance and time difference; determining whether the difference between the calculated movement speed and a preset speed satisfies a preset reasonable-speed condition, and taking the determined result as a temporal class feature. The preset speed may be a reasonable movement speed of the target in the scene; for example, if within the monitoring range of the cameras the traffic rules prescribe that the legal speed of a certain target is not greater than S, the preset speed of that target may be S. The preset reasonable-speed condition may be that the difference is less than or equal to a preset difference threshold.
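The connection-relationship check above might be sketched as follows (illustrative only; names and units are assumptions, e.g. positions in meters and times in seconds):

```python
def speed_is_plausible(end_pos, end_time, start_pos, start_time,
                       preset_speed, diff_threshold):
    # Join relation between two non-overlapping tracks: the speed implied
    # by moving from the earlier track's end to the later track's start
    # must not exceed the preset (reasonable) speed by more than the
    # preset difference threshold.
    distance = ((end_pos[0] - start_pos[0]) ** 2 +
                (end_pos[1] - start_pos[1]) ** 2) ** 0.5
    dt = start_time - end_time
    if dt <= 0:
        return False  # formation times overlap or are out of order
    speed = distance / dt
    return speed - preset_speed <= diff_threshold

# 50 m covered in 10 s implies 5 m/s, within the preset speed.
ok = speed_is_plausible((0.0, 0.0), 10.0, (30.0, 40.0), 20.0,
                        preset_speed=5.0, diff_threshold=1.0)
```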
The third point: the feature indicating the relationship between the motion trends of the tracks corresponding to two track data may take various forms. For example, acquiring it may include: when the tracks corresponding to the two track data do not overlap, obtaining the motion direction at the end position of the earlier track and the motion direction at the start position of the later track from the tracks corresponding to the two track data, and calculating the included angle between the two directions as a temporal class feature; or, based on the two track data, obtaining the similarity of the motion trends between the two tracks using a similarity model obtained through pre-training, and taking that similarity as a temporal class feature. The pre-trained similarity model is a model trained using a plurality of sample track data.
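The included angle between the two motion directions can be computed as below (an illustrative sketch; direction vectors and the function name are assumptions):

```python
import math

def direction_angle(v1, v2):
    # Angle (radians) between the motion direction at the end of the
    # earlier track (v1) and at the start of the later track (v2).
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.acos(cos_a)

# Perpendicular directions give an angle of pi/2.
angle = direction_angle((1.0, 0.0), (0.0, 1.0))
```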
Fourth, since the acquisition time of the image data and the formation time of the track corresponding to the track data in the information of the target in the reference range can indicate the similarity of the target from the time dimension, the feature of the information for indicating the target in the reference range can be used as the time class feature. The reference ranges are the same as those in the spatial class features, and detailed descriptions in the spatial class features are omitted herein.
In an alternative embodiment, the image class features may specifically include at least one of the following features:
features indicating, for the two targets respectively indicated by the image data corresponding to the two track data: the similarity between their contours, the similarity between their designated parts, the agreement of the classification labels of binary-classified features among their features, and the quality scores of their features.
In a specific application, the image data corresponding to the two track data may each be input into a neural network model obtained through pre-training, to obtain a modeling result of the target indicated by that image data; the neural network model is a model trained using a plurality of sample image data and modeling-result labels of those sample image data. The modeling result includes: a feature vector of the contour of the corresponding target, a feature vector of a designated part of the target, and classification labels of binary-classified features of the target. Of course, the modeling result specifically includes the content corresponding to the kind of image class feature used. For example, when the image class feature is the similarity between the contours of the two targets, the modeling result may include a feature vector of the contour of the corresponding target. On this basis, for each image class feature, it is possible to: calculate the similarity between the contour feature vectors included in the two modeling results as an image class feature; calculate the similarity between the designated-part feature vectors included in the two modeling results as an image class feature; take the calculated quality scores of the features included in the two modeling results as image class features; or determine whether the classification labels for the same feature included in the two modeling results are the same, taking the determined result as an image class feature.
The image class features are illustrated taking two targets that are both persons or both vehicles as examples. The similarity between contours may include: the similarity between human bodies, which may specifically be the similarity between human-body images; the similarity between vehicle contours, which may specifically be the similarity between vehicle images; and so on. The similarity between designated parts may include: the similarity between faces, which may specifically be the similarity between face images; the similarity between human joint morphologies, i.e., between walking gaits; the similarity between vehicle license plates; and so on. The binary-classified features may also be called attribute features and may include: whether a person carries a backpack or handbag, is riding, wears a scarf, or holds an umbrella; whether a face wears glasses, a hat, a mask, or has a beard; whether a vehicle is fitted with a flag, whether it occludes its license plate, and so on. The quality score of a feature may include: a score indicating the sharpness of the contour feature, a score indicating the sharpness of the face feature, a score indicating the classification accuracy of a binary-classified feature, and so on.
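The comparison of two modeling results might be sketched as follows (illustrative only; the dict layout, cosine similarity as the similarity measure, and all names are assumptions, not the patent's specified method):

```python
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def image_class_features(model_a, model_b):
    # model_a / model_b: hypothetical modeling results, each holding a
    # contour feature vector, a designated-part feature vector, and
    # binary attribute labels.
    return {
        "contour_sim": cosine_similarity(model_a["contour"], model_b["contour"]),
        "part_sim": cosine_similarity(model_a["part"], model_b["part"]),
        "labels_match": {k: model_a["labels"][k] == model_b["labels"][k]
                         for k in model_a["labels"]},
    }

a = {"contour": [1.0, 0.0], "part": [0.6, 0.8],
     "labels": {"backpack": 1, "glasses": 0}}
b = {"contour": [1.0, 0.0], "part": [0.8, 0.6],
     "labels": {"backpack": 1, "glasses": 1}}
feats = image_class_features(a, b)
```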
As shown in fig. 2, in the multi-target track acquisition method according to another embodiment of the present invention, the classification model may be obtained through training by the following steps:
S201, track data of a plurality of sample targets acquired by a plurality of cameras are acquired.
The installation scene of the plurality of cameras is the same as that of the cameras in the embodiment of fig. 1 of the present invention. For example, when tracks are acquired for multiple targets in the stadium D1 using classification model M2, the plurality of cameras used to train M2 acquire the track data of the plurality of sample targets in the stadium D1. In addition, to improve the accuracy of the classification model, the track data of multiple targets collected by the plurality of cameras over a preset collection period, such as one week or two days, may be read directly as the track data of the plurality of sample targets; that is, the classification model is trained with offline track data. Alternatively, an existing classification model may be further trained to obtain and use a new classification model, which is equally reasonable.
S202, taking each two pieces of track data and the corresponding image data of each two pieces of track data in the track data of the plurality of sample targets as one sample, and marking the sample with a label.
In a specific application, when the formation time span of the tracks corresponding to two sample track data belonging to the same sample target is greater than a preset duration threshold, the label of the sample composed of those two sample track data is set to "not belonging to the same target". The preset duration threshold may be set through data exploration for the specific scene. When the formation time span exceeds the preset duration threshold, setting the sample's label to "belonging to the same target" would make the sample a redundant one of no learning value.
For example, the sample target unmanned aerial vehicle U1 forms sample track data B1 at 13:00-13:15, stays in the parking area at 13:15-16:15, and forms sample track data B2 at 16:16-16:30; the time span between sample track data B1 and B2 is greater than a preset duration threshold of 6 minutes. In general, what would be learned from matching sample track data B1 with B2 is not applicable to most cases of target movement. Therefore, when the formation time span is greater than the preset duration threshold, setting the label of such a sample to "belonging to the same target" would make it a redundant sample of no learning value.
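The labeling rule above can be sketched as follows (illustrative only; representing each track as `(target_id, start_time, end_time)` in minutes, and reading the "formation time span" as the gap between the two tracks, are assumptions):

```python
def label_sample(track_a, track_b, preset_duration_threshold):
    # Returns 1 (same target) or 0. Pairs from the same target whose
    # formation time span exceeds the threshold are labeled 0, to avoid
    # redundant samples of no learning value.
    id_a, start_a, end_a = track_a
    id_b, start_b, end_b = track_b
    if id_a != id_b:
        return 0
    # Gap between the formation times of the two tracks (assumption).
    span = max(start_a, start_b) - min(end_a, end_b)
    return 0 if span > preset_duration_threshold else 1

# Drone U1: B1 formed 13:00-13:15, B2 formed 16:16-16:30
# (times as minutes since midnight); the 181-minute gap exceeds 6 minutes.
label = label_sample(("U1", 780, 795), ("U1", 976, 990),
                     preset_duration_threshold=6)
```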
S203, acquiring the space-time class characteristics and the image class characteristics of each sample.
The spatiotemporal class feature and the image class feature of the sample are similar to those in the alternative embodiment of fig. 1 of the present invention, and are different in trace data. The same parts are not described in detail herein, and the description of the alternative embodiment of fig. 1 of the present invention is described above.
S204, training a preset model by using the space-time class feature and the image class feature of each sample and the label of each sample to obtain a classification model.
In particular applications, the preset model may be various. By way of example, the preset model may be any of a support vector machine (SVM), a light gradient boosting machine (Light Gradient Boosting Machine, LightGBM), and the like. LightGBM is a framework implementing the GBDT (gradient boosted decision tree) algorithm; it supports efficient parallel training and offers faster training speed, lower memory consumption, better accuracy, and support for distributed data processing. During training, the performance index of the preset model can be verified continuously and the model's parameters adjusted accordingly, until the performance of the adjusted preset model meets the verification index, at which point the adjusted preset model is taken as the classification model.
By way of example, K-fold cross-validation may be used: in the first step, the plurality of samples is randomly divided into k sample subsets, with no sample repeated across subsets; in the second step, 1 subset is selected as the validation set and the remaining k-1 subsets are used as the training set, the preset model is trained on the training set to obtain an adjusted preset model, and the adjusted model is tested on the validation set to obtain its performance index; in the third step, the second step is repeated k times, so that each sample subset serves as the validation set once; in the fourth step, the average of the k performance indexes is calculated as the performance estimate of the adjusted preset model. The K-fold cross-validation steps are repeated until the performance of the adjusted preset model meets the verification index, and the adjusted preset model is taken as the classification model.
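The four K-fold steps above can be sketched in plain Python (illustrative only; `train_fn`/`score_fn` are hypothetical stand-ins for training and evaluating the preset model — here a trivial majority-label "model" with accuracy as the performance index):

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    # First step: randomly partition sample indices into k disjoint subsets.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, labels, k, train_fn, score_fn):
    # Second/third steps: for each fold, train on k-1 subsets and score
    # on the held-out subset. Fourth step: average the k performance indexes.
    folds = k_fold_indices(len(samples), k)
    scores = []
    for i in range(k):
        val = set(folds[i])
        train = [j for j in range(len(samples)) if j not in val]
        model = train_fn([samples[j] for j in train],
                         [labels[j] for j in train])
        scores.append(score_fn(model, [samples[j] for j in val],
                               [labels[j] for j in val]))
    return sum(scores) / k

def train_fn(xs, ys):
    return round(sum(ys) / len(ys))  # majority label as a trivial "model"

def score_fn(model, xs, ys):
    return sum(1 for y in ys if y == model) / len(ys)  # accuracy

acc = cross_validate(list(range(10)), [0] * 8 + [1] * 2, k=5,
                     train_fn=train_fn, score_fn=score_fn)
```

In practice `train_fn` would fit the SVM or LightGBM model on the space-time and image class features, and the loop would be repeated with adjusted parameters until the verification index is met.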
In addition, the classification model in the embodiment of fig. 1 of the present invention may be obtained by training in a similar manner to the embodiment of fig. 2 of the present invention, which is different in that the sample in the embodiment of fig. 1 of the present invention does not include image data, and does not need to extract image class features, and accordingly, the training may be performed by using space-time class features and labels of the sample. Therefore, for the flow of obtaining the classification model in the embodiment of fig. 1 of the present invention, reference may be made to the description of the embodiment of fig. 2 of the present invention, which is not repeated herein.
In an alternative embodiment, the classification result may include: confidence of whether each single-camera track data and the history associated track data belong to the same target;
correspondingly, associating, based on the classification result, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data to obtain the track of the target motion specifically includes:
acquiring a preset number of association layers;
associating single-camera track data among the plurality of single-camera track data with the history associated track data, in descending order of confidence, over the preset number of association layers, and obtaining the track of the target motion corresponding to the associated track data.
In a specific application, the larger the preset number of association layers, the more associations are performed on the track data of the same target, which can improve the accuracy of the acquired track of the target motion. To balance track-acquisition accuracy against efficiency, a user may set a preset number of association layers that meets their needs according to the specific scene. For example, if the scene is a straight roadway, the preset number of association layers may be 1; if the scene is a shopping mall with relatively many targets and a larger range, it may be 5.
In addition, by associating in descending order of confidence, the track data most likely to belong to the same target are associated first. The association thus proceeds layer by layer in a funnel shape until all tracks are associated, so that the track of the target motion can be obtained efficiently with a small preset number of layers.
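One simplified reading of the confidence-ordered association (illustrative only; treating the preset layer number as a cap on the greedy associations performed in one round, and all names as assumptions) can be sketched as:

```python
def associate_by_confidence(candidates, preset_layers):
    # candidates: list of (single_camera_track_id, history_track_id,
    # confidence). Greedily associate in descending confidence order,
    # performing at most `preset_layers` associations and using each
    # single-camera track at most once.
    used = set()
    associations = []
    for cam_id, hist_id, conf in sorted(candidates, key=lambda c: -c[2]):
        if len(associations) >= preset_layers:
            break
        if cam_id in used:
            continue
        used.add(cam_id)
        associations.append((cam_id, hist_id, conf))
    return associations

links = associate_by_confidence(
    [("s1", "h1", 0.95), ("s2", "h2", 0.40),
     ("s3", "h1", 0.85), ("s1", "h2", 0.30)],
    preset_layers=2)
```

The high-confidence pairs (0.95, 0.85) are associated first, forming the top of the funnel; lower-confidence candidates would be handled in subsequent layers.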
As shown in fig. 3, a multi-target track acquisition method according to still another embodiment of the present invention includes the following steps:
s301, acquiring a plurality of single-camera track data to be associated.
S302, respectively acquiring space-time class characteristics between each single-camera track data and the historical associated track data.
S303, based on the acquired space-time class characteristics, a classification model obtained through pre-training is utilized to obtain a classification result of whether each single-camera track data and the historical associated track data belong to the same target.
Steps S301 to S303 are the same as steps S101 to S103 in the embodiment of fig. 1 of the present invention; they are not described in detail here, and reference may be made to the description of the embodiment of fig. 1 above.
S304, based on the classification result, correlating the track data belonging to the same target in the plurality of single-camera track data and the history correlation track data.
In S304, the content related to the track data is the same as that in S104 of the embodiment of fig. 1 of the present invention, and details thereof are not described herein.
S305, obtaining image class characteristics corresponding to each single-camera track data and each history associated track data respectively.
The image class features are used for indicating the relation of targets in the image dimension in the image data corresponding to each two track data respectively.
In order to determine whether there is abnormal track data in the associated track data according to the relationship between the space-time dimension and the image dimension in the track data belonging to the same target in the subsequent step S306, the image class feature for indicating the relationship between the targets in the image dimension in the image data corresponding to each two track data may be acquired in step S305.
S306, determining whether track data with abnormal association exists in the associated track data according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension.
In an alternative embodiment, the step S306 may specifically include the following steps:
when the track data subjected to association does not meet any one of a plurality of association conditions, determining track data with abnormal association in the track data subjected to association;
wherein the plurality of association conditions may include: the first condition: among the associated track data, within the single-camera track data belonging to the same camera, the target identifier corresponding to any single moment is unique; the second condition: for any single-camera track data among the associated track data in which a plurality of position data share the same formation time, the difference between those position data is less than or equal to a preset difference threshold; the third condition: when association is completed, no unassociated single-camera track data remains among the associated track data; the fourth condition: among the associated track data, the similarity between different image data corresponding to the same single-camera track data is greater than or equal to a first similarity threshold; the fifth condition: the similarity between different image data that correspond to the associated track data and belong to the same target is greater than or equal to a second similarity threshold.
Among the above five conditions, the first to third conditions indicate the relationship of track data belonging to the same target in the space-time dimension, and the fourth and fifth conditions indicate their relationship in the image dimension. Specifically, associated track data that does not satisfy the first condition has an identifier-switch (Idswitch) problem: the IDs of tracked targets are exchanged when a single camera tracks multiple targets, so that the single-camera track data output by one camera corresponds to multiple, e.g. two, different target identifiers. Associated track data that does not satisfy the second condition indicates an incorrect association between different track data acquired by different cameras; that is, the track corresponding to the track data of the same target exhibits large-scale jitter. Associated track data that does not satisfy the third condition indicates a failed track data association. Associated track data that does not satisfy the fourth condition indicates that the similarity between different image data corresponding to the same single-camera track data is too low; most likely, image data of another target, different from the target indicated by the single-camera track data, was wrongly associated with it, i.e., an image data association error causes the track data association abnormality. Associated track data that does not satisfy the fifth condition indicates that the similarity between different images of the same target is too low, i.e., track data of different targets were associated, causing the association abnormality.
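Checks for the first two (space-time) conditions might be sketched as follows (illustrative only; the `(timestamp, target_id, position)` record format and function names are assumptions):

```python
def violates_first_condition(track):
    # First condition: within one camera's track data, the target
    # identifier at each timestamp must be unique (no Idswitch).
    seen = {}
    for t, tid, _ in track:
        if t in seen and seen[t] != tid:
            return True
        seen[t] = tid
    return False

def violates_second_condition(track, diff_threshold):
    # Second condition: position data sharing the same formation time must
    # agree within the preset difference threshold (no large-scale jitter).
    by_time = {}
    for t, _, pos in track:
        by_time.setdefault(t, []).append(pos)
    for positions in by_time.values():
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                dx = positions[i][0] - positions[j][0]
                dy = positions[i][1] - positions[j][1]
                if (dx * dx + dy * dy) ** 0.5 > diff_threshold:
                    return True
    return False

# Timestamp 1 carries two different IDs and two far-apart positions.
t1 = [(0, "a", (0.0, 0.0)), (1, "a", (1.0, 0.0)), (1, "b", (5.0, 5.0))]
idswitch = violates_first_condition(t1)
jitter = violates_second_condition(t1, 1.0)
```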
S307, if the track data does not exist, the associated track data is used as the track of the corresponding target motion.
According to the embodiment of the invention, the track data with the association abnormality can be effectively identified according to the relation between the space-time dimension and the image dimension of the track data belonging to the same target, so that the track data with the association abnormality does not exist in the track of the target movement, and the accuracy of the track of the target movement can be improved.
In addition, the step of determining whether the associated track data has the associated abnormal track data in the embodiment of fig. 3 of the present invention may be used in the case of the embodiment of fig. 2 of the present invention, where the difference is that the classification model is different, and will not be described herein.
In an optional implementation manner, after determining whether there is abnormal track data in the associated track data according to the relationship between the space-time dimension and/or the image dimension of the track data belonging to the same target, the multi-target track acquisition method provided by the embodiment of the present invention may further include the following steps:
if so, performing the following repair operations from step A to step E on the track data with the associated abnormality, and taking the associated track data to which the repaired track data belongs as the track of the corresponding target movement:
And step A, aiming at the single-camera track data with non-unique target identifications corresponding to the same moment, determining the target identification which is unique and corresponds to the single-camera track data from a plurality of target identifications which are corresponding to the single-camera track data.
Step a is to repair the trace data having the associated abnormality, i.e., the trace data where the Idswitch (identification exchange) occurs, which does not satisfy the first condition. For example, the single-camera track data with the non-unique target mark corresponding to the same time can be used as abnormal single-camera track data, and the space-time type features and the image type features corresponding to the position data in the abnormal single-camera track data can be compared; determining the position data of which the difference value between the space-time class features corresponding to other position data in each position data is larger than a preset space-time threshold value and the difference value between the image class features corresponding to other position data in each position data is larger than a preset image threshold value as segmentation position data; dividing abnormal single-camera track data according to the position indicated by the segmentation position data to obtain a plurality of sub-track data; and determining the target identifications corresponding to the sub-track data respectively from a plurality of target identifications corresponding to the abnormal single-camera track data at the same moment. The process is equivalent to cutting the track data with the Idswitch, and the obtained sub-track data are associated with the actual target identification of the sub-track data.
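The cutting of Idswitch track data at segmentation positions can be sketched as follows (illustrative only; determining the cut indices from the space-time and image feature comparison is omitted, and the names are assumptions):

```python
def split_track(positions, cut_indices):
    # Split abnormal single-camera track data at the segmentation
    # positions, yielding sub-track data that can each be re-assigned
    # their actual target identifier.
    subs, start = [], 0
    for cut in sorted(cut_indices):
        subs.append(positions[start:cut])
        start = cut
    subs.append(positions[start:])
    return [s for s in subs if s]

# A jump between the 2nd and 3rd samples marks the segmentation position.
subs = split_track([(0, 0), (1, 0), (9, 9), (10, 9)], [2])
```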
And B, eliminating position data which do not belong to the single-camera track data aiming at the single-camera track data with the difference value between the plurality of position data being larger than a preset difference threshold value, and associating the track data which belong to the corresponding target of the eliminated position data with the single-camera track data to which the eliminated position data belong.
The step B is to repair the trace data having the associated abnormality which does not satisfy the second condition. For example, the single-camera trajectory data with the difference value between the plurality of position data being greater than the preset difference threshold may be used as abnormal single-camera trajectory data, the position data corresponding to the single-camera trajectory data of the same object as the abnormal single-camera trajectory data may be removed from the abnormal single-camera trajectory data by using the classification result between the single-camera trajectory data corresponding to the plurality of position data and the abnormal single-camera trajectory data, and the trajectory data of the object corresponding to the removed position data may be associated with the single-camera trajectory data of the removed position data.
And C, aiming at the unassociated single-camera track data when the association is completed, associating the single-camera track data belonging to the same target in the unassociated single-camera track data.
Step C repairs the track data having the association abnormality that does not satisfy the third condition. For example, associating the single-camera track data belonging to the same target among the unassociated single-camera track data may specifically include: when the number of unassociated single-camera track data is 1, taking that unassociated single-camera track data as history associated track data to participate in the next round of track association; otherwise, when the formation times of the tracks corresponding to the plurality of unassociated single-camera track data do not overlap and only unassociated single-camera track data exist within the reference range of each, performing track association among them. The reference range is the same as in the alternative embodiment of fig. 1 of the present invention and is not described again here.
Step D: for single-camera track data whose corresponding different image data have a similarity smaller than the first similarity threshold, remove the image data that does not belong to the single-camera track data, and establish a correspondence between the removed image data and the single-camera track data to which it belongs.
Step D repairs track data whose association anomaly fails to satisfy the fourth condition. For example, when single-camera track data whose corresponding different image data have a similarity smaller than the first similarity threshold is treated as abnormal single-camera track data, the image data not belonging to the abnormal single-camera track data may be the image data whose similarity with a first specified number of image data of the abnormal single-camera track data is smaller than the first similarity threshold. When the total number of image data of the abnormal single-camera track data is N, the first specified number may be N-1.
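The N-1 rule of step D can be illustrated with a small sketch: an image is removed when its similarity with all N-1 other images of the track falls below the threshold. Cosine similarity over feature vectors and the 0.5 threshold are assumptions of this example, not values given by the patent.

```python
from typing import List

def cosine_sim(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def prune_outlier_images(features: List[List[float]], threshold: float = 0.5):
    """Step D sketch: return (kept, removed) indices. An image is removed
    when its similarity with each of the other N-1 images is below the
    first similarity threshold."""
    n = len(features)
    if n < 2:
        return list(range(n)), []
    kept, removed = [], []
    for i in range(n):
        low = sum(
            1 for j in range(n)
            if j != i and cosine_sim(features[i], features[j]) < threshold
        )
        # Similarity below the threshold with all N-1 others -> outlier.
        (removed if low == n - 1 else kept).append(i)
    return kept, removed
```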
Step E: for associated track data whose different image data belonging to the same target have a similarity smaller than the second similarity threshold, remove the image data that does not belong to the same target, and establish a correspondence between the removed image data and the target to which it belongs.
Step E repairs track data whose association anomaly fails to satisfy the fifth condition. For example, when the target of associated track data whose different image data belonging to the same target have a similarity smaller than the second similarity threshold is treated as an abnormal target, the image data not belonging to the same target may be the image data whose similarity with a second specified number of image data of the abnormal target is smaller than the second similarity threshold. When the total number of image data of the abnormal target is M, the second specified number may be M-1.
By repairing the track data with association anomalies and taking the associated track data to which the repaired track data belongs as the track of the corresponding target's motion, the method and device of these embodiments improve both the utilization rate of track data and the accuracy of the obtained target tracks.
Corresponding to the method embodiment, the embodiment of the invention also provides a multi-target track acquisition device.
As shown in fig. 4, the structure of the multi-target track acquisition apparatus according to an embodiment of the present invention includes:
a track data acquisition module 401, configured to acquire a plurality of single-camera track data to be associated; the single-camera track data are track data of a target acquired by a camera;
the track feature extraction module 402 is configured to obtain space-time class features between each single-camera track data and the history associated track data; the space-time class features are used for indicating the relation of the corresponding track of each two track data in space-time dimension; the history associated track data is track data which is already associated;
the track data classification module 403 is configured to obtain, based on the obtained spatiotemporal features, a classification result of whether each single-camera track data and the historical associated track data belong to the same target by using a classification model obtained by training in advance; the classification model is a model obtained by training a plurality of samples with labels, wherein the samples comprise two sample track data, and the labels are used for indicating whether the two sample track data belong to the same sample target;
And a target track acquisition module 404, configured to correlate track data belonging to the same target in the plurality of single-camera track data and the history associated track data based on the classification result, so as to obtain a track of the target motion.
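The cooperation of modules 401-404 can be sketched as a toy pipeline. Every function name, record field, and in particular the hand-written rule standing in for the trained classification model are assumptions of this illustration, not the patented implementation:

```python
def acquire_tracks(camera_outputs):
    """Module 401: collect the single-camera track data to be associated."""
    return [t for cam in camera_outputs for t in cam]

def extract_spatiotemporal_features(track, history_track):
    """Module 402: a toy spatio-temporal feature vector -- the time gap
    between the end of the history track and the start of the new track,
    plus the end-to-start distance."""
    time_gap = track["start_t"] - history_track["end_t"]
    dx = track["start_xy"][0] - history_track["end_xy"][0]
    dy = track["start_xy"][1] - history_track["end_xy"][1]
    return [time_gap, (dx * dx + dy * dy) ** 0.5]

def classify_same_target(features):
    """Module 403: stand-in for the pre-trained classifier -- a
    hand-written rule here, where the patent uses a learned model."""
    time_gap, dist = features
    return 0 <= time_gap < 30 and dist < 50

def associate(tracks, history):
    """Module 404: link each track to the first matching history target."""
    trajectories = {h["target_id"]: [h] for h in history}
    for t in tracks:
        for h in history:
            if classify_same_target(extract_spatiotemporal_features(t, h)):
                trajectories[h["target_id"]].append(t)
                break
    return trajectories
```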
In the scheme provided by the embodiment of the invention, obtaining the classification result of whether each single-camera track data and the history associated track data belong to the same target is equivalent to determining, from the plurality of single-camera track data, which target each single-camera track data belongs to. On this basis, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data can be associated to obtain the track of the target's motion. Therefore, when images of multiple targets are acquired across cameras to obtain each target's motion track, the scheme is limited neither by the monitoring range of a single camera nor by any requirement that the monitoring ranges of two adjacent cameras overlap. It is thus applicable to scenes where the area to be monitored is large while the number of cameras and their possible mounting positions are limited, thereby expanding the scenes to which multi-target track acquisition applies.
Optionally, the plurality of samples with labels further includes: image data corresponding to the sample trajectory data;
the track feature extraction module 402 is further configured to obtain image class features of the track data of each camera and the history associated track data respectively; the image class features are used for indicating the relation of targets in image dimensions in the image data corresponding to each two track data respectively;
the track data classification module 403 is specifically configured to input the acquired spatio-temporal class feature and image class feature into a classification model trained in advance, and acquire a classification result of whether each single-camera track data and the historical associated track data belong to the same target.
Optionally, the classification model is obtained by training in advance through the following steps:
acquiring track data of a plurality of sample targets acquired by a plurality of cameras;
taking each two pieces of track data and the image data corresponding to each two pieces of track data in the track data of the plurality of sample targets as one sample, and marking the label for the sample;
acquiring space-time class characteristics and image class characteristics of each sample;
training a preset model by using the space-time class features and the image class features of each sample and the labels of each sample to obtain the classification model.
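The sample-construction step above can be sketched as follows, assuming each sample track record carries a `target_id` used only for labeling (a training-time convenience of this sketch, not a claim limitation):

```python
from itertools import combinations

def build_labeled_samples(tracks):
    """Each sample is a pair of sample track data (with its image data);
    the label indicates whether both tracks come from the same sample
    target, as described for the classification model's training set."""
    samples = []
    for a, b in combinations(tracks, 2):
        label = 1 if a["target_id"] == b["target_id"] else 0
        samples.append({"pair": (a, b), "label": label})
    return samples
```

The spatio-temporal and image class features of each pair would then be extracted and fed, with the labels, to the preset model for training.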
Optionally, the spatiotemporal class features include: spatial class features and temporal class features;
the spatial class features include at least one of the following features: a feature indicating whether each of the two pieces of track data contains a first designated position and a second designated position, a feature indicating the topological relation between each two cameras of the plurality of cameras, and a feature indicating information about targets within a reference range; the first designated position and the second designated position are positions that allow the tracks of the target's motion to be associated; the reference range includes: a range within a preset distance of the track corresponding to each of the two pieces of track data;
the temporal class features include at least one of the following features: a feature indicating the similarity between the formation times of the tracks corresponding to the two pieces of track data, a feature indicating the connection relation between the tracks corresponding to the two pieces of track data, a feature indicating the relation between the motion trends of the tracks corresponding to the two pieces of track data, and a feature indicating information about targets within the reference range.
Optionally, the image class feature includes at least one of the following features:
for the two targets indicated by the image data corresponding to each of the two pieces of track data: a feature of the similarity between their contours, a feature of the similarity between their designated parts, and a feature of the classification labels assigned to the two targets.
Optionally, the classification result includes: confidence of whether each single-camera track data and the history associated track data belong to the same target;
the target track acquisition module 404 is specifically configured to:
acquiring a preset association layer number;
and associating, in descending order of confidence, the preset number of association layers of single-camera track data among the plurality of single-camera track data with the history associated track data, to obtain the track of the target motion corresponding to the associated track data.
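The confidence-ordered association can be sketched as a greedy matching. The tuple layout of `candidates` and the one-to-one constraint between tracks and history targets are assumptions of this illustration:

```python
def associate_by_confidence(candidates, num_layers):
    """Sketch of confidence-ordered association: `candidates` is a list of
    (confidence, track_id, history_id). Associate in descending order of
    confidence, at most `num_layers` tracks (the preset number of
    association layers), one history target per track and vice versa."""
    used_tracks, used_history, links = set(), set(), []
    for conf, tid, hid in sorted(candidates, reverse=True):
        if len(links) == num_layers:
            break
        if tid in used_tracks or hid in used_history:
            continue  # track or history target already associated
        links.append((tid, hid))
        used_tracks.add(tid)
        used_history.add(hid)
    return links
```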
Optionally, image data of an image dimension corresponding to the single camera track data is recorded;
the target track acquisition module 404 is specifically configured to:
acquiring image type characteristics corresponding to each single-camera track data and each history associated track data respectively; the image class features are used for indicating the relation of targets in image dimensions in the image data corresponding to each two track data respectively;
Based on the classification result, correlating the track data belonging to the same target in the plurality of single-camera track data and the history correlation track data;
determining whether track data with abnormal association exists in the associated track data according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension;
and if the track data does not exist, the associated track data is used as the track of the corresponding target motion.
Optionally, the target track acquisition module 404 is specifically configured to:
determining, when the associated track data fails to satisfy any one of a plurality of association conditions, that track data with an association anomaly exists in the associated track data;
wherein the plurality of association conditions include: in the associated track data, the single-camera track data belonging to the same camera has a unique corresponding target identifier at any one moment; in the associated track data, for single-camera track data in which multiple position data share the same formation time, the difference between the multiple position data is smaller than or equal to a preset difference threshold; when association is completed, no unassociated single-camera track data remains; in the associated track data, the similarity between different image data corresponding to the same single-camera track data is greater than or equal to a first similarity threshold; and in the associated track data, the similarity between different image data belonging to the same target is greater than or equal to a second similarity threshold.
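Two of the five association conditions can be checked with a short sketch: uniqueness of the target identifier at each moment, and the position-difference threshold. The record layout, the one-dimensional positions, and the threshold of 10.0 are illustrative assumptions:

```python
def check_association(track):
    """Check two of the association conditions on one associated
    single-camera track: (1) the target identifier at each moment is
    unique, and (2) the spread between positions recorded at the same
    moment stays within a preset difference threshold."""
    DIFF_THRESHOLD = 10.0  # illustrative value, not from the patent
    by_time = {}
    for t, target_id, pos in track["points"]:
        by_time.setdefault(t, []).append((target_id, pos))
    for entries in by_time.values():
        ids = {target_id for target_id, _ in entries}
        if len(ids) > 1:
            return False  # condition violated: ambiguous target identifier
        xs = [pos for _, pos in entries]
        if max(xs) - min(xs) > DIFF_THRESHOLD:
            return False  # condition violated: positions too far apart
    return True
```

A track that fails either check would be handed to the repair operations of the embodiment.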
Optionally, the target track acquisition module 404 is further configured to:
if the track data with the associated abnormality exists, carrying out the following repairing operation, and taking the associated track data to which the repaired track data belongs as the track of the corresponding target movement:
aiming at the single-camera track data with non-unique target identifiers corresponding to the same moment, determining the target identifier which is unique and corresponds to the single-camera track data from a plurality of target identifiers which are corresponding to the single-camera track data;
for the single-camera track data with the difference value between the plurality of position data being larger than a preset difference threshold value, eliminating position data which do not belong to the single-camera track data, and associating track data which belong to a target corresponding to the eliminated position data with the single-camera track data to which the eliminated position data belong;
aiming at the unassociated single-camera track data when the association is completed, associating the single-camera track data belonging to the same target in the unassociated single-camera track data;
for single-camera track data with the similarity between corresponding different image data smaller than a first similarity threshold, eliminating image data not belonging to the single-camera track data, and establishing a corresponding relation between the eliminated image data and the single-camera track data to which the eliminated image data belongs;
And eliminating the image data which do not belong to the same target aiming at the corresponding track data which are subjected to association and have the similarity smaller than a second similarity threshold value among different image data which belong to the same target, and establishing the corresponding relation between the eliminated image data and the target to which the eliminated image data belong.
As shown in fig. 5, the structure of a multi-target track acquisition system according to an embodiment of the present invention includes: at least one camera 501, and a management background 502 communicatively coupled to the camera;
the camera 501 is configured to acquire track data of an object as single-camera track data;
the management background 502 is configured to obtain a plurality of single-camera track data to be associated; respectively acquiring space-time class characteristics between each single-camera track data and historical associated track data; the space-time class features are used for indicating the relation of the corresponding track of each two track data in space-time dimension; the history associated track data is track data which is already associated; based on the obtained space-time class characteristics, a classification model obtained by training in advance is utilized to obtain a classification result of whether each single-camera track data and the history associated track data belong to the same target; the classification model is a model obtained by training a plurality of samples with labels, wherein the samples comprise two sample track data, and the labels are used for indicating whether the two sample track data belong to the same sample target; and based on the classification result, correlating the track data belonging to the same target in the plurality of single-camera track data and the history correlation track data to obtain the track of the target movement.
In the scheme provided by the embodiment of the invention, obtaining the classification result of whether each single-camera track data and the history associated track data belong to the same target is equivalent to determining, from the plurality of single-camera track data, which target each single-camera track data belongs to. On this basis, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data can be associated to obtain the track of the target's motion. Therefore, when images of multiple targets are acquired across cameras to obtain each target's motion track, the scheme is limited neither by the monitoring range of a single camera nor by any requirement that the monitoring ranges of two adjacent cameras overlap. It is thus applicable to scenes where the area to be monitored is large while the number of cameras and their possible mounting positions are limited, thereby expanding the scenes to which multi-target track acquisition applies.
The embodiment of the invention also provides a management background applied to the multi-target track acquisition system shown in fig. 5, as shown in fig. 6, comprising a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 complete communication with each other through the communication bus 604,
A memory 603 for storing a computer program;
the processor 601 is configured to execute the program stored in the memory 603, and implement the following steps:
acquiring a plurality of single-camera track data to be associated; the single-camera track data are track data of a target acquired by a camera;
respectively acquiring space-time class characteristics between each single-camera track data and historical associated track data; the space-time class features are used for indicating the relation of the corresponding track of each two track data in space-time dimension; the history associated track data is track data which is already associated;
based on the obtained space-time class characteristics, a classification model obtained by training in advance is utilized to obtain a classification result of whether each single-camera track data and the history associated track data belong to the same target; the classification model is a model obtained by training a plurality of samples with labels, wherein the samples comprise two sample track data, and the labels are used for indicating whether the two sample track data belong to the same sample target;
and based on the classification result, correlating the track data belonging to the same target in the plurality of single-camera track data and the history correlation track data to obtain the track of the target movement.
The embodiment of the invention also provides a camera applied to the multi-target track acquisition system shown in fig. 5, as shown in fig. 7, comprising a processor 701, a communication interface 702, a memory 703 and a communication bus 704, wherein the processor 701, the communication interface 702 and the memory 703 complete communication with each other through the communication bus 704,
a memory 703 for storing a computer program;
the processor 701 is configured to execute the program stored in the memory 703, and implement the following steps:
acquiring track data of a target as single-camera track data; the single-camera track data is track data of a target acquired by the camera.
The communication bus mentioned above for the electronic devices may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figures, but this does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment of the present invention, a computer readable storage medium is provided, in which a computer program is stored, which when executed by a processor, implements the steps of any of the multi-target trajectory acquisition methods described above.
In the scheme provided by the embodiment of the invention, obtaining the classification result of whether each single-camera track data and the history associated track data belong to the same target is equivalent to determining, from the plurality of single-camera track data, which target each single-camera track data belongs to. On this basis, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data can be associated to obtain the track of the target's motion. Therefore, when images of multiple targets are acquired across cameras to obtain each target's motion track, the scheme is limited neither by the monitoring range of a single camera nor by any requirement that the monitoring ranges of two adjacent cameras overlap. It is thus applicable to scenes where the area to be monitored is large while the number of cameras and their possible mounting positions are limited, thereby expanding the scenes to which multi-target track acquisition applies.
In yet another embodiment of the present invention, a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the multi-target trajectory acquisition methods of the above embodiments is also provided.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., Solid State Disk (SSD)), or the like.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for apparatus and system embodiments, the description is relatively simple, as it is substantially similar to method embodiments, with reference to the description of method embodiments in part.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A multi-target trajectory acquisition method, the method comprising:
acquiring a plurality of single-camera track data to be associated; the single-camera track data are track data of a target acquired by a camera;
respectively acquiring space-time class characteristics between each single-camera track data and historical associated track data; the space-time class features are used for indicating the relation of the corresponding track of each two track data in space-time dimension; the history associated track data is track data which is already associated;
based on the obtained space-time class characteristics, a classification model obtained by training in advance is utilized to obtain a classification result of whether each single-camera track data and the history associated track data belong to the same target; the classification model is a model obtained by training a plurality of samples with labels, wherein the samples comprise two sample track data, and the labels are used for indicating whether the two sample track data belong to the same sample target;
Based on the classification result, correlating the track data belonging to the same target in the plurality of single-camera track data and the history correlation track data to obtain the track of the target movement;
wherein the spatiotemporal class features comprise: spatial class features and temporal class features;
the spatial class features include at least one of the following features: a feature indicating whether each of the two pieces of track data contains a first designated position and a second designated position, a feature indicating the topological relation between each two cameras of the plurality of cameras, and a feature indicating information about targets within a reference range; the first designated position and the second designated position are positions that allow the tracks of the target's motion to be associated; the reference range includes: a range within a preset distance of the track corresponding to each of the two pieces of track data;
the temporal class features include at least one of the following features: a feature indicating the similarity between the formation times of the tracks corresponding to the two pieces of track data, a feature indicating the connection relation between the tracks corresponding to the two pieces of track data, a feature indicating the relation between the motion trends of the tracks corresponding to the two pieces of track data, and a feature indicating information about targets within the reference range;
the feature indicating the similarity between the formation times of the tracks corresponding to the two pieces of track data is determined from the overlap between the formation times of the tracks corresponding to the two pieces of track data and the time length corresponding to that overlap;
based on the classification result, correlating the track data belonging to the same target in the plurality of single-camera track data and the history correlation track data to obtain a track of the target motion, including:
acquiring image type characteristics corresponding to each single-camera track data and each history associated track data respectively; the image class features are used for indicating the relation of targets in image dimensions in the image data corresponding to each two track data respectively; based on the classification result, correlating the track data belonging to the same target in the plurality of single-camera track data and the history correlation track data; determining whether track data with abnormal association exists in the associated track data according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension; if the track data does not exist, the associated track data is used as the track of the corresponding target motion;
After determining whether there is abnormal track data in the associated track data according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension, the method further comprises:
if the track data with the associated abnormality exists, carrying out the following repairing operation, and taking the associated track data to which the repaired track data belongs as the track of the corresponding target movement: aiming at the single-camera track data with non-unique target identifiers corresponding to the same moment, determining the target identifier which is unique and corresponds to the single-camera track data from a plurality of target identifiers which are corresponding to the single-camera track data; for the single-camera track data with the difference value between the plurality of position data being larger than a preset difference threshold value, eliminating position data which do not belong to the single-camera track data, and associating track data which belong to a target corresponding to the eliminated position data with the single-camera track data to which the eliminated position data belong; aiming at the unassociated single-camera track data when the association is completed, associating the single-camera track data belonging to the same target in the unassociated single-camera track data; for single-camera track data with the similarity between corresponding different image data smaller than a first similarity threshold, eliminating image data not belonging to the single-camera track data, and establishing a corresponding relation between the eliminated image data and the single-camera track data to which the eliminated image data belongs; and eliminating the image data which do not belong to the same target aiming at the corresponding track data which are subjected to association and have the similarity smaller than a second similarity threshold value among different image data which belong to the same target, and establishing the corresponding relation between the eliminated image data and the target to which the eliminated image data belong.
2. The method of claim 1, wherein each of the plurality of labeled samples further comprises: image data corresponding to the sample track data;
before obtaining, based on the acquired space-time class features and using a classification model obtained through pre-training, the classification result of whether each single-camera track data and the history associated track data belong to the same target, the method further comprises:
respectively acquiring image class features of each single-camera track data and the history associated track data; the image class features are used for indicating the relation, in the image dimension, of the targets in the image data corresponding to each two pieces of track data;
obtaining, based on the acquired space-time class features and using the classification model obtained through pre-training, the classification result of whether each single-camera track data and the history associated track data belong to the same target comprises:
inputting the acquired space-time class features and image class features into the pre-trained classification model, and obtaining the classification result of whether each single-camera track data and the history associated track data belong to the same target.
3. The method according to claim 2, wherein the classification model is trained in advance by:
acquiring track data of a plurality of sample targets collected by a plurality of cameras;
taking each two pieces of track data among the track data of the plurality of sample targets, together with the image data corresponding to the two pieces of track data, as one sample, and labeling each sample;
acquiring the space-time class features and image class features of each sample; and
training a preset model using the space-time class features, the image class features, and the label of each sample to obtain the classification model.
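The training procedure of claim 3 pairs every two sample trajectories and labels each pair by whether both belong to the same sample target. A minimal Python sketch of that sample-construction step follows; the data layout and names are assumptions, and feature extraction and the downstream model are left abstract:

```python
import itertools

def build_labeled_samples(sample_tracks):
    """Build classifier training samples as in claim 3: each pair of
    sample trajectories (with their image data) is one sample, labeled
    1 when both trajectories belong to the same sample target, else 0.

    sample_tracks: list of (sample_target_id, track_data, image_data).
    """
    samples = []
    for (id_a, trk_a, img_a), (id_b, trk_b, img_b) in itertools.combinations(sample_tracks, 2):
        # Space-time and image class features would be extracted from
        # these raw inputs before training; kept raw here for brevity.
        features = (trk_a, trk_b, img_a, img_b)
        label = 1 if id_a == id_b else 0
        samples.append((features, label))
    return samples
```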
4. The method of claim 2, wherein the image class features include at least one of the following features:
features indicating, for the two targets indicated by the image data corresponding to the two pieces of track data: the similarity between their contours, the similarity between designated parts, and the classification labels obtained by classifying the two targets.
5. The method of claim 1, wherein the classification result comprises: confidence of whether each single-camera track data and the history associated track data belong to the same target;
based on the classification result, associating the track data belonging to the same target among the plurality of single-camera track data and the history associated track data to obtain a track of the target motion comprises:
acquiring a preset number of association layers;
associating, in descending order of confidence, up to the preset number of association layers of single-camera track data among the plurality of single-camera track data with the history associated track data, to obtain the track of the target motion corresponding to the associated track data.
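Claim 5's confidence-ordered association can be read as a greedy pass: rank candidate single-camera tracks by the classifier's confidence and associate at most a preset number of them ("association layers") with the history associated track data. A hedged sketch, with the candidate representation assumed:

```python
def associate_by_confidence(candidates, num_layers):
    """Greedy association in descending order of confidence, limited to
    a preset number of association layers (claim 5, as interpreted here).

    candidates: list of (single_camera_track, confidence) pairs.
    Returns the tracks chosen for association with the history
    associated track data.
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [track for track, _conf in ranked[:num_layers]]
```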
6. The method according to claim 1, wherein determining, according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension, whether track data with an association anomaly exists in the associated track data comprises:
determining that track data with an association anomaly exists in the associated track data when the associated track data does not satisfy any one of a plurality of association conditions;
wherein the plurality of association conditions comprises: in the associated track data, each single-camera track data belongs to one camera, and its target identifier at any one instant is unique; in the associated track data, for single-camera track data in which a plurality of position data share the same forming time, the difference between the plurality of position data is smaller than or equal to a preset difference threshold; when association is completed, no unassociated single-camera track data remains among the associated track data; in the associated track data, the similarity between different image data corresponding to the same single-camera track data is greater than or equal to a first similarity threshold; and, in the associated track data, the similarity between different image data belonging to the same target is greater than or equal to a second similarity threshold.
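Two of claim 6's association conditions — a unique target identifier per instant, and same-time position data within a preset difference threshold — lend themselves to a direct check. The sketch below assumes a simplified layout (scalar positions, a dict keyed by timestamp) purely for illustration:

```python
def check_association_conditions(track, diff_threshold):
    """Flag association anomalies on one single-camera track:
    (a) more than one target identifier at the same instant, or
    (b) same-instant position data differing by more than the preset
    difference threshold. Positions are scalars here for simplicity.

    track: dict mapping timestamp -> list of (target_id, position).
    Returns a list of (timestamp, reason) anomaly records.
    """
    anomalies = []
    for t, observations in track.items():
        ids = {tid for tid, _ in observations}
        if len(ids) > 1:
            anomalies.append((t, "non-unique target identifier"))
        positions = [pos for _, pos in observations]
        if positions and max(positions) - min(positions) > diff_threshold:
            anomalies.append((t, "position difference above threshold"))
    return anomalies
```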
7. A multi-target trajectory acquisition device, the device comprising:
the track data acquisition module is used for acquiring a plurality of single-camera track data to be associated; the single-camera track data are track data of a target acquired by a camera;
the track feature extraction module is used for respectively acquiring space-time class features between each single-camera track data and the history associated track data; the space-time class features are used for indicating the relation of the corresponding track of each two track data in space-time dimension; the history associated track data is track data which is already associated;
the track data classification module is used for acquiring a classification result of whether each single-camera track data and the history associated track data belong to the same target or not by utilizing a classification model obtained through pre-training based on the acquired space-time class characteristics; the classification model is a model obtained by training a plurality of samples with labels, wherein the samples comprise two sample track data, and the labels are used for indicating whether the two sample track data belong to the same sample target;
the target track acquisition module is used for associating track data belonging to the same target in the plurality of single-camera track data and the history associated track data based on the classification result to obtain a track of the target movement;
Wherein the spatiotemporal class features comprise: spatial class features and temporal class features;
the spatial class features include at least one of the following features: a feature indicating whether each of the two pieces of track data contains a first designated position and a second designated position, respectively, a feature indicating the topological relation between each two cameras of the plurality of cameras, and a feature indicating information of targets within a reference range; the first designated position and the second designated position are positions that allow tracks of the target motion to be associated; the reference range comprises: a range within a preset distance of the track corresponding to each of the two pieces of track data;
the time class features include at least one of the following features: a feature indicating the similarity between the forming times of the tracks corresponding to each two pieces of track data, a feature indicating the connection relation in time of the tracks corresponding to each two pieces of track data, a feature indicating the relation between the motion trends of the tracks corresponding to each two pieces of track data, and a feature indicating information of targets within the reference range;
the feature indicating the similarity between the forming times of the tracks corresponding to each two pieces of track data is determined by the overlapping time among those forming times and the duration corresponding to the overlapping time;
The target track acquisition module is specifically configured to:
acquiring the image class features respectively corresponding to each single-camera track data and the history associated track data; the image class features are used for indicating the relation, in the image dimension, of the targets in the image data corresponding to each two pieces of track data; associating, based on the classification result, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data; determining, according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension, whether track data with an association anomaly exists in the associated track data; and if no such track data exists, taking the associated track data as the track of the corresponding target motion;
the target track acquisition module is further configured to:
if track data with an association anomaly exists, perform the following repair operations, and take the associated track data to which the repaired track data belongs as the track of the corresponding target motion: for single-camera track data whose target identifier at the same instant is not unique, determine the unique target identifier corresponding to the single-camera track data from among the plurality of corresponding target identifiers; for single-camera track data in which the difference between a plurality of position data exceeds a preset difference threshold, eliminate the position data that does not belong to the single-camera track data, and associate the track data of the target corresponding to the eliminated position data with the single-camera track data to which the eliminated position data belonged; for single-camera track data left unassociated when association is completed, associate the track data among it that belongs to the same target; for single-camera track data in which the similarity between corresponding different image data is smaller than a first similarity threshold, eliminate the image data that does not belong to the single-camera track data, and establish a correspondence between the eliminated image data and the single-camera track data to which it belongs; and, for associated track data in which the similarity between different image data belonging to the same target is smaller than a second similarity threshold, eliminate the image data that does not belong to the same target, and establish a correspondence between the eliminated image data and the target to which it belongs.
8. A multi-target trajectory acquisition system, the system comprising: at least one camera, and a management background communicatively coupled to the camera;
the camera is used for acquiring track data of a target and is used as single-camera track data;
the management background is used for acquiring a plurality of single-camera track data to be associated; respectively acquiring space-time class features between each single-camera track data and history associated track data; the space-time class features are used for indicating the relation, in the space-time dimension, of the tracks corresponding to each two pieces of track data; the history associated track data is track data that has already been associated; obtaining, based on the acquired space-time class features and using a classification model obtained through pre-training, a classification result of whether each single-camera track data and the history associated track data belong to the same target; the classification model is obtained by training with a plurality of labeled samples, wherein each sample comprises two pieces of sample track data, and the label is used for indicating whether the two pieces of sample track data belong to the same sample target; and associating, based on the classification result, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data to obtain the track of the target motion;
Wherein the spatiotemporal class features comprise: spatial class features and temporal class features;
the spatial class features include at least one of the following features: a feature indicating whether each of the two pieces of track data contains a first designated position and a second designated position, respectively, a feature indicating the topological relation between each two cameras of the plurality of cameras, and a feature indicating information of targets within a reference range; the first designated position and the second designated position are positions that allow tracks of the target motion to be associated; the reference range comprises: a range within a preset distance of the track corresponding to each of the two pieces of track data;
the time class features include at least one of the following features: a feature indicating the similarity between the forming times of the tracks corresponding to each two pieces of track data, a feature indicating the connection relation in time of the tracks corresponding to each two pieces of track data, a feature indicating the relation between the motion trends of the tracks corresponding to each two pieces of track data, and a feature indicating information of targets within the reference range;
the feature indicating the similarity between the forming times of the tracks corresponding to each two pieces of track data is determined by the overlapping time among those forming times and the duration corresponding to the overlapping time;
The management background associating, based on the classification result, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data to obtain the track of the target motion comprises:
acquiring the image class features respectively corresponding to each single-camera track data and the history associated track data; the image class features are used for indicating the relation, in the image dimension, of the targets in the image data corresponding to each two pieces of track data; associating, based on the classification result, the track data belonging to the same target among the plurality of single-camera track data and the history associated track data; determining, according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension, whether track data with an association anomaly exists in the associated track data; and if no such track data exists, taking the associated track data as the track of the corresponding target motion;
the management background is further configured to:
after determining, according to the relation of the track data belonging to the same target in the space-time dimension and/or the image dimension, whether track data with an association anomaly exists in the associated track data: if such track data exists, perform the following repair operations on it, and take the associated track data to which the repaired track data belongs as the track of the corresponding target motion: for single-camera track data whose target identifier at the same instant is not unique, determine the unique target identifier corresponding to the single-camera track data from among the plurality of corresponding target identifiers; for single-camera track data in which the difference between a plurality of position data exceeds a preset difference threshold, eliminate the position data that does not belong to the single-camera track data, and associate the track data of the target corresponding to the eliminated position data with the single-camera track data to which the eliminated position data belonged; for single-camera track data left unassociated when association is completed, associate the track data among it that belongs to the same target; for single-camera track data in which the similarity between corresponding different image data is smaller than a first similarity threshold, eliminate the image data that does not belong to the single-camera track data, and establish a correspondence between the eliminated image data and the single-camera track data to which it belongs; and, for associated track data in which the similarity between different image data belonging to the same target is smaller than a second similarity threshold, eliminate the image data that does not belong to the same target, and establish a correspondence between the eliminated image data and the target to which it belongs.
9. An electronic device, applied to a multi-target trajectory acquisition system, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-6 when executing the program stored in the memory.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-6.
CN202011364337.2A 2020-11-27 2020-11-27 Multi-target track acquisition method, device, system and storage medium Active CN112465866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011364337.2A CN112465866B (en) 2020-11-27 2020-11-27 Multi-target track acquisition method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN112465866A CN112465866A (en) 2021-03-09
CN112465866B true CN112465866B (en) 2024-02-02

Family

ID=74809254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011364337.2A Active CN112465866B (en) 2020-11-27 2020-11-27 Multi-target track acquisition method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN112465866B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536084B (en) * 2021-06-29 2022-10-14 中国人民解放军战略支援部队航天工程大学 Space-time event extraction system and method
CN114066974A (en) * 2021-11-17 2022-02-18 上海高德威智能交通***有限公司 Target track generation method and device, electronic equipment and medium
CN114973143B (en) * 2022-06-17 2023-07-07 湖南中科助英智能科技研究院有限公司 Low-altitude aircraft robust detection method integrating motion characteristics
CN115169588A (en) * 2022-09-06 2022-10-11 深圳前海中电慧安科技有限公司 Electrographic computation space-time trajectory vehicle code correlation method, device, equipment and storage medium
CN116363565B (en) * 2023-06-01 2023-08-11 深圳须弥云图空间科技有限公司 Target track determining method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436662A (en) * 2011-11-29 2012-05-02 南京信息工程大学 Human body target tracking method in nonoverlapping vision field multi-camera network
CN104112282A (en) * 2014-07-14 2014-10-22 华中科技大学 A method for tracking a plurality of moving objects in a monitor video based on on-line study
CN107016374A (en) * 2017-04-12 2017-08-04 电子科技大学 Intelligent Measurement tracking and the generation method of space-time track towards specific objective
CN107145862A (en) * 2017-05-05 2017-09-08 山东大学 A kind of multiple features matching multi-object tracking method based on Hough forest
CN110378931A (en) * 2019-07-10 2019-10-25 成都数之联科技有限公司 A kind of pedestrian target motion track acquisition methods and system based on multi-cam
CN110728702A (en) * 2019-08-30 2020-01-24 深圳大学 High-speed cross-camera single-target tracking method and system based on deep learning
CN110969644A (en) * 2018-09-28 2020-04-07 杭州海康威视数字技术股份有限公司 Personnel trajectory tracking method, device and system
CN111724412A (en) * 2020-06-17 2020-09-29 杭州海康威视数字技术股份有限公司 Method and device for determining motion trail and computer storage medium
CN111784728A (en) * 2020-06-29 2020-10-16 杭州海康威视数字技术股份有限公司 Track processing method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2545658A (en) * 2015-12-18 2017-06-28 Canon Kk Methods, devices and computer programs for tracking targets using independent tracking modules associated with cameras

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Tracking Across Multiple Cameras With Disjoint Views"; Omar Javed et al.; Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV'03); full text *
"Cross-View Recognition of Human Targets" (人体目标的跨视域识别); Xiang Zongjie; China Doctoral Dissertations Full-text Database, Information Science and Technology; Sections 1.3, 1.4, 2.1, and 4.1 *
"Research on Visual Multi-Object Tracking Based on Object Detection" (基于目标检测的视觉多目标跟踪技术研究); Lan Long; China Doctoral Dissertations Full-text Database, Information Science and Technology; Section 1.2 and Chapter 3 *
"Multi-Object Tracking Based on Trajectory Association" (基于轨迹关联的多目标跟踪); Xu Zheng et al.; Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition); Vol. 37, No. 2; full text *

Also Published As

Publication number Publication date
CN112465866A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN112465866B (en) Multi-target track acquisition method, device, system and storage medium
CN108133172B (en) Method for classifying moving objects in video and method and device for analyzing traffic flow
CN107145862B (en) Multi-feature matching multi-target tracking method based on Hough forest
Bai et al. Traffic anomaly detection via perspective map based on spatial-temporal information matrix.
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
CN112770265B (en) Pedestrian identity information acquisition method, system, server and storage medium
Wei et al. City-scale vehicle tracking and traffic flow estimation using low frame-rate traffic cameras
CN108932509A (en) A kind of across scene objects search methods and device based on video tracking
CN114241016A (en) Cross-camera track association method and device and electronic equipment
CN112651398B (en) Snapshot control method and device for vehicle and computer readable storage medium
WO2023197232A1 (en) Target tracking method and apparatus, electronic device, and computer readable medium
CN112507860A (en) Video annotation method, device, equipment and storage medium
Bashar et al. Multiple object tracking in recent times: A literature review
CN114898326A (en) Method, system and equipment for detecting reverse running of one-way vehicle based on deep learning
CN116311063A (en) Personnel fine granularity tracking method and system based on face recognition under monitoring video
CN109389177B (en) Tunnel vehicle re-identification method based on cooperative cascade forest
Karim et al. An attention-guided multistream feature fusion network for early localization of risky traffic agents in driving videos
CN111950507B (en) Data processing and model training method, device, equipment and medium
CN112150814B (en) Information processing method and device based on intelligent traffic and intelligent traffic system
CN115187884A (en) High-altitude parabolic identification method and device, electronic equipment and storage medium
CN115620098B (en) Evaluation method and system of cross-camera pedestrian tracking algorithm and electronic equipment
US20230081930A1 (en) Data collection device, data collection method, and data collection program
CN113361392B (en) Unsupervised multi-mode pedestrian re-identification method based on camera and wireless positioning
Jiang et al. Vehicle tracking with non-overlapping views for multi-camera surveillance system
Bilecen et al. Video anomaly detection with autoregressive modeling of covariance features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant