CN110766719A - Target tracking method, device and storage medium - Google Patents

Target tracking method, device and storage medium

Info

Publication number
CN110766719A
CN110766719A (application CN201910895581.2A)
Authority
CN
China
Prior art keywords
point cloud
current target
matching
cloud cluster
target
Prior art date
Legal status
Granted
Application number
CN201910895581.2A
Other languages
Chinese (zh)
Other versions
CN110766719B (en)
Inventor
冯强
疏达
李�远
Current Assignee
Beixing (beijing) Photon Technology Co Ltd
Original Assignee
Beixing (beijing) Photon Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beixing (beijing) Photon Technology Co Ltd
Priority claimed from application CN201910895581.2A
Publication of CN110766719A
Application granted
Publication of CN110766719B
Legal status: Active

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F: Electric Digital Data Processing
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds


Abstract

The embodiments of this specification provide a target tracking method, a target tracking device, and a storage medium. The method comprises the following steps: performing density clustering on the current frame of point cloud data to obtain at least one point cloud cluster; matching the point cloud cluster with at least one current target using at least one matching mode to obtain a matching result, and marking the target corresponding to the point cloud cluster according to the matching result. Either the matching result indicates that no current target matches the point cloud cluster, in which case the target corresponding to the point cloud cluster is a new target, or the matching result indicates a current target uniquely matched with the point cloud cluster, in which case the target corresponding to the point cloud cluster is the current target indicated by the matching result. The at least one matching mode comprises: feature matching, spatial matching, and matching according to a weighting result. The embodiments of the invention can improve the accuracy of target tracking.

Description

Target tracking method, device and storage medium
Technical Field
The embodiments of the present disclosure relate to the field of target tracking technologies, and in particular, to a target tracking method, device, and storage medium.
Background
With the continuous improvement in the computing power of hardware such as processors and the rapid development of software algorithms, target tracking implementations have evolved from traditional tracking based on image processing to tracking based on point cloud data.
Generally, a target tracking scheme based on point cloud data is to segment and cluster point cloud data, extract a local or global feature descriptor operator, and then match the operator with a reference descriptor operator to establish association so as to realize target tracking.
If identical or similar feature descriptor operators are extracted, the matching may be inaccurate, resulting in association errors. For example, if the extracted feature descriptor operators A and B are the same and their corresponding targets should be A' and B' respectively, then when matching against the reference descriptor operators it is possible to erroneously match both A and B with the reference descriptor operator corresponding to target A'.
Disclosure of Invention
The embodiment of the invention provides a target tracking method, target tracking equipment and a storage medium, which are used for improving the accuracy of target tracking.
The embodiment of the invention provides a target tracking method, which comprises the following steps:
performing density clustering on the current frame point cloud data to obtain at least one point cloud cluster;
matching the point cloud cluster with at least one current target by adopting at least one matching mode to obtain a matching result, and marking the target corresponding to the point cloud cluster according to the matching result; the matching result indicates that no current target is matched with the point cloud cluster, and the target corresponding to the point cloud cluster is a new target, or the matching result indicates the current target which is uniquely matched with the point cloud cluster, and the target corresponding to the point cloud cluster is the current target indicated by the matching result;
the at least one matching method includes:
carrying out feature matching on a feature descriptor operator of the point cloud cluster and a feature descriptor operator of a current target;
carrying out space matching on the space position of the point cloud cluster and the estimated space position of the current target;
weighting the feature matching deviation between the point cloud cluster and the current target, the spatial matching deviation between the point cloud cluster and the current target, and the motion parameter matching deviation between the point cloud cluster and the current target, and matching according to the weighting result; the feature matching deviation is the deviation between the feature descriptor operator of the point cloud cluster and that of the current target, the spatial matching deviation is the deviation between the spatial position of the point cloud cluster and the estimated spatial position of the current target, and the motion parameter matching deviation is the deviation between the motion parameter of the point cloud cluster and the estimated motion parameter of the current target.
The target tracking method provided by the embodiment of the invention can adopt a plurality of modes to carry out combined matching so as to obtain a unique matching result, thereby avoiding the situation of matching errors, avoiding target association errors and improving the accuracy of target tracking.
Based on the same inventive concept as the above-mentioned target tracking method embodiment, an embodiment of the present invention further provides a target tracking apparatus, including:
a memory and a processor;
the memory is used for storing data used when the processor executes the computer program;
the processor is used for executing the computer program to realize the method provided by the target tracking method embodiment.
The target tracking device provided by the embodiment of the invention can adopt a plurality of modes to carry out combined matching so as to obtain a unique matching result, thereby avoiding the situation of matching errors, avoiding target association errors and improving the accuracy of target tracking.
Based on the same inventive concept as the above-mentioned target tracking method embodiment, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the following steps:
performing density clustering on the current frame point cloud data to obtain at least one point cloud cluster;
matching the point cloud cluster with at least one current target by adopting at least one matching mode to obtain a matching result, and marking the target corresponding to the point cloud cluster according to the matching result; the matching result indicates that no current target is matched with the point cloud cluster, and the target corresponding to the point cloud cluster is a new target, or the matching result indicates the current target which is uniquely matched with the point cloud cluster, and the target corresponding to the point cloud cluster is the current target indicated by the matching result;
the at least one matching method includes:
carrying out feature matching on a feature descriptor operator of the point cloud cluster and a feature descriptor operator of a current target;
carrying out space matching on the space position of the point cloud cluster and the estimated space position of the current target;
weighting the feature matching deviation between the point cloud cluster and the current target, the spatial matching deviation between the point cloud cluster and the current target, and the motion parameter matching deviation between the point cloud cluster and the current target, and matching according to the weighting result; the feature matching deviation is the deviation between the feature descriptor operator of the point cloud cluster and that of the current target, the spatial matching deviation is the deviation between the spatial position of the point cloud cluster and the estimated spatial position of the current target, and the motion parameter matching deviation is the deviation between the motion parameter of the point cloud cluster and the estimated motion parameter of the current target.
The storage medium provided by the embodiment of the invention can adopt a plurality of modes for combined matching, so that a unique matching result is obtained, the matching error condition is avoided, the target association error is avoided, and the target tracking accuracy is improved.
Drawings
Fig. 1 is a schematic flow chart of a target tracking method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a target tracking method according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of an object in a detection scenario according to an embodiment of the present invention;
fig. 4 is a schematic diagram of space division according to an embodiment of the present invention.
Detailed Description
In order to better understand the technical solutions, the technical solutions of the embodiments of the present specification are described in detail below with reference to the drawings and specific embodiments, and it should be understood that the specific features of the embodiments and embodiments of the present specification are detailed descriptions of the technical solutions of the embodiments of the present specification, and are not limitations of the technical solutions of the present specification, and the technical features of the embodiments and embodiments of the present specification may be combined with each other without conflict.
An embodiment of the present invention provides a target tracking method, as shown in fig. 1, including the following operations:
step 101, performing density clustering on current frame point cloud data to obtain at least one point cloud cluster.
In the embodiment of the present invention, the point cloud data may be acquired by a laser radar, may also be acquired by a depth camera, and may also have other acquisition manners, which is not limited in the embodiment of the present invention.
The point cloud data is a set of a plurality of sample points, and each sample point is represented by a spatial coordinate.
Preferably, the point cloud data is preprocessed before density clustering. The preprocessing may be, but is not limited to, filtering, so as to remove sample points within the close-range blind area; the distances of sample points in the blind area are severely distorted and would otherwise affect subsequent processing.
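By way of illustration only (not part of the original disclosure), a minimal Python sketch of such a blind-zone pre-filter is given below; the sensor-at-origin coordinate frame, the 0.5 m blind-zone radius and the function name are assumptions of the example.

```python
import numpy as np

def filter_blind_zone(points: np.ndarray, min_range: float = 0.5) -> np.ndarray:
    """Drop sample points that fall inside the close-range blind area.

    points    : (N, 3) array of spatial coordinates (x, y, z) of one frame.
    min_range : assumed blind-zone radius around the sensor, in metres.
    """
    distances = np.linalg.norm(points, axis=1)   # range of each sample point from the sensor
    return points[distances >= min_range]        # keep only points outside the blind zone
```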
In the embodiment of the invention, only one point cloud cluster can be obtained through density clustering, but a plurality of point cloud clusters can be obtained under normal conditions. Each point cloud cluster corresponds to one target in an actual detection scene, but the target corresponding to each point cloud cluster cannot be determined.
102, matching the point cloud cluster with at least one current target by adopting at least one matching mode to obtain a matching result, and marking the target corresponding to the point cloud cluster according to the matching result; the matching result indicates that no current target is matched with the point cloud cluster, and the target corresponding to the point cloud cluster is a new target, or the matching result indicates a current target uniquely matched with the point cloud cluster, and the target corresponding to the point cloud cluster is the current target indicated by the matching result.
The unique matching of the point cloud clusters and the current target means that one point cloud cluster is only matched with one current target, and one current target is only matched with one point cloud cluster.
In the embodiment of the present invention, the step 101 and the step 102 form a complete target tracking process, which may be referred to as a target tracking period, in which a target tracking period for current frame point cloud data is referred to as a current target tracking period, and then a current target refers to a target marked after a previous target tracking period of the current target tracking period is completed.
The targets marked in each target tracking period correspond to the point cloud clusters in the target tracking period one by one, so that the current target can be one or more.
Taking a plurality of point cloud clusters and a plurality of current targets as an example, in step 102, each current target of each point cloud cluster is respectively matched by adopting at least one matching mode.
The at least one matching method includes:
102a, carrying out feature matching on a feature descriptor operator of the point cloud cluster and a feature descriptor operator of a current target;
102b, carrying out space matching on the space position of the point cloud cluster and the estimated space position of the current target;
102c, weighting the feature matching deviation between the point cloud cluster and the current target, the spatial matching deviation between the point cloud cluster and the current target, and the motion parameter matching deviation between the point cloud cluster and the current target, and matching according to the weighting result; the feature matching deviation is the deviation between the feature descriptor operator of the point cloud cluster and that of the current target, the spatial matching deviation is the deviation between the spatial position of the point cloud cluster and the estimated spatial position of the current target, and the motion parameter matching deviation is the deviation between the motion parameter of the point cloud cluster and the estimated motion parameter of the current target.
The weight value of the weighting process may be determined according to the actual situation of the engineering application. Preferably, the weight of the feature matching deviation of the point cloud cluster and the current target, the spatial matching deviation of the point cloud cluster and the current target, and the motion parameter matching deviation of the point cloud cluster and the current target are the same.
According to the method provided by the embodiment of the invention, the point cloud cluster and the current target can be matched in different matching modes, so that a unique matching result is obtained, the situation of matching errors is avoided, the target association errors are avoided, and the target tracking accuracy is improved.
In the embodiment of the present invention, there are various matching manners in the step 102, and some of them are exemplified below.
In one implementation:
and (3) feature matching: extracting a feature descriptor operator of the point cloud cluster, acquiring a feature descriptor operator of a current target, performing feature matching on the feature descriptor operator of the point cloud cluster and the feature descriptor operator of the current target, wherein the obtained feature matching result indicates that the feature descriptor operator without the current target is matched with the feature descriptor operator of the point cloud cluster, or the feature matching result indicates that at least one feature descriptor operator of the current target is matched with the feature descriptor operator of the point cloud cluster;
spatial matching: acquiring the spatial position of a point cloud cluster, acquiring the estimated spatial position of a current target, and performing spatial matching on the spatial position of the point cloud cluster and the estimated spatial position of the current target, wherein the acquired spatial matching result indicates that the spatial position without the current target is matched with the spatial position of the point cloud cluster, or the spatial matching result indicates that the spatial position of at least one current target is matched with the spatial position of the point cloud cluster;
after determining that the feature descriptor operator without the current target is matched with the feature descriptor operator of the point cloud cluster and the spatial position without the current target is matched with the spatial position of the point cloud cluster, obtaining a matching result indicating that no current target is matched with the point cloud cluster;
after the unique matching of the feature descriptor operator of the current target and the feature descriptor operator of the point cloud cluster is determined, a matching result of the current target indicating the matching of the feature descriptor operator and the point cloud cluster is obtained;
when the feature descriptor operators of at least two point cloud clusters match the feature descriptor operator of the same current target, but the spatial position of only one of those point cloud clusters matches the spatial position of that current target, a matching result indicating that current target is obtained for that point cloud cluster;
after determining that the feature descriptor operators of at least two point cloud clusters are matched with the feature descriptor operators of the same current target and the spatial positions of at least two point cloud clusters are matched with the spatial position of the same current target, acquiring the feature matching deviation of the point cloud clusters and the current target, the spatial matching deviation of the point cloud clusters and the current target and the motion parameter matching deviation of the point cloud clusters and the current target, weighting the feature matching deviation of the point cloud clusters and the current target, the spatial matching deviation of the point cloud clusters and the current target and the motion parameter matching deviation of the point cloud clusters and the current target, and matching according to a weighting result.
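By way of illustration only, the decision order of this implementation can be sketched as follows for a single point cloud cluster; the per-cluster candidate sets, the function names and the fallback to a weighted decision over all remaining candidates are simplifying assumptions of the example.

```python
def decide_match(feature_candidates, spatial_candidates, weighted_match):
    """Combine feature matching and spatial matching for one point cloud cluster.

    feature_candidates : set of current-target ids whose feature descriptor operators match the cluster
    spatial_candidates : set of current-target ids whose estimated spatial positions match the cluster
    weighted_match     : callable(candidate_ids) -> target id chosen by the weighted deviation
    Returns the matched current-target id, or None if the cluster is a new target.
    """
    if not feature_candidates and not spatial_candidates:
        return None                               # no current target matches: new target
    if len(feature_candidates) == 1:
        return next(iter(feature_candidates))     # unique feature match wins
    if len(spatial_candidates) == 1:
        return next(iter(spatial_candidates))     # several similar shapes: fall back to the spatial match
    # ambiguous in both modes: weight feature, spatial and motion deviations over remaining candidates
    return weighted_match(feature_candidates | spatial_candidates)
```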
The processing of feature matching and spatial matching may be performed in parallel or in series. If the serial processing is performed, the execution sequence is not limited in the embodiment of the present invention.
In the embodiment of the invention, if the feature matching and the space matching are carried out simultaneously, once the matching result of one matching is not unique, the final matching result can be quickly determined according to the other matching result or other matching modes are carried out, and the processing speed can be improved.
It should be noted that, in the above processing procedure, the matching result of the feature matching is prioritized, and in practical engineering application, the matching result of the spatial matching may also be prioritized.
In another implementation, feature matching and spatial matching are performed first, as described in the previous implementation. If it is determined that no current target's feature descriptor operator matches the feature descriptor operator of the point cloud cluster and no current target's spatial position matches the spatial position of the point cloud cluster, a matching result indicating that no current target matches the point cloud cluster is obtained. If a unique match is found between a current target's feature descriptor operator and the feature descriptor operator of the point cloud cluster, it is further judged whether the spatial position of that current target matches the spatial position of the point cloud cluster; if not, a matching result indicating that the current target does not match the point cloud cluster is obtained (in practical engineering applications this situation almost never occurs); if the spatial matching succeeds but the current targets indicated by the two matching results are different, matching is carried out in the weighted matching mode.
It should be noted that when two objects A' and B' of similar shape are close to each other, a matching error may also take the form of mismatching A with the reference descriptor operator corresponding to object B' and mismatching B with the reference descriptor operator corresponding to object A'. Matching errors of this kind can be effectively avoided by the above implementation.
It should be noted that, in the above processing procedure, the matching result of the feature matching is prioritized, and in practical engineering application, the matching result of the spatial matching may also be prioritized.
In yet another implementation, feature matching and spatial matching may be performed serially, and after a match results in a unique match, no subsequent matching process is performed.
In yet another implementation, the matching result can be obtained only by the above-mentioned weighted matching method.
In the embodiment of the invention, the feature matching specifically refers to comparing a feature descriptor operator of the point cloud cluster with a feature descriptor operator of a current target, and if the similarity of the two exceeds a set threshold, the matching is considered to be successful. By way of example and not limitation, the feature descriptor operator may be mapped into a specified coordinate system, and the similarity between the two feature descriptor operators is described by the distance between the two feature descriptor operators in the coordinate system, but the similarity may also be determined by using other existing feature matching methods.
In the embodiment of the present invention, the spatial matching specifically refers to comparing the spatial position of the point cloud cluster with the spatial position of the current target, and if the similarity between the two exceeds a set threshold, the matching is considered to be successful. By way of example and not limitation, spatial locations may be mapped into a specified coordinate system, and the similarity of two spatial locations may be described by their distance in the coordinate system.
In the embodiment of the present invention, the feature matching deviation may refer to the similarity between the two feature descriptor operators, or to the difference between their values; the spatial matching deviation may refer to the spatial distance between the two spatial positions, or to the difference between their values; the motion parameter matching deviation may be the distance between the two motion parameters in a specified coordinate system, or the difference between their values.
In the embodiment of the invention, the weighting result reflects the deviation between the point cloud cluster and the current target, if the weighting results corresponding to the point cloud cluster and all the current targets indicate that the deviation between the point cloud cluster and all the current targets exceeds the set threshold, the matching result is that no current target is matched with the point cloud cluster, otherwise, the current target corresponding to the weighting result with the minimum deviation is selected as the current target indicated by the matching result.
In the embodiment of the present invention, the feature descriptor operator of the current target may be stored in a form of a model, and then, the above-mentioned feature descriptor operator of the current target is specifically to obtain a feature model of the current target, where the feature model includes all feature descriptor operators of the current target. Correspondingly, the method provided by the embodiment of the invention can also update the feature model according to the matching result so as to be used in the next target tracking period.
Specifically, as an example and not by way of limitation, it is assumed that the feature model before updating includes feature descriptor operators of three current targets, and if three targets are still detected in the current target tracking period and are associated with one current target, the feature descriptor operators of the three targets detected in the current target tracking period are used to replace the feature descriptor operators of the current target associated with each other; if the feature model before updating comprises respective feature descriptor operators of three current targets, if two targets are detected in the current target tracking period and are respectively associated with one current target, respectively replacing the feature descriptor operators of the respective associated current targets by using the feature descriptor operators of the two targets detected in the current target tracking period, and deleting the feature descriptor operators of the unassociated current target; assuming that the feature model before updating includes feature descriptor operators of three current targets, if four targets are detected in the current target tracking period and three of the targets are associated with one current target, the feature descriptor operators of the three targets detected in the current target tracking period are used to replace the feature descriptor operators of the current targets associated with the three targets respectively, and the feature descriptor operator of another detected target is added to the feature model.
It should be noted that, in practical engineering applications, the update strategy may be adjusted according to the actual detection environment and requirements. For example, when an object disappears, it is not temporarily removed from the feature model.
If the number of targets detected in the current target tracking period is larger than the number of current targets, new targets have appeared; if it is smaller, some targets have moved out of the detection range or are completely occluded.
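By way of illustration only, the update pattern described above (replace the entries of associated current targets, delete unassociated ones, add newly appeared ones) can be sketched as follows; the dictionary layout and the function name are assumptions, and the same pattern applies to the spatial position and motion parameter models discussed below.

```python
def update_model(model: dict, detections: dict, associations: dict) -> dict:
    """Update a per-target model (feature / spatial position / motion parameter model).

    model        : {current_target_id: stored value}
    detections   : {detected_target_id: value extracted in the current tracking period}
    associations : {detected_target_id: current_target_id, or None for a new target}
    """
    updated = {}
    for det_id, value in detections.items():
        cur_id = associations.get(det_id)
        if cur_id is not None:
            updated[cur_id] = value      # replace the value of the associated current target
        else:
            updated[det_id] = value      # newly appeared target: add its value to the model
    # current targets that were not associated are dropped here (left the detection range
    # or fully occluded); a real system might instead keep them for a few more periods
    return updated
```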
In the embodiment of the present invention, the estimated spatial position of the current target may be stored in a model form, and then, the obtaining of the estimated spatial position of the current target specifically means obtaining a spatial position model of the current target, where the spatial position model includes estimated spatial positions of all current targets. Correspondingly, the method provided by the embodiment of the invention can also obtain the estimated spatial position of the point cloud cluster according to the spatial position and the motion parameters of the point cloud cluster; and updating the spatial position model by using the estimated spatial position of the point cloud cluster.
In the embodiment of the present invention, the spatial position may be, but is not limited to, represented by spatial coordinates. The spatial position of the current target may be, but is not limited to, the spatial position of the corresponding point cloud cluster in its target tracking period. The spatial position of the point cloud cluster may be defined with reference to an existing definition or according to actual engineering requirements, which is not limited in the embodiment of the present invention.
Specifically, by way of example and not limitation, assume that the spatial position model before updating includes the respective estimated spatial positions of three current targets. If three targets are still detected in the current target tracking period and each is associated with one current target, the estimated spatial positions of the three detected targets are used to replace the estimated spatial positions of the respectively associated current targets. If two targets are detected in the current target tracking period and each is associated with one current target, the spatial positions of the two detected targets are used to replace the estimated spatial positions of the respectively associated current targets, and the estimated spatial position of the unassociated current target is deleted. If four targets are detected in the current target tracking period and three of them are each associated with one current target, the estimated spatial positions of those three detected targets are used to replace the estimated spatial positions of the respectively associated current targets, and the estimated spatial position of the remaining detected target is added to the spatial position model.
It should be noted that, in practical engineering applications, the update strategy may be adjusted according to the actual detection environment and requirements. For example, when an object disappears, it is temporarily not removed from the spatial location model.
In the embodiment of the present invention, the estimated motion parameters of the current target may be stored in a model form, and then, the obtaining of the motion parameter matching deviation of the point cloud cluster and the current target specifically means obtaining the motion parameters of the point cloud cluster, obtaining a motion parameter model of the current target, where the motion parameter model includes the estimated motion parameters of all the current targets, and obtaining the motion parameter matching deviation of the point cloud cluster and the current target according to the motion parameters of the point cloud cluster and the motion parameter model of the current target. Correspondingly, the method provided by the embodiment of the invention can also obtain the estimated motion parameters of the point cloud cluster according to the spatial position and the motion parameters of the point cloud cluster; and updating the motion parameter model by utilizing the estimated motion parameters of the point cloud cluster.
Specifically, as an example and not by way of limitation, assuming that the motion parameter model before updating includes respective estimated motion parameters of three current targets, if three targets are still detected in the current target tracking period and are respectively associated with one current target, the estimated motion parameters of the three targets detected in the current target tracking period are used to respectively replace the respective associated estimated motion parameters of the current target; assuming that the motion parameter model before updating comprises respective estimated motion parameters of three current targets, if two targets are detected in the current target tracking period and are respectively associated with one current target, respectively replacing the estimated motion parameters of the respective associated current targets by the estimated motion parameters of the two targets detected in the current target tracking period, and deleting the estimated motion parameters of the unassociated current target; assuming that the motion parameter model before updating comprises respective estimated motion parameters of three current targets, if four targets are detected in the current target tracking period and three targets are respectively associated with one current target, respectively replacing the estimated motion parameters of the respective associated current targets with the estimated motion parameters of the three targets detected in the current target tracking period, and adding the estimated motion parameters of the other detected target into the motion parameter model.
It should be noted that, in practical engineering applications, the update strategy may be adjusted according to the actual detection environment and requirements. For example, when an object disappears, it is not temporarily removed from the motion parameter model.
On the basis of the above arbitrary method embodiment, the current frame point cloud data can be subjected to density clustering by adopting the existing arbitrary density clustering method. Preferably, the embodiment of the present invention provides an improved density clustering method, that is, segmented density clustering. Specifically, a distance range in which each sample point falls is determined according to the spatial position of each sample point in the current frame point cloud data, different distance ranges correspond to different density clustering parameter values, and the distance range refers to the spatial distance from a target detection device (such as a laser radar, a depth camera and the like); and performing density clustering on the current frame point cloud data according to the density clustering parameter value corresponding to the distance range of each sample point to obtain at least one point cloud cluster.
There are various implementations of determining the distance range in which each sample point falls, and statistics may be performed in a histogram manner by way of example and not limitation.
The density clustering parameters comprise clustering radius, the minimum value of the number of neighbor points and the minimum value of the number of points in a cluster. In one implementation, the smaller the value of the distance range, the smaller the clustering radius, the larger the minimum value of the number of neighbors and the minimum value of the number of intra-cluster points, the larger the value of the distance range, the larger the clustering radius, and the smaller the minimum value of the number of neighbors and the minimum value of the number of intra-cluster points. The distance range is measured by using the center point or a certain boundary point or a specific point in the distance range (e.g., a distance point corresponding to the spatial position of the current target).
It should be noted that, in practical engineering application, the corresponding relationship between the distance range and the value of the density clustering parameter may also be adjusted as needed, which is not limited in the embodiment of the present invention. By way of example and not limitation, the smaller the value of the central point of the distance range is, the smaller the clustering radius is, and the larger the minimum value of the number of the neighbor points and the minimum value of the number of the points in the cluster are; if the values of the central points of the two distance ranges are the same, but the bottom edge boundary values are different, the smaller the bottom edge boundary value is, the smaller the clustering radius is, and the larger the minimum value of the number of the neighbor points and the minimum value of the number of the points in the cluster are.
In the embodiment of the present invention, the distance range may be determined by being divided in advance, or may be determined according to the current target. If the distance is determined according to the current target, the distance range corresponding to each current target and the density clustering parameter value corresponding to each distance range are obtained, and the density clustering parameter values corresponding to each distance range are different.
There are various ways to obtain the distance range corresponding to each current target. In one implementation, the spatial position of each current target is first obtained, the distance from the current target to the target detection device is determined according to that spatial position, and the distance range corresponding to the current target is determined with this distance as the center and a set distance radius; the distance ranges corresponding to the current targets then all have the same span. In another implementation, the spatial position of each current target is likewise obtained first, the distance from the current target to the target detection device is determined according to that spatial position, the distance radius is determined according to the spatial size of the target, and the distance range corresponding to the current target is obtained with the distance of the current target as the center and the determined distance radius.
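By way of illustration only, a sketch of deriving a per-target distance range is given below; the detector-at-origin assumption, the default radius value and the function name are illustrative only.

```python
import numpy as np

def distance_range_for_target(position, radius=1.0, target_size=None):
    """Distance range centred on a current target's distance to the target detection device.

    position    : (3,) spatial position of the current target (detection device at the origin).
    radius      : fixed distance radius (first implementation in the text).
    target_size : spatial size of the target, used as the radius instead (second implementation).
    """
    distance = float(np.linalg.norm(np.asarray(position, dtype=float)))
    half_span = target_size if target_size is not None else radius
    return distance - half_span, distance + half_span
```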
In the embodiment of the present invention, the density clustering parameters and the distance ranges may be stored in the form of a model, and then, the obtaining of the distance range corresponding to each current target and the density clustering parameter value corresponding to each distance range specifically means obtaining a density clustering parameter model, where the density clustering parameter model includes the distance ranges corresponding to all current targets and the density clustering parameter value corresponding to each distance range. Correspondingly, the method provided by the embodiment of the invention can also obtain the distance range corresponding to the point cloud cluster and the density clustering parameter value corresponding to the distance range; and updating the density clustering parameter model by utilizing the distance range corresponding to the point cloud cluster and the density clustering parameter value corresponding to the distance range.
Specifically, by way of example and not limitation, assume that the density clustering parameter model before updating includes the respective distance ranges and density clustering parameter values of three current targets. If three targets are still detected in the current target tracking period and each is associated with one current target, the distance ranges and density clustering parameter values of the three detected targets are used to replace those of the respectively associated current targets. If two targets are detected in the current target tracking period and each is associated with one current target, the distance ranges and density clustering parameter values of the two detected targets are used to replace those of the respectively associated current targets, and the distance range and density clustering parameter value of the unassociated current target are deleted. If four targets are detected in the current target tracking period and three of them are each associated with one current target, the distance ranges and density clustering parameter values of those three detected targets are used to replace those of the respectively associated current targets, and the distance range and density clustering parameter value of the remaining detected target are added to the density clustering parameter model.
It should be noted that, in practical engineering applications, the update strategy may be adjusted according to the actual detection environment and requirements. For example, when a target disappears, it is temporarily not deleted from the density clustering parameter model.
In the embodiment of the present invention, the distance to the target detection device is a relative distance, and the value thereof is related to the spatial position of the sample point/target. Therefore, after the distance is calculated, the distance range may be represented by the distance, or the distance range may be represented by the spatial position region.
A preferred implementation of the method embodiment of the present invention is described in detail below with reference to specific application scenarios.
In the working process of the laser radar, scanning a detection scene through a laser transceiver to obtain continuous frame point cloud data.
As shown in fig. 2, for each frame of point cloud data, the processor of the lidar performs the following operations:
step 201, filtering the current frame point cloud data.
The FLASH lidar has a close-range blind area, and the distances of sample points that fall within this blind area are severely distorted; such points are not suitable for participating in feature extraction or in the motion parameter model during post-processing.
Step 202, segmenting the filtered point cloud data by adopting segmented density clustering to obtain at least one point cloud cluster.
A FLASH lidar sensor often produces various kinds of noise points: dragging points appear at object edges, and over-period error points are caused by high-reflectivity objects outside the measuring range. Point cloud density clustering not only performs a preliminary point cloud segmentation but also eliminates these abnormal points.
The adoption of the segmentation density clustering during point cloud segmentation can improve the self-adaptability of point cloud segmentation, inhibit abnormal point cloud data, provide a good foundation for subsequent feature descriptor extraction and feature matching, and improve the matching accuracy.
And step 203, extracting and matching point cloud features of each point cloud cluster. Specifically, global feature extraction is performed on each point cloud cluster of the point cloud data which is primarily segmented, and a geometric descriptor operator is extracted. And matching with a geometric descriptor operator of each point cloud cluster obtained by segmenting the previous frame of point cloud data, wherein if a plurality of targets with similar geometric shapes appear in the current scene, a plurality of pairs of feature matching results exist, and accurate matching cannot be achieved only according to the feature matching results.
And 204, carrying out spatial position matching on each point cloud cluster.
The processor maintains a spatial position model during operation, records the spatial position of each target in successive frames, and predicts the spatial position of the target in the next frame according to speed, acceleration and direction.
As shown in fig. 3, the scene includes target 1, target 2 and target 3. The solid-line boxes represent the spatial positions of the targets in the current frame, and the dotted-line boxes represent the possible positions (estimated spatial positions) of the targets in the next frame predicted from the motion parameters. A suitable radius is selected with the estimated spatial position as the center, and a point cloud cluster falling within that radius is returned as a successful match. During feature value extraction and matching, targets 2 and 3 have similar feature values (feature descriptor operators); adding spatial position matching solves this problem of similar features.
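By way of illustration only, the spatial matching step can be sketched as follows; using the cluster centre as the cluster's spatial position, the radius value and the function name are assumptions of the example.

```python
import numpy as np

def spatial_match(cluster_center, estimated_positions, radius=1.0):
    """Return the ids of current targets whose estimated spatial position lies within
    `radius` of the point cloud cluster's centre (the dotted boxes in fig. 3).

    cluster_center      : (3,) centre of the point cloud cluster
    estimated_positions : {target_id: (3,) estimated spatial position for this frame}
    radius              : assumed matching radius, in metres
    """
    center = np.asarray(cluster_center, dtype=float)
    return [tid for tid, pos in estimated_positions.items()
            if np.linalg.norm(center - np.asarray(pos, dtype=float)) <= radius]
```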
The above-mentioned step 203 and step 204 are performed synchronously.
In step 205, the final decision on which point cloud cluster (current target) of the previous frame the current point cloud cluster matches is made from the matching results of step 203 and step 204 according to the following strategies.
Strategy:
when a plurality of targets with similar geometric shapes exist in a scene and the spatial positions are not close, matching and distinguishing are carried out depending on a target space model;
when a plurality of objects with dissimilar geometric shapes in a scene are close in spatial position, distinguishing by means of feature matching;
when a plurality of targets in the scene have similar geometric shapes and are close in spatial position, matching and distinguishing rely on the motion parameters (the velocity parameters (vx, vy, vz) and the acceleration parameters (ax, ay, az));
the corresponding matching result is calculated by the formula D = w_f·D_feature + w_p·D_position + w_v·D_v,a, where D_feature is the feature matching deviation, D_position is the spatial position matching deviation, and D_v,a is the motion parameter matching deviation. The target with the smallest D among the candidate matching targets is selected as the matching target (the current target indicated by the matching result).
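By way of illustration only, the weighted decision can be sketched as follows; equal weights are used in line with the preferred option above, and the deviation threshold and the function name are assumptions of the example.

```python
def weighted_decision(deviations, weights=(1.0, 1.0, 1.0), max_deviation=None):
    """Pick the candidate current target with the smallest weighted deviation
    D = w_f * D_feature + w_p * D_position + w_v * D_{v,a}.

    deviations    : {target_id: (D_feature, D_position, D_va)} for the candidate targets
    weights       : (w_f, w_p, w_v); equal weights, as in the preferred option above
    max_deviation : optional threshold; if every D exceeds it, the cluster is a new target
    """
    scores = {tid: sum(w * d for w, d in zip(weights, devs))
              for tid, devs in deviations.items()}
    best_id = min(scores, key=scores.get)
    if max_deviation is not None and scores[best_id] > max_deviation:
        return None                      # no current target matches: new target
    return best_id
```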
And step 206, updating the spatial position model.
And step 207, updating the motion parameter model.
The current motion parameters (velocity, acceleration and direction) are updated in dependence on the spatial position information and the motion parameters (velocity, acceleration and direction) of the current moving object.
And step 208, updating the matched feature model.
The geometric characteristics of a target may change during its movement: for example, the angle of the target relative to the lidar changes, the relative distance between the target and the lidar changes, and the noise of the lidar ranging process varies, all of which may alter the geometric characteristics. Continuously updating the geometric characteristics of the target point cloud during movement therefore improves the matching accuracy.
And step 209, updating the point cloud density clustering parameter model.
Appropriate parameters are selected segment by segment according to the distance between the target and the radar; the parameters include the search radius ε, the minimum number of neighbor points MinPts, and the minimum number of points in a cluster MinPtsCluster.
The steps 206 to 209 may be performed synchronously or in series.
In the embodiment of the invention, the matching estimation of the point cloud motion parameter model and the point cloud space position model depends on the following basic assumptions:
the position of a moving target associated between frames does not change drastically;
the speed and acceleration of the moving target are bounded, and between frames the moving target does not leave a circular region whose radius is determined by the maximum speed;
the direction and acceleration of the moving target do not change drastically;
the moving target is described by a plurality of points, and the corresponding point cloud data share a consistent velocity;
the motion is rigid, so the distance between any two points in the moving target's point cloud data remains constant;
the point cloud motion model estimates the motion speed, the motion acceleration and the motion direction of the target according to the time domain parameters.
The point cloud space model matching depends on the point cloud motion model to obtain estimated space position information, and the matching of the inter-frame motion target is carried out. If multiple closely spaced objects appear in the current scene, accurate matching may also not be possible.
The point cloud matching method has the advantages that point cloud characteristic model matching and point cloud space model matching are adopted during point cloud matching, matching accuracy is improved, and the matching problems of multiple targets with similar geometric characteristics and multiple targets with similar positions in the same scene can be solved. In particular, the method reduces the error matching of a plurality of targets with similar geometric shapes and a plurality of targets adjacent to each other in space positions, and improves the target tracking accuracy.
In the embodiment of the invention, the density in density clustering is defined as the number of sample points within a specified radius ε. Each point is given a label: if a sample point has no fewer than the specified number MinPts of other sample points within the specified radius ε, it is marked as a core point; if a sample point has fewer than MinPts neighbor points within the radius ε but at least one of them is a core point, it is called a boundary point; sample points other than core points and boundary points are called noise points. After the core, boundary and noise points have been labeled, the density clustering can be summarized as:
forming a separate cluster on the basis of each core point or a group of connected core points; dividing each boundary point into the cluster where the corresponding core point is located; if the number of points within the entire cluster is less than MinPtsCluster, the entire cluster is deleted.
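By way of illustration only, this clustering step can be sketched with scikit-learn's DBSCAN plus the additional minimum-cluster-size filter; the use of scikit-learn rather than a purpose-built implementation is an assumption of the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def density_cluster(points, eps, min_pts, min_pts_cluster):
    """Density clustering with an extra minimum cluster size, as summarised above.

    points          : (N, 3) point cloud of the current frame (or one distance segment of it)
    eps             : clustering radius ε
    min_pts         : minimum number of neighbour points (MinPts)
    min_pts_cluster : minimum number of points in a cluster (MinPtsCluster)
    Returns a list of point cloud clusters, each an (M, 3) array.
    """
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)  # -1 marks noise points
    clusters = []
    for label in set(labels) - {-1}:
        cluster = points[labels == label]
        if len(cluster) >= min_pts_cluster:      # delete clusters that are too small
            clusters.append(cluster)
    return clusters
```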
This method suppresses noise points, dragging points and points whose density does not meet expectations well. At the same time, a single set of (ε, MinPts, MinPtsCluster) parameters is not suitable for an entire frame of point cloud data, so the parameters are selected according to the distance of the target: for a close-range target, ε is smaller and MinPts and MinPtsCluster are larger; for a long-range target, ε is larger and MinPts and MinPtsCluster are smaller. The specific values are chosen according to the angular resolution of the FLASH lidar.
The specific (ε, MinPts, MinPtsCluster) selection strategy is as follows:
for all sample points { X in point cloud datai,Yi,ZiMapping according to the space position to obtain a histogram, and counting the number of the sample points falling on each space position interval; taking the lowest point of the histogram as a boundary; and (epsilon, MinPts, MinPtsCluster) is selected according to the maximum value of the boundary of each interval.
In the embodiment of the invention, a clustering viewpoint histogram descriptor (CVFH) is adopted for each point cloud cluster to extract the characteristics of the point cloud cluster, and the specific extraction steps are as follows:
1. dividing the point cloud cluster into local point cloud clusters with similar normal adjacent points according to the normal by adopting a region growing algorithm;
all points in the point cloud cluster are classified into each local point cloud cluster by adopting the following method:
a. the region growing algorithm starts from a point used as a seed;
b. the normal directions of the points around the seed are compared; if they are sufficiently close, those neighboring points are also taken as seeds;
c. when the seed set no longer grows, the iteration for this local point cloud cluster ends;
2. calculating the mass center of the local point cloud cluster;
the centroid calculation method of the local point cloud cluster is as follows:
x̄ = (1/n)·Σᵢ xᵢ,  ȳ = (1/n)·Σᵢ yᵢ,  z̄ = (1/n)·Σᵢ zᵢ,
where n is the number of points in the local point cloud cluster and (xᵢ, yᵢ, zᵢ) are the coordinates of its points.
3. calculating a viewpoint feature histogram for each local point cloud cluster by taking the centroid of the local point cloud cluster as a center;
calculating a viewpoint feature histogram for each local point cloud cluster:
a. calculating a vector between a viewpoint and the centroid of the local point cloud cluster;
b. calculating included angles between the normal lines of all points in the local point cloud cluster and the vector, wherein the included angles form a histogram;
c. calculating included angles between the normal lines of all points in the local point cloud cluster and the vector formed by the points and the viewpoints, wherein the included angles form a histogram;
d. the included angle histograms form descriptors of local point cloud clusters;
4. calculating a geometric shape distribution histogram for each local point cloud cluster;
a. selecting a radius R, wherein the radius R is the farthest distance from the centroid to a point in a local point cloud cluster;
b. and (3) constructing 2 spherical spaces by taking the center of mass of the local point cloud cluster as a center and taking the radii R and R/2. The direction angle is evenly divided into 12 angle spaces, and each angle space covers a 30-degree range. Dividing the tangential direction into 8 spaces according to the diameter/8 step length, and counting the number of points in each space to form a histogram of the number of the space points (see fig. 4);
5. merging the viewpoint feature histogram and the geometric shape distribution histogram of the local point cloud cluster into a clustering viewpoint histogram descriptor;
6. merging clustering viewpoint histogram descriptors of all local point cloud clusters to serve as point cloud cluster feature descriptors;
the method has the advantages that under the condition that the target is partially shielded, feature matching can be carried out according to partial information of the target, and the matching is carried out according to the clustering viewpoint histogram descriptor of the local point cloud cluster.
Correspondingly, the implementation manner of point cloud feature matching may be:
and matching the point cloud cluster feature descriptors obtained in the last step with the point cloud cluster feature descriptors of the previous frame, wherein the matching method is to calculate the distance between the features, find the nearest feature and match the nearest feature with the target in the previous frame.
A maximum distance is set; when the feature descriptor distance exceeds this maximum distance, the target in the current frame may be a target newly entering the scene, and its tracked point cloud features are updated in the subsequent steps.
An error range is also set, and all descriptors whose distance lies within the error range of the minimum feature descriptor distance are returned. If only one point cloud feature descriptor is returned, the point cloud cluster is fully matched with the corresponding point cloud cluster of the previous frame. If several point cloud feature descriptors are returned, the scene contains multiple targets with similar geometric shapes that cannot be accurately matched by point cloud features alone, and matching relies on the subsequent point cloud spatial model.
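By way of illustration only, the nearest-descriptor matching with a maximum distance and an error range can be sketched as follows; the Euclidean descriptor distance, the threshold values and the function name are assumptions of the example.

```python
import numpy as np

def feature_match(cluster_desc, target_descs, max_distance=1.0, error_range=0.1):
    """Find the nearest previous-frame descriptor; treat the cluster as a possible new target
    if even the nearest is too far away, and return every candidate whose distance lies
    within `error_range` of the minimum.

    cluster_desc : descriptor of the current point cloud cluster (numpy array)
    target_descs : {current_target_id: descriptor from the previous frame}
    """
    dists = {tid: float(np.linalg.norm(cluster_desc - desc)) for tid, desc in target_descs.items()}
    best_id = min(dists, key=dists.get)
    if dists[best_id] > max_distance:
        return []                                # likely a target newly entering the scene
    return [tid for tid, d in dists.items() if d <= dists[best_id] + error_range]
```

A single returned id corresponds to a unique feature match; several returned ids defer the decision to the spatial position model.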
Updating the matching features in real time adapts to the change of the features during motion and solves the problem that the geometric features of the target change as its distance relative to the radar changes during motion.
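A minimal sketch of this matching rule (nearest descriptor, a maximum-distance gate for possible new targets, and an error range that may return several ambiguous candidates) might look as follows; the Euclidean distance metric and the parameter names are assumptions for illustration.

import numpy as np

def match_descriptor(query, prev_descriptors, max_dist, err_range):
    # Match one point cloud cluster descriptor against those of the previous frame.
    # Returns ('new', []) if the target is likely new to the scene,
    # ('unique', [i]) if exactly one previous target matches, or
    # ('ambiguous', [...]) if several similar targets must be resolved later
    # by the point cloud spatial model.
    dists = np.linalg.norm(prev_descriptors - query, axis=1)
    d_min = dists.min()
    if d_min > max_dist:                                   # maximum-distance gate
        return 'new', []
    candidates = np.where(dists <= d_min + err_range)[0].tolist()
    if len(candidates) == 1:
        return 'unique', candidates
    return 'ambiguous', candidates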
In the embodiment of the present invention, the implementation manner of determining the estimated spatial position and estimating the motion parameter may be:
The estimated spatial position is obtained from a constant-acceleration motion model:

$\hat{p} = p + vt + \tfrac{1}{2}at^{2}$

where $\hat{p}$ is the estimated spatial position of the target, $p$ is the current spatial position of the target, $v$ is the moving speed of the target in each direction, $a$ is the motion acceleration of the target in each direction, and $t$ is the acquisition time interval of each frame of point cloud data.

The motion parameter model mainly comprises the spatial position coordinates $(x_{k}, y_{k}, z_{k})$, the moving speed $(v_{x,k}, v_{y,k}, v_{z,k})$ and the acceleration $(a_{x,k}, a_{y,k}, a_{z,k})$; the subscript $k$ denotes the parameters at the $k$-th frame, abbreviated as the vectors $p_{k}$, $v_{k}$ and $a_{k}$ respectively.

The state equation of the motion parameter model is established according to Kalman filtering as follows. With $t$ denoting the time interval of each frame, the state vector is written as $X_{k} = \begin{bmatrix} p_{k} \\ v_{k} \end{bmatrix}$, and with

$A = \begin{bmatrix} I & tI \\ 0 & I \end{bmatrix}, \qquad B = \begin{bmatrix} \tfrac{1}{2}t^{2} I \\ tI \end{bmatrix}$

the state equation is expressed as $X_{k} = A X_{k-1} + B a_{k-1}$.

The observation equation is established as

$z_{k} = H X_{k} + n_{k}$

where $z_{k}$ is the measured spatial position of the matched point cloud cluster, $H = \begin{bmatrix} I & 0 \end{bmatrix}$, and $n_{k}$ is the measurement noise.

The iteration according to Kalman filtering is as follows.

Prediction formulas:

$\hat{X}_{k}^{-} = A\hat{X}_{k-1} + Ba_{k-1}, \qquad P_{k}^{-} = AP_{k-1}A^{T} + Q$

Correction formulas:

$K_{k} = P_{k}^{-}H^{T}\left(HP_{k}^{-}H^{T} + R\right)^{-1}, \qquad \hat{X}_{k} = \hat{X}_{k}^{-} + K_{k}\left(z_{k} - H\hat{X}_{k}^{-}\right), \qquad P_{k} = (I - K_{k}H)P_{k}^{-}$

wherein $\hat{X}_{k}^{-}$ represents the a priori estimate of the target state, $P_{k}^{-}$ represents the a priori error covariance, $\hat{X}_{k}$ represents the a posteriori estimate, and $P_{k}$ represents the a posteriori error covariance ($Q$ and $R$ being the process and measurement noise covariances). The position of the target in the next frame is estimated in turn, the velocity estimate is obtained at the same time, and after each prediction and correction is completed the acceleration is also updated, for example as

$a_{k} = \dfrac{v_{k} - v_{k-1}}{t}$
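For concreteness, a compact sketch of one prediction-correction cycle of this constant-acceleration Kalman filter is given below. The noise covariances Q and R, the observation matrix that extracts only the position, and the acceleration-update rule are illustrative assumptions rather than values fixed by this embodiment.

import numpy as np

def make_model(t):
    # State transition A, acceleration input B and observation matrix H
    # for the state X = [p; v] with acceleration treated as a control input.
    I3 = np.eye(3)
    A = np.block([[I3, t * I3], [np.zeros((3, 3)), I3]])
    B = np.vstack([0.5 * t ** 2 * I3, t * I3])
    H = np.hstack([I3, np.zeros((3, 3))])        # observe position only (assumed)
    return A, B, H

def kalman_step(x, P, a, z, t, Q=None, R=None):
    # One predict/correct cycle for state x = [p; v] (6-vector), acceleration a
    # (3-vector) and measured cluster position z (3-vector).
    A, B, H = make_model(t)
    Q = np.eye(6) * 1e-2 if Q is None else Q     # process noise (assumed value)
    R = np.eye(3) * 1e-1 if R is None else R     # measurement noise (assumed value)
    # Prediction: a priori state estimate and error covariance.
    x_prior = A @ x + B @ a
    P_prior = A @ P @ A.T + Q
    # Correction with the measured position of the matched point cloud cluster.
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(6) - K @ H) @ P_prior
    # Acceleration update from the change in corrected velocity (assumed rule).
    a_new = (x_post[3:] - x[3:]) / t
    return x_post, P_post, a_new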
Based on the same inventive concept as the method, an embodiment of the invention also provides a target tracking device, which may be a laser radar, an electronic device provided with a laser radar, or a computer device connected to a laser radar (by a wired connection or a wireless communication connection).
The target tracking device includes a memory and a processor;
the memory is used for storing data used when the processor executes the computer program;
the processor is used for executing the computer program to realize the method provided by the target tracking method embodiment.
The target tracking device provided by the embodiment of the invention can combine multiple matching modes to obtain a unique matching result, thereby avoiding matching errors and target association errors and improving the accuracy of target tracking.
Based on the same inventive concept as the above-mentioned target tracking method embodiment, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the following steps:
performing density clustering on the current frame point cloud data to obtain at least one point cloud cluster;
matching the point cloud cluster with at least one current target by adopting at least one matching mode to obtain a matching result, and marking the target corresponding to the point cloud cluster according to the matching result; the matching result indicates that no current target is matched with the point cloud cluster, and the target corresponding to the point cloud cluster is a new target, or the matching result indicates the current target which is uniquely matched with the point cloud cluster, and the target corresponding to the point cloud cluster is the current target indicated by the matching result;
the at least one matching method includes:
carrying out feature matching on a feature descriptor operator of the point cloud cluster and a feature descriptor operator of a current target;
carrying out space matching on the space position of the point cloud cluster and the estimated space position of the current target;
weighting the feature matching deviation of the point cloud cluster and the current target, the space matching deviation of the point cloud cluster and the current target, and the motion parameter matching deviation of the point cloud cluster and the current target, and matching according to a weighting result; the characteristic matching deviation is the deviation of a point cloud cluster and a characteristic descriptor operator of the current target, the space matching deviation is the deviation of the space position of the point cloud cluster and the current target, and the motion parameter matching deviation is the deviation of an estimated motion parameter of the point cloud cluster and the current target.
The storage medium provided by the embodiment of the invention can combine multiple matching modes to obtain a unique matching result, thereby avoiding matching errors and target association errors and improving the accuracy of target tracking.
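Where both the feature matching and the spatial matching remain ambiguous, the three deviations are combined by weighting as described above. A minimal sketch, assuming equal weights and deviation values already scaled to comparable ranges (both assumptions for illustration), could be:

import numpy as np

def weighted_match(feat_dev, spat_dev, motion_dev, weights=(1.0, 1.0, 1.0)):
    # Combine the feature, spatial and motion-parameter matching deviations of
    # each candidate current target and pick the candidate with the lowest
    # weighted score. Deviations are assumed to be pre-scaled to comparable ranges.
    w_f, w_s, w_m = weights
    score = (w_f * np.asarray(feat_dev, dtype=float)
             + w_s * np.asarray(spat_dev, dtype=float)
             + w_m * np.asarray(motion_dev, dtype=float))
    return int(np.argmin(score))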
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present specification have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all changes and modifications that fall within the scope of the specification.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present specification without departing from the spirit and scope of the specification. Thus, if such modifications and variations of the present specification fall within the scope of the claims of the present specification and their equivalents, the specification is intended to include such modifications and variations.

Claims (11)

1. A method of target tracking, the method comprising:
performing density clustering on the current frame point cloud data to obtain at least one point cloud cluster;
matching the point cloud cluster with at least one current target by adopting at least one matching mode to obtain a matching result, and marking the target corresponding to the point cloud cluster according to the matching result; the matching result indicates that no current target is matched with the point cloud cluster, and the target corresponding to the point cloud cluster is a new target, or the matching result indicates a current target uniquely matched with the point cloud cluster, and the target corresponding to the point cloud cluster is the current target indicated by the matching result;
the at least one matching method comprises:
carrying out feature matching on the feature descriptor operator of the point cloud cluster and the feature descriptor operator of the current target;
carrying out space matching on the space position of the point cloud cluster and the estimated space position of the current target;
weighting the feature matching deviation of the point cloud cluster and the current target, the space matching deviation of the point cloud cluster and the current target and the motion parameter matching deviation of the point cloud cluster and the current target, and matching according to a weighting result; the characteristic matching deviation is the deviation of a characteristic descriptor operator of the point cloud cluster and the current target, the space matching deviation is the deviation of the space position of the point cloud cluster and the current target, and the motion parameter matching deviation is the deviation of the estimated motion parameters of the point cloud cluster and the current target.
2. The method of claim 1, wherein matching the point cloud cluster with a current target using at least one matching method to obtain a matching result comprises:
extracting a feature descriptor operator of the point cloud cluster, acquiring a feature descriptor operator of a current target, performing feature matching on the feature descriptor operator of the point cloud cluster and the feature descriptor operator of the current target, wherein the obtained feature matching result indicates that the feature descriptor operator without the current target is matched with the feature descriptor operator of the point cloud cluster, or the feature matching result indicates that at least one feature descriptor operator of the current target is matched with the feature descriptor operator of the point cloud cluster;
acquiring the spatial position of the point cloud cluster, acquiring the estimated spatial position of a current target, and performing spatial matching on the spatial position of the point cloud cluster and the estimated spatial position of the current target, wherein the acquired spatial matching result indicates that the spatial position without the current target is matched with the spatial position of the point cloud cluster, or the spatial matching result indicates that the spatial position of at least one current target is matched with the spatial position of the point cloud cluster;
after determining that the feature descriptor operator without the current target is matched with the feature descriptor operator of the point cloud cluster and the spatial position without the current target is matched with the spatial position of the point cloud cluster, obtaining a matching result indicating that no current target is matched with the point cloud cluster;
after determining that the feature descriptor operator of the point cloud cluster is uniquely matched with the feature descriptor operator of a current target, obtaining a matching result indicating the current target whose feature descriptor operator is matched with the point cloud cluster;
after determining that the feature descriptor operators of at least two point cloud clusters are matched with the feature descriptor operator of the same current target, but the spatial position of only one point cloud cluster is matched with the spatial position of that current target, obtaining a matching result indicating the current target whose spatial position is matched with the point cloud cluster;
after determining that the feature descriptor operators of at least two point cloud clusters are matched with the feature descriptor operator of the same current target and the spatial positions of at least two point cloud clusters are matched with the spatial position of the same current target, acquiring the feature matching deviation of the point cloud clusters and the current target, the spatial matching deviation of the point cloud clusters and the current target and the motion parameter matching deviation of the point cloud clusters and the current target, weighting the feature matching deviation of the point cloud clusters and the current target, the spatial matching deviation of the point cloud clusters and the current target and the motion parameter matching deviation of the point cloud clusters and the current target, and matching according to a weighting result.
3. The method of claim 2, wherein said obtaining a feature descriptor operator for a current target comprises:
acquiring a feature model of a current target, wherein the feature model comprises feature descriptor operators of all the current targets;
the method further comprises the following steps:
and updating the feature model according to the matching result.
4. The method of claim 2, wherein obtaining the estimated spatial location of the current target comprises:
acquiring a spatial position model of a current target, wherein the spatial position model comprises estimated spatial positions of all the current targets;
the method further comprises the following steps:
acquiring an estimated spatial position of the point cloud cluster according to the spatial position and the motion parameters of the point cloud cluster;
and updating the spatial position model by utilizing the estimated spatial position of the point cloud cluster.
5. The method of claim 2, wherein obtaining a motion parameter match deviation of the point cloud cluster from a current target comprises:
acquiring motion parameters of the point cloud cluster, acquiring a motion parameter model of a current target, wherein the motion parameter model comprises estimated motion parameters of all the current targets, and acquiring the motion parameter matching deviation of the point cloud cluster and the current target according to the motion parameters of the point cloud cluster and the motion parameter model of the current target;
the method further comprises the following steps:
acquiring an estimated motion parameter of the point cloud cluster according to the spatial position and the motion parameter of the point cloud cluster;
and updating the motion parameter model by utilizing the estimated motion parameters of the point cloud cluster.
6. The method according to any one of claims 1 to 5, wherein the density clustering of the current frame point cloud data to obtain at least one point cloud cluster comprises:
determining a distance range in which each sample point falls according to the spatial position of each sample point in the current frame point cloud data, wherein different distance ranges correspond to different density clustering parameter values, and the distance range refers to the spatial distance from a target detection device;
and performing density clustering on the current frame point cloud data according to the density clustering parameter value corresponding to the distance range of each sample point to obtain at least one point cloud cluster.
7. The method of claim 6, wherein before determining the distance range each sample point falls into according to the spatial position of each sample point in the current frame point cloud data, the method further comprises:
and acquiring a distance range corresponding to each current target and a density clustering parameter value corresponding to each distance range, wherein the density clustering parameter values corresponding to each distance range are different.
8. The method according to claim 7, wherein the obtaining of the distance range corresponding to each current target and the density clustering parameter value corresponding to each distance range comprises:
obtaining a density clustering parameter model, wherein the density clustering parameter model comprises distance ranges corresponding to all current targets and density clustering parameter values corresponding to each distance range;
the method further comprises the following steps:
obtaining a distance range corresponding to the point cloud cluster and a density clustering parameter value corresponding to the distance range;
and updating the density clustering parameter model by using the distance range corresponding to the point cloud cluster and the density clustering parameter value corresponding to the distance range.
9. The method according to any one of claims 1 to 5, wherein the weight of the feature matching deviation of the point cloud cluster and the current target, the spatial matching deviation of the point cloud cluster and the current target, and the motion parameter matching deviation of the point cloud cluster and the current target are the same.
10. An object tracking device, comprising:
a memory and a processor;
the memory is used for storing data used by the processor when executing the computer program;
the processor is configured to execute a computer program to implement the method according to any one of claims 1 to 9.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 9.
CN201910895581.2A 2019-09-21 2019-09-21 Target tracking method, device and storage medium Active CN110766719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910895581.2A CN110766719B (en) 2019-09-21 2019-09-21 Target tracking method, device and storage medium

Publications (2)

Publication Number Publication Date
CN110766719A true CN110766719A (en) 2020-02-07
CN110766719B CN110766719B (en) 2022-11-18

Family

ID=69330016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910895581.2A Active CN110766719B (en) 2019-09-21 2019-09-21 Target tracking method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110766719B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070130239A1 (en) * 2005-11-21 2007-06-07 Wheeler Mark D Identification of edge regions from 3D point data
US20140334670A1 (en) * 2012-06-14 2014-11-13 Softkinetic Software Three-Dimensional Object Modelling Fitting & Tracking
CN110119751A (en) * 2018-02-06 2019-08-13 北京四维图新科技股份有限公司 Laser radar point cloud Target Segmentation method, target matching method, device and vehicle
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536850A (en) * 2020-04-20 2021-10-22 长沙莫之比智能科技有限公司 Target object size testing method and device based on 77G millimeter wave radar
CN112199459A (en) * 2020-09-30 2021-01-08 深兰人工智能(深圳)有限公司 3D point cloud segmentation method and segmentation device
CN112847343A (en) * 2020-12-29 2021-05-28 深圳市普渡科技有限公司 Dynamic target tracking and positioning method, device, equipment and storage medium
WO2022142948A1 (en) * 2020-12-29 2022-07-07 深圳市普渡科技有限公司 Dynamic target tracking and positioning method and apparatus, and device and storage medium
CN112884687A (en) * 2021-01-29 2021-06-01 郑州信大云筑工程科技有限公司 Mapping laser radar scanning strategy control method and system based on artificial intelligence
CN113030896A (en) * 2021-03-10 2021-06-25 森思泰克河北科技有限公司 Radar target clustering method and device and electronic equipment
CN113343840A (en) * 2021-06-02 2021-09-03 合肥泰瑞数创科技有限公司 Object identification method and device based on three-dimensional point cloud
CN113343840B (en) * 2021-06-02 2022-03-08 合肥泰瑞数创科技有限公司 Object identification method and device based on three-dimensional point cloud
CN113344994A (en) * 2021-06-21 2021-09-03 京东数科海益信息科技有限公司 Image registration method and device, electronic equipment and storage medium
CN113344994B (en) * 2021-06-21 2024-04-16 京东科技信息技术有限公司 Image registration method, device, electronic equipment and storage medium
CN113850995A (en) * 2021-09-14 2021-12-28 华设设计集团股份有限公司 Event detection method, device and system based on tunnel radar vision data fusion
CN117115494B (en) * 2023-10-23 2024-02-06 卡松科技股份有限公司 Lubricating oil impurity pollution detection method and device based on artificial intelligence

Also Published As

Publication number Publication date
CN110766719B (en) 2022-11-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant