CN113689471A - Target tracking method and device, computer equipment and storage medium - Google Patents

Target tracking method and device, computer equipment and storage medium

Info

Publication number
CN113689471A
CN113689471A
Authority
CN
China
Prior art keywords
point cloud
characteristic value
cloud data
value matrix
detection area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111054644.5A
Other languages
Chinese (zh)
Other versions
CN113689471B (en)
Inventor
兰陌
陈亚楠
韩莹莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202111054644.5A priority Critical patent/CN113689471B/en
Publication of CN113689471A publication Critical patent/CN113689471A/en
Application granted granted Critical
Publication of CN113689471B publication Critical patent/CN113689471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present disclosure provides a target tracking method, an apparatus, a computer device and a storage medium. The method comprises: repeatedly acquiring background-removed depth point cloud data in a detection area; obtaining a corresponding point cloud characteristic value matrix from the depth point cloud data of each acquisition; calculating the distance between the same characteristics in the point cloud characteristic value matrices of every two adjacent acquisitions; judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match, and if so, replacing the previously obtained point cloud characteristic value matrix with the newly obtained one, or, if not, saving both matrices; and tracking the target in the detection area based on the saved point cloud characteristic value matrices. The technical scheme provided by the disclosure can detect in real time whether the target in the detection area has changed, thereby realizing target tracking with high positioning precision, a simple algorithm and low hardware cost.

Description

Target tracking method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of target tracking technologies, and in particular, to a target tracking method, a target tracking apparatus, a computer device, and a computer-readable storage medium.
Background
In the prior art, common target tracking technologies fall into two categories: Internet-of-Things (IoT) tracking and positioning, and video-stream tracking and positioning. In IoT tracking and positioning, a low-power microcontroller is combined with specific sensors to form a portable terminal device, a large number of coded sensors are deployed in the monitored area, and the target position is judged and tracked according to which coded sensors the terminal connects to. In video-stream tracking and positioning, a camera is installed at a high position to film the flow of people below; the captured video stream is transmitted to a back-end computing unit, which runs recognition algorithms on it, and the targets covered by the camera are finally located and tracked from the algorithm output.
The inventors have found that current IoT tracking and positioning suffers from the following drawbacks: devices must be mounted on the user or the target to be tracked, precision is low, and hardware cost is high. Current video-stream tracking and positioning suffers from the following drawbacks: the acquired data is in RGB format and is affected by illumination in the use environment, a failure mode shared by all camera schemes; the algorithms for video streams are complex to design, the back-end computation load is huge, and the algorithms fail in many situations; the required cameras and back-end computing hardware are costly; and video stream data also raises privacy concerns.
There is therefore an urgent need for a target tracking scheme that improves precision while reducing cost.
Disclosure of Invention
The present disclosure has been made to at least partially solve the technical problems of low precision and high cost in the prior-art IoT and video-stream tracking and positioning technologies.
According to an aspect of the present disclosure, there is provided a target tracking method, the method including:
continuously and repeatedly acquiring background-removed depth point cloud data in a detection area;
obtaining a corresponding point cloud characteristic value matrix based on the depth point cloud data of each acquisition;
calculating the distance between the same characteristics in the point cloud characteristic value matrices obtained in every two adjacent acquisitions;
judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match; if so, replacing the previously obtained point cloud characteristic value matrix with the newly obtained one, and if not, saving both the previous and the new point cloud characteristic value matrices;
and tracking the target in the detection area based on the saved point cloud characteristic value matrix.
According to another aspect of the present disclosure, there is provided a target tracking apparatus, the apparatus including:
a first acquisition module configured to continuously and repeatedly acquire background-removed depth point cloud data of the detection area;
a second acquisition module configured to obtain a corresponding point cloud characteristic value matrix based on the depth point cloud data of each acquisition;
a calculation module configured to calculate the distance between the same characteristics in the point cloud characteristic value matrices obtained in every two adjacent acquisitions;
a judging module configured to judge, based on the distance calculation result obtained by the calculation module, whether the targets appearing in the detection area in the two adjacent acquisitions match;
a storage module configured to replace the previously obtained point cloud characteristic value matrix with the newly obtained one when the judgment result of the judging module is a match, and to save both point cloud characteristic value matrices when the judgment result is a mismatch; and,
a tracking module configured to track the target in the detection area based on the point cloud characteristic value matrices saved by the storage module.
According to yet another aspect of the present disclosure, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor performs the aforementioned target tracking method when running the computer program stored in the memory.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, performs the aforementioned target tracking method.
The technical scheme provided by the disclosure can comprise the following beneficial effects:
according to the target tracking method, whether the target in the detection area changes or not can be found in real time based on the depth point cloud data obtained in the detection area for multiple times after the background is eliminated, if the target changes, the point cloud characteristic value matrixes obtained twice before and after the change are recorded, the target in the detection area is tracked based on the stored point cloud characteristic value matrixes, the positioning precision is high, the algorithm is simple, the hardware cost is low, and the privacy disclosure problem does not exist.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the disclosure. The objectives and other advantages of the disclosure may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification; they illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure, not to limit it.
Fig. 1 is a schematic flow chart of a target tracking method provided in an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a target tracking device according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a target tracking system provided in an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a computer device provided in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, specific embodiments of the present disclosure are described below in detail with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order; also, the embodiments and features of the embodiments in the present disclosure may be arbitrarily combined with each other without conflict.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of explanation of the present disclosure, and have no specific meaning in themselves. Thus, "module", "component" or "unit" may be used mixedly.
Fig. 1 is a schematic flow chart of a target tracking method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes the following steps S101 to S105.
S101, continuously and repeatedly acquiring depth point cloud data in a detection area after a background is eliminated;
in this step, the "point cloud data" refers to a set of vectors in a three-dimensional coordinate system, wherein each point includes XYZ three-dimensional coordinates; the depth point cloud data indicates that an object (namely a target) exists in the detection area, otherwise point cloud data with practical significance cannot be obtained after the background in the detection area is eliminated;
S102, obtaining a corresponding point cloud characteristic value matrix based on the depth point cloud data of each acquisition;
S103, calculating the distance between the same characteristics in the point cloud characteristic value matrices obtained in every two adjacent acquisitions;
S104, judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match; if so, replacing the previously obtained point cloud characteristic value matrix with the newly obtained one, and if not, saving both the previous and the new point cloud characteristic value matrices;
S105, tracking the target in the detection area based on the saved point cloud characteristic value matrices.
In the embodiment of the disclosure, background-removed depth point cloud data in the detection area is acquired, a corresponding point cloud characteristic value matrix is obtained from each acquisition, and whether the targets appearing in the detection area in two adjacent acquisitions match is judged from the distance between the same characteristics in the two matrices; in this way, any change of the target in the detection area is discovered in real time, and tracking of the target in the detection area is realized.
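For illustration, the following minimal Python sketch shows one way the matrix bookkeeping of steps S103 to S105 could be organized; acquire_depth_cloud, feature_matrix and targets_match are hypothetical helpers standing in for steps S101, S102 and S104, not functions named by the disclosure.

```python
def track(acquire_depth_cloud, feature_matrix, targets_match, n_frames):
    """Keep one characteristic value matrix per distinct target state."""
    history = [feature_matrix(acquire_depth_cloud())]    # first acquisition (S101+S102)
    for _ in range(n_frames - 1):
        curr = feature_matrix(acquire_depth_cloud())     # next acquisition
        if targets_match(history[-1], curr):             # S103+S104: targets unchanged
            history[-1] = curr    # match: the new matrix replaces the previous one
        else:
            history.append(curr)  # mismatch: both matrices remain saved
    return history                # S105: tracking uses the saved matrices
```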
In one embodiment, step S101 specifically includes the following steps S101a to S101d.
S101a, using an area array laser radar to collect point cloud data D1 of the detection area after all non-fixed objects have been emptied;
S101b, collecting point cloud data D2i of the current scene of the detection area at preset time intervals by using the area array laser radar, where i takes values 1 to n1 in sequence and n1 is a positive integer; the preset interval can be set and adjusted by those skilled in the art according to actual requirements;
S101c, sequentially computing the difference between D2i and D1;
S101d, obtaining background-removed depth point cloud data D3i in the detection area according to the difference calculation results of step S101c, where i takes values 1 to n2 in sequence, n2 is a positive integer, and n2 ≤ n1.
An area array laser radar, also called an area array photoelectric sensor, is a device that actively emits laser light and calculates the distance between the radar and the detected object from the time of flight of the light. It comprises M rows by N columns of photoelectric sensors (also called ranging devices), which together form an area array sensor of M × N ranging devices. On each trigger, the area array laser radar returns M × N spatial XYZ three-dimensional coordinates and, after compensation and distortion-correction algorithms, generates a spatial XYZ point cloud.
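As a minimal sketch (not part of the disclosure), one trigger of such a sensor can be viewed as an M × N frame of already compensated and corrected XYZ returns, flattened into a point cloud; the array shape is an assumption for illustration:

```python
import numpy as np

def to_point_cloud(frame: np.ndarray) -> np.ndarray:
    """frame: (M, N, 3) array, last axis = X, Y, Z per ranging device."""
    m, n, _ = frame.shape
    return frame.reshape(m * n, 3)   # M*N three-dimensional points per trigger
```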
In practical application, the area array laser radar is installed at a fixed position in the detection area, and its M × N ranging devices are labeled T1 to TM×N; the detection area is divided into M × N designated areas, each given a number. The labels of the ranging devices correspond one-to-one with the numbers of the designated areas, and each ranging device detects its corresponding designated area.
In step S101a, all non-fixed objects within the detection area, including but not limited to people, cartons, instruments and sundries, must be cleared out so that the detection area is kept tidy; the area array laser radar is then used to collect point cloud data D1 of the emptied detection area.
In engineering practice, to account for ranging-device error, the area array laser radar can repeatedly collect point cloud data D1i of the emptied detection area, where i takes values 1 to n3 in sequence and n3 is a positive integer; the average of all D1i is then computed, recorded as D1, and stored.
In step S101c, D2i and D1 are subtracted in sequence, giving the difference result |D2i - D1|: the X-axis component of D1 is subtracted from the X-axis component of D2i, the Y-axis component of D1 from the Y-axis component of D2i, and the Z-axis component of D1 from the Z-axis component of D2i, so the difference result is still a set of point cloud data.
In this step, subtracting the background point cloud data D1 of the emptied detection area from the point cloud data D2i of the current scene yields the background-removed depth point cloud data D3i; the algorithm is simple and the resulting depth point cloud data is highly precise.
In one embodiment, step S101d specifically includes the following steps d1 and d2.
d1. Judging, for each difference calculation result obtained in step S101c, whether the absolute value of the mean of all Z-axis components is greater than a preset first threshold; if so, executing step d2, otherwise returning to step S101c to obtain the next difference calculation result. The first threshold can be set and adjusted by those skilled in the art according to actual requirements;
d2. Taking the corresponding difference calculation result as the background-removed depth point cloud data D3i of the detection area.
In this embodiment, each difference between D2i and D1 must be evaluated: all Z-axis components in the difference result are averaged, and it is judged whether the absolute value of this mean exceeds the preset first threshold. Only if it does is an object considered present in the current scene of the detection area, and the corresponding difference result is taken as the background-removed depth point cloud data. The threshold can be an empirical value; in engineering it is usually chosen between 1 and 10 cm.
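A minimal sketch of steps S101c and S101d under these definitions; the 5 cm threshold is only an illustrative value within the 1-10 cm range mentioned above:

```python
import numpy as np

def remove_background(d2i: np.ndarray, d1: np.ndarray, z_threshold: float = 0.05):
    """d2i, d1: (P, 3) point clouds; returns D3i, or None if no object is present."""
    diff = np.abs(d2i - d1)               # |D2i - D1| per X, Y, Z component (S101c)
    if diff[:, 2].mean() > z_threshold:   # d1: mean Z deviation exceeds the threshold?
        return diff                       # d2: background-removed cloud D3i
    return None                           # scene matches the empty background
```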
In one embodiment, step S102 includes the following steps S102a and S102b.
S102a, clustering depth point cloud data acquired each time, and dividing the depth point cloud data into different sub-regions;
S102b, performing secondary operations on the point cloud data of each sub-region divided from the depth point cloud data of each acquisition to obtain the corresponding point cloud characteristic value matrix.
In the embodiment, the corresponding point cloud characteristic value matrix can be obtained by sequentially clustering and carrying out secondary operation on the depth point cloud data, and the algorithm is simple.
In a specific embodiment, the algorithm used for clustering the depth point cloud data of each acquisition is a density-based spatial clustering algorithm.
In this embodiment, the density-based spatial clustering algorithm may be used to conveniently and rapidly divide the point cloud with sufficiently high density into different sub-regions.
Preferably, the clustering algorithm may be DBSCAN (Density-Based Spatial Clustering of Applications with Noise). This algorithm divides regions of sufficient density into clusters and can find clusters of arbitrary shape in a noisy spatial database, defining a cluster as the largest set of density-connected points. Given suitable parameters, it divides the background-removed depth point cloud data D3i into different point cloud "clusters", and thus into different sub-regions.
After the depth point cloud data is clustered with the DBSCAN algorithm, each resulting sub-region can be numbered according to the returned labels, recorded as O1 to ONi, where Ni is the number of sub-regions formed by clustering the depth point cloud data of the i-th acquisition. Naturally, the depth point cloud data of the first acquisition is the first frame of point cloud data collected after the system powers on and serves as the initialization point cloud data.
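A minimal sketch of step S102a using the scikit-learn implementation of DBSCAN; the eps and min_samples values are illustrative, not parameters given by the disclosure:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_subregions(d3i: np.ndarray, eps: float = 0.1, min_samples: int = 10):
    """Partition D3i into sub-region point clouds; label -1 (noise) is dropped."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(d3i)
    return [d3i[labels == k] for k in sorted(set(labels)) if k != -1]
```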
In one embodiment, step S102b specifically includes the following steps b1 to b3.
b1. Performing data normalization on the point cloud data of each sub-region divided from the depth point cloud data of each acquisition;
b2. Calculating the characteristic value set { Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean } of the point cloud data of each sub-region after data normalization;
b3. Matching the ID of each sub-region with its corresponding characteristic value set to obtain the point cloud characteristic value matrix corresponding to this acquisition of depth point cloud data.
In step b1, in addition to the data normalization process, filtering and correction processes may be included.
In step b2, a feature value set describing the features of the point cloud data of each sub-region is obtained, that is, each sub-region corresponds to one feature value set.
In the characteristic value set, Xmin represents the minimum value of the X-axis component in the point cloud data of the corresponding sub-region, Xmax represents the maximum value of the X-axis component in the point cloud data of the corresponding sub-region, and Xmean represents the mean value of all the X-axis components in the point cloud data of the corresponding sub-region; ymin represents the minimum value of the Y-axis component in the point cloud data of the corresponding sub-region, Ymax represents the maximum value of the Y-axis component in the point cloud data of the corresponding sub-region, and Ymean represents the mean value of all the Y-axis components in the point cloud data of the corresponding sub-region; zmin represents the minimum value of the Z-axis component in the point cloud data of the corresponding sub-region, Zmax represents the maximum value of the Z-axis component in the point cloud data of the corresponding sub-region, and Zmean represents the mean value of all the Z-axis components in the point cloud data of the corresponding sub-region.
In addition to the above feature values, the set of feature values may further include: the number of sub-regions and the number of valid data points in the point cloud data for each sub-region.
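A minimal sketch of step b2 for one sub-region, computing the nine base characteristic values in the order listed above (the optional extras just mentioned are omitted):

```python
import numpy as np

def feature_set(points: np.ndarray) -> np.ndarray:
    """points: (P, 3) normalized sub-region cloud; returns the 9-element set."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # order: Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean
    return np.array([x.min(), x.max(), x.mean(),
                     y.min(), y.max(), y.mean(),
                     z.min(), z.max(), z.mean()])
```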
In step b3, the ID of each sub-region is its number O1 to ONi described above.
The format of the point cloud characteristic value matrix C1 corresponding to the depth point cloud data of the first acquisition is as follows (the original figure is an image; the layout below reconstructs the described format, in which each row pairs a sub-region ID with its characteristic value set):

    O1   {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean}1
    O2   {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean}2
    ...
    ON1  {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean}N1

The point cloud characteristic value matrix C2 corresponding to the depth point cloud data of the second acquisition has the same format, with N2 rows.
by analogy, a point cloud characteristic value matrix Ci corresponding to other depth point cloud data can be obtained.
In the step, the corresponding point cloud characteristic value matrix can be obtained by carrying out secondary operation (filtering, data normalization, correction, characteristic value calculation and the like) on the depth point cloud data acquired each time, and the algorithm is simple and convenient.
In one embodiment, between steps b2 and b3, the following step b4 is further included:
b4. Sorting the characteristic value sets of the sub-regions' point cloud data in order of Zmean from near to far, and numbering the sub-regions according to the sorting result to serve as their IDs.
In this step, after the characteristic value sets of the point cloud data of the sub-regions are obtained, the sets are sorted by Zmean so that the distances between the same characteristic value sets in the point cloud characteristic value matrices of two adjacent acquisitions can conveniently be calculated later, and the sub-regions are numbered according to the sorting result (that is, the sub-regions are renumbered as O1 to ONi).
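A minimal sketch of step b4: sort by the Zmean element (index 8 in the nine-element set above) and assign IDs in that order; the dict-of-rows layout is an illustrative stand-in for the matrix Ci:

```python
def build_matrix(feature_sets):
    """feature_sets: list of 9-element arrays; returns {ID: feature set}."""
    ordered = sorted(feature_sets, key=lambda f: f[8])     # Zmean, near to far
    return {f"O{k}": f for k, f in enumerate(ordered, 1)}  # renumber as O1..ONi
```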
Correspondingly, step S103 is specifically: calculating the distance between the characteristic value sets of the point cloud data of sub-regions with the same ID in the point cloud characteristic value matrices obtained in two adjacent acquisitions.
In this embodiment, after the point cloud characteristic value matrix corresponding to each acquisition of depth point cloud data has been obtained following data normalization, a KNN (K-Nearest Neighbor) algorithm may be used in step S103 to match the elements of the point cloud characteristic value matrices of two adjacent acquisitions, facilitating the subsequent distance calculation.
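A minimal sketch of such matching with the scikit-learn nearest-neighbour implementation; treating each characteristic value set as one matrix row is an assumption for illustration:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_match(prev_rows: np.ndarray, curr_rows: np.ndarray):
    """Pair each row of the new matrix with its nearest row in the previous one."""
    nn = NearestNeighbors(n_neighbors=1).fit(prev_rows)
    dist, idx = nn.kneighbors(curr_rows)    # Euclidean distance by default
    return dist.ravel(), idx.ravel()        # per-row distance and matched index
```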
In a specific embodiment, after step S102, the following steps S106 and S107 are further included.
S106, for the point cloud characteristic value matrices of any two adjacent acquisitions, respectively obtaining the total number of characteristic value sets in each matrix;
S107, comparing the total number N2 of characteristic value sets of the later acquisition with the total number N1 of the earlier acquisition. If N1 > N2, the N1-N2 surplus characteristic value sets are deleted from the earlier matrix together with the IDs of the corresponding sub-regions, and then step S103 is executed to calculate the distances between the same characteristics (i.e., the same characteristic value sets) in the two matrices; if N1 < N2, the N2-N1 missing characteristic value sets are supplemented in the earlier matrix and new IDs are assigned to the corresponding sub-regions, and then step S103 is executed to calculate the distances between the same characteristics in the two matrices.
In this embodiment, while tracking the target in the detection area, new sub-regions may appear and old sub-regions may exit among those divided from each acquisition of depth point cloud data; a new sub-region must be assigned a new ID and a corresponding characteristic value set supplemented, while an exiting sub-region's ID and characteristic value set must be deleted.
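A minimal sketch of steps S106 and S107; padding with zero vectors and dropping the trailing rows are assumptions, since the disclosure states only that surplus sets are deleted and missing sets are supplemented:

```python
import numpy as np

def align_rows(prev_rows: np.ndarray, curr_rows: np.ndarray) -> np.ndarray:
    """Resize the earlier matrix to the row count of the later one."""
    n1, n2 = len(prev_rows), len(curr_rows)
    if n1 > n2:                                  # old sub-regions exited
        return prev_rows[:n2]                    # delete the N1-N2 surplus sets
    if n1 < n2:                                  # new sub-regions appeared
        pad = np.zeros((n2 - n1, prev_rows.shape[1]))
        return np.vstack([prev_rows, pad])       # supplement N2-N1 sets
    return prev_rows
```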
In one embodiment, the distance calculated in step S103 is specifically an Euler distance.
The Euler distance, also called the Euclidean distance, refers to the absolute distance between two points in space. In the present disclosure, what is actually calculated is the Euler distance between the two multidimensional vectors (Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean) of same-ID sub-regions partitioned from the depth point cloud data of two adjacent acquisitions.
Accordingly, judging in step S104 whether the targets appearing in the detection area in two adjacent acquisitions match, based on the distance calculation result, specifically includes the following steps S104a and S104b.
S104a, for the point cloud characteristic value matrices of any two adjacent acquisitions, sorting the Euler distances of the same characteristics by magnitude and extracting a preset number of the smallest Euler distances; the preset number can be set and adjusted by those skilled in the art according to actual requirements, with a minimum of 1;
S104b, judging whether the preset number of smallest Euler distances are all smaller than a preset second threshold; if so, the targets appearing in the detection area in the two adjacent acquisitions are considered matched, and otherwise not matched. The second threshold can be set and adjusted by those skilled in the art according to actual requirements and may be a fixed or a sliding threshold.
In this embodiment, whether the targets in the detection area in two adjacent acquisitions match is judged via the Euler distance; the algorithm is simple and precise. A match means the target has remained unchanged; a mismatch means the target has changed.
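A minimal sketch of steps S104a and S104b; top_k and threshold are illustrative values for the preset number and the second threshold:

```python
import numpy as np

def targets_match(dists: np.ndarray, top_k: int = 3, threshold: float = 0.2) -> bool:
    """dists: Euler distances of same-ID characteristic value sets."""
    smallest = np.sort(dists)[:top_k]          # S104a: keep the smallest distances
    return bool(np.all(smallest < threshold))  # S104b: all below the second threshold?
```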
It should be noted that the sequence of the above steps is only a specific example provided for illustrating the embodiment of the present disclosure, and the present disclosure does not limit the sequence of the above steps, and those skilled in the art can adjust the sequence as required in practical application.
According to the target tracking method provided by the embodiment of the disclosure, whether a target (object) in the detection area has changed can be discovered in real time based on the background-removed depth point cloud data repeatedly acquired in the area; if it has changed, the point cloud characteristic value matrices of the two acquisitions before and after the change are recorded, and the target in the detection area is tracked based on the saved point cloud characteristic value matrices.
Fig. 2 is a schematic structural diagram of a target tracking device according to an embodiment of the present disclosure. As shown in fig. 2, the apparatus 2 includes: a first acquisition module 201, a second acquisition module 202, a calculation module 203, a judgment module 204, a storage module 205, and a tracking module 206.
The first obtaining module 201 is configured to repeatedly obtain background-removed depth point cloud data in the detection area; the second obtaining module 202 is configured to obtain a corresponding point cloud characteristic value matrix based on the depth point cloud data of each acquisition; the calculation module 203 is configured to calculate the distance between the same characteristics in the point cloud characteristic value matrices of every two adjacent acquisitions; the judging module 204 is configured to judge, based on the distance calculation result obtained by the calculation module 203, whether the targets appearing in the detection area in the two adjacent acquisitions match; the storage module 205 is configured to replace the previously obtained point cloud characteristic value matrix with the newly obtained one when the judgment result of the judging module 204 is a match, and to save both matrices when the result is a mismatch; the tracking module 206 is configured to track the target in the detection area based on the point cloud characteristic value matrices saved by the storage module 205.
In a specific embodiment, the first obtaining module 201 includes: a first acquisition unit, a second acquisition unit and a computing unit.
The first acquisition unit is configured to use the area array laser radar to collect point cloud data D1 of the detection area after all non-fixed objects have been emptied; the second acquisition unit is configured to use the area array laser radar to collect point cloud data D2i of the current scene of the detection area at preset time intervals, where i takes values 1 to n1 in sequence and n1 is a positive integer; the computing unit is configured to sequentially compute the difference between D2i and D1 and to obtain, from the difference calculation results, the background-removed depth point cloud data D3i in the detection area, where i takes values 1 to n2 in sequence and n2 is a positive integer.
In one embodiment, the computing unit comprises: a first calculating subunit, a judging subunit and a determining subunit.
The first calculating subunit is configured to sequentially compute the difference between D2i and D1; the judging subunit is configured to judge, for each difference calculation result obtained by the first calculating subunit, whether the absolute value of the mean of all Z-axis components is greater than a preset first threshold; the determining subunit is configured to take the corresponding difference calculation result as the background-removed depth point cloud data D3i in the detection area when the judging subunit's result is greater than the preset first threshold.
In a specific embodiment, the second obtaining module 202 includes: a first arithmetic unit and a second arithmetic unit.
The first arithmetic unit is configured to cluster the depth point cloud data of each acquisition and divide it into different sub-regions; the second arithmetic unit is configured to perform secondary operations on the point cloud data of each sub-region divided from the depth point cloud data of each acquisition to obtain the corresponding point cloud characteristic value matrix.
In a specific embodiment, the first arithmetic unit clusters the depth point cloud data of each acquisition using a density-based spatial clustering algorithm.
In one embodiment, the second arithmetic unit includes: a normalization subunit, a second calculation subunit and a matching subunit.
The normalization subunit is configured to perform data normalization on the point cloud data of each sub-region divided from the depth point cloud data of each acquisition; the second calculating subunit is configured to calculate the characteristic value set { Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean } of the point cloud data of each sub-region after data normalization; the matching subunit is configured to match the ID of each sub-region with its corresponding characteristic value set to obtain the point cloud characteristic value matrix corresponding to this acquisition of depth point cloud data.
In one embodiment, the second arithmetic unit further includes: and ordering the subunits.
The sorting subunit is configured to sort the characteristic value sets of the sub-regions' point cloud data in order of Zmean from near to far and to number the sub-regions according to the sorting result as their IDs.
Correspondingly, the calculation module 203 is specifically configured to calculate the distance between the characteristic value sets of the point cloud data of sub-regions with the same ID in the point cloud characteristic value matrices of two adjacent acquisitions.
In one embodiment, the second arithmetic unit further includes an acquisition subunit; the device 2 further comprises: a comparison module 207, a deletion module 208 and an appending module 209.
The acquisition subunit is configured to obtain, for the point cloud characteristic value matrices of any two adjacent acquisitions, the total number of characteristic value sets in each matrix; the comparison module 207 is configured to compare the total number N2 of characteristic value sets of the later acquisition with the total number N1 of the earlier one; the deletion module 208 is configured, when the comparison result of the comparison module 207 is N1 > N2, to delete the N1-N2 surplus characteristic value sets from the earlier matrix together with the IDs of the corresponding sub-regions, after which the calculation module 203 calculates the distances between the same characteristics (i.e., the same characteristic value sets) in the two matrices; the appending module 209 is configured, when the comparison result is N1 < N2, to supplement the N2-N1 missing characteristic value sets in the earlier matrix and assign new IDs to the corresponding sub-regions, after which the calculation module 203 calculates the distances between the same characteristic value sets in the two matrices.
In one embodiment, the distance calculated by the calculation module 203 is specifically the euler distance.
Accordingly, the judging module 204 includes: a sorting unit, an extracting unit and a judging unit.
The sorting unit is configured to sort by magnitude the Euler distances of the same characteristic value sets in the point cloud characteristic value matrices of any two adjacent acquisitions; the extracting unit is configured to extract a preset number of the smallest Euler distances from the sorting result; the judging unit is configured to judge whether these Euler distances are all smaller than a preset second threshold, in which case the targets appearing in the detection area in the two acquisitions are considered matched, and otherwise not matched.
The target tracking device provided by the embodiment of the disclosure can discover in real time whether a target (object) in the detection area has changed, based on the background-removed depth point cloud data repeatedly acquired in the area; if a change occurs, the point cloud characteristic value matrices of the two acquisitions before and after the change are recorded, and the target is tracked based on the saved matrices. The positioning precision is high, the algorithm is simple, the hardware cost is low, and there is no privacy disclosure problem.
Fig. 3 is a schematic structural diagram of a target tracking system according to an embodiment of the present disclosure. As shown in fig. 3, the system includes a computing platform 31 and an area array laser radar 32. The area array laser radar 32 comprises M rows by N columns of photoelectric sensors and is used to collect point cloud data of the detection area after all non-fixed objects have been emptied, and to collect point cloud data of the current scene of the detection area at preset time intervals, so that the computing platform 31 can obtain the background-removed depth point cloud data in the detection area from the difference between the two. The computing platform 31 may adopt the target tracking device 2 of the foregoing embodiment and is specifically used to collect the data of each photoelectric sensor (i.e., ranging device) and perform the corresponding computation, thereby tracking the target in the detection area.
Since the structure of the target tracking device 2 has been described in detail in the foregoing, and the photoelectric sensor belongs to an existing component, the structure thereof is not described in detail in this embodiment.
Based on the same technical concept, the embodiment of the present disclosure correspondingly provides a computer device, as shown in fig. 4, the computer device 4 includes a memory 41 and a processor 42, the memory 41 stores a computer program, and when the processor 42 runs the computer program stored in the memory 41, the processor 42 executes the aforementioned object tracking method.
Based on the same technical concept, embodiments of the present disclosure correspondingly provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the processor executes the foregoing object tracking method.
In summary, the target tracking method, apparatus, system, computer device and storage medium provided in the embodiments of the present disclosure track a target in a detection area through an area array laser radar and a computing platform. Specifically, background-removed depth point cloud data in the detection area is first obtained; the DBSCAN algorithm clusters the depth point cloud data into different sub-regions, and secondary operations on each sub-region's point cloud data yield the corresponding point cloud characteristic value matrix; a KNN algorithm then matches the elements of the same characteristic value sets in the matrices of every two adjacent acquisitions so that their distances can be calculated; based on these distances, it is judged whether the targets in the detection area in the two adjacent acquisitions match, the corresponding point cloud characteristic value matrices are saved according to the judgment result, and the target (object) in the detection area is finally tracked from the saved matrices.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media, as known to those skilled in the art.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (12)

1. A target tracking method, comprising:
continuously and repeatedly acquiring background-removed depth point cloud data in a detection area;
obtaining a corresponding point cloud characteristic value matrix based on the depth point cloud data of each acquisition;
calculating the distance between the same characteristics in the point cloud characteristic value matrices obtained in every two adjacent acquisitions;
judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match; if so, replacing the previously obtained point cloud characteristic value matrix with the newly obtained one, and if not, saving both the previous and the new point cloud characteristic value matrices;
and tracking the target in the detection area based on the saved point cloud characteristic value matrix.
2. The method of claim 1, wherein acquiring the background-removed depth point cloud data in the detection area a plurality of times comprises:
using an area array laser radar to collect point cloud data D1 of the detection area after all non-fixed objects have been emptied;
collecting point cloud data D2i of the current scene of the detection area at preset time intervals by using the area array laser radar, where i takes values 1 to n1 in sequence and n1 is a positive integer;
sequentially computing the difference between D2i and D1, and obtaining background-removed depth point cloud data D3i in the detection area according to the difference calculation results, where i takes values 1 to n2 in sequence and n2 is a positive integer.
3. The method according to claim 2, wherein the deriving depth point cloud data D3i in the detection region after background removal according to the difference calculation result comprises:
judging, for each difference calculation result, whether the absolute value of the mean of all Z-axis components is greater than a preset first threshold;
if so, taking the corresponding difference calculation result as the background-removed depth point cloud data D3i in the detection area.
4. The method of claim 1, wherein obtaining a corresponding point cloud characteristic value matrix based on the depth point cloud data of each acquisition comprises:
clustering the depth point cloud data of each acquisition and dividing it into different sub-regions;
performing secondary operations on the point cloud data of each sub-region divided from the depth point cloud data of each acquisition to obtain the corresponding point cloud characteristic value matrix.
5. The method according to claim 4, wherein the algorithm for clustering the depth point cloud data of each acquisition is a density-based spatial clustering algorithm.
6. The method of claim 4, wherein performing secondary operations on the point cloud data of each sub-region divided from the depth point cloud data of each acquisition to obtain the corresponding point cloud characteristic value matrix comprises:
performing data normalization on the point cloud data of each sub-region divided from the depth point cloud data of each acquisition;
calculating a characteristic value set { Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean } of the point cloud data of each sub-region after data normalization processing;
and respectively matching the IDs of the sub-regions with the corresponding characteristic value sets to obtain a point cloud characteristic value matrix corresponding to the depth point cloud data obtained this time.
7. The method of claim 6, further comprising, before matching the ID of each sub-region with its corresponding characteristic value set:
sorting the characteristic value sets of the sub-regions' point cloud data in order of Zmean from near to far, and numbering the sub-regions according to the sorting result to serve as their IDs;
wherein calculating the distance between the same characteristics in the point cloud characteristic value matrices obtained in every two adjacent acquisitions specifically comprises:
calculating the distance between the characteristic value sets of the point cloud data of sub-regions with the same ID in the point cloud characteristic value matrices of the two adjacent acquisitions.
8. The method of claim 6, further comprising:
respectively acquiring, for the point cloud characteristic value matrices of any two adjacent acquisitions, the total number of characteristic value sets in each matrix;
comparing the total number N2 of characteristic value sets of the later acquisition with the total number N1 of the earlier acquisition; if N1 > N2, deleting the N1-N2 surplus characteristic value sets from the earlier matrix together with the IDs of the corresponding sub-regions, and then performing the step of calculating the distances between the same characteristics in the two matrices; if N1 < N2, supplementing the N2-N1 missing characteristic value sets in the earlier matrix and assigning new IDs to the corresponding sub-regions, and then performing the step of calculating the distances between the same characteristics in the two matrices.
9. The method according to claim 1, wherein the distance is specifically an Euler distance;
the judging whether the targets appearing twice every adjacent in the detection area are matched based on the distance calculation result includes:
sorting the Euler distances with the same characteristics according to the size of the point cloud characteristic value matrix obtained in any two adjacent times, and extracting preset number of smaller Euler distances;
and judging whether the preset number of smaller Euler distances are all smaller than a preset second threshold value, if so, considering that the targets which appear in the detection area twice are matched, and otherwise, considering that the targets do not match.
10. A target tracking device, comprising:
a first acquisition module configured to continuously and repeatedly acquire background-removed depth point cloud data of the detection area;
a second acquisition module configured to obtain a corresponding point cloud characteristic value matrix based on the depth point cloud data of each acquisition;
a calculation module configured to calculate the distance between the same characteristics in the point cloud characteristic value matrices obtained in every two adjacent acquisitions;
a judging module configured to judge, based on the distance calculation result obtained by the calculation module, whether the targets appearing in the detection area in the two adjacent acquisitions match;
a storage module configured to replace the previously obtained point cloud characteristic value matrix with the newly obtained one when the judgment result of the judging module is a match, and to save both point cloud characteristic value matrices when the judgment result is a mismatch; and,
a tracking module configured to track the target in the detection area based on the point cloud characteristic value matrices saved by the storage module.
11. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor performs the target tracking method according to any one of claims 1 to 9 when running the computer program stored in the memory.
12. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the target tracking method according to any one of claims 1 to 9.
CN202111054644.5A 2021-09-09 2021-09-09 Target tracking method, device, computer equipment and storage medium Active CN113689471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111054644.5A CN113689471B (en) 2021-09-09 2021-09-09 Target tracking method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111054644.5A CN113689471B (en) 2021-09-09 2021-09-09 Target tracking method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113689471A true CN113689471A (en) 2021-11-23
CN113689471B CN113689471B (en) 2023-08-18

Family

ID=78585822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111054644.5A Active CN113689471B (en) 2021-09-09 2021-09-09 Target tracking method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113689471B (en)

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014178789A (en) * 2013-03-14 2014-09-25 Advanced Telecommunication Research Institute International Tracking device, tracking program and tracking method
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect
US20200242330A1 (en) * 2017-10-05 2020-07-30 Applications Mobiles Overview Inc. Method for object recognition
CN109697728A (en) * 2017-10-20 2019-04-30 阿里巴巴集团控股有限公司 Data processing method, device, system and storage medium
JP2019128196A (en) * 2018-01-23 2019-08-01 株式会社トプコン Surveying apparatus and surveying method
CN110119751A (en) * 2018-02-06 2019-08-13 北京四维图新科技股份有限公司 Laser radar point cloud Target Segmentation method, target matching method, device and vehicle
CN110189257A (en) * 2019-06-03 2019-08-30 北京石油化工学院 Method, apparatus, system and the storage medium that point cloud obtains
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
CN110942476A (en) * 2019-10-17 2020-03-31 湖南大学 Improved three-dimensional point cloud registration method and system based on two-dimensional image guidance and readable storage medium
CN110927712A (en) * 2019-10-28 2020-03-27 珠海格力电器股份有限公司 Tracking method and device
CN110909713A * 2019-12-05 2020-03-24 深圳市镭神智能系统有限公司 Method, system and medium for extracting point cloud data track
CN111126296A (en) * 2019-12-25 2020-05-08 中国联合网络通信集团有限公司 Fruit positioning method and device
CN111239766A (en) * 2019-12-27 2020-06-05 北京航天控制仪器研究所 Water surface multi-target rapid identification and tracking method based on laser radar
WO2021134258A1 (en) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 Point cloud-based target tracking method and apparatus, computer device and storage medium
CN110795523A (en) * 2020-01-06 2020-02-14 中智行科技有限公司 Vehicle positioning method and device and intelligent vehicle
US20210256722A1 (en) * 2020-02-11 2021-08-19 Raytheon Company Collaborative 3d mapping and surface registration
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111640300A (en) * 2020-04-28 2020-09-08 武汉万集信息技术有限公司 Vehicle detection processing method and device
WO2021226716A1 (en) * 2020-05-12 2021-11-18 Glove Systems Inc. System and method for discrete point coordinate and orientation detection in 3d point clouds
CN112069923A (en) * 2020-08-18 2020-12-11 东莞正扬电子机械有限公司 3D face point cloud reconstruction method and system
CN112330661A (en) * 2020-11-24 2021-02-05 交通运输部公路科学研究所 Multi-period vehicle-mounted laser point cloud road change monitoring method
CN113340295A (en) * 2021-06-16 2021-09-03 广东工业大学 Unmanned ship near-shore real-time positioning and mapping method with multiple ranging sensors

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
MO YUDA et al.: "Target accurate positioning based on the point cloud created by stereo vision", 2016 23rd International Conference on Mechatronics and Machine Vision in Practice (M2VIP), pages 1-5 *
TAO LIU et al.: "Moving Camera-Based Object Tracking Using Adaptive Ground Plane Estimation and Constrained Multiple Kernels", Journal of Advanced Transportation, pages 1-15 *
WENWEI ZHANG et al.: "Robust Multi-Modality Multi-Object Tracking", 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 2365-2374 *
CUI JIAWU et al.: "Parallel ICP registration of multiple point clouds using MPI", Bulletin of Surveying and Mapping, no. 3, pages 87-90 *
CUI MINGYUE et al.: "Multi-vehicle collaborative LiDAR SLAM using edge computing", Journal of Image and Graphics, vol. 26, no. 1, pages 218-228 *
CAO ZHIMIN et al.: "Quantitative analysis of local structural information in airborne LiDAR point clouds", Geospatial Information, vol. 14, no. 2, pages 10-12 *
LI SHUAIYIN et al.: "Target detection and tracking algorithm based on multi-layer LiDAR", Transducer and Microsystem Technologies, vol. 39, no. 6, pages 123-126 *
MA RUIXIN et al.: "Research on ship target detection and tracking technology based on multi-source heterogeneous data fusion", Journal of Waterway and Harbor, vol. 42, no. 3, pages 392-398 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442101A (en) * 2022-01-28 2022-05-06 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
CN114442101B (en) * 2022-01-28 2023-11-14 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar

Also Published As

Publication number Publication date
CN113689471B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
Sock et al. Multi-view 6D object pose estimation and camera motion planning using RGBD images
CN109376256B (en) Image searching method and device
US10515117B2 (en) Generating and reviewing motion metadata
WO2014022036A1 (en) Fast 3-d point cloud generation on mobile devices
CN111369495B (en) Panoramic image change detection method based on video
WO2013104432A1 (en) Detecting video copies
US20160117573A1 (en) Method and apparatus for extracting feature correspondences from multiple images
CN111598176B (en) Image matching processing method and device
CN112927247A (en) Graph cutting method based on target detection, graph cutting device and storage medium
CN113689471A (en) Target tracking method and device, computer equipment and storage medium
CN111383246A (en) Scroll detection method, device and equipment
KR20220036803A (en) System and method for detecting fiducial mark on the pcb
CN111368128B (en) Target picture identification method, device and computer readable storage medium
CN115661131B (en) Image identification method and device, electronic equipment and storage medium
Besiris et al. Key frame extraction in video sequences: a vantage points approach
CN112802112B (en) Visual positioning method, device, server and storage medium
CN111767757B (en) Identity information determining method and device
CN112257666B (en) Target image content aggregation method, device, equipment and readable storage medium
CN115131705A (en) Target detection method and device, electronic equipment and storage medium
CN110060343B (en) Map construction method and system, server and computer readable medium
CN110580429B (en) Video background library management method, device and application thereof
CN117218613B (en) Vehicle snapshot recognition system and method
CN111242054B (en) Method and device for detecting capture rate of detector
JP2013246550A (en) Image retrieval system and image retrieval method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant