CN113689471B - Target tracking method, device, computer equipment and storage medium - Google Patents

Target tracking method, device, computer equipment and storage medium

Info

Publication number
CN113689471B
Authority
CN
China
Prior art keywords
point cloud data
characteristic value matrix
detection area
Prior art date
2021-09-09
Legal status
Active
Application number
CN202111054644.5A
Other languages
Chinese (zh)
Other versions
CN113689471A (en)
Inventor
兰陌
陈亚楠
韩莹莹
Current Assignee
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date: 2021-09-09
Filing date: 2021-09-09
Publication date: 2023-08-18
Application filed by China United Network Communications Group Co Ltd
2021-09-09: Priority to CN202111054644.5A
2021-11-23: Publication of CN113689471A
2023-08-18: Application granted; publication of CN113689471B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The disclosure provides a target tracking method, a target tracking apparatus, a computer device and a storage medium. The method comprises the following steps: continuously acquiring, multiple times, depth point cloud data with the background eliminated in a detection area; obtaining a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time; calculating the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions; judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match, and if they match, replacing the previously obtained point cloud feature value matrix with the most recently obtained one, or, if they do not match, storing both matrices; and tracking the target in the detection area based on the stored point cloud feature value matrices. The technical solution provided by the disclosure can discover in real time whether the target in the detection area has changed, thereby realizing target tracking with high positioning accuracy, a simple algorithm and low hardware cost.

Description

Target tracking method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of target tracking technology, and in particular, to a target tracking method, a target tracking apparatus, a computer device, and a computer-readable storage medium.
Background
In the prior art, common target tracking technologies fall into two categories: Internet of Things (IoT) tracking and positioning, and video stream tracking and positioning. In IoT tracking and positioning, a portable terminal device is built from a low-power microcontroller combined with several specific sensors, a large number of coded sensors are deployed in the monitored area, and the target position is then judged and tracked from the connections between the terminal and the coded sensors. In video stream tracking and positioning, a camera is mounted at a high position to film the flow of people below; the captured video stream is fed through some transmission means to a back-end computing unit, which analyzes the collected stream, and the targets covered by the camera are finally tracked through the output of an algorithm.
The inventor has found that the current IoT tracking and positioning technology has the following drawbacks: the device must be mounted on the user or the target to be tracked, the precision is low, and the hardware cost is high. The existing video stream tracking and positioning technology has the following drawbacks: the acquired data is in RGB format, is therefore affected by the illumination of the use environment, and inherits the failure modes of all camera-based schemes; algorithm design for video streams is complex, the back-end computation load is huge, and there are many situations in which the algorithm fails; the required cameras and back-end computing hardware are expensive; and video stream data also raises privacy-leakage concerns.
Therefore, a target tracking scheme that improves accuracy while reducing cost is urgently needed.
Disclosure of Invention
The present disclosure is intended to at least partially solve the technical problems of low precision and high cost in the existing IoT tracking and positioning technology and video stream tracking and positioning technology.
According to an aspect of the present disclosure, there is provided a target tracking method, the method including:
continuously acquiring, multiple times, depth point cloud data with the background eliminated in a detection area;
obtaining a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time;
calculating the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions;
judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match; if they match, replacing the previously obtained point cloud feature value matrix with the most recently obtained one, and if they do not match, storing both matrices;
tracking the target in the detection area based on the stored point cloud feature value matrices.
According to another aspect of the present disclosure, there is provided a target tracking apparatus, the apparatus including:
a first acquisition module configured to continuously acquire, multiple times, depth point cloud data with the background eliminated in a detection area;
a second acquisition module configured to obtain a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time;
a calculation module configured to calculate the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions;
a judging module configured to judge, based on the distance calculation result obtained by the calculation module, whether the targets appearing in the detection area in the two adjacent acquisitions match;
a storage module configured to replace the previously obtained point cloud feature value matrix with the most recently obtained one when the judgment of the judging module is a match, and to store both matrices when the judgment is not a match; and
a tracking module configured to track the target in the detection area based on the point cloud feature value matrices stored by the storage module.
According to yet another aspect of the present disclosure, there is provided a computer device comprising a memory and a processor, the memory having a computer program stored therein; when the processor runs the computer program, the processor performs the aforementioned target tracking method.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium having a computer program stored thereon; the computer program, when executed by a processor, performs the aforementioned target tracking method.
The technical scheme provided by the disclosure can comprise the following beneficial effects:
according to the target tracking method, whether the target in the detection area changes or not can be found in real time based on the depth point cloud data after the background is eliminated in the detection area which is continuously acquired for many times, if so, the point cloud characteristic value matrix obtained before and after the change is recorded, the target in the detection area is tracked based on the stored point cloud characteristic value matrix, the positioning accuracy is high, the algorithm is simple, the hardware cost is low, and the privacy leakage problem does not exist.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the disclosure. The objectives and other advantages of the disclosure will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain, without limitation, the disclosed embodiments.
Fig. 1 is a schematic flow chart of a target tracking method according to an embodiment of the disclosure;
fig. 2 is a schematic structural diagram of a target tracking apparatus according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a target tracking system according to an embodiment of the disclosure;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the following detailed description of the specific embodiments of the present disclosure will be given with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and the above-described figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order; moreover, embodiments of the present disclosure and features of embodiments may be arbitrarily combined with each other without conflict.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present disclosure, and are not of specific significance per se. Thus, "module," "component," or "unit" may be used in combination.
Fig. 1 is a schematic flow chart of a target tracking method according to an embodiment of the disclosure. As shown in fig. 1, the method includes the following steps S101 to S105.
S101, continuously acquiring, multiple times, depth point cloud data with the background eliminated in the detection area.
In this step, "point cloud data" refers to a set of vectors in a three-dimensional coordinate system, where each point contains XYZ coordinates. The presence of depth point cloud data indicates that an object (i.e., a target) exists in the detection area; otherwise, no practically meaningful point cloud data remains after the background in the detection area has been eliminated.
S102, obtaining a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time.
S103, calculating the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions.
S104, judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match; if they match, replacing the previously obtained point cloud feature value matrix with the most recently obtained one; if they do not match, storing both matrices.
S105, tracking the target in the detection area based on the stored point cloud feature value matrices.
In the embodiment of the disclosure, depth point cloud data with the background eliminated is first acquired from the detection area; a corresponding point cloud feature value matrix is then obtained from the depth point cloud data acquired each time; and whether the targets appearing in the detection area in two adjacent acquisitions match is judged from the distance between the same features in the two matrices. In this way, any change of the target in the detection area is discovered in real time, and tracking of the target in the detection area is achieved.
In one embodiment, step S101 specifically includes the following steps S101a to S101d.
S101a, acquiring, using an area array laser radar, point cloud data D1 of the detection area after all non-fixed objects have been cleared out;
S101b, acquiring point cloud data D2i of the current scene of the detection area at preset time intervals using the area array laser radar, where i takes 1 to n1 in turn and n1 is a positive integer; the preset interval can be set and adjusted by a person skilled in the art according to actual requirements;
S101c, sequentially differencing D2i with D1;
S101d, obtaining depth point cloud data D3i with the background eliminated in the detection area from the difference calculation results of step S101c, where i takes 1 to n2 in turn, n2 is a positive integer, and n2 ≤ n1.
An area array laser radar, also called an area array photoelectric sensor, is a device that actively emits laser light and calculates the distance between the radar and a detected object from the time of flight of the light. It comprises M rows and N columns of photoelectric sensors (also called ranging devices), which together form an area array containing M×N ranging devices. Each time it is triggered, the area array laser radar returns M×N spatial XYZ coordinates, from which, after compensation and distortion-correction algorithms, a spatial XYZ point cloud is generated.
In practical applications, the laser radar is mounted at a fixed position in the detection area, and the M×N ranging devices are labelled T1 to TM×N. The detection area is divided into M×N designated sub-areas, each with its own number; the labels of the ranging devices correspond one-to-one to the numbers of the designated sub-areas, so that each ranging device monitors its corresponding designated area.
In step S101a, all non-fixed objects within the detection area need to be cleared out, including but not limited to people, cartons, instruments and sundries, so as to keep the detection area clean; the area array laser radar is then used to acquire point cloud data D1 of the emptied detection area.
In engineering practice, to allow for ranging-device error, the area array laser radar can be used to continuously collect point cloud data D1i of the emptied detection area multiple times, with i taking 1 to n3 in turn, where n3 is a positive integer; the average of the D1i is then calculated, recorded as D1 and stored.
In step S101c, D2i and D1 are differenced in turn to obtain the difference calculation result |D2i − D1|. Because the X-axis, Y-axis and Z-axis components of D1 are subtracted from the corresponding components of D2i, the difference calculation result is still a set of point cloud data.
In this step, the depth point cloud data D3i with the background eliminated is obtained simply by differencing the point cloud data D2i of the current scene against the point cloud data D1 of the emptied detection area; the algorithm is simple, and the acquired depth point cloud data has high precision.
In one embodiment, step S101d specifically includes the following steps d1 and d2.
d1. judging, for each difference calculation result obtained in step S101c, whether the absolute value of the mean of all its Z-axis components is greater than a preset first threshold; if so, executing step d2, otherwise returning to step S101c to acquire the next difference calculation result; the first threshold can be set and adjusted by a person skilled in the art according to actual requirements;
d2. taking the corresponding difference calculation result as the depth point cloud data D3i with the background eliminated in the detection area.
In this embodiment, when the difference between D2i and D1 is calculated, the result must be evaluated: all Z-axis components of the difference calculation result are averaged, and the absolute value of this mean is compared with a preset first threshold, for example a fixed value. Only when the absolute value exceeds the threshold does an object exist in the current scene of the detection area, and only then is the corresponding difference calculation result taken as depth point cloud data with the background eliminated. The threshold can be an empirical value; in engineering it is usually chosen between 1 and 10 cm.
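As a concrete illustration (not part of the claimed subject matter), a minimal NumPy sketch of this background elimination and presence test follows. It assumes each acquisition is an (M·N)×3 array of XYZ coordinates; the function and variable names, and the 0.05 m default threshold, are illustrative assumptions only:

```python
import numpy as np

def background_average(frames):
    """Average n3 acquisitions D1i of the emptied detection area to obtain
    the stored background point cloud D1 (cf. step S101a)."""
    return np.mean(np.stack(frames), axis=0)          # shape (M*N, 3)

def depth_point_cloud(d2i, d1, first_threshold=0.05):
    """Component-wise difference |D2i - D1| (step S101c); the result is kept
    as depth point cloud data D3i only if the absolute mean of its Z-axis
    components exceeds the first threshold (steps d1/d2)."""
    diff = np.abs(d2i - d1)
    if abs(diff[:, 2].mean()) > first_threshold:
        return diff                                   # an object is present
    return None                                       # background only
```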
In one embodiment, step S102 includes the following steps S102a and S102b.
S102a, clustering the depth point cloud data acquired each time so as to divide it into different sub-areas;
S102b, performing a secondary operation on the point cloud data of each sub-area divided from the depth point cloud data acquired each time, to obtain the corresponding point cloud feature value matrix.
In this embodiment, the corresponding point cloud feature value matrix can be obtained simply by clustering the depth point cloud data and then performing the secondary operation on it.
In a specific embodiment, the algorithm adopted for clustering the depth point cloud data acquired each time is a density-based spatial clustering algorithm.
A density-based spatial clustering algorithm conveniently and quickly divides point clouds of sufficiently high density into different sub-areas.
Preferably, the clustering algorithm may be DBSCAN (Density-Based Spatial Clustering of Applications with Noise). DBSCAN divides regions of sufficient density into clusters and can find clusters of arbitrary shape in a noisy spatial database, defining a cluster as the largest set of density-connected points. With this algorithm, once its parameters are given, the background-eliminated depth point cloud data D3i can be divided into the point cloud sets of different clusters, and the different sub-areas are thereby obtained.
After the depth point cloud data has been clustered with DBSCAN, the divided sub-areas can be numbered O1 to ONi according to the returned labels, where Ni is the number of sub-areas obtained by clustering the depth point cloud data acquired the i-th time. The depth point cloud data acquired the first time is the first frame of point cloud data obtained after the system is powered on, i.e. the initialization point cloud data.
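A minimal sketch of this clustering step using scikit-learn's DBSCAN is given below; the eps and min_samples values are illustrative assumptions, not parameters taken from the patent:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_sub_regions(d3i, eps=0.1, min_samples=10):
    """Density-based clustering of the depth point cloud D3i; returns the
    point set of each sub-area keyed by its DBSCAN cluster label.
    Points labelled -1 are noise and are discarded."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(d3i)
    return {lbl: d3i[labels == lbl] for lbl in np.unique(labels) if lbl != -1}
```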
In one embodiment, step S102b specifically includes the following steps b1 to b3.
b1. performing data normalization on the point cloud data of each sub-area divided from the depth point cloud data acquired each time;
b2. calculating the feature value set {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean} of the point cloud data of each sub-area after the data normalization;
b3. matching the ID of each sub-area with its corresponding feature value set, to obtain the point cloud feature value matrix corresponding to the depth point cloud data acquired this time.
In step b1, processing such as filtering and correction may be performed in addition to the data normalization.
Step b2 yields a feature value set describing the point cloud data of each sub-area, i.e. each sub-area corresponds to one feature value set.
In the characteristic value set, xmin represents the minimum value of the X-axis component in the corresponding sub-region point cloud data, xmax represents the maximum value of the X-axis component in the corresponding sub-region point cloud data, and Xmean represents the average value of all the X-axis components in the corresponding sub-region point cloud data; ymin represents the minimum value of the Y-axis component in the corresponding sub-area point cloud data, ymax represents the maximum value of the Y-axis component in the corresponding sub-area point cloud data, and Ymean represents the average value of all the Y-axis components in the corresponding sub-area point cloud data; zmin represents the minimum value of the Z-axis component in the corresponding sub-region point cloud data, zmax represents the maximum value of the Z-axis component in the corresponding sub-region point cloud data, and Zmean represents the average value of all the Z-axis components in the corresponding sub-region point cloud data.
In addition to the above feature values, the feature value set may further include the number of sub-areas and the number of valid data points in the point cloud data of each sub-area.
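The following short sketch, under the same illustrative assumptions as above, computes this nine-element feature value set (plus the optional point count) for one sub-area:

```python
def feature_value_set(points):
    """Minimum, maximum and mean of the X, Y and Z components of one
    sub-area's point cloud (step b2); 'count' is the optional number of
    valid data points also mentioned in the text."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return {
        "Xmin": x.min(), "Xmax": x.max(), "Xmean": x.mean(),
        "Ymin": y.min(), "Ymax": y.max(), "Ymean": y.mean(),
        "Zmin": z.min(), "Zmax": z.max(), "Zmean": z.mean(),
        "count": len(points),
    }
```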
In step b3, the ID of each sub-area is its number O1 to ONi as described above.
The point cloud feature value matrix C1 corresponding to the depth point cloud data acquired the first time pairs each sub-area ID with its feature value set, one row per sub-area:
O1: {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean}
O2: {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean}
…
ON1: {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean}
The point cloud feature value matrix C2 corresponding to the depth point cloud data acquired the second time has the same format, and the point cloud feature value matrices Ci corresponding to the other depth point cloud data are obtained in the same way.
In this step, the corresponding point cloud feature value matrix is obtained simply by performing the secondary operation (filtering, data normalization, correction, feature value calculation and the like) on the depth point cloud data acquired each time; the algorithm is simple and convenient.
In a specific embodiment, between steps b2 and b3, the following step b4 is further included:
b4. sorting the feature value sets of the point cloud data of the sub-areas in order of Zmean from near to far, and numbering the sub-areas according to the sorting result; these numbers serve as the sub-area IDs.
In this step, after the feature value sets of the point cloud data of the sub-areas have been obtained, the sets are sorted by Zmean and the sub-areas are renumbered O1 to ONi according to the sorting result. This makes it convenient to subsequently calculate the distances between the same feature value sets in the point cloud feature value matrices obtained in two adjacent acquisitions.
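A possible sketch of steps b2 to b4 combined, again with illustrative names and building on the feature_value_set helper above, is:

```python
def build_feature_matrix(sub_regions):
    """Sort the sub-areas' feature value sets by ascending Zmean (near to
    far) and renumber them O1, O2, ... accordingly, giving the point cloud
    feature value matrix Ci with one (ID, feature value set) row each."""
    sets = sorted((feature_value_set(pts) for pts in sub_regions.values()),
                  key=lambda s: s["Zmean"])
    return {f"O{k}": s for k, s in enumerate(sets, start=1)}
```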
Accordingly, step S103 specifically consists in calculating the distance between the feature value sets of the point cloud data of sub-areas with the same ID in the point cloud feature value matrices obtained in every two adjacent acquisitions.
In this embodiment, once the point cloud feature value matrix corresponding to each acquisition of depth point cloud data has been obtained after the data normalization, a KNN (K-Nearest Neighbor) algorithm may be used in step S103 to match the elements of the point cloud feature value matrices obtained in two adjacent acquisitions, which facilitates the subsequent distance calculation.
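One way to realize this matching, sketched with scikit-learn's NearestNeighbors and the helpers above (all names are assumptions for illustration, not the patent's own implementation), is:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

KEYS = ("Xmin", "Xmax", "Xmean", "Ymin", "Ymax", "Ymean",
        "Zmin", "Zmax", "Zmean")

def as_vectors(matrix):
    """Stack the feature value sets of a matrix Ci into an Ni x 9 array,
    preserving the sub-area ID order O1, O2, ..."""
    return np.array([[s[k] for k in KEYS] for s in matrix.values()])

def match_features(c_prev, c_curr):
    """1-nearest-neighbour matching of each current feature vector against
    the previous matrix; returns the Euclidean distances and the indices
    of the matched rows of c_prev."""
    nn = NearestNeighbors(n_neighbors=1).fit(as_vectors(c_prev))
    dist, idx = nn.kneighbors(as_vectors(c_curr))
    return dist.ravel(), idx.ravel()
```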
In a specific embodiment, after step S102, the following steps S106 and S107 are further included.
S106, for the point cloud feature value matrices obtained in any two adjacent acquisitions, respectively obtaining the total number of feature value sets in each matrix;
S107, comparing the total number N2 of feature value sets obtained the later time with the total number N1 obtained the earlier time. If N1 > N2, the N1 − N2 surplus feature value sets are deleted from the point cloud feature value matrix obtained the earlier time, together with the IDs of the corresponding sub-areas, and the step of calculating the distances between the same features (i.e. the same feature value sets) in the two matrices is then executed. If N1 < N2, the N2 − N1 missing feature value sets are supplemented in the point cloud feature value matrix obtained the earlier time, new IDs are assigned to the corresponding sub-areas, and the step of calculating the distances between the same features in the two matrices is then executed.
In this embodiment, while the target in the detection area is being tracked, sub-areas divided from the depth point cloud data may appear and disappear between acquisitions: a newly appearing sub-area must be given a new ID and a corresponding feature value set must be supplemented, while the ID and feature value set of a sub-area that has left must be deleted.
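The text does not spell out which sets count as surplus, nor with what values the missing sets are supplemented; the sketch below is one hypothetical reading that trims from the far end of the Zmean ordering and pads with zero-valued placeholder sets:

```python
def reconcile(c_prev, c_curr):
    """Align the two matrices before the distance step (cf. step S107)."""
    n1, n2 = len(c_prev), len(c_curr)
    prev_ids = sorted(c_prev, key=lambda k: int(k[1:]))   # 'O3' -> 3
    if n1 > n2:                        # sub-areas have left the detection area
        for surplus in prev_ids[n2:]:  # assumed choice: drop the farthest ones
            del c_prev[surplus]
    elif n1 < n2:                      # new sub-areas have appeared
        for j in range(n1 + 1, n2 + 1):
            # placeholder set under a new ID; its large distance to any real
            # feature vector lets the mismatch be detected downstream
            c_prev[f"O{j}"] = {k: 0.0 for k in KEYS}
    return c_prev
```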
In one embodiment, the distance calculated in step S103 is specifically a Euclidean distance.
The Euclidean distance is the absolute distance between two points in space. In the present disclosure, what is actually calculated is the Euclidean distance between the two nine-dimensional vectors (Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean) corresponding to the sub-areas with the same ID divided from two adjacent acquisitions of depth point cloud data.
Accordingly, judging in step S104 whether the targets appearing in the detection area in two adjacent acquisitions match, based on the distance calculation result, specifically includes the following steps S104a and S104b.
S104a, for the point cloud feature value matrices obtained in any two adjacent acquisitions, sorting the Euclidean distances of the same features by size and extracting a preset number of the smallest Euclidean distances; the preset number can be set and adjusted by a person skilled in the art according to actual requirements, the minimum being 1;
S104b, judging whether the preset number of smallest Euclidean distances are all smaller than a preset second threshold; if so, the targets appearing in the detection area in the two adjacent acquisitions are considered to match, otherwise they are considered not to match. The second threshold may be set and adjusted by a person skilled in the art according to actual requirements, and may be a fixed or sliding threshold.
In this embodiment, whether the targets appearing in the detection area in two adjacent acquisitions match is judged via the Euclidean distance; the algorithm is simple and accurate. A match means the target has remained unchanged; a mismatch means the target has changed.
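A compact sketch of this decision, with illustrative values for the preset number and the second threshold, might read:

```python
import numpy as np

def targets_match(distances, preset_number=3, second_threshold=0.2):
    """Steps S104a/S104b: sort the per-feature Euclidean distances, extract
    the preset number of smallest ones, and declare the targets matched
    only if all of them fall below the second threshold."""
    smallest = np.sort(np.asarray(distances))[:preset_number]
    return bool(np.all(smallest < second_threshold))
```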
It should be noted that the order of the above steps is only a specific example provided to illustrate the embodiments of the present disclosure; the present disclosure does not limit the order of the steps, and a person skilled in the art may adjust it as needed in practical applications.
According to the target tracking method provided by the embodiments of the present disclosure, whether the target (object) in the detection area has changed can be discovered in real time from the continuously and repeatedly acquired, background-eliminated depth point cloud data of the detection area; if it has, the point cloud feature value matrices obtained before and after the change are recorded, and the target in the detection area is tracked based on the stored matrices.
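Tying the previous sketches together, a hypothetical end-to-end loop over steps S101 to S105 could look as follows; every name and parameter remains an illustrative assumption:

```python
def track(frames, background):
    """Per acquisition: eliminate the background, cluster, build the feature
    value matrix, compare it with the previous one, and store both matrices
    whenever the targets no longer match (steps S101 to S105)."""
    stored, prev = [], None
    for frame in frames:
        d3 = depth_point_cloud(frame, background)
        if d3 is None:
            continue                          # no object in the detection area
        curr = build_feature_matrix(cluster_sub_regions(d3))
        if prev is not None:
            dist, _ = match_features(reconcile(dict(prev), curr), curr)
            if targets_match(dist):
                prev = curr                   # matched: refresh the kept matrix
                continue
            stored.extend([prev, curr])       # not matched: keep both matrices
        prev = curr
    return stored
```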
Fig. 2 is a schematic structural diagram of a target tracking apparatus according to an embodiment of the disclosure. As shown in fig. 2, the apparatus 2 includes: the device comprises a first acquisition module 201, a second acquisition module 202, a calculation module 203, a judgment module 204, a storage module 205 and a tracking module 206.
The first acquisition module 201 is configured to continuously acquire, multiple times, depth point cloud data with the background eliminated in the detection area; the second acquisition module 202 is configured to obtain a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time; the calculation module 203 is configured to calculate the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions; the judging module 204 is configured to judge, based on the distance calculation result obtained by the calculation module 203, whether the targets appearing in the detection area in the two adjacent acquisitions match; the storage module 205 is configured to replace the previously obtained point cloud feature value matrix with the most recently obtained one when the judgment of the judging module 204 is a match, and to store both matrices when the judgment is not a match; and the tracking module 206 is configured to track the target in the detection area based on the point cloud feature value matrices stored by the storage module 205.
In one embodiment, the first acquisition module 201 includes a first acquisition unit, a second acquisition unit and a calculation unit.
The first acquisition unit is configured to acquire, using the area array laser radar, point cloud data D1 of the detection area after all non-fixed objects have been cleared out; the second acquisition unit is configured to acquire point cloud data D2i of the current scene of the detection area at preset time intervals using the area array laser radar, where i takes 1 to n1 in turn and n1 is a positive integer; and the calculation unit is configured to sequentially difference D2i with D1 and to obtain, from the difference calculation results, depth point cloud data D3i with the background eliminated in the detection area, where i takes 1 to n2 in turn and n2 is a positive integer.
In one embodiment, the calculation unit includes a first calculation subunit, a judging subunit and a determining subunit.
The first calculation subunit is configured to sequentially difference D2i with D1; the judging subunit is configured to judge, for each difference calculation result obtained by the first calculation subunit, whether the absolute value of the mean of all its Z-axis components is greater than the preset first threshold; and the determining subunit is configured to take the corresponding difference calculation result as the depth point cloud data D3i with the background eliminated in the detection area when the judgment of the judging subunit is that it is greater than the preset first threshold.
In one embodiment, the second acquisition module 202 includes a first operation unit and a second operation unit.
The first operation unit is configured to cluster the depth point cloud data acquired each time and divide it into different sub-areas; the second operation unit is configured to perform the secondary operation on the point cloud data of each sub-area divided from the depth point cloud data acquired each time, to obtain the corresponding point cloud feature value matrix.
In a specific embodiment, the algorithm adopted by the first operation unit to cluster the depth point cloud data acquired each time is a density-based spatial clustering algorithm.
In one specific embodiment, the second operation unit includes a normalization subunit, a second calculation subunit and a matching subunit.
The normalization subunit is configured to perform data normalization on the point cloud data of each sub-area divided from the depth point cloud data acquired each time; the second calculation subunit is configured to calculate the feature value set {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean} of the point cloud data of each sub-area after the data normalization; and the matching subunit is configured to match the ID of each sub-area with its corresponding feature value set, to obtain the point cloud feature value matrix corresponding to the depth point cloud data acquired this time.
In a specific embodiment, the second operation unit further includes a sorting subunit.
The sorting subunit is configured to sort the feature value sets of the point cloud data of the sub-areas in order of Zmean from near to far, and to number the sub-areas according to the sorting result as their IDs.
Accordingly, the calculation module 203 is specifically configured to calculate the distance between the feature value sets of the point cloud data of sub-areas with the same ID in the point cloud feature value matrices obtained in every two adjacent acquisitions.
In a specific embodiment, the second operation unit further includes an obtaining subunit, and the apparatus 2 further includes a comparison module 207, a deletion module 208 and a supplementing module 209.
The obtaining subunit is configured to obtain, for the point cloud feature value matrices obtained in any two adjacent acquisitions, the total number of feature value sets in each matrix; the comparison module 207 is configured to compare the total number N2 of feature value sets obtained the later time with the total number N1 obtained the earlier time; the deletion module 208 is configured, when the comparison result of the comparison module 207 is N1 > N2, to delete the N1 − N2 surplus feature value sets from the point cloud feature value matrix obtained the earlier time together with the IDs of the corresponding sub-areas, after which the calculation module 203 calculates the distances between the same features (i.e. the same feature value sets) in the two matrices; and the supplementing module 209 is configured, when the comparison result is N1 < N2, to supplement the N2 − N1 missing feature value sets in the point cloud feature value matrix obtained the earlier time and to assign new IDs to the corresponding sub-areas, after which the calculation module 203 calculates the distances between the same feature value sets in the two matrices.
In one embodiment, the distance calculated by the calculation module 203 is specifically a Euclidean distance.
Accordingly, the judging module 204 includes a sorting unit, an extraction unit and a judging unit.
The sorting unit is configured to sort the Euclidean distances of the same feature value sets by size for the point cloud feature value matrices obtained in any two adjacent acquisitions; the extraction unit is configured to extract the preset number of smallest Euclidean distances according to the sorting result; and the judging unit is configured to judge whether the preset number of smallest Euclidean distances are all smaller than the preset second threshold, the targets appearing in the detection area in the two adjacent acquisitions being considered to match if so, and not to match otherwise.
According to the target tracking apparatus provided by the embodiments of the present disclosure, whether the target (object) in the detection area has changed can be discovered in real time from the continuously and repeatedly acquired, background-eliminated depth point cloud data of the detection area; if it has, the point cloud feature value matrices obtained before and after the change are recorded, and the target in the detection area is tracked based on the stored matrices.
Fig. 3 is a schematic structural diagram of a target tracking system according to an embodiment of the disclosure. As shown in fig. 3, the system includes a computing platform 31 and an area array laser radar 32. The area array laser radar 32 comprises M rows and N columns of photoelectric sensors and is used to collect point cloud data of the detection area after all non-fixed objects have been cleared out, and to collect point cloud data of the current scene of the detection area at preset time intervals, so that the computing platform 31 obtains the background-eliminated depth point cloud data of the detection area from the difference between the two. The computing platform 31 may employ the target tracking apparatus 2 of the foregoing embodiment and is specifically configured to collect the data gathered by each photoelectric sensor (i.e. ranging device) and perform the corresponding calculations, thereby tracking the target in the detection area.
Since the structure of the target tracking apparatus 2 has been described in detail above, and the photoelectric sensor is an existing component, their structures are not described again in this embodiment.
Based on the same technical concept, the embodiments of the present disclosure correspondingly further provide a computer device, as shown in fig. 4, where the computer device 4 includes a memory 41 and a processor 42, where the memory 41 stores a computer program, and when the processor 42 runs the computer program stored in the memory 41, the processor 42 executes the foregoing target tracking method.
Based on the same technical concept, the embodiments of the present disclosure correspondingly further provide a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, performs the aforementioned target tracking method.
In summary, the target tracking method, apparatus, system, computer device and storage medium provided in the embodiments of the present disclosure track a target in a detection area by means of an area array laser radar and a computing platform. Specifically, the background-eliminated depth point cloud data of the detection area is first obtained; the depth point cloud data is clustered with the DBSCAN algorithm and divided into different sub-areas; a secondary operation is performed on the point cloud data of each sub-area to obtain the corresponding point cloud feature value matrix; the KNN algorithm is used to match the elements of the same feature value sets in the point cloud feature value matrices obtained in every two adjacent acquisitions, so that the distance between the same feature value sets can be calculated; whether the targets appearing in the detection area in the two adjacent acquisitions match is then judged from these distances; and the corresponding point cloud feature value matrices are stored according to the judgment result.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present disclosure. Although the present disclosure has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A target tracking method, comprising:
continuously acquiring, multiple times, depth point cloud data with the background eliminated in a detection area;
obtaining a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time;
calculating the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions;
judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match; if they match, replacing the previously obtained point cloud feature value matrix with the most recently obtained one, and if they do not match, storing both matrices; and
tracking the target in the detection area based on the stored point cloud feature value matrices;
wherein obtaining a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time comprises:
clustering the depth point cloud data acquired each time and dividing it into different sub-areas;
performing data normalization on the point cloud data of each sub-area divided from the depth point cloud data acquired each time;
calculating the feature value set {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean} of the point cloud data of each sub-area after the data normalization; and
matching the ID of each sub-area with its corresponding feature value set, to obtain the point cloud feature value matrix corresponding to the depth point cloud data acquired this time;
and wherein calculating the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions specifically comprises:
calculating the distance between the feature value sets of the point cloud data of sub-areas with the same ID in the point cloud feature value matrices obtained in every two adjacent acquisitions.
2. The method of claim 1, wherein continuously acquiring depth point cloud data with the background eliminated in the detection area comprises:
acquiring, using an area array laser radar, point cloud data D1 of the detection area after all non-fixed objects have been cleared out;
acquiring point cloud data D2i of the current scene of the detection area at preset time intervals using the area array laser radar, where i takes 1 to n1 in turn and n1 is a positive integer; and
sequentially differencing D2i with D1, and obtaining depth point cloud data D3i with the background eliminated in the detection area from the difference calculation results, where i takes 1 to n2 in turn and n2 is a positive integer.
3. The method according to claim 2, wherein obtaining depth point cloud data D3i with the background eliminated in the detection area from the difference calculation results comprises:
judging, for each difference calculation result, whether the absolute value of the mean of all its Z-axis components is greater than a preset first threshold; and
if it is greater than the preset first threshold, taking the corresponding difference calculation result as the depth point cloud data D3i with the background eliminated in the detection area.
4. The method of claim 1, wherein the algorithm used to cluster the depth point cloud data acquired each time is a density-based spatial clustering algorithm.
5. The method of claim 1, further comprising, before matching the IDs of the sub-areas with the corresponding feature value sets:
sorting the feature value sets of the point cloud data of the sub-areas in order of Zmean from near to far, and numbering the sub-areas according to the sorting result as their IDs.
6. The method as recited in claim 1, further comprising:
for the point cloud feature value matrices obtained in any two adjacent acquisitions, respectively obtaining the total number of feature value sets in each matrix; and
comparing the total number N2 of feature value sets obtained the later time with the total number N1 obtained the earlier time; if N1 > N2, deleting the N1 − N2 surplus feature value sets from the point cloud feature value matrix obtained the earlier time together with the IDs of the corresponding sub-areas, and then executing the step of calculating the distances between the same features in the two point cloud feature value matrices; if N1 < N2, supplementing the N2 − N1 missing feature value sets in the point cloud feature value matrix obtained the earlier time and assigning new IDs to the corresponding sub-areas, and then executing the step of calculating the distances between the same features in the two point cloud feature value matrices.
7. The method according to claim 1, wherein the distance is specifically a Euclidean distance;
and wherein judging, based on the distance calculation result, whether the targets appearing in the detection area in the two adjacent acquisitions match comprises:
for the point cloud feature value matrices obtained in any two adjacent acquisitions, sorting the Euclidean distances of the same features by size and extracting a preset number of the smallest Euclidean distances; and
judging whether the preset number of smallest Euclidean distances are all smaller than a preset second threshold; if so, the targets appearing in the detection area in the two adjacent acquisitions are considered to match, and if not, they are considered not to match.
8. A target tracking apparatus, comprising:
a first acquisition module configured to continuously acquire, multiple times, depth point cloud data with the background eliminated in a detection area;
a second acquisition module configured to obtain a corresponding point cloud feature value matrix based on the depth point cloud data acquired each time;
a calculation module configured to calculate the distance between the same features in the point cloud feature value matrices obtained in every two adjacent acquisitions;
a judging module configured to judge, based on the distance calculation result obtained by the calculation module, whether the targets appearing in the detection area in the two adjacent acquisitions match;
a storage module configured to replace the previously obtained point cloud feature value matrix with the most recently obtained one when the judgment of the judging module is a match, and to store both matrices when the judgment is not a match; and
a tracking module configured to track the target in the detection area based on the point cloud feature value matrices stored by the storage module;
wherein the second acquisition module comprises a first operation unit and a second operation unit;
the first operation unit is configured to cluster the depth point cloud data acquired each time and divide it into different sub-areas;
the second operation unit comprises a normalization subunit, a second calculation subunit and a matching subunit;
the normalization subunit is configured to perform data normalization on the point cloud data of each sub-area divided from the depth point cloud data acquired each time;
the second calculation subunit is configured to calculate the feature value set {Xmin, Xmax, Xmean, Ymin, Ymax, Ymean, Zmin, Zmax, Zmean} of the point cloud data of each sub-area after the data normalization;
the matching subunit is configured to match the ID of each sub-area with its corresponding feature value set, to obtain the point cloud feature value matrix corresponding to the depth point cloud data acquired this time; and
the calculation module is specifically configured to calculate the distance between the feature value sets of the point cloud data of sub-areas with the same ID in the point cloud feature value matrices obtained in every two adjacent acquisitions.
9. A computer device comprising a memory and a processor, the memory having a computer program stored therein, wherein the processor performs the target tracking method according to any one of claims 1 to 7 when running the computer program stored in the memory.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the target tracking method according to any one of claims 1 to 7.
CN202111054644.5A 2021-09-09 2021-09-09 Target tracking method, device, computer equipment and storage medium Active CN113689471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111054644.5A CN113689471B (en) 2021-09-09 2021-09-09 Target tracking method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113689471A (en) 2021-11-23
CN113689471B (en) 2023-08-18

Family

ID=78585822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111054644.5A Active CN113689471B (en) 2021-09-09 2021-09-09 Target tracking method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113689471B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442101B (en) * 2022-01-28 2023-11-14 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
LU100465B1 (en) * 2017-10-05 2019-04-09 Applications Mobiles Overview Inc System and method for object recognition
US11964762B2 (en) * 2020-02-11 2024-04-23 Raytheon Company Collaborative 3D mapping and surface registration
WO2021226716A1 (en) * 2020-05-12 2021-11-18 Glove Systems Inc. System and method for discrete point coordinate and orientation detection in 3d point clouds
CN113340295B (en) * 2021-06-16 2021-12-21 广东工业大学 Unmanned ship near-shore real-time positioning and mapping method with multiple ranging sensors

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014178789A (en) * 2013-03-14 2014-09-25 Advanced Telecommunication Research Institute International Tracking device, tracking program and tracking method
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect
CN109697728A (en) * 2017-10-20 2019-04-30 阿里巴巴集团控股有限公司 Data processing method, device, system and storage medium
JP2019128196A (en) * 2018-01-23 2019-08-01 株式会社トプコン Surveying apparatus and surveying method
CN110119751A (en) * 2018-02-06 2019-08-13 北京四维图新科技股份有限公司 Laser radar point cloud Target Segmentation method, target matching method, device and vehicle
CN110189257A (en) * 2019-06-03 2019-08-30 北京石油化工学院 Method, apparatus, system and the storage medium that point cloud obtains
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
CN110942476A (en) * 2019-10-17 2020-03-31 湖南大学 Improved three-dimensional point cloud registration method and system based on two-dimensional image guidance and readable storage medium
CN110927712A (en) * 2019-10-28 2020-03-27 珠海格力电器股份有限公司 Tracking method and device
CN110909713A (en) * 2019-12-05 2020-03-24 深圳市镭神智能***有限公司 Method, system and medium for extracting point cloud data track
CN111126296A (en) * 2019-12-25 2020-05-08 中国联合网络通信集团有限公司 Fruit positioning method and device
CN111239766A (en) * 2019-12-27 2020-06-05 北京航天控制仪器研究所 Water surface multi-target rapid identification and tracking method based on laser radar
WO2021134258A1 (en) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 Point cloud-based target tracking method and apparatus, computer device and storage medium
CN110795523A (en) * 2020-01-06 2020-02-14 中智行科技有限公司 Vehicle positioning method and device and intelligent vehicle
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111640300A (en) * 2020-04-28 2020-09-08 武汉万集信息技术有限公司 Vehicle detection processing method and device
CN112069923A (en) * 2020-08-18 2020-12-11 东莞正扬电子机械有限公司 3D face point cloud reconstruction method and system
CN112330661A (en) * 2020-11-24 2021-02-05 交通运输部公路科学研究所 Multi-period vehicle-mounted laser point cloud road change monitoring method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Ship Target Detection and Tracking Technology Based on Multi-source Heterogeneous Data Fusion; Ma Ruixin et al.; Journal of Waterway and Harbor (水道港口); Vol. 42, No. 3; pp. 392-398 *

Also Published As

Publication number Publication date
CN113689471A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
RU2426172C1 (en) Method and system for isolating foreground object image proceeding from colour and depth data
CN109376256B (en) Image searching method and device
CN109063549B (en) High-resolution aerial video moving target detection method based on deep neural network
CN109859314B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and storage medium
US20160117573A1 (en) Method and apparatus for extracting feature correspondences from multiple images
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
US11651581B2 (en) System and method for correspondence map determination
CN111369495A (en) Video-based panoramic image change detection method
CN113689471B (en) Target tracking method, device, computer equipment and storage medium
CN113362441A (en) Three-dimensional reconstruction method and device, computer equipment and storage medium
CN113139442A (en) Image tracking method and device, storage medium and electronic equipment
US20120082393A1 (en) Attitude estimation with compressive sampling of starfield data
CN111383246A (en) Scroll detection method, device and equipment
CN111899279A (en) Method and device for detecting motion speed of target object
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN117197245A (en) Pose restoration method and device
US8472737B2 (en) Attitude estimation in compressed domain
CN114494404A (en) Object volume measurement method, system, device and medium
CN113971687A Object tracking method, device, electronic equipment and storage medium
CN111767757B (en) Identity information determining method and device
CN110060343B (en) Map construction method and system, server and computer readable medium
CN112101303A (en) Image data processing method and device and computer readable storage medium
US11127158B2 (en) Image indexing and retrieval using local image patches for object three-dimensional pose estimation
CN111598882A (en) Organ detection method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant