WO2022142948A1 - Dynamic target tracking and positioning method, apparatus, device, and storage medium


Info

Publication number
WO2022142948A1
WO2022142948A1 · PCT/CN2021/134124 (CN2021134124W)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud cluster
state vector
coordinates
cluster
Prior art date
Application number
PCT/CN2021/134124
Other languages
English (en)
French (fr)
Inventor
周阳
张涛
陈美文
刘运航
何科君
Original Assignee
深圳市普渡科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市普渡科技有限公司
Publication of WO2022142948A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions

Definitions

  • the present application relates to the field of robotics, and in particular, to a dynamic target tracking and positioning method, apparatus, device, and computer-readable storage medium.
  • The environment faced by mobile robots is becoming more and more complex, especially in scenarios with many different types of dynamic targets, such as pedestrians, vehicles, or other robots. In such scenes, the robot not only needs to maintain stable positioning but also needs to move smoothly.
  • The key to solving the above problems is whether the robot can extract dynamic objects from the scene and track their pose and velocity. If the dynamic targets in the scene can be detected and tracked, their pose and velocity information can provide the robot's motion control system with decision-making input for avoiding them, and the positioning system can eliminate their impact on positioning accuracy.
  • A solution in the prior art is to sample the environment through the lidar mounted on the mobile robot and, after classifying the sampled data with a classification algorithm, determine the dynamic targets in the scene.
  • However, the environmental information provided by lidar is not rich, so such an algorithm cannot distinguish potential dynamic targets well enough to track them in a targeted manner.
  • the present application provides a dynamic target tracking and positioning method, apparatus, device, and computer-readable storage medium.
  • a dynamic target tracking and positioning method comprising:
  • a dynamic target tracking and positioning device comprising:
  • the first acquisition module is used to sample the environmental objects through the lidar mounted on the mobile robot, and obtain the lidar point cloud data of the environmental objects;
  • a second acquisition module configured to cluster the lidar point cloud data of the environmental objects, and acquire the coordinates of the point cloud clusters of the environmental objects;
  • a point cloud cluster coordinate processing module configured to associate the coordinates of the point cloud cluster of the environmental object with an existing point cloud cluster state vector or create a point cloud cluster state vector according to the coordinates of the point cloud cluster of the environmental object;
  • an iterative optimization module configured to iteratively optimize, based on the pose of the mobile robot, the point cloud cluster state vector, and the position observation constraints of the point cloud cluster state vector, the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector.
  • a device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the technical solution of the above-mentioned dynamic target tracking and positioning method.
  • a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the technical solution of the above-mentioned dynamic target tracking and positioning method.
  • FIG. 1 is a flowchart of a dynamic target tracking and positioning method provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a dynamic target tracking and positioning device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a two-dimensional KD tree provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • The robot can be a robot operating in a restaurant, such as a food delivery robot; a medicine delivery robot operating in a medical place such as a hospital; or a transfer robot working in a warehouse or similar places, and so on.
  • the dynamic target tracking and positioning method mainly includes steps S101 to S104, which are described in detail as follows:
  • Step S101 sampling the environmental objects through the lidar mounted on the mobile robot, and obtaining the lidar point cloud data of the environmental objects.
  • the environment refers to the environment in which the robot works
  • the environmental objects refer to all objects in the environment in which the robot works, including static objects in the environment (for example, a certain kind of goods, a tree, a wall, a table) and dynamic objects (eg, a person, a moving car, or a mobile robot, etc.).
  • the sampling process of the laser radar carried by the robot is the same as that in the prior art.
  • A laser beam is emitted into the surrounding environment to scan it in real time, and the time-of-flight ranging method is used to calculate the distance between the robot and surrounding objects.
  • the lidar may adopt a two-dimensional lidar.
  • Step S102 Cluster the lidar point cloud data of the environmental objects, and obtain the coordinates of the point cloud clusters of the environmental objects.
  • Clustering is a machine learning technique that involves grouping data points. For a given set of data points, a clustering algorithm can be used to divide each data point into a specific group. In theory, data points in the same group should have similar attributes and/or characteristics, while data points in different groups should have highly different attributes and/or characteristics.
  • Clustering point cloud data has the general properties of the above-mentioned clustering algorithms. As an embodiment of the present application, clustering the lidar point cloud data of the environmental objects and obtaining the coordinates of the point cloud clusters of the environmental objects can be achieved through steps S1021 to S1023, described as follows:
  • Step S1021 Perform primary clustering on the lidar point cloud data of the environmental objects to obtain the geometric center of the first point cloud cluster.
  • DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm.
  • The primary clustering of the lidar point cloud data of the environmental objects, and the acquisition of the geometric center of the first point cloud cluster, can be performed as follows: set the neighborhood radius of the DBSCAN algorithm and the minimum number of points per point cloud cluster, use the DBSCAN algorithm to cluster the lidar point cloud data for the first time to obtain the initial point cloud cluster, and take the geometric center of the initial point cloud cluster as the geometric center of the first point cloud cluster.
  • The purpose of setting the neighborhood radius and the minimum number of points per point cloud cluster in the DBSCAN algorithm is to filter out abnormal points in the lidar point cloud data of the environmental objects; this reduces unnecessary work for the clustering algorithm and also improves classification accuracy.
  • the initial point cloud cluster can be regarded as a mass point, and the geometric center coordinates of the mass point can be obtained by geometric or physical methods as the geometric center of the first point cloud cluster.
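The primary clustering of step S1021 and the centroid computation above can be prototyped with a minimal DBSCAN in Python. This is an illustrative sketch, not the patent's implementation; the function names, parameter values, and sample points are all hypothetical:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: groups 2-D points into density-connected clusters.
    Points with fewer than min_pts neighbours that are not density-reachable
    from any core point are treated as noise and filtered out."""
    labels = [None] * len(points)   # None = unvisited, -1 = noise, >=0 = cluster id

    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    n_clusters = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cid = n_clusters
        n_clusters += 1
        labels[i] = cid
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:     # noise reachable from a core point: border point
                labels[j] = cid
            if labels[j] is not None:
                continue
            labels[j] = cid
            nj = neighbours(j)
            if len(nj) >= min_pts:  # j is itself a core point: keep expanding
                queue.extend(nj)

    clusters = [[] for _ in range(n_clusters)]
    for p, label in zip(points, labels):
        if label is not None and label >= 0:
            clusters[label].append(p)
    return clusters

def geometric_center(cluster):
    """Treat the cluster as a set of mass points and return its centroid."""
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

On two tight groups of scan points plus one stray return, the stray point is dropped as noise and each group yields one geometric center, which is exactly the filtering effect described above.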
  • Step S1022 if there is a similar point cloud cluster in the first point cloud cluster, cluster the geometric center of the first point cloud cluster and the geometric center of the similar point cloud cluster again to obtain a second point cloud cluster.
  • The point cloud clusters obtained by clustering the lidar point cloud data of symmetrical objects are similar. For example, for the two legs or two hands of a human body, or the two front wheels of a car, the point cloud cluster obtained by clustering the lidar point cloud data of the left leg is similar to that obtained by clustering the lidar point cloud data of the right leg, and the point cloud cluster obtained by clustering the left-hand lidar point cloud data is similar to that obtained by clustering the right-hand lidar point cloud data.
  • Similar point cloud clusters can be clustered again by their geometric centers to obtain a new point cloud cluster; that is, in this embodiment of the present application, if there is a similar point cloud cluster in the first point cloud cluster, the geometric center of the first point cloud cluster and the geometric center of the similar point cloud cluster are clustered again to obtain the second point cloud cluster.
  • The algorithm used to re-cluster the geometric center of the first point cloud cluster with the geometric centers of similar point cloud clusters can be the same as the algorithm used for the initial clustering of the lidar point cloud data, i.e., the DBSCAN algorithm, or another clustering algorithm; in the re-clustering, however, the geometric center of the first point cloud cluster and the geometric centers of similar point cloud clusters are treated as the clustering objects, i.e., the data points.
  • Step S1023 Obtain the geometric center coordinates of the second point cloud cluster, and use the geometric center coordinates of the second point cloud cluster as the coordinates of the point cloud cluster of the environmental object.
  • The geometric center coordinates of the second point cloud cluster may be obtained in the same way as for the first point cloud cluster, which will not be repeated here. After the geometric center coordinates of the second point cloud cluster are obtained, they are used as the coordinates of the point cloud cluster of the environmental object.
  • Step S103 Associate the coordinates of the point cloud clusters of the environmental objects with the existing point cloud cluster state vectors or create a point cloud cluster state vector according to the coordinates of the point cloud clusters of the environmental objects.
  • a point cloud cluster can be regarded as a mass point.
  • Information such as position and velocity is usually used to characterize the properties of a mass point. Therefore, in this embodiment of the present application, if the positioning system already has a point cloud cluster state vector, the coordinates of the point cloud cluster of the environmental object are associated with it; if there is no point cloud cluster state vector in the positioning system, a point cloud cluster state vector can be created. Whether existing or newly created, the point cloud cluster state vector can include information such as the position of the point cloud cluster, i.e., its coordinates, and its velocity.
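A point cloud cluster state vector of this kind can be sketched as a small record holding position, velocity, and a container of associated observations. This is an illustrative structure under the constant-velocity mass-point view described above; the class and field names are hypothetical, not the patent's actual data layout:

```python
from dataclasses import dataclass, field

@dataclass
class ClusterState:
    """State vector of a point cloud cluster: position, velocity, and the
    container of coordinate observations associated with it."""
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0
    observations: list = field(default_factory=list)

    def predict(self, dt):
        """Constant-velocity prediction of the cluster position after dt seconds."""
        return (self.x + self.vx * dt, self.y + self.vy * dt)

    def associate(self, coord):
        """Save an observed cluster coordinate into the observation container."""
        self.observations.append(coord)
```

The velocity components are what later allow an environmental object to be judged dynamic or static.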
  • Associating the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector, or creating the point cloud cluster state vector according to the coordinates of the point cloud cluster of the environmental object, can be implemented through steps S1031 to S1035, as described below:
  • Step S1031 if there is a point cloud cluster state vector in the positioning system, use the position information in the existing point cloud cluster state vector to construct a two-dimensional KD tree.
  • The full name of KD tree is k-dimensional tree; it is a data structure that partitions a K-dimensional space and is mainly used for searching key information.
  • The two-dimensional KD tree mentioned in the embodiments of this application is a KD tree that partitions a two-dimensional space. If there is a point cloud cluster state vector in the positioning system, a two-dimensional KD tree is constructed from the position information in the existing point cloud cluster state vectors.
  • The method for constructing a two-dimensional KD tree may specifically include steps S1 to S5, briefly described as follows:
  • Step S1 Calculate the variances of the position information, i.e., of the x-coordinates and y-coordinates of the data points, and select the dimension with the larger variance as the direction of the dividing line;
  • Step S2 Sort the data points in the node along the dividing direction and select the median data point as the dividing point;
  • Step S3 At the dividing point, divide the space again, i.e., subdivide within the spatial range of the parent node;
  • Step S4 For the remaining data points, divide the left space and the right space, producing the left child node and the right child node;
  • Step S5 The division ends when only one data point is left or when there is no data point on one side.
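Steps S1 to S5 can be sketched as a recursive build of a two-dimensional KD tree: the variance-based axis choice follows step S1 and the median split follows step S2. This is an illustrative Python sketch, not the patent's implementation:

```python
def build_kd(points):
    """Recursively build a 2-D KD tree as (point, axis, left, right) tuples.
    Step S1: split along the coordinate with the larger variance.
    Step S2: the median point along that axis becomes the dividing point.
    Steps S3-S4: the two halves are subdivided within the parent's region.
    Step S5: recursion ends with a single point (leaf) or an empty side."""
    if not points:
        return None

    def variance(axis):
        vals = [p[axis] for p in points]
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    axis = 0 if variance(0) >= variance(1) else 1
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid], axis, build_kd(pts[:mid]), build_kd(pts[mid + 1:]))
```

With the hypothetical sample set [(7, 2), (5, 4), (9, 6), (4, 7), (8, 1), (2, 3)], the x variance is larger, so the root becomes (7, 2), consistent with the root node used in the search example of steps S1 to S3.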
  • Step S1032 Match the coordinates of the point cloud clusters of the environmental objects with the coordinates of the candidate points in the two-dimensional KD tree.
  • matching the coordinates of the point cloud clusters of the environmental objects with the coordinates of the candidate points in the two-dimensional KD tree specifically includes steps S1 to S3, which are briefly described as follows:
  • Step S1 Starting from the root node of the two-dimensional KD tree and proceeding top-down according to the dividing directions used when constructing the tree, perform a sequential search over the two-dimensional coordinate points. For example, suppose the coordinates of the point cloud cluster of the environmental object are (3, 1). The root node (7, 2) is compared first; since the dividing direction of the root node is x, only the x-coordinates are compared, and because 3 < 7, the search goes left. Subsequent nodes are handled in the same way until a leaf node is reached;
  • Step S2 The node found in step S1 is not necessarily the nearest, so backtracking is performed as in step S3;
  • Step S3 Backtrack to the parent node. When backtracking, first compare with the parent node; if the parent node is closer to the coordinates of the point cloud cluster of the environmental object (hereinafter, the query point), update the currently found nearest node. Then draw a circle centered on the query point, with radius equal to the distance between the query point and the current nearest node, and judge whether the circle intersects the dividing line at the parent node. If it does, there may be closer nodes on the other side, so perform a depth-first traversal of the parent node's other subspace; after this local traversal, compare the result with the current nearest node, and then continue backtracking.
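The descent-and-backtrack search of steps S1 to S3 can be prototyped as follows. For brevity this sketch builds the tree by alternating the split axis per level rather than by variance, so it illustrates the search logic rather than the patent's exact construction; all names and sample points are hypothetical:

```python
import math

def build(points, depth=0):
    """Compact 2-D KD tree; the split axis simply alternates by depth here."""
    if not points:
        return None
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid], axis,
            build(pts[:mid], depth + 1),
            build(pts[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    """Descend to a leaf, then backtrack: the far subtree is revisited only
    if the dividing line intersects the circle centred on the query point
    whose radius is the current best distance (step S3)."""
    if node is None:
        return best
    point, axis, left, right = node
    if best is None or math.dist(target, point) < math.dist(target, best):
        best = point
    # descend the side of the dividing line that contains the query point
    near, far = (left, right) if target[axis] < point[axis] else (right, left)
    best = nearest(near, target, best)
    # backtrack: check the far side only if the circle crosses the divider
    if abs(target[axis] - point[axis]) < math.dist(target, best):
        best = nearest(far, target, best)
    return best
```

Querying the sample tree with (3, 1), the search descends left of (7, 2) as in the worked example and returns (2, 3) as the nearest candidate.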
  • Step S1033 if the point cloud cluster of the environmental object matches the nearest point cloud cluster of the two-dimensional KD tree, associate the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector.
  • Associating the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector specifically means saving the coordinates of the point cloud cluster of the environmental object into the observation container of the existing point cloud cluster state vector; in other words, the existing point cloud cluster state vector finds its corresponding position observation constraint.
  • Step S1034 If there is no point cloud cluster state vector in the positioning system, create a point cloud cluster state vector.
  • Step S1035 Determine the coordinates of the point cloud cluster of the environmental object as the position observation constraint of the existing point cloud cluster state vector or the created point cloud cluster state vector.
  • The point cloud cluster state vector is the estimated position of the point cloud cluster in the environment, and the coordinates of the point cloud cluster of the environmental object can be considered the lidar's observation of the existing or newly created state vector.
  • the position observation constraint is used to constrain the relationship between the observed actual pose of the mobile robot and the estimated position represented by the state vector of the point cloud cluster.
  • When the coordinates of the point cloud cluster of the environmental object are associated with the existing point cloud cluster state vector, the Mahalanobis distance between the coordinates of the point cloud cluster of the environmental object and the existing point cloud cluster state vector can be calculated, and the coordinates of point cloud clusters whose Mahalanobis distance to the existing state vector is greater than a preset threshold are removed as outliers.
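This Mahalanobis gate can be sketched as below for the two-dimensional case. The covariance matrix and threshold are hypothetical inputs that a real tracker would supply; this is an illustration of the gating idea, not the patent's implementation:

```python
def mahalanobis2(obs, pred, cov):
    """Squared Mahalanobis distance between an observed cluster coordinate
    and a predicted (state-vector) position, for a 2x2 covariance matrix."""
    dx, dy = obs[0] - pred[0], obs[1] - pred[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # inverse of the 2x2 covariance applied to the innovation (dx, dy)
    ix = (d * dx - b * dy) / det
    iy = (-c * dx + a * dy) / det
    return dx * ix + dy * iy

def gate(observations, pred, cov, threshold):
    """Keep only observations whose Mahalanobis distance is within the gate."""
    return [o for o in observations
            if mahalanobis2(o, pred, cov) <= threshold ** 2]
```

With an identity covariance the Mahalanobis distance reduces to the Euclidean distance, which makes the threshold easy to sanity-check.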
  • Step S104 Based on the pose of the mobile robot, the state vector of the point cloud cluster, and the position observation constraint of the state vector of the point cloud cluster, iteratively optimize the pose of the mobile robot and the dynamic target state in the state vector of the point cloud cluster.
  • the iterative optimization of the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector may be:
  • Multiple consecutive poses of the mobile robot, the point cloud cluster state vectors, and the position observation constraints of the point cloud cluster state vectors are used to construct a nonlinear least squares problem;
  • The pose error of the mobile robot and the state error of the dynamic target state are used as the error constraints of the nonlinear least squares problem, and the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector are iteratively optimized until the pose error of the mobile robot and the state error of the dynamic target state are minimized.
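As a toy illustration of this joint optimisation idea, consider one robot pose and one cluster position in one dimension, minimised by plain gradient descent rather than a full Gauss-Newton solver over many poses; all values and names here are hypothetical:

```python
def jointly_optimize(z_odom, z_rel, iters=200, step=0.4):
    """Toy 1-D joint optimisation of a robot pose r and a cluster position t.
    Residuals: e1 = r - z_odom       (pose observation constraint)
               e2 = (t - r) - z_rel  (cluster position observation constraint)
    Gradient descent on the cost 0.5*(e1**2 + e2**2) until both errors are
    minimal, so pose and target state are estimated together."""
    r, t = 0.0, 0.0
    for _ in range(iters):
        e1 = r - z_odom
        e2 = (t - r) - z_rel
        r -= step * (e1 - e2)   # d(cost)/dr
        t -= step * e2          # d(cost)/dt
    return r, t
```

At the optimum both residuals vanish: the pose matches the odometry measurement and the cluster sits at the observed relative offset from it. In the patent's setting the same idea extends to multiple consecutive poses and many cluster state vectors, yielding a globally optimal joint estimate.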
  • The method of the above embodiment further includes: judging whether the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target, and if so, feeding back the state of the dynamic target to the motion control system of the mobile robot.
  • It can be seen from the dynamic target tracking and positioning method of the example in FIG. 1 that, on the one hand, by clustering the lidar point cloud data and associating the coordinates of the point cloud clusters of the environmental objects with existing point cloud cluster state vectors or creating point cloud cluster state vectors, the types of environmental objects can be identified without using complex detection methods, and whether an environmental object is a dynamic target can then be judged from the velocity vector in its point cloud cluster state vector; the algorithm is fast, and the utilization of the lidar data is improved. On the other hand, based on the pose of the mobile robot, the point cloud cluster state vectors, and their position observation constraints, the pose of the mobile robot and the dynamic target states in the point cloud cluster state vectors are iteratively optimized; in other words, the dynamic target states and the pose of the mobile robot are jointly optimized to obtain a globally optimal state estimate, thereby realizing precise positioning and tracking of dynamic targets.
  • FIG. 3 shows a dynamic target tracking and positioning device provided by an embodiment of the present application, which may include a first acquisition module 301, a second acquisition module 302, a point cloud cluster coordinate processing module 303, and an iterative optimization module 304, described as follows:
  • the first acquisition module 301 is used to sample the environmental objects through the lidar mounted on the mobile robot, and obtain the lidar point cloud data of the environmental objects;
  • the second acquisition module 302 is configured to cluster the lidar point cloud data of the environmental objects, and obtain the coordinates of the point cloud clusters of the environmental objects;
  • the point cloud cluster coordinate processing module 303 is used to associate the coordinates of the point cloud clusters of the environmental objects with the existing point cloud cluster state vectors or create a point cloud cluster state vector according to the coordinates of the point cloud clusters of the environmental objects;
  • the iterative optimization module 304 is configured to iteratively optimize the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraints of the point cloud cluster state vector.
  • the second acquisition module 302 in the example of FIG. 3 may include a first clustering unit, a second clustering unit, and an obtaining unit, wherein:
  • the first clustering unit is used to perform initial clustering on the lidar point cloud data to obtain the geometric center of the first point cloud cluster
  • the second clustering unit is configured to re-cluster the geometric center of the first point cloud cluster and the geometric center of the similar point cloud cluster to obtain a second point cloud cluster if there is a similar point cloud cluster in the first point cloud cluster;
  • the obtaining unit is used to obtain the geometric center coordinates of the second point cloud cluster, and the geometric center coordinates of the second point cloud cluster are used as the coordinates of the point cloud cluster of the environment object.
  • the above-mentioned first clustering unit may include a setting unit and a geometric center obtaining unit, wherein:
  • the setting unit is used to set the neighborhood radius and the minimum number of points per point cloud cluster of the density-based clustering algorithm DBSCAN, and to use the DBSCAN algorithm to perform primary clustering on the lidar point cloud data to obtain the initial point cloud cluster;
  • the geometric center obtaining unit is used to obtain the geometric center of the initial point cloud cluster as the geometric center of the first point cloud cluster.
  • the point cloud cluster coordinate processing module 303 in the example of FIG. 3 may include a two-dimensional KD tree construction unit, a matching unit, an association unit, a creation unit and a determination unit, wherein:
  • the two-dimensional KD tree construction unit is used to construct a two-dimensional KD tree using the position information in the existing point cloud cluster state vector if the positioning system has a point cloud cluster state vector;
  • the matching unit is used to match the coordinates of the point cloud clusters of the environmental objects with the coordinates of the candidate points in the two-dimensional KD tree;
  • the association unit is used to associate the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector if the matching unit matches the nearest point cloud cluster of the two-dimensional KD tree;
  • the creation unit is used to create a point cloud cluster state vector if there is no point cloud cluster state vector in the positioning system;
  • the determining unit is used for determining the coordinates of the point cloud cluster of the environmental object as the position observation constraint of the existing point cloud cluster state vector or the created point cloud cluster state vector.
  • the iterative optimization module 304 illustrated in FIG. 3 may include an optimization problem building unit and a pose state iteration unit, wherein:
  • the optimization problem building unit is used to construct a nonlinear least squares problem based on the pose of the mobile robot, the state vector of the point cloud cluster and the position observation constraints of the state vector of the point cloud cluster;
  • the pose state iteration unit is used to take the pose error of the mobile robot and the state error of the dynamic target state as the error constraints of the nonlinear least squares problem, and to iteratively optimize the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector until the pose error of the mobile robot and the state error of the dynamic target state are minimized.
  • the apparatus of the example of FIG. 3 may also include a Mahalanobis distance calculation module and a culling module, wherein:
  • the Mahalanobis distance calculation module is used to calculate, when the point cloud cluster coordinate processing module 303 associates the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector, the Mahalanobis distance between the coordinates of the point cloud cluster of the environmental object and the existing point cloud cluster state vector;
  • the culling module is used to remove the coordinates of point cloud clusters whose Mahalanobis distance from the existing point cloud cluster state vector is greater than the preset threshold.
  • the apparatus of the example of FIG. 3 may also include a judgment module and a state feedback module, wherein:
  • the judgment module is used to judge whether the environmental object corresponding to the state vector in the state vector of the point cloud cluster is a dynamic target
  • the state feedback module is used for feeding back the state of the dynamic target to the motion control system of the mobile robot if the environmental object corresponding to the state vector in the state vector of the point cloud cluster is a dynamic target.
  • By clustering the lidar point cloud data and associating the coordinates of the point cloud clusters of the environmental objects with existing point cloud cluster state vectors or creating point cloud cluster state vectors, the device can identify the types of environmental objects without using complex detection methods, and can then judge whether an environmental object is a dynamic target from the velocity vector in its point cloud cluster state vector; the algorithm is fast, and the utilization of the lidar data is improved.
  • FIG. 4 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • the device 4 of this embodiment mainly includes: a processor 40 , a memory 41 , and a computer program 42 stored in the memory 41 and executable on the processor 40 , such as a program for a dynamic target tracking and positioning method.
  • When the processor 40 executes the computer program 42, the steps in the above-mentioned embodiment of the dynamic target tracking and positioning method are implemented, for example, steps S101 to S104 shown in FIG. 1.
  • Alternatively, when the processor 40 executes the computer program 42, the functions of the modules/units in the above-mentioned device embodiments are realized, for example, the functions of the first acquisition module 301, the second acquisition module 302, the point cloud cluster coordinate processing module 303, and the iterative optimization module 304 shown in FIG. 3.
  • Functionally, the computer program 42 of the dynamic target tracking and positioning method mainly includes: sampling the environmental objects through the lidar mounted on the mobile robot to obtain the lidar point cloud data of the environmental objects; clustering the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects; associating the coordinates of the point cloud clusters of the environmental objects with existing point cloud cluster state vectors or creating point cloud cluster state vectors according to the coordinates; and, based on the pose of the mobile robot, the point cloud cluster state vectors, and the position observation constraints of the point cloud cluster state vectors, iteratively optimizing the pose of the mobile robot and the dynamic target state in the point cloud cluster state vectors.
  • the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application.
  • One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program 42 in the device 4 .
  • the computer program 42 can be divided into the functions of the first acquisition module 301, the second acquisition module 302, the point cloud cluster coordinate processing module 303 and the iterative optimization module 304 (modules in the virtual device), and the specific functions of each module are as follows:
  • The lidar carried by the mobile robot samples the environmental objects to obtain the lidar point cloud data of the environmental objects; the lidar point cloud data of the environmental objects is clustered to obtain the coordinates of the point cloud clusters of the environmental objects; the coordinates of the point cloud clusters are associated with existing point cloud cluster state vectors, or point cloud cluster state vectors are created according to the coordinates of the point cloud clusters of the environmental objects; and, based on the pose of the mobile robot, the point cloud cluster state vectors, and their position observation constraints, the pose of the mobile robot and the dynamic target state in the point cloud cluster state vectors are iteratively optimized.
  • Device 4 may include, but is not limited to, a processor 40 and a memory 41.
  • FIG. 4 is only an example of the device 4 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components; for example, a computing device may also include input and output devices, network access devices, buses, and the like.
  • the so-called processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 41 may be an internal storage unit of the device 4, such as a hard disk or memory of the device 4.
  • the memory 41 may also be an external storage device of the device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card (Flash Card) equipped on the device 4.
  • the memory 41 may also include both an internal storage unit of the device 4 and an external storage device.
  • the memory 41 is used to store the computer program and other programs and data required by the device.
  • the memory 41 can also be used to temporarily store data that has been output or is to be output.


Abstract

A dynamic target tracking and positioning method, comprising: sampling environmental objects with a lidar mounted on a mobile robot to obtain lidar point cloud data of the environmental objects; clustering the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects; associating the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, or creating a point cloud cluster state vector from the coordinates of the point cloud cluster; and, based on the pose of the mobile robot, the point cloud cluster state vectors and their position observation constraints, iteratively optimizing the pose of the mobile robot and the dynamic target states in the point cloud cluster state vectors. The method enables dynamic targets to be tracked and positioned. Also disclosed are a dynamic target tracking and positioning apparatus, a device and a computer-readable storage medium.

Description

Dynamic target tracking and positioning method, apparatus, device and storage medium
This application claims priority to the Chinese patent application No. 202011589891.0, entitled "Dynamic target tracking and positioning method, apparatus, device and storage medium", filed with the Patent Office of the China National Intellectual Property Administration on December 29, 2020, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of robotics, and in particular to a dynamic target tracking and positioning method, apparatus, device and computer-readable storage medium.
Background
As mobile robots are applied in ever more industries, the environments they face become increasingly complex, especially in scenarios containing several different types of dynamic targets, such as pedestrians, vehicles or other robots. The robot must not only localize itself stably but also move smoothly in such dynamic scenes.
The key to solving this problem is whether the robot can extract dynamic targets from the scene and track the pose and velocity of the dynamic objects. If dynamic targets in the scene can be detected and tracked, their pose and velocity information can provide the robot's motion control system with decisions for avoiding dynamic objects and allow the localization system to eliminate their influence on localization accuracy. The prior-art solution to this problem samples the environment with the lidar carried by the mobile robot, classifies the sampled data with a classification algorithm, and thereby determines the dynamic targets in the scene.
However, for a mobile robot equipped only with a lidar, the environmental information provided by the lidar is not rich enough, so the algorithm cannot distinguish potential dynamic targets well enough to track them in a targeted way.
Technical solution
According to various embodiments of this application, a dynamic target tracking and positioning method, apparatus, device and computer-readable storage medium are provided.
A dynamic target tracking and positioning method includes:
sampling environmental objects with a lidar mounted on a mobile robot to obtain lidar point cloud data of the environmental objects;
clustering the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects;
associating the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, or creating a point cloud cluster state vector from the coordinates of the point cloud cluster of the environmental object;
based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, iteratively optimizing the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector.
A dynamic target tracking and positioning apparatus includes:
a first acquisition module, configured to sample environmental objects with a lidar mounted on a mobile robot to obtain lidar point cloud data of the environmental objects;
a second acquisition module, configured to cluster the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects;
a point cloud cluster coordinate processing module, configured to associate the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, or to create a point cloud cluster state vector from the coordinates of the point cloud cluster;
an iterative optimization module, configured to iteratively optimize, based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector.
A device includes a memory, a processor and a computer program stored in the memory and runnable on the processor, the processor implementing, when executing the computer program, the steps of the technical solution of the above dynamic target tracking and positioning method.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the technical solution of the above dynamic target tracking and positioning method.
Details of one or more embodiments of this application are set forth in the following drawings and description. Other features and advantages of this application will become apparent from the specification, the drawings and the claims.
Description of the drawings
To explain the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, drawings of other embodiments can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of the dynamic target tracking and positioning method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the two-dimensional KD tree provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of the dynamic target tracking and positioning apparatus provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of the device provided by an embodiment of this application.
Detailed description of the embodiments
To facilitate understanding of this application, it is described more fully below with reference to the accompanying drawings, in which preferred embodiments of this application are shown. This application may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of this application will be understood thoroughly and completely.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the invention. The terms used in the specification are for the purpose of describing specific embodiments only and are not intended to limit this application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
This application proposes a dynamic target tracking and positioning method applicable to robots, which may be robots working in restaurants, such as food-delivery robots, robots working in medical settings such as hospitals, for example medicine-delivery robots, or handling robots working in warehouses and similar places. As shown in FIG. 1, the dynamic target tracking and positioning method mainly includes steps S101 to S104, detailed as follows:
Step S101: sample environmental objects with the lidar mounted on the mobile robot to obtain lidar point cloud data of the environmental objects.
In the embodiments of this application, the environment is the environment in which the robot works, and environmental objects are all objects in that environment, including static objects (for example goods, a tree, a wall, a table) and dynamic targets (for example a person, a moving vehicle or another mobile robot). The sampling of environmental objects by the lidar carried by the robot works as in the prior art: laser beams are emitted into the surroundings to scan the current environment in real time, and the distance between the robot and landmarks in the environment is computed by time-of-flight ranging. When a laser beam hits an environmental object, information such as the angle of the beam relative to the robot and the distance between the laser source and the hit object is recorded; this information from multiple beams hitting environmental objects constitutes the lidar point cloud data of the environmental objects. The lidar may specifically be a two-dimensional lidar.
Step S102: cluster the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects.
Clustering is a machine-learning technique that groups data points: given a set of data points, a clustering algorithm assigns each data point to a particular group. In principle, data points in the same group should have similar attributes and/or features, while data points in different groups should have highly different attributes and/or features; the clustering of the lidar point cloud data in this application has these general properties of clustering algorithms. As an embodiment of this application, clustering the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters can be implemented by steps S1021 to S1023, explained as follows:
Step S1021: perform initial clustering on the lidar point cloud data of the environmental objects to obtain the geometric center of a first point cloud cluster.
Considering that the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm can cluster dense data sets of arbitrary shape, discovers outliers while clustering, and rarely produces biased clustering results, in this embodiment the DBSCAN algorithm may be used for the initial clustering of the lidar point cloud data to obtain the geometric center of the first point cloud cluster. The basic idea of DBSCAN is that if the density of data points in a region exceeds a threshold, the points are added to the cluster similar to them. Specifically, the initial clustering may be carried out as follows: set the neighborhood radius and the minimum number of points per cluster for the DBSCAN algorithm, cluster the lidar point cloud data with DBSCAN to obtain an initial point cloud cluster, and take the geometric center of the initial cluster as the geometric center of the first point cloud cluster. Setting the neighborhood radius and the minimum cluster size filters outliers out of the lidar point cloud data, which reduces unnecessary work for the clustering algorithm and also improves classification accuracy. To obtain the geometric center of the initial point cloud cluster, the cluster may be treated as a particle, and the geometric center coordinates of that particle computed by geometric or physical methods and used as the geometric center of the first point cloud cluster.
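The initial DBSCAN clustering and centroid computation described above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not the patent's own code; the function names `dbscan` and `centroid` are chosen here for illustration.

```python
from math import hypot

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2D points. Points whose eps-neighbourhood
    holds fewer than min_pts points are treated as noise unless a core
    point later claims them, mirroring the outlier filtering above."""
    def neighbours(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if hypot(px - qx, py - qy) <= eps]

    labels, clusters = {}, []
    for i in range(len(points)):
        if i in labels:
            continue
        if len(neighbours(i)) < min_pts:
            continue                      # noise, or a border point claimed later
        cid = len(clusters)
        clusters.append([])
        labels[i] = cid
        queue = [i]
        while queue:
            j = queue.pop()
            clusters[cid].append(points[j])
            nj = neighbours(j)
            if len(nj) >= min_pts:        # core point: expand the cluster
                for k in nj:
                    if k not in labels:
                        labels[k] = cid
                        queue.append(k)
    return clusters

def centroid(cluster):
    """Geometric centre of a cluster, treating it as a particle."""
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The O(n²) neighbour search keeps the sketch short; a real implementation would index the scan points spatially.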
Step S1022: if the first point cloud cluster has a similar point cloud cluster, cluster the geometric center of the first point cloud cluster with the geometric center of the similar cluster again to obtain a second point cloud cluster.
Many objects, whether man-made or natural, are symmetrical, and in general the point cloud clusters obtained by clustering the lidar data of a symmetrical object are similar. For example, for the two legs or two hands of a person, or the two front wheels of a car, the cluster obtained from the left leg's lidar data is similar to that obtained from the right leg's, the cluster from the left hand is similar to that from the right hand, and so on. For such similar clusters, their geometric centers can be clustered again to obtain a new cluster; that is, in this embodiment, if the first point cloud cluster has a similar cluster, the geometric center of the first cluster and the geometric center of the similar cluster are clustered again to obtain a second point cloud cluster. The algorithm for this second clustering can be the same as the algorithm used for the initial clustering of the lidar point cloud data, for example DBSCAN or another clustering algorithm; the only difference is that in the second clustering the geometric centers of the first cluster and the similar cluster are treated as the objects to be clustered, i.e. the data points.
Step S1023: compute the geometric center coordinates of the second point cloud cluster and use them as the coordinates of the point cloud cluster of the environmental object.
In this embodiment, the geometric center coordinates of the second point cloud cluster can be computed in the same way as for the first point cloud cluster, which is not repeated here. Once the geometric center coordinates of the second point cloud cluster have been computed, they are used as the coordinates of the point cloud cluster of the environmental object.
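The second clustering step, merging the centroids of similar first-stage clusters (such as a person's two legs) into one final coordinate, might look like the following sketch. The greedy merge-by-radius strategy and the name `merge_similar` are assumptions made for illustration; the embodiment only requires that the centroids be clustered again, for example with DBSCAN.

```python
def merge_similar(centroids, merge_radius):
    """Greedily merge nearby first-stage cluster centroids (e.g. the
    two legs of one person) and return the geometric centre of each
    merged group, used as the final point-cloud-cluster coordinate."""
    merged, used = [], set()
    for i, c in enumerate(centroids):
        if i in used:
            continue
        group = [c]
        used.add(i)
        for j in range(i + 1, len(centroids)):
            if j in used:
                continue
            dx, dy = c[0] - centroids[j][0], c[1] - centroids[j][1]
            if dx * dx + dy * dy <= merge_radius ** 2:
                group.append(centroids[j])
                used.add(j)
        xs, ys = zip(*group)
        merged.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return merged
```

For two leg-like centroids 0.4 m apart and a 0.5 m merge radius, the function returns one merged coordinate midway between them.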
Step S103: associate the coordinates of the point cloud cluster of the environmental object with an existing point cloud cluster state vector, or create a point cloud cluster state vector from the coordinates of the point cloud cluster.
As mentioned above, a point cloud cluster can be treated as a particle; in classical mechanics, the properties of a particle are usually characterized by information such as position and velocity. Therefore, in this embodiment, if a point cloud cluster state vector exists in the localization system, the existing state vector can be used directly; if not, a point cloud cluster state vector can be created. Whether existing or newly created, the state vector can include information such as the position of the point cloud cluster, i.e. its coordinates, and its velocity. As an embodiment of this application, associating the coordinates with an existing state vector or creating a state vector can be implemented by steps S1031 to S1035, explained as follows:
Step S1031: if a point cloud cluster state vector exists in the localization system, build a two-dimensional KD tree from the position information in the existing point cloud cluster state vectors.
The full name of the KD tree is k-Dimension Tree; it is a data structure that partitions a K-dimensional space and is mainly used for searching key information. The two-dimensional KD tree mentioned in this embodiment is a data structure that partitions a two-dimensional space. If point cloud cluster state vectors exist in the localization system, a two-dimensional KD tree is built from the position information in the existing state vectors; the nodes of the completed tree consist of the position information in the state vectors. The construction of the two-dimensional KD tree may include steps S1 to S5, outlined as follows:
Step S1: compute the variance of the x coordinates and of the y coordinates of the position information, i.e. of the data points, and choose the axis with the larger variance as the direction of the splitting line. Step S2: sort the data points of the node along the splitting dimension and choose the median data point as the split point. Step S3: at the same time, partition the space again, i.e. split within the spatial extent of the parent node. Step S4: split the remaining nodes into the left-hand space and the right-hand space, forming the left child node and the right child node. Step S5: the splitting ends when only one data point remains or one side has no data points.
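Steps S1 to S5 above can be sketched as follows; the dictionary-based node representation is an assumption made for illustration, not a representation fixed by the embodiment.

```python
def build_kdtree(points):
    """Build a 2D KD tree following steps S1-S5: pick the split axis
    with the larger variance, split at the median point, recurse."""
    if not points:
        return None                              # S5: recursion stops here
    n = len(points)
    variances = []
    for axis in (0, 1):
        vals = [p[axis] for p in points]
        mean = sum(vals) / n
        variances.append(sum((v - mean) ** 2 for v in vals) / n)
    axis = 0 if variances[0] >= variances[1] else 1   # S1: higher-variance axis
    pts = sorted(points, key=lambda p: p[axis])       # S2: sort along that axis
    mid = n // 2                                      # S2: median is the split point
    return {
        "point": pts[mid],
        "axis": axis,
        "left": build_kdtree(pts[:mid]),              # S3/S4: left subspace
        "right": build_kdtree(pts[mid + 1:]),         # S3/S4: right subspace
    }
```

With the classic point set {(2,3), (5,4), (9,6), (4,7), (8,1), (7,2)} the x variance dominates, so the root is (7,2) split along x, matching the root used in the search example below.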
Step S1032: match the coordinates of the point cloud cluster of the environmental object against the candidate point coordinates in the two-dimensional KD tree.
In this embodiment, matching the coordinates of the point cloud cluster against the candidate point coordinates in the two-dimensional KD tree specifically includes steps S1 to S3, outlined as follows:
Step S1: starting from the root of the two-dimensional KD tree, search the tree from top to bottom over the two-dimensional coordinate points, following the splitting directions chosen when the tree was built. For example, suppose the coordinates of the point cloud cluster of the environmental object are (3, 1) and the two-dimensional KD tree built in step S1031 is the one shown in FIG. 2: first examine the root node (7, 2); since the root's splitting direction is x, only the x coordinates are compared, and since 3 < 7 the search goes left; subsequent nodes are handled in the same way, until a leaf node is reached;
Step S2: when the node found in step S1 is not the nearest, proceed to step S3;
Step S3: backtrack to the parent node. When backtracking, first compare with the parent node; if the parent node is closer to the coordinates of the point cloud cluster of the environmental object (hereinafter the query point), update the current nearest node. Then draw a circle centered on the query point with radius equal to the distance between the query point and the current nearest node, and check whether the circle intersects the parent's splitting line. If it does, the parent's other child subspace may contain a node closer to the query point, so another depth-first traversal of that subspace is performed; once this local traversal is finished, compare against the current nearest node, and then continue backtracking upward.
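The descent-plus-backtracking search of steps S1 to S3 can be sketched as follows, assuming nodes are dictionaries with `point`, `axis`, `left` and `right` keys (an illustrative representation, not the patent's).

```python
def nearest(node, target, best=None):
    """Nearest-neighbour search in a 2D KD tree: descend into the side
    of the splitting line containing the target, then visit the other
    side only if the circle around the target (radius = current best
    distance) crosses the splitting line. Returns (point, squared_dist)."""
    if node is None:
        return best
    d2 = (node["point"][0] - target[0]) ** 2 + (node["point"][1] - target[1]) ** 2
    if best is None or d2 < best[1]:
        best = (node["point"], d2)
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)   # descend towards the target first
    if diff * diff < best[1]:            # candidate circle crosses the split line
        best = nearest(far, target, best)
    return best
```

For the query point (3, 1) from the example above and the tree rooted at (7, 2), the search descends left to (5, 4) and then (2, 3), and backtracking prunes both right subspaces, so (2, 3) is returned.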
Step S1033: if the point cloud cluster of the environmental object is matched to the nearest point cloud cluster in the two-dimensional KD tree, associate the coordinates of the point cloud cluster with the existing point cloud cluster state vector.
In this embodiment, associating the coordinates of the point cloud cluster with the existing state vector specifically means saving the coordinates into the observation container of the existing state vector, meaning that the existing state vector has found its corresponding position observation constraint.
Step S1034: if no point cloud cluster state vector exists in the localization system, create a point cloud cluster state vector.
If no point cloud cluster state vector exists in the localization system, a point cloud cluster state vector is created from the coordinates of the point cloud cluster of the environmental object; these coordinates become the initial value of the position information of the created state vector, and the velocity is set directly to 0.
Step S1035: determine the coordinates of the point cloud cluster of the environmental object as the position observation constraint of the existing or created point cloud cluster state vector.
Regarding the position observation constraint: the point cloud cluster state vector is the estimated position of the point cloud cluster in the environment, while the coordinates associated with the existing or created state vector can be regarded as the true position of the state vector in the environment as observed by the lidar. The position observation constraint constrains the relationship between the actually observed position and the estimated position represented by the state vector.
To prevent the coordinates of a point cloud cluster from being wrongly associated with an existing state vector, in this embodiment, before or while associating the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector, the Mahalanobis distance between the coordinates of the point cloud cluster and the existing state vector can be computed; if the Mahalanobis distance exceeds a preset threshold, the coordinates of the point cloud cluster whose Mahalanobis distance from the existing state vector exceeds the threshold are discarded.
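The Mahalanobis gate can be sketched as follows for a 2x2 position covariance. The covariance layout and the function names are illustrative assumptions; the embodiment does not fix a representation or threshold.

```python
def mahalanobis2_2d(obs, state_pos, cov):
    """Squared Mahalanobis distance between an observed cluster
    coordinate and a state vector's position, for a 2x2 covariance
    given as [[a, b], [c, d]] (an assumed representation)."""
    dx = obs[0] - state_pos[0]
    dy = obs[1] - state_pos[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # inverse of the 2x2 covariance applied to the residual:
    # inv(cov) = 1/det * [[d, -b], [-c, a]]
    ix = (d * dx - b * dy) / det
    iy = (-c * dx + a * dy) / det
    return dx * ix + dy * iy

def gate(obs, state_pos, cov, threshold2):
    """Accept the association only if the squared Mahalanobis distance
    stays within the preset (squared) threshold."""
    return mahalanobis2_2d(obs, state_pos, cov) <= threshold2
```

With an identity covariance the gate reduces to a Euclidean-distance check; a larger variance along one axis makes the gate correspondingly more tolerant along that axis.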
Step S104: based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, iteratively optimize the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector.
Specifically, this can be done as follows: construct a nonlinear least-squares problem based on multiple consecutive poses of the mobile robot, the point cloud cluster state vectors and their position observation constraints; then, taking the pose error of the mobile robot and the state error of the dynamic target state as the error constraints of the nonlinear least-squares problem, iteratively optimize the pose of the mobile robot and the dynamic target states in the point cloud cluster state vectors until the pose error of the mobile robot and the state error of the dynamic target state are minimized.
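A heavily simplified sketch of the joint optimization: a robot position and a target position are refined together by gradient descent on a least-squares cost built from an odometry prior and one relative position observation. All names and the residual forms are illustrative assumptions; the patent's optimizer also carries orientation, velocity and multiple consecutive poses, and a practical solver would use Gauss-Newton or Levenberg-Marquardt rather than plain gradient descent.

```python
def optimize(r0, t0, odom, rel_obs, iters=200, lr=0.2):
    """Toy joint optimisation of robot position r and dynamic-target
    position t, minimising 0.5*(|r - odom|^2 + |t - r - rel_obs|^2):
    an odometry prior on r plus a position observation constraint
    z ~= t - r linking target and robot."""
    rx, ry = r0
    tx, ty = t0
    for _ in range(iters):
        # residuals
        e1 = (rx - odom[0], ry - odom[1])                   # pose prior
        e2 = (tx - rx - rel_obs[0], ty - ry - rel_obs[1])   # observation constraint
        # gradients of the cost w.r.t. each variable
        grx, gry = e1[0] - e2[0], e1[1] - e2[1]
        gtx, gty = e2[0], e2[1]
        rx -= lr * grx; ry -= lr * gry
        tx -= lr * gtx; ty -= lr * gty
    return (rx, ry), (tx, ty)
```

Because both residuals are driven to zero jointly, the target estimate is corrected together with the robot pose instead of inheriting its error, which is the point of the joint formulation.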
The method of the above embodiment further includes: judging whether the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target, and if so, feeding the state of the dynamic target back to the motion control system of the mobile robot.
As can be seen from the dynamic target tracking and positioning method illustrated in FIG. 1, on the one hand, by clustering the lidar point cloud data and associating the coordinates of the point cloud clusters of environmental objects with existing point cloud cluster state vectors or creating such state vectors, the kind of an environmental object can be identified without complex detection methods, compared with the prior art, and whether the object is a dynamic target can then be judged from the velocity vector in its state vector; the algorithm is fast and increases the utilization of the lidar data. On the other hand, the pose of the mobile robot and the dynamic target states in the point cloud cluster state vectors are iteratively optimized based on the robot's pose, the state vectors and their position observation constraints; in other words, the dynamic target states and the robot's pose are optimized jointly, yielding a globally optimal state estimate and thus precise positioning and tracking of dynamic targets.
Referring to FIG. 3, an embodiment of this application provides a dynamic target tracking and positioning apparatus, which may include a first acquisition module 301, a second acquisition module 302, a point cloud cluster coordinate processing module 303 and an iterative optimization module 304, detailed as follows:
the first acquisition module 301 is configured to sample environmental objects with the lidar mounted on the mobile robot to obtain lidar point cloud data of the environmental objects;
the second acquisition module 302 is configured to cluster the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects;
the point cloud cluster coordinate processing module 303 is configured to associate the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, or to create a point cloud cluster state vector from the coordinates of the point cloud cluster;
the iterative optimization module 304 is configured to iteratively optimize, based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector.
Optionally, the second acquisition module 302 illustrated in FIG. 3 may include a first clustering unit, a second clustering unit and a computation unit, where:
the first clustering unit is configured to perform initial clustering on the lidar point cloud data to obtain the geometric center of a first point cloud cluster;
the second clustering unit is configured to, if the first point cloud cluster has a similar point cloud cluster, cluster the geometric center of the first point cloud cluster with the geometric center of the similar cluster again to obtain a second point cloud cluster;
the computation unit is configured to compute the geometric center coordinates of the second point cloud cluster and use them as the coordinates of the point cloud cluster of the environmental object.
Optionally, the above first clustering unit may include a setting unit and a geometric center computation unit, where:
the setting unit is configured to set the neighborhood radius and the minimum number of cluster points of the density-based DBSCAN algorithm and perform initial clustering of the lidar point cloud data with the DBSCAN algorithm to obtain an initial point cloud cluster;
the geometric center computation unit is configured to compute the geometric center of the initial point cloud cluster as the geometric center of the first point cloud cluster.
Optionally, the point cloud cluster coordinate processing module 303 illustrated in FIG. 3 may include a two-dimensional KD tree construction unit, a matching unit, an association unit, a creation unit and a determination unit, where:
the two-dimensional KD tree construction unit is configured to, if point cloud cluster state vectors exist in the localization system, build a two-dimensional KD tree from the position information in the existing state vectors;
the matching unit is configured to match the coordinates of the point cloud cluster of the environmental object against the candidate point coordinates in the two-dimensional KD tree;
the association unit is configured to, if the matching unit matches the nearest point cloud cluster in the two-dimensional KD tree, associate the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector;
the creation unit is configured to create a point cloud cluster state vector if none exists in the localization system;
the determination unit is configured to determine the coordinates of the point cloud cluster of the environmental object as the position observation constraint of the existing or created point cloud cluster state vector.
Optionally, the iterative optimization module 304 illustrated in FIG. 3 may include an optimization problem construction unit and a pose-state iteration unit, where:
the optimization problem construction unit is configured to construct a nonlinear least-squares problem based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector;
the pose-state iteration unit is configured to, taking the pose error of the mobile robot and the state error of the dynamic target state as the error constraints of the nonlinear least-squares problem, iteratively optimize the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector until the pose error of the mobile robot and the state error of the dynamic target state are minimized.
Optionally, the apparatus illustrated in FIG. 3 may further include a Mahalanobis distance calculation module and a removal module, where:
the Mahalanobis distance calculation module is configured to calculate, before or while the point cloud cluster coordinate processing module 303 associates the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, the Mahalanobis distance between the coordinates of the point cloud cluster and the existing state vector;
the removal module is configured to, if the Mahalanobis distance exceeds a preset threshold, discard the coordinates of the point cloud cluster whose Mahalanobis distance from the existing state vector exceeds the threshold.
Optionally, the apparatus illustrated in FIG. 3 may further include a judgment module and a state feedback module, where:
the judgment module is configured to judge whether the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target;
the state feedback module is configured to, if the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target, feed the state of the dynamic target back to the motion control system of the mobile robot.
As can be seen from the description of the above technical solutions, on the one hand, by clustering the lidar point cloud data and associating the coordinates of the point cloud clusters of environmental objects with existing point cloud cluster state vectors or creating such state vectors, the kind of an environmental object can be identified without complex detection methods, compared with the prior art, and whether the object is a dynamic target can then be judged from the velocity vector in its state vector; the algorithm is fast and increases the utilization of the lidar data. On the other hand, the pose of the mobile robot and the dynamic target states in the point cloud cluster state vectors are iteratively optimized based on the robot's pose, the state vectors and their position observation constraints; in other words, the dynamic target states and the robot's pose are optimized jointly, yielding a globally optimal state estimate and thus precise positioning and tracking of dynamic targets.
Referring to FIG. 4, a schematic structural diagram of the device provided by an embodiment of this application: as shown in FIG. 4, the device 4 of this embodiment mainly includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and runnable on the processor 40, for example the program of the dynamic target tracking and positioning method. When the processor 40 executes the computer program 42, the steps of the above method embodiment are implemented, for example steps S101 to S104 shown in FIG. 1; or the functions of the modules/units in the above apparatus embodiments are implemented, for example the functions of the first acquisition module 301, the second acquisition module 302, the point cloud cluster coordinate processing module 303 and the iterative optimization module 304 shown in FIG. 3.
Exemplarily, the computer program 42 of the dynamic target tracking and positioning method mainly includes: sampling environmental objects with the lidar mounted on the mobile robot to obtain lidar point cloud data of the environmental objects; clustering the lidar point cloud data to obtain the coordinates of the point cloud clusters of the environmental objects; associating the coordinates of a point cloud cluster with an existing point cloud cluster state vector or creating a point cloud cluster state vector from those coordinates; and, based on the pose of the mobile robot, the point cloud cluster state vectors and their position observation constraints, iteratively optimizing the pose of the mobile robot and the dynamic target states in the point cloud cluster state vectors. The computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to implement this application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 42 in the device 4. For example, the computer program 42 may be divided into the functions of the first acquisition module 301, the second acquisition module 302, the point cloud cluster coordinate processing module 303 and the iterative optimization module 304 (modules in a virtual apparatus), whose specific functions are those described for the apparatus embodiment above.
The device 4 may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that FIG. 4 is only an example of the device 4 and does not constitute a limitation on it; it may include more or fewer components than shown, combine certain components, or use different components; for example, the computing device may also include input and output devices, network access devices, buses, etc.
The so-called processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the device 4, such as a hard disk or memory of the device 4. The memory 41 may also be an external storage device of the device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card (Flash Card) equipped on the device 4. Further, the memory 41 may include both an internal storage unit of the device 4 and an external storage device. The memory 41 is used to store the computer program and other programs and data required by the device. The memory 41 can also be used to temporarily store data that has been output or is to be output.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent application. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (20)

  1. A dynamic target tracking and positioning method, the method comprising:
    sampling environmental objects with a lidar mounted on a mobile robot to obtain lidar point cloud data of the environmental objects;
    clustering the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects;
    associating the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, or creating a point cloud cluster state vector from the coordinates of the point cloud cluster of the environmental object;
    based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, iteratively optimizing the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector.
  2. The dynamic target tracking and positioning method of claim 1, wherein clustering the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects comprises:
    performing initial clustering on the lidar point cloud data to obtain the geometric center of a first point cloud cluster;
    if the first point cloud cluster has a similar point cloud cluster, clustering the geometric center of the first point cloud cluster with the geometric center of the similar point cloud cluster again to obtain a second point cloud cluster;
    computing the geometric center coordinates of the second point cloud cluster and using them as the coordinates of the point cloud cluster of the environmental object.
  3. The dynamic target tracking and positioning method of claim 2, wherein performing initial clustering on the lidar point cloud data to obtain the geometric center of the first point cloud cluster comprises:
    setting the neighborhood radius and the minimum number of cluster points of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, and performing initial clustering of the lidar point cloud data with the DBSCAN algorithm to obtain an initial point cloud cluster;
    computing the geometric center of the initial point cloud cluster as the geometric center of the first point cloud cluster.
  4. The dynamic target tracking and positioning method of claim 1, wherein associating the coordinates of the point cloud cluster of the environmental object with an existing point cloud cluster state vector, or creating a point cloud cluster state vector from the coordinates of the point cloud cluster of the environmental object, comprises:
    if a point cloud cluster state vector exists in the localization system, building a two-dimensional KD tree from the position information in the existing point cloud cluster state vector;
    matching the coordinates of the point cloud cluster of the environmental object against the candidate point coordinates in the two-dimensional KD tree;
    if the nearest point cloud cluster in the two-dimensional KD tree is matched, associating the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector;
    if no point cloud cluster state vector exists in the localization system, creating the point cloud cluster state vector;
    determining the coordinates of the point cloud cluster of the environmental object as the position observation constraint of the existing or created point cloud cluster state vector.
  5. The dynamic target tracking and positioning method of claim 1, wherein iteratively optimizing, based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector comprises:
    constructing a nonlinear least-squares problem based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector;
    taking the pose error of the mobile robot and the state error of the dynamic target state as the error constraints of the nonlinear least-squares problem, iteratively optimizing the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector until the pose error of the mobile robot and the state error of the dynamic target state are minimized.
  6. The dynamic target tracking and positioning method of claim 1, wherein, before or while associating the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector, the method further comprises:
    calculating the Mahalanobis distance between the coordinates of the point cloud cluster of the environmental object and the existing point cloud cluster state vector;
    if the Mahalanobis distance exceeds a preset threshold, discarding the coordinates of the point cloud cluster whose Mahalanobis distance from the existing point cloud cluster state vector exceeds the preset threshold.
  7. The dynamic target tracking and positioning method of claim 1, the method further comprising:
    judging whether the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target;
    if the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target, feeding the state of the dynamic target back to the motion control system of the mobile robot.
  8. A dynamic target tracking and positioning apparatus, the apparatus comprising:
    a first acquisition module, configured to sample environmental objects with a lidar mounted on a mobile robot to obtain lidar point cloud data of the environmental objects;
    a second acquisition module, configured to cluster the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects;
    a point cloud cluster coordinate processing module, configured to associate the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, or to create a point cloud cluster state vector from the coordinates of the point cloud cluster of the environmental object;
    an iterative optimization module, configured to iteratively optimize, based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector.
  9. The dynamic target tracking and positioning apparatus of claim 8, wherein the second acquisition module comprises:
    a first clustering unit, configured to perform initial clustering on the lidar point cloud data to obtain the geometric center of a first point cloud cluster;
    a second clustering unit, configured to, if the first point cloud cluster has a similar point cloud cluster, cluster the geometric center of the first point cloud cluster with the geometric center of the similar point cloud cluster again to obtain a second point cloud cluster;
    a computation unit, configured to compute the geometric center coordinates of the second point cloud cluster and use them as the coordinates of the point cloud cluster of the environmental object.
  10. The dynamic target tracking and positioning apparatus of claim 9, wherein the first clustering unit comprises:
    a setting unit, configured to set the neighborhood radius and the minimum number of cluster points of the density-based DBSCAN algorithm and perform initial clustering of the lidar point cloud data with the DBSCAN algorithm to obtain an initial point cloud cluster;
    a geometric center computation unit, configured to compute the geometric center of the initial point cloud cluster as the geometric center of the first point cloud cluster.
  11. The dynamic target tracking and positioning apparatus of claim 8, wherein the point cloud cluster coordinate processing module comprises:
    a two-dimensional KD tree construction unit, configured to, if a point cloud cluster state vector exists in the localization system, build a two-dimensional KD tree from the position information in the existing point cloud cluster state vector;
    a matching unit, configured to match the coordinates of the point cloud cluster of the environmental object against the candidate point coordinates in the two-dimensional KD tree;
    an association unit, configured to, if the nearest point cloud cluster in the two-dimensional KD tree is matched, associate the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector;
    a creation unit, configured to create the point cloud cluster state vector if no point cloud cluster state vector exists in the localization system;
    a determination unit, configured to determine the coordinates of the point cloud cluster of the environmental object as the position observation constraint of the existing or created point cloud cluster state vector.
  12. The dynamic target tracking and positioning apparatus of claim 8, wherein the iterative optimization module comprises:
    an optimization problem construction unit, configured to construct a nonlinear least-squares problem based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector;
    a pose-state iteration unit, configured to, taking the pose error of the mobile robot and the state error of the dynamic target state as the error constraints of the nonlinear least-squares problem, iteratively optimize the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector until the pose error of the mobile robot and the state error of the dynamic target state are minimized.
  13. The dynamic target tracking and positioning apparatus of claim 8, the apparatus further comprising:
    a Mahalanobis distance calculation module, configured to calculate, before or while the point cloud cluster coordinate processing module associates the coordinates of a point cloud cluster of an environmental object with an existing point cloud cluster state vector, the Mahalanobis distance between the coordinates of the point cloud cluster of the environmental object and the existing point cloud cluster state vector;
    a removal module, configured to, if the Mahalanobis distance exceeds a preset threshold, discard the coordinates of the point cloud cluster whose Mahalanobis distance from the existing point cloud cluster state vector exceeds the preset threshold.
  14. The dynamic target tracking and positioning apparatus of claim 8, the apparatus further comprising:
    a judgment module, configured to judge whether the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target;
    a state feedback module, configured to, if the environmental object corresponding to a state vector among the point cloud cluster state vectors is a dynamic target, feed the state of the dynamic target back to the motion control system of the mobile robot.
  15. A device, the device comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, the processor implementing the steps of the method of claim 1 when executing the computer program.
  16. The device of claim 15, wherein clustering the lidar point cloud data of the environmental objects to obtain the coordinates of the point cloud clusters of the environmental objects comprises:
    performing initial clustering on the lidar point cloud data to obtain the geometric center of a first point cloud cluster;
    if the first point cloud cluster has a similar point cloud cluster, clustering the geometric center of the first point cloud cluster with the geometric center of the similar point cloud cluster again to obtain a second point cloud cluster;
    computing the geometric center coordinates of the second point cloud cluster and using them as the coordinates of the point cloud cluster of the environmental object.
  17. The device of claim 16, wherein performing initial clustering on the lidar point cloud data to obtain the geometric center of the first point cloud cluster comprises:
    setting the neighborhood radius and the minimum number of cluster points of the density-based DBSCAN algorithm, and performing initial clustering of the lidar point cloud data with the DBSCAN algorithm to obtain an initial point cloud cluster;
    computing the geometric center of the initial point cloud cluster as the geometric center of the first point cloud cluster.
  18. The device of claim 15, wherein associating the coordinates of the point cloud cluster of the environmental object with an existing point cloud cluster state vector, or creating a point cloud cluster state vector from the coordinates of the point cloud cluster of the environmental object, comprises:
    if a point cloud cluster state vector exists in the localization system, building a two-dimensional KD tree from the position information in the existing point cloud cluster state vector;
    matching the coordinates of the point cloud cluster of the environmental object against the candidate point coordinates in the two-dimensional KD tree;
    if the nearest point cloud cluster in the two-dimensional KD tree is matched, associating the coordinates of the point cloud cluster of the environmental object with the existing point cloud cluster state vector;
    if no point cloud cluster state vector exists in the localization system, creating the point cloud cluster state vector;
    determining the coordinates of the point cloud cluster of the environmental object as the position observation constraint of the existing or created point cloud cluster state vector.
  19. The device of claim 15, wherein iteratively optimizing, based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector, the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector comprises:
    constructing a nonlinear least-squares problem based on the pose of the mobile robot, the point cloud cluster state vector and the position observation constraint of the point cloud cluster state vector;
    taking the pose error of the mobile robot and the state error of the dynamic target state as the error constraints of the nonlinear least-squares problem, iteratively optimizing the pose of the mobile robot and the dynamic target state in the point cloud cluster state vector until the pose error of the mobile robot and the state error of the dynamic target state are minimized.
  20. A computer-readable storage medium, the computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of claim 1.
PCT/CN2021/134124 2020-12-29 2021-11-29 Dynamic target tracking and positioning method, apparatus, device and storage medium WO2022142948A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011589891.0 2020-12-29
CN202011589891.0A CN112847343B (zh) 2020-12-29 2020-12-29 Dynamic target tracking and positioning method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
WO2022142948A1 true WO2022142948A1 (zh) 2022-07-07

Family

ID=75998045

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134124 WO2022142948A1 (zh) 2020-12-29 2021-11-29 Dynamic target tracking and positioning method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN112847343B (zh)
WO (1) WO2022142948A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115424367A (zh) * 2022-08-03 2022-12-02 洛阳智能农业装备研究院有限公司 GPS-based working-state determination method, apparatus, device and readable storage medium
CN115884479A (zh) * 2023-02-22 2023-03-31 广州易而达科技股份有限公司 Steering method, apparatus, device and storage medium for a lighting fixture
CN116168036A (zh) * 2023-04-26 2023-05-26 深圳市岑科实业有限公司 Intelligent anomaly monitoring system for inductor winding equipment
WO2024140054A1 (zh) * 2022-12-29 2024-07-04 北京极智嘉科技股份有限公司 Device control method and apparatus based on environmental information

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112847343B (zh) * 2020-12-29 2022-08-16 深圳市普渡科技有限公司 Dynamic target tracking and positioning method, apparatus, device and storage medium
CN113343840B (zh) * 2021-06-02 2022-03-08 合肥泰瑞数创科技有限公司 Object recognition method and apparatus based on three-dimensional point clouds
US11861870B2 (en) * 2021-07-23 2024-01-02 The Boeing Company Rapid object detection for vehicle situational awareness
CN113496697B (zh) * 2021-09-08 2021-12-28 深圳市普渡科技有限公司 Robot, speech data processing method, apparatus and storage medium
CN113894050B (zh) * 2021-09-14 2023-05-23 深圳玩智商科技有限公司 Logistics parcel sorting method, sorting device and storage medium
CN114495026A (zh) * 2022-01-07 2022-05-13 武汉市虎联智能科技有限公司 Lidar recognition method, apparatus, electronic device and storage medium
CN114442101B (zh) * 2022-01-28 2023-11-14 南京慧尔视智能科技有限公司 Vehicle navigation method, apparatus, device and medium based on imaging millimeter-wave radar
CN114897040B (zh) * 2022-03-16 2023-06-16 宁夏广天夏科技股份有限公司 Coal face straightening method and apparatus, and fully mechanized mining face system
TWI819613B (zh) * 2022-05-19 2023-10-21 緯創資通股份有限公司 Dual sensing method for objects and computing apparatus for object sensing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170316569A1 (en) * 2014-06-08 2017-11-02 The Board Of Trustees Of The Leland Stanford Junior University Robust Anytime Tracking Combining 3D Shape, Color, and Motion with Annealed Dynamic Histograms
CN108445480A (zh) * 2018-02-02 2018-08-24 重庆邮电大学 Lidar-based adaptive extended target tracking system and method for mobile platforms
CN110647835A (zh) * 2019-09-18 2020-01-03 合肥中科智驰科技有限公司 Target detection and classification method and system based on 3D point cloud data
CN110766719A (zh) * 2019-09-21 2020-02-07 北醒(北京)光子科技有限公司 Target tracking method, device and storage medium
CN111260683A (zh) * 2020-01-09 2020-06-09 合肥工业大学 Target detection and tracking method and apparatus for three-dimensional point cloud data
CN112847343A (zh) * 2020-12-29 2021-05-28 深圳市普渡科技有限公司 Dynamic target tracking and positioning method, apparatus, device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985254A (zh) * 2018-08-01 2018-12-11 上海主线科技有限公司 Laser-based tracking method for trucks with trailers
CN110889861A (zh) * 2019-11-15 2020-03-17 广州供电局有限公司 Point cloud extraction method for power pylons

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115424367A (zh) * 2022-08-03 2022-12-02 洛阳智能农业装备研究院有限公司 GPS-based working-state determination method, apparatus, device and readable storage medium
WO2024140054A1 (zh) * 2022-12-29 2024-07-04 北京极智嘉科技股份有限公司 Device control method and apparatus based on environmental information
CN115884479A (zh) * 2023-02-22 2023-03-31 广州易而达科技股份有限公司 Steering method, apparatus, device and storage medium for a lighting fixture
CN115884479B (zh) * 2023-02-22 2023-05-09 广州易而达科技股份有限公司 Steering method, apparatus, device and storage medium for a lighting fixture
CN116168036A (zh) * 2023-04-26 2023-05-26 深圳市岑科实业有限公司 Intelligent anomaly monitoring system for inductor winding equipment
CN116168036B (zh) * 2023-04-26 2023-07-04 深圳市岑科实业有限公司 Intelligent anomaly monitoring system for inductor winding equipment

Also Published As

Publication number Publication date
CN112847343A (zh) 2021-05-28
CN112847343B (zh) 2022-08-16

Similar Documents

Publication Publication Date Title
WO2022142948A1 (zh) Dynamic target tracking and positioning method, apparatus, device and storage medium
WO2022142992A1 (zh) Fused positioning method, apparatus, device and computer-readable storage medium
US8798357B2 (en) Image-based localization
US11775610B2 (en) Flexible imputation of missing data
US11300664B1 (en) LiDAR odometry method, system and apparatus based on directed geometric point and sparse frame
CN114494650B (zh) 一种分布式非结构网格跨处理器面对接方法及***
CN111460234A (zh) 图查询方法、装置、电子设备及计算机可读存储介质
Wu et al. 3D scene reconstruction based on improved ICP algorithm
Yu et al. Cludoop: an efficient distributed density-based clustering for big data using hadoop
JP2022521540A (ja) オンライン学習を利用した物体追跡のための方法およびシステム
Anderson et al. Delaunay walk for fast nearest neighbor: accelerating correspondence matching for ICP
US20170364334A1 (en) Method and Apparatus of Read and Write for the Purpose of Computing
Ali et al. A life-long SLAM approach using adaptable local maps based on rasterized LIDAR images
Liu et al. An incremental broad learning approach for semi-supervised classification
Wang et al. An efficient scene semantic labeling approach for 3D point cloud
Balamurugan Faster region based convolution neural network with context iterative refinement for object detection
Cheng et al. 3D vehicle object tracking algorithm based on bounding box similarity measurement
Hajebi et al. An efficient index for visual search in appearance-based SLAM
Wang et al. ProbNet: Bayesian deep neural network for point cloud analysis
Salehi et al. Improving constrained bundle adjustment through semantic scene labeling
JPWO2016121998A1 (ja) 情報マッチング装置及びその方法
Zheng et al. A fast 3D object recognition pipeline in cluttered and occluded scenes
Sonogashira et al. Towards open-set scene graph generation with unknown objects
Cui et al. Fast Relocalization and Loop Closing in Keyframe-Based 3D LiDAR SLAM
Xu et al. ISAC: In-switch approximate cache for IoT object detection and recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913689

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21913689

Country of ref document: EP

Kind code of ref document: A1