WO2024108753A1 - Efficient and robust LiDAR-based global localization method for mobile robots - Google Patents

Efficient and Robust LiDAR-Based Global Localization Method for Mobile Robots

Info

Publication number
WO2024108753A1
WO2024108753A1 PCT/CN2023/071662 CN2023071662W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
bird's-eye view
translation
laser point
Prior art date
Application number
PCT/CN2023/071662
Other languages
English (en)
French (fr)
Inventor
王越 (WANG Yue)
芦莎 (LU Sha)
许学成 (XU Xuecheng)
熊蓉 (XIONG Rong)
Original Assignee
浙江大学 (Zhejiang University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (浙江大学)
Publication of WO2024108753A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the invention relates to the field of mobile robot localization, and in particular to a global localization method for mobile robots based on LiDAR information.
  • Global positioning technology plays a vital role in the autonomous positioning and map construction of mobile robots.
  • the robot faces the problem of an unknown starting position and possible localization failures caused by various external factors.
  • Global positioning is the key to achieving relocalization under the global map.
  • global positioning can be used to identify the same locations on the robot's existing trajectory, thereby adding constraints to the construction of a globally consistent map.
  • Global positioning is essentially the problem of solving the robot's position and orientation in the map coordinate system from the robot's current sensor observations. This problem is very challenging because it aims to search the entire candidate pose space (every pose the robot could occupy on the map) using only the current observations and the map database.
  • in addition, because resources are limited, lightweight computation and storage are also required, which places higher demands on the effectiveness and efficiency of global positioning.
  • the purpose of the present invention is to propose an efficient and robust LiDAR-based global localization method for mobile robots, achieving global localization that is robust to changes in environment and viewpoint.
  • a LiDAR-based efficient and robust global localization method for a mobile robot, characterized by comprising:
  • the mobile robot collects laser point cloud data in real time while moving via the LiDAR and, according to the odometry information of the mobile robot, projects the laser point cloud data into a bird's-eye view f(x, y) every set travel distance;
  • while the mobile robot is moving, the amplitude spectrum corresponding to the laser point cloud data at the current position is M_q; all candidate places stored in the map database are traversed, and a cross-correlation operation is performed between M_q and the amplitude spectrum M_i corresponding to the laser point cloud data of each candidate place, thereby measuring the similarity between the laser point cloud data of the current position and that of each candidate place; the candidate place index n with the greatest similarity to the current laser point cloud data is then retrieved from the map database, realizing place recognition; at the same time, the relative rotation α̂ between the current laser point cloud data and the laser point cloud data corresponding to the retrieved index n is estimated.
  • the relative rotation α̂ estimated in S5 is compensated onto the bird's-eye view f(x, y) of the current position to obtain the rotation-compensated bird's-eye view f′(x, y), such that only a translation transformation exists between f′(x, y) and the bird's-eye view f_n(x, y) corresponding to the candidate place index n; the cross-correlation operation is then used to solve the relative translation (t̂_x, t̂_y) between f′(x, y) and f_n(x, y).
  • the specific steps of projecting the laser point cloud data into a bird's-eye view are as follows:
  • the laser point cloud is encoded into a bird's-eye view, where the occupancy, maximum height, or reflection intensity of each grid cell is taken as the corresponding pixel value in the bird's-eye view.
  • δ(·) is the Dirac delta function.
  • the translation-equivariant feature extraction network used is an Auto-Encoder or a U-Net.
  • the point cloud registration algorithm adopted is an ICP algorithm.
  • the set distance must be smaller than the range of the LiDAR.
  • the bird's-eye views and amplitude spectra collected while the mobile robot is moving are stored in a map database for subsequent place recognition and pose estimation.
  • the present invention has the following beneficial effects:
  • the present invention does not rely on images collected by cameras, is robust to environmental changes, and is not easily affected by illumination or by time-of-day and seasonal variation.
  • the present invention is not limited to the hand-crafted features of traditional methods, but uses data-driven methods to extract more distinctive features for the place recognition task.
  • the present invention characterizes the laser point cloud with a sinogram and supervises the extraction of rotation-translation-invariant features through correlation learning.
  • the rotation invariance is unaffected by translation changes, thereby realizing rotation-translation-invariant place recognition and improving the accuracy of global localization.
  • the present invention also estimates the relative rotation and translation, providing good initial values for point cloud registration algorithms such as ICP, enabling them to converge quickly and globally and obtain more accurate localization results.
  • FIG. 1 is a flow chart of the efficient and robust LiDAR-based global localization method for mobile robots according to an embodiment of the present invention.
  • FIG. 2 is a visualization of the intermediate results of end-to-end learning according to an embodiment of the present invention.
  • an efficient and robust LiDAR-based global localization method uses the Radon transform to convert rotation and translation changes into shifts along the two axes of a sinogram; a translation-equivariant feature extraction network extracts features while preserving the equivariance of rotation and translation; the Fourier transform yields the amplitude spectrum of the resulting spectrum, achieving translation invariance; and a cross-correlation operation realizes rotation-translation-invariant similarity computation.
  • the network is simultaneously supervised to extract features suited to the place recognition task, improving the ability to represent rotation-translation-invariant features of laser point clouds.
  • the present invention uses the cross-correlation operation to estimate the relative rotation and translation, providing a good initial value for the point cloud registration algorithm and further solving an accurate 6-DoF relative pose.
  • the LiDAR-based efficient and robust global localization method for mobile robots specifically includes the following steps:
  • Step (1): The mobile robot collects laser point cloud data in real time while moving via the LiDAR and, according to the odometry information of the mobile robot, projects the laser point cloud data into a bird's-eye view f(x, y) every set travel distance (which must be smaller than the range of the LiDAR, e.g., 20 meters).
  • the specific steps of projecting the laser point cloud data into a bird's-eye view according to the occupancy/maximum height/reflection intensity information are as follows:
  • Step (1-1): first filter out the uninformative ground points from the laser point cloud data.
  • Step (1-2): divide the laser point cloud, after ground-point removal, along the z-axis of the 3D Cartesian space into a finite number of independent grid cells under the bird's-eye view.
  • Step (1-3): according to the occupancy, maximum height, or reflection intensity of each grid cell under the bird's-eye view, encode the laser point cloud into a bird's-eye view, where the occupancy, maximum height, or reflection intensity of a grid cell is taken as the corresponding pixel value.
  • during the Radon transform, the rotation α and translation (t_x, t_y) of the bird's-eye view f(x, y) are converted into shifts along the vertical and horizontal axes of the sinogram s(θ, τ).
  • the Radon transform is shown in formula (1): s(θ, τ) = ∬ f(x, y)·δ(x·cosθ + y·sinθ − τ) dx dy, where δ(·) is the Dirac delta function.
  • in the final sinogram s(θ, τ), changes along the vertical axis (the θ axis) reflect the rotation of the laser point cloud, and the horizontal axis (the τ axis) represents the translation of the point cloud at different rotation angles; that is, a rotation of the mobile robot at the same place appears as a circular shift along the vertical axis of the sinogram, and a translation near the same place appears as a shift along the horizontal axis.
  • Step (3): a translation-equivariant feature extraction network extracts features from the sinogram s(θ, τ) obtained in S2, ensuring that the extracted features are equivariant to rotation and translation.
  • the translation-equivariant feature extraction network guarantees the rotation and translation equivariance of the extracted features, which is the key to realizing the subsequent rotation-translation invariance.
  • the translation equivariant feature extraction network used above can be a network such as Auto-Encoder or U-Net.
  • Step (4): perform a one-dimensional Fourier transform on each row of the feature map E_f, and take the amplitude spectrum M_f as the translation-invariant representation.
  • the feature map is represented in the frequency domain; since the amplitude spectrum of the resulting spectrum is unaffected by translation changes, the obtained amplitude spectrum is translation-invariant, i.e., the representations near the same place are consistent, so the amplitude spectrum can serve as the translation-invariant representation.
  • the visualization results of the map laser point cloud, bird's-eye view, sinogram, feature map, and amplitude spectrum described above are shown in FIG. 2.
  • Step (5): while the mobile robot is moving, the amplitude spectrum corresponding to the laser point cloud data at the current position is M_q; all candidate places stored in the map database are traversed, and the cross-correlation operation is performed between M_q and the amplitude spectrum M_i of each candidate place, measuring the similarity between the laser point cloud data of the current position and that of each candidate place and yielding a correlation vector; the candidate place index n with the greatest similarity to the current laser point cloud data is then retrieved from the map database, realizing place recognition; at the same time, the relative rotation α̂ between the current laser point cloud data and the laser point cloud data corresponding to the retrieved index n is estimated.
  • in the cross-correlation operation, the robot's current amplitude spectrum is cross-correlated with the amplitude spectrum of each candidate place along the vertical axis to obtain a correlation spectrum, and the maximum of the correlation spectrum is taken as the similarity: d_i = max_δ Σ_θ Σ_ω M_q(θ + δ, ω)·M_i(θ, ω).
  • the relative rotation between the current robot and the retrieved map data can be determined from the position of the maximum of the correlation spectrum in the cross-correlation operation: α̂ = Δθ·argmax_δ Σ_θ Σ_ω M_q(θ + δ, ω)·M_n(θ, ω), where Δθ is the angular resolution of the θ axis.
  • Step (6): the relative rotation α̂ estimated in step (5) is compensated onto the bird's-eye view f(x, y) of the current position, yielding the rotation-compensated bird's-eye view f′(x, y).
  • only a translation transformation remains between the rotation-compensated bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y) corresponding to the candidate location index n.
  • the cross-correlation operation is then used again to solve the relative translation (t̂_x, t̂_y) between the bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y).
  • the principle of the cross-correlation operation is shown in formula (8): (f′ ★ f_n)(Δx, Δy) = Σ_x Σ_y f′(x + Δx, y + Δy)·f_n(x, y).
  • the relative translation can then be calculated by formula (9): (t̂_x, t̂_y) = argmax_{(Δx, Δy)} (f′ ★ f_n)(Δx, Δy).
  • Step (7): the relative rotation α̂ obtained in step (5) and the relative translation (t̂_x, t̂_y) obtained in step (6) are input as the initial value into the point cloud registration algorithm, achieving a more accurate 6-DoF relative pose solution and thereby realizing pose estimation.
  • the point cloud registration algorithm adopted above may be an ICP algorithm.
  • Step (8): the bird's-eye views and amplitude spectra collected while the mobile robot is moving are stored in the map database for the next round of place recognition and pose estimation.
  • the present invention characterizes laser point clouds with sinograms and supervises the extraction of rotation-translation-invariant features through correlation learning.
  • the rotation invariance is not affected by translation changes, realizing rotation-translation-invariant place recognition and improving the accuracy of global localization.
  • the present invention also estimates relative rotation and translation, providing good initial values for point cloud registration algorithms such as ICP, so that they can quickly converge globally and obtain more accurate positioning results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention discloses an efficient and robust LiDAR-based global localization method for mobile robots. The invention uses the Radon transform to convert rotation and translation changes into shifts along the two axes of a sinogram, and uses a translation-equivariant feature extraction network for feature extraction, guaranteeing equivariance to rotation and translation. The Fourier transform is applied to obtain the amplitude spectrum of the resulting spectrum, achieving translation invariance, and a cross-correlation operation realizes rotation-translation-invariant similarity computation; at the same time, the network is supervised to extract features suited to the place recognition task, improving the ability to represent rotation-translation-invariant features of laser point clouds. In addition, the invention uses the cross-correlation operation to estimate the relative rotation and translation, providing a good initial value for the point cloud registration algorithm and further solving an accurate 6-DoF relative pose.

Description

Efficient and Robust LiDAR-Based Global Localization Method for Mobile Robots
Technical Field
The present invention relates to the field of mobile robot localization, and in particular to a global localization method for mobile robots based on LiDAR information.
Background Art
Global localization technology plays a vital role in the autonomous localization and map construction of mobile robots. During autonomous localization, the robot faces an unknown starting position and possible localization failures caused by various external factors; global localization is the key to relocalization within a global map. During map construction, global localization can identify revisited places along the robot's trajectory, adding constraints for building a globally consistent map. Global localization is essentially the problem of solving the robot's pose in the map coordinate system from the robot's current sensor observations. This problem is very challenging because it aims to search the entire candidate pose space (every pose the robot could occupy on the map) using only the current observation and the map database. Moreover, because resources are limited, lightweight computation and storage are also required, placing higher demands on the effectiveness and efficiency of global localization.
To solve the global localization problem, most global localization methods adopt a strategy derived from visual localization and split the problem into two stages: place recognition first, followed by pose estimation. Vision-based methods are susceptible to environmental changes such as illumination and season; compared with visual localization, LiDAR-based global localization methods are robust to environmental changes and are not easily affected by illumination or seasonal variation. However, such methods tend to fail when the robot revisits a place from a significantly different viewpoint, and the rotation invariance of existing rotation-invariant global localization methods is easily broken by translation changes: under large relative translations the rotation invariance is lost, leading to erroneous localization results.
Summary of the Invention
The purpose of the present invention is to propose an efficient and robust LiDAR-based global localization method for mobile robots, achieving global localization that is robust to changes in environment and viewpoint.
The specific technical solution adopted by the present invention is as follows:
An efficient and robust LiDAR-based global localization method for a mobile robot, characterized by comprising:
S1. The mobile robot collects laser point cloud data in real time while moving via the LiDAR and, according to the odometry information of the mobile robot, projects the laser point cloud data into a bird's-eye view f(x, y) every set travel distance;
S2. Using the Radon transform, the bird's-eye view obtained in S1 is integrated along the line L: x·cosθ + y·sinθ = τ to obtain a sinogram s(θ, τ) whose vertical axis is θ and whose horizontal axis is τ; during the Radon transform, a rotation α and a translation (t_x, t_y) of the bird's-eye view f(x, y) are converted into shifts along the vertical and horizontal axes of the sinogram s(θ, τ), respectively;
S3. A translation-equivariant feature extraction network is used to extract features from the sinogram s(θ, τ) obtained in S2, yielding a feature map E_f of the same size as the sinogram s(θ, τ);
S4. A one-dimensional Fourier transform is performed on each row of the feature map E_f, and the amplitude spectrum M_f of the resulting spectrum is taken as the translation-invariant representation;
S5. While the mobile robot is moving, the amplitude spectrum corresponding to the laser point cloud data at the current position is M_q; all candidate places stored in the map database are traversed, and the cross-correlation operation is performed between the amplitude spectrum M_q corresponding to the laser point cloud data at the current position and the amplitude spectrum M_i corresponding to the laser point cloud data of each candidate place, thereby measuring the similarity d_i between the laser point cloud data of the current position and that of each candidate place; the candidate place index n with the greatest similarity to the current laser point cloud data is then retrieved from the map database, realizing place recognition; at the same time, the relative rotation α̂ between the current laser point cloud data and the laser point cloud data corresponding to the retrieved index n is estimated; here d_i, n and α̂ are respectively calculated as follows:

d_i = max_δ Σ_θ Σ_ω M_q(θ + δ, ω) · M_i(θ, ω)

n = argmax_i d_i

α̂ = Δθ · argmax_δ Σ_θ Σ_ω M_q(θ + δ, ω) · M_n(θ, ω)

where M_i denotes the amplitude spectrum corresponding to the laser point cloud data of the i-th candidate place stored in the map database, M_i(θ, ω) denotes the value of M_i at vertical-axis coordinate θ and discrete frequency ω of the one-dimensional Fourier transform, and Δθ denotes the angular resolution of the θ axis of the sinogram;
S6. The relative rotation α̂ estimated in S5 is compensated onto the bird's-eye view f(x, y) of the current position to obtain the rotation-compensated bird's-eye view f′(x, y); only a translation transformation exists between the rotation-compensated bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y) corresponding to the candidate place index n; the cross-correlation operation is then used to solve the relative translation (t̂_x, t̂_y) between the bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y):

(t̂_x, t̂_y) = argmax_{(Δx, Δy)} Σ_x Σ_y f′(x + Δx, y + Δy) · f_n(x, y);
S7. The relative rotation α̂ solved in S5 and the relative translation (t̂_x, t̂_y) solved in S6 are input as the initial value into a point cloud registration algorithm to solve the 6-DoF relative pose, thereby realizing pose estimation.
Preferably, in S1, the specific steps of projecting the laser point cloud data into a bird's-eye view are as follows:
S11. First filter out the uninformative ground points from the laser point cloud data;
S12. Divide the laser point cloud, after ground-point removal, along the z-axis of the 3D Cartesian space into independent grid cells under the bird's-eye view;
S13. According to the occupancy, maximum height, or reflection intensity of each grid cell under the bird's-eye view, encode the laser point cloud into a bird's-eye view, where the occupancy, maximum height, or reflection intensity of a grid cell is taken as the corresponding pixel value.
Preferably, in S2, the formula of the Radon transform is:

s(θ, τ) = ∬ f(x, y) · δ(x·cosθ + y·sinθ − τ) dx dy

where δ(·) is the Dirac delta function.
Preferably, in S3, the translation-equivariant feature extraction network used is an Auto-Encoder or a U-Net.
Preferably, in S7, the point cloud registration algorithm adopted is the ICP algorithm.
Preferably, the set distance must be smaller than the range of the LiDAR.
Preferably, the bird's-eye views and amplitude spectra collected while the mobile robot is moving are stored in the map database for the next round of place recognition and pose estimation.
Compared with the prior art, the present invention has the following beneficial effects:
1. The present invention does not rely on images collected by cameras; it is robust to environmental changes and is not easily affected by illumination or by time-of-day and seasonal variation.
2. The present invention is not limited to the hand-crafted features of traditional methods; it uses data-driven methods to extract more distinctive features for the place recognition task.
3. The present invention characterizes laser point clouds with sinograms and supervises the extraction of rotation-translation-invariant features through correlation learning; the rotation invariance is unaffected by translation changes, realizing rotation-translation-invariant place recognition and improving the accuracy of global localization.
4. Besides place recognition, the present invention also estimates the relative rotation and translation, providing good initial values for point cloud registration algorithms such as ICP so that they converge quickly and globally and yield more accurate localization results.
Brief Description of the Drawings
FIG. 1 is a flow chart of the efficient and robust LiDAR-based global localization method for mobile robots according to an embodiment of the present invention.
FIG. 2 is a visualization of the intermediate results of end-to-end learning according to an embodiment of the present invention.
Detailed Description of Embodiments
To make the above objects, features and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. Many specific details are set forth in the following description to facilitate a full understanding of the present invention; however, the present invention can be implemented in many ways other than those described here, and those skilled in the art can make similar improvements without departing from the essence of the present invention, so the present invention is not limited by the specific embodiments disclosed below. The technical features in the various embodiments of the present invention may be combined, provided they do not conflict with one another.
The present invention provides an efficient and robust LiDAR-based global localization method. The method uses the Radon transform to convert rotation and translation changes into shifts along the two axes of a sinogram, and uses a translation-equivariant feature extraction network for feature extraction, guaranteeing equivariance to rotation and translation. The Fourier transform is applied to obtain the amplitude spectrum of the resulting spectrum, achieving translation invariance, and a cross-correlation operation realizes rotation-translation-invariant similarity computation; at the same time, the network is supervised to extract features suited to the place recognition task, improving the ability to represent rotation-translation-invariant features of laser point clouds. In addition, the present invention uses the cross-correlation operation to estimate the relative rotation and translation, providing a good initial value for the point cloud registration algorithm, which further solves an accurate 6-DoF relative pose.
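The passage above attributes the feature quality to supervision through correlation learning, but does not spell out the loss. The following PyTorch sketch shows one plausible form of such supervision under stated assumptions: the rotation-invariant similarity (the maximum of the circular cross-correlation of two amplitude spectra along the θ axis) is computed differentiably, and a triplet-style margin separates a revisited place from a different place. The triplet form, margin value, and function names are illustrative assumptions, not details fixed by the patent.

```python
import torch
import torch.nn.functional as F

def correlation_similarity(M_q, M_i):
    # Differentiable circular cross-correlation along the theta axis (dim 0),
    # evaluated for all shifts at once via the FFT; the max over shifts is
    # the rotation-invariant similarity used for place recognition.
    spec = torch.fft.fft(M_q, dim=0) * torch.conj(torch.fft.fft(M_i, dim=0))
    corr = torch.fft.ifft(spec, dim=0).real.sum(dim=1)  # one value per shift
    return corr.max()

def correlation_learning_loss(M_q, M_pos, M_neg, margin=1.0):
    # Pull the similarity of a revisited place above that of a different
    # place by a margin, supervising the feature extraction network.
    return F.relu(margin - correlation_similarity(M_q, M_pos)
                  + correlation_similarity(M_q, M_neg))
```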
As shown in FIG. 1, in a preferred embodiment of the present invention, the efficient and robust LiDAR-based global localization method for mobile robots specifically includes the following steps:
Step (1): The mobile robot collects laser point cloud data in real time while moving via the LiDAR and, according to the odometry information of the mobile robot, projects the laser point cloud data into a bird's-eye view f(x, y) every set travel distance (which must be smaller than the range of the LiDAR, e.g., 20 meters).
As a specific implementation of this embodiment of the present invention, the laser point cloud data can be projected into a bird's-eye view according to occupancy / maximum height / reflection intensity information; the specific steps are as follows (a sketch of this encoding is given after step (1-3)):
Step (1-1): First filter out the uninformative ground points from the laser point cloud data.
Step (1-2): Divide the laser point cloud, after ground-point removal, along the z-axis of the 3D Cartesian space into a finite number of independent grid cells under the bird's-eye view.
Step (1-3): According to the occupancy, maximum height, or reflection intensity of each grid cell under the bird's-eye view, encode the laser point cloud into a bird's-eye view, where the occupancy, maximum height, or reflection intensity of a grid cell is taken as the corresponding pixel value.
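As a concrete illustration of steps (1-1) to (1-3), the following Python sketch encodes an already ground-filtered point cloud into a BEV image; the grid resolution, image size, and function name are illustrative assumptions rather than values fixed by the patent:

```python
import numpy as np

def pointcloud_to_bev(points, grid_res=0.4, bev_size=120, mode="max_height"):
    # points: (N, 3) or (N, 4) array [x, y, z(, intensity)], ground removed.
    half = bev_size * grid_res / 2.0
    keep = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    pts = points[keep]
    # Step (1-2): map x/y coordinates to BEV grid-cell indices.
    ix = ((pts[:, 0] + half) / grid_res).astype(int).clip(0, bev_size - 1)
    iy = ((pts[:, 1] + half) / grid_res).astype(int).clip(0, bev_size - 1)
    bev = np.zeros((bev_size, bev_size), dtype=np.float32)
    # Step (1-3): cell value = occupancy, maximum height, or intensity.
    if mode == "occupancy":
        bev[ix, iy] = 1.0
    elif mode == "max_height":
        np.maximum.at(bev, (ix, iy), pts[:, 2])
    else:  # "intensity"
        np.maximum.at(bev, (ix, iy), pts[:, 3])
    return bev
```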
Step (2): Using the Radon transform, the bird's-eye view obtained in step (1) is integrated along the line L: x·cosθ + y·sinθ = τ to obtain a sinogram s(θ, τ) whose vertical axis is θ and whose horizontal axis is τ. During the Radon transform, a rotation α and a translation (t_x, t_y) of the bird's-eye view f(x, y) are converted into shifts along the vertical and horizontal axes of the sinogram s(θ, τ), respectively.
As a specific implementation of this embodiment of the present invention, the Radon transform is shown in formula (1):

s(θ, τ) = ∬ f(x, y) · δ(x·cosθ + y·sinθ − τ) dx dy    (1)

where δ(·) is the Dirac delta function.
The conversion of the rotation α and the translation (t_x, t_y) of the bird's-eye view f(x, y) into shifts along the θ and τ axes of the sinogram s(θ, τ) can be expressed by formulas (2) and (3), respectively:

s_α(θ, τ) = s(θ − α, τ)    (2)

s_t(θ, τ) = s(θ, τ − t_x·cosθ − t_y·sinθ)    (3)

where s_α and s_t denote the sinograms of the rotated and translated bird's-eye views.
Therefore, in the sinogram s(θ, τ) finally obtained by the computation of step (2), changes along the vertical axis (the θ axis) reflect the rotation of the laser point cloud, while the horizontal axis (the τ axis) represents the translation of the point cloud at different rotation angles. That is, a rotation of the mobile robot at the same place appears as a circular shift along the vertical axis of the sinogram, and a translation near the same place appears as a shift along the horizontal axis.
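A minimal sketch of this step, assuming scikit-image is available; its radon function returns τ along rows and θ along columns, so the result is transposed into the (θ, τ) layout used here, and the number of sampled angles is an illustrative choice:

```python
import numpy as np
from skimage.transform import radon

def bev_to_sinogram(bev, n_angles=120):
    # Integrate the BEV along lines x*cos(theta) + y*sin(theta) = tau,
    # i.e. formula (1), sampled at n_angles angles in [0, 180) degrees.
    thetas = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return radon(bev, theta=thetas, circle=True).T  # rows: theta, cols: tau
```

With this layout, rotating the BEV by one angular step circularly shifts the sinogram by one row, and translating the BEV shifts each row along τ by t_x·cosθ + t_y·sinθ, which is the equivariance stated in formulas (2) and (3).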
Step (3): A translation-equivariant feature extraction network is used to extract features from the sinogram s(θ, τ) obtained in step (2), guaranteeing the rotation and translation equivariance of the extracted features and yielding a feature map E_f of the same size as the sinogram s(θ, τ).
In this step, the translation-equivariant feature extraction network guarantees the rotation and translation equivariance of the extracted features, which is the key to realizing the subsequent rotation-translation invariance. As a specific implementation of this embodiment of the present invention, the translation-equivariant feature extraction network can be a network such as an Auto-Encoder or a U-Net.
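The patent does not fix a particular architecture, so the sketch below is an assumption-level stand-in that shows the property that matters: a fully convolutional network with stride 1 and no pooling, for which a shift of the input sinogram shifts the feature map by the same amount. Circular padding matches the cyclic θ axis; applying it to the τ axis as well is a simplification of this sketch.

```python
import torch
import torch.nn as nn

class EquivariantExtractor(nn.Module):
    # Fully convolutional, stride 1, no pooling: circularly shifting the
    # input sinogram shifts the output feature map E_f identically.
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 5, padding=2, padding_mode="circular"),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 5, padding=2, padding_mode="circular"),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 5, padding=2, padding_mode="circular"),
        )

    def forward(self, sinogram):   # sinogram: (B, 1, n_theta, n_tau)
        return self.net(sinogram)  # feature map E_f of the same size
```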
Step (4): A one-dimensional Fourier transform is performed on each row of the feature map E_f, and the amplitude spectrum M_f of the resulting spectrum is taken as the translation-invariant representation.
In this step, owing to the shift property of the Fourier transform, the feature map is represented in the frequency domain; the amplitude spectrum of the resulting spectrum is unaffected by translation changes, so the obtained amplitude spectrum is translation-invariant, i.e., the representations near the same place are consistent, and the amplitude spectrum can serve as the translation-invariant representation.
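In code this step is essentially one line; the assertion below illustrates the invariance being relied on, namely that a circular shift of a row changes only the phase of its FFT, not the magnitude:

```python
import numpy as np

def amplitude_spectrum(feature_map):
    # 1-D FFT along each row (the tau axis) of E_f; taking the magnitude
    # discards the phase, which is where a tau-shift lives.
    return np.abs(np.fft.fft(feature_map, axis=1))

E_f = np.random.rand(120, 120)  # stand-in feature map
M_f = amplitude_spectrum(E_f)
assert np.allclose(M_f, amplitude_spectrum(np.roll(E_f, 7, axis=1)))
```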
In one example of the present invention, the visualization results of the map laser point cloud, bird's-eye view, sinogram, feature map, and amplitude spectrum described above are shown in FIG. 2.
Step (5): While the mobile robot is moving, the amplitude spectrum corresponding to the laser point cloud data at the current position is M_q. All candidate places stored in the map database are traversed, and the cross-correlation operation is performed between M_q and the amplitude spectrum M_i corresponding to the laser point cloud data of each candidate place, thereby measuring the similarity between the laser point cloud data of the current position and that of each candidate place and obtaining a correlation vector d = (d_1, d_2, …, d_N); the candidate place index n with the greatest similarity to the current laser point cloud data is then retrieved from the map database, realizing place recognition. At the same time, the relative rotation α̂ between the current laser point cloud data and the laser point cloud data corresponding to the retrieved index n is estimated.
In the cross-correlation operation, the robot's current amplitude spectrum and the amplitude spectrum of a candidate place are cross-correlated along the vertical axis to obtain a correlation spectrum, and the maximum of the correlation spectrum is taken as the similarity; the formula is:

d_i = max_δ Σ_θ Σ_ω M_q(θ + δ, ω) · M_i(θ, ω)    (4)
When determining the candidate place index n, the similarity between the robot's current laser point cloud observation and all laser point cloud observations in the map database is compared, and the place with the maximum similarity is taken as the place closest to the current robot; the specific formula is:

n = argmax_i d_i    (5)
When computing the relative rotation α̂, the relative rotation between the current robot and the retrieved map data can be determined from the position of the maximum of the correlation spectrum in the cross-correlation operation; the specific formula is:

α̂ = Δθ · argmax_δ Σ_θ Σ_ω M_q(θ + δ, ω) · M_n(θ, ω)    (6)

where M_i denotes the amplitude spectrum corresponding to the laser point cloud data of the i-th candidate place stored in the map database, M_i(θ, ω) denotes the value of M_i at vertical-axis coordinate θ and discrete frequency ω of the one-dimensional Fourier transform, and Δθ denotes the angular resolution of the θ axis of the sinogram.
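A brute-force sketch of formulas (4) to (6); in practice the correlation over all shifts δ can itself be computed with an FFT, and the database layout and Δθ handling here are illustrative assumptions:

```python
import numpy as np

def correlation_spectrum(M_q, M_i):
    # Circular cross-correlation along the theta axis, one value per shift
    # delta, as inside formula (4): sum over theta and omega of
    # M_q(theta + delta, omega) * M_i(theta, omega).
    n_theta = M_q.shape[0]
    return np.array([np.sum(np.roll(M_q, -d, axis=0) * M_i)
                     for d in range(n_theta)])

def recognize_place(M_q, database, delta_theta):
    # database: list of candidate amplitude spectra M_i.
    corrs = [correlation_spectrum(M_q, M_i) for M_i in database]
    sims = [c.max() for c in corrs]               # formula (4)
    n = int(np.argmax(sims))                      # formula (5)
    alpha_hat = corrs[n].argmax() * delta_theta   # formula (6)
    return n, alpha_hat
```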
Step (6): As shown in formula (7), the relative rotation α̂ estimated in step (5) is compensated onto the bird's-eye view f(x, y) of the current position to obtain the rotation-compensated bird's-eye view f′(x, y):

f′(x, y) = f(x·cosα̂ − y·sinα̂, x·sinα̂ + y·cosα̂)    (7)

At this point only a translation transformation exists between the rotation-compensated bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y) corresponding to the candidate place index n. Therefore, the cross-correlation operation is used again to solve the relative translation (t̂_x, t̂_y) between the bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y).
To compute the relative translation (t̂_x, t̂_y), the cross-correlation operation is applied to obtain the correlation spectrum between the compensated bird's-eye view and the retrieved bird's-eye view, and the relative translation between the current robot and the retrieved map data is determined from the position of the maximum of the correlation spectrum. The principle of the cross-correlation operation is shown in formula (8); in the actual computation, the relative translation can be calculated by formula (9):

(f′ ★ f_n)(Δx, Δy) = Σ_x Σ_y f′(x + Δx, y + Δy) · f_n(x, y)    (8)

(t̂_x, t̂_y) = argmax_{(Δx, Δy)} (f′ ★ f_n)(Δx, Δy)    (9)
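Formulas (8) and (9) can be evaluated for all shifts at once with a 2-D FFT (circular cross-correlation); the index wrapping and the pixel-to-meter conversion via the BEV grid resolution below are illustrative details of this sketch:

```python
import numpy as np

def estimate_translation(f_comp, f_n, grid_res):
    # Circular cross-correlation of the rotation-compensated BEV f' with the
    # retrieved BEV f_n for every shift via the FFT (formula (8)); the
    # arg-max shift is the relative translation (formula (9)).
    corr = np.fft.ifft2(np.fft.fft2(f_comp) * np.conj(np.fft.fft2(f_n))).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if iy > h // 2: iy -= h  # wrap periodic indices to signed shifts
    if ix > w // 2: ix -= w
    return ix * grid_res, iy * grid_res  # back to meters
```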
Step (7): The relative rotation α̂ solved in step (5) and the relative translation (t̂_x, t̂_y) solved in step (6) are input as the initial value into a point cloud registration algorithm, achieving a more accurate 6-DoF relative pose solution and thereby realizing pose estimation.
As a specific implementation of this embodiment of the present invention, the point cloud registration algorithm adopted can be the ICP algorithm.
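One way to realize this refinement, sketched here with Open3D's point-to-point ICP; the library choice, the correspondence-distance threshold, and the assumption that α̂ is a yaw about the z-axis are illustrative, not part of the patent:

```python
import numpy as np
import open3d as o3d

def refine_pose(source_pts, target_pts, alpha_hat, t_hat, max_dist=1.0):
    # Build the SE(3) initial guess from the estimated yaw and 2-D translation.
    c, s = np.cos(alpha_hat), np.sin(alpha_hat)
    init = np.eye(4)
    init[:2, :2] = [[c, -s], [s, c]]
    init[0, 3], init[1, 3] = t_hat
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # refined 4x4 6-DoF relative pose
```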
Step (8): The bird's-eye views and amplitude spectra collected while the mobile robot is moving are stored in the map database for the next round of place recognition and pose estimation.
In summary, the present invention characterizes laser point clouds with sinograms and supervises the extraction of rotation-translation-invariant features through correlation learning; the rotation invariance is unaffected by translation changes, realizing rotation-translation-invariant place recognition and improving the accuracy of global localization. Moreover, besides place recognition, the present invention also estimates the relative rotation and translation, providing good initial values for point cloud registration algorithms such as ICP so that they converge quickly and globally and yield more accurate localization results.
The embodiment described above is only a preferred solution of the present invention and is not intended to limit the present invention. Those of ordinary skill in the relevant art can make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, any technical solution obtained by equivalent substitution or equivalent transformation falls within the protection scope of the present invention.

Claims (7)

  1. An efficient and robust LiDAR-based global localization method for a mobile robot, characterized by comprising:
    S1. The mobile robot collects laser point cloud data in real time while moving via the LiDAR and, according to the odometry information of the mobile robot, projects the laser point cloud data into a bird's-eye view f(x, y) every set travel distance;
    S2. Using the Radon transform, the bird's-eye view obtained in S1 is integrated along the line L: x·cosθ + y·sinθ = τ to obtain a sinogram s(θ, τ) whose vertical axis is θ and whose horizontal axis is τ; during the Radon transform, a rotation α and a translation (t_x, t_y) of the bird's-eye view f(x, y) are converted into shifts along the vertical and horizontal axes of the sinogram s(θ, τ), respectively;
    S3. A translation-equivariant feature extraction network is used to extract features from the sinogram s(θ, τ) obtained in S2, yielding a feature map E_f of the same size as the sinogram s(θ, τ);
    S4. A one-dimensional Fourier transform is performed on each row of the feature map E_f, and the amplitude spectrum M_f of the resulting spectrum is taken as the translation-invariant representation;
    S5. While the mobile robot is moving, the amplitude spectrum corresponding to the laser point cloud data at the current position is M_q; all candidate places stored in the map database are traversed, and the cross-correlation operation is performed between the amplitude spectrum M_q corresponding to the laser point cloud data at the current position and the amplitude spectrum M_i corresponding to the laser point cloud data of each candidate place, thereby measuring the similarity d_i between the laser point cloud data of the current position and that of each candidate place; the candidate place index n with the greatest similarity to the current laser point cloud data is then retrieved from the map database, realizing place recognition; at the same time, the relative rotation α̂ between the current laser point cloud data and the laser point cloud data corresponding to the retrieved index n is estimated; here d_i, n and α̂ are respectively calculated as follows:

    d_i = max_δ Σ_θ Σ_ω M_q(θ + δ, ω) · M_i(θ, ω)

    n = argmax_i d_i

    α̂ = Δθ · argmax_δ Σ_θ Σ_ω M_q(θ + δ, ω) · M_n(θ, ω)

    where M_i denotes the amplitude spectrum corresponding to the laser point cloud data of the i-th candidate place stored in the map database, M_i(θ, ω) denotes the value of M_i at vertical-axis coordinate θ and discrete frequency ω of the one-dimensional Fourier transform, and Δθ denotes the angular resolution of the θ axis of the sinogram;
    S6. The relative rotation α̂ estimated in S5 is compensated onto the bird's-eye view f(x, y) of the current position to obtain the rotation-compensated bird's-eye view f′(x, y), wherein only a translation transformation exists between the rotation-compensated bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y) corresponding to the candidate place index n; the cross-correlation operation is then used to solve the relative translation (t̂_x, t̂_y) between the bird's-eye view f′(x, y) and the bird's-eye view f_n(x, y):

    (t̂_x, t̂_y) = argmax_{(Δx, Δy)} Σ_x Σ_y f′(x + Δx, y + Δy) · f_n(x, y);
    S7. The relative rotation α̂ solved in S5 and the relative translation (t̂_x, t̂_y) solved in S6 are input as the initial value into a point cloud registration algorithm to solve the 6-DoF relative pose, thereby realizing pose estimation.
  2. The efficient and robust LiDAR-based global localization method for a mobile robot according to claim 1, characterized in that, in S1, the specific steps of projecting the laser point cloud data into a bird's-eye view are as follows:
    S11. First filter out the uninformative ground points from the laser point cloud data;
    S12. Divide the laser point cloud, after ground-point removal, along the z-axis of the 3D Cartesian space into independent grid cells under the bird's-eye view;
    S13. According to the occupancy, maximum height, or reflection intensity of each grid cell under the bird's-eye view, encode the laser point cloud into a bird's-eye view, where the occupancy, maximum height, or reflection intensity of a grid cell is taken as the corresponding pixel value.
  3. The efficient and robust LiDAR-based global localization method for a mobile robot according to claim 1, characterized in that, in S2, the formula of the Radon transform is:

    s(θ, τ) = ∬ f(x, y) · δ(x·cosθ + y·sinθ − τ) dx dy

    where δ(·) is the Dirac delta function.
  4. The efficient and robust LiDAR-based global localization method for a mobile robot according to claim 1, characterized in that, in S3, the translation-equivariant feature extraction network used is an Auto-Encoder or a U-Net.
  5. The efficient and robust LiDAR-based global localization method for a mobile robot according to claim 1, characterized in that, in S7, the point cloud registration algorithm adopted is the ICP algorithm.
  6. The efficient and robust LiDAR-based global localization method for a mobile robot according to claim 1, characterized in that the set distance must be smaller than the range of the LiDAR.
  7. The efficient and robust LiDAR-based global localization method for a mobile robot according to claim 1, characterized in that the bird's-eye views and amplitude spectra collected while the mobile robot is moving are stored in the map database for the next round of place recognition and pose estimation.
PCT/CN2023/071662 2022-11-21 2023-01-10 Efficient and robust LiDAR-based global localization method for mobile robots WO2024108753A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211461320.8 2022-11-21
CN202211461320.8A CN115932868A (zh) 2022-11-21 2022-11-21 Efficient and robust LiDAR-based global localization method for mobile robots

Publications (1)

Publication Number Publication Date
WO2024108753A1 (zh) 2024-05-30

Family

ID=86554903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071662 WO2024108753A1 (zh) 2022-11-21 2023-01-10 Efficient and robust LiDAR-based global localization method for mobile robots

Country Status (2)

Country Link
CN (1) CN115932868A (zh)
WO (1) WO2024108753A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220163346A1 (en) * 2020-11-23 2022-05-26 Electronics And Telecommunications Research Institute Method and apparatus for generating a map for autonomous driving and recognizing location
CN113406659A (zh) * 2021-05-28 2021-09-17 Zhejiang University Mobile robot position re-identification method based on LiDAR information
CN114001706A (zh) * 2021-12-29 2022-02-01 Alibaba Damo Academy (Hangzhou) Technology Co., Ltd. Heading angle estimation method and apparatus, electronic device, and storage medium
CN115267724A (zh) * 2022-07-13 2022-11-01 Zhejiang University LiDAR-based mobile robot position re-identification method capable of pose estimation

Also Published As

Publication number Publication date
CN115932868A (zh) 2023-04-07

Similar Documents

Publication Publication Date Title
AU2019272032B2 (en) Statistical point pattern matching technique
CN109345574B LiDAR three-dimensional mapping method based on semantic point cloud registration
WO2021196294A1 Cross-video person localization and tracking method, system and device
CN108764048B Face keypoint detection method and apparatus
Zhuang et al. 3-D-laser-based scene measurement and place recognition for mobile robots in dynamic indoor environments
CN107179768B Obstacle recognition method and apparatus
CN108229416B Robot SLAM method based on semantic segmentation technology
US9147287B2 (en) Statistical point pattern matching technique
CN101907459B Real-time three-dimensional rigid-body target pose estimation and ranging method based on monocular video
WO2021208442A1 Three-dimensional scene reconstruction system, method, device and storage medium
US10288425B2 (en) Generation of map data
US20140099035A1 (en) Systems and methods for relating images to each other by determining transforms without using image acquisition metadata
WO2022247045A1 Mobile robot position re-identification method based on LiDAR information
CN104281840A Method and apparatus for locating and identifying buildings based on a smart terminal
CN103136525A High-precision localization method for irregular extended targets using the generalized Hough transform
CN114088081A Map construction method for precise localization based on multi-segment joint optimization
WO2023284358A1 Camera calibration method and apparatus, electronic device and storage medium
CN110636248B Target tracking method and apparatus
Dreher et al. Global localization in meshes
WO2024108753A1 Efficient and robust LiDAR-based global localization method for mobile robots
Yoshisada et al. Indoor map generation from multiple LiDAR point clouds
Hujebri et al. Automatic building extraction from lidar point cloud data in the fusion of orthoimage
WO2024011455A1 LiDAR-based mobile robot position re-identification method capable of pose estimation
Kim et al. A novel technique for indoor object distance measurement by using 3D point cloud and LiDAR
Wang et al. Geometrical Features based Visual Relocalization for Indoor Service Robot