WO2019140745A1 - Robot positioning method and device - Google Patents

Robot positioning method and device

Info

Publication number: WO2019140745A1
Application number: PCT/CN2018/077582
Authority: WO (WIPO PCT)
Prior art keywords: robot, category, current, map, pose
Other languages: French (fr), Chinese (zh)
Inventors: 苏泽荣, 周雪峰, 徐保来, 鄢武, 程韬波, 黄丹
Original assignee: 广东省智能制造研究所
Application filed by 广东省智能制造研究所
Publication of WO2019140745A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures
    • G06F18/23 — Clustering techniques
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]

Definitions

  • the invention relates to the field of robot positioning, and in particular to a robot positioning method and device.
  • Robots can replace people for complex or high-risk jobs.
  • During operation, a robot will often have to work in an unknown environment.
  • In that situation, the robot's localization and map construction capability is particularly important.
  • Traditionally, the following two methods are used for robot localization and map construction. The first uses a laser sensor to obtain relatively accurate distance information, mainly building an occupancy grid map with the laser for localization, which makes path planning and navigation easy.
  • However, a laser perceives little information about the environment: only planar information can be obtained, the environment is poorly discriminated, and the error when matching the robot's initial position is large.
  • the other is to use a vision sensor to identify the scene location.
  • Vision-sensor-based methods collect rich environmental information, which helps with information processing of dynamic scenes, and they perform better than lasers in loop detection and data association.
  • However, current visual positioning technology is not very mature: it is computationally expensive, and its positioning accuracy is inferior to lasers.
  • To address the shortcomings of these two approaches, a method that fuses vision and laser for joint localization has been proposed.
  • In the map construction phase, a laser is used to build a plane laser map while a three-dimensional reconstructed map is built with the robot camera, and the scales of the two maps are unified.
  • During the robot's initial localization, feature points are extracted from the environment image captured by the robot camera, a correspondence is established between these two-dimensional feature points and the three-dimensional reconstructed map, and the position in the three-dimensional reconstructed map of the robot that captured the current image is solved.
  • The robot's position in the plane laser map is then solved through the correspondence between the plane laser map and the three-dimensional reconstructed map.
  • an embodiment of the present invention provides a robot positioning method, including:
  • Reading current scan data of the laser sensor of the robot, obtaining a coincidence score of the current scan data against the hybrid visual laser map, and sorting the coincidence scores of each cluster category;
  • the pose information corresponding to the central key frame image of the new cluster category is updated to the current pose of the robot.
  • the step of determining whether to generate a new cluster category further comprises the steps of:
  • If no new cluster category is detected, the current scan data is stored, the robot's pose at the next moment is obtained according to the robot odometry information, the two adjacent frames of scan data, and the positioning algorithm, and the pose at the next moment is updated as the current pose of the robot.
  • The hybrid visual laser map is constructed offline using the laser sensor and the robot camera; the hybrid visual laser map is a map in which the visual features of the visual feature map correspond one-to-one with the pose information of the laser plane map.
  • the step of constructing the hybrid visual laser map offline using the laser sensor and the robotic camera comprises:
  • the visual features of the key frame image are extracted, the visual feature map is obtained, and the hybrid visual laser map corresponding to the visual feature map and the laser plane map is obtained according to the visual feature and the pose information.
  • the step of extracting visual features of the key frame image comprises:
  • the visual features of the key frame image are extracted and stored by the Gist global description operator.
  • In one embodiment, the step of obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories includes the step:
  • For each cluster category whose number of matched inlier points exceeds a preset threshold, the PnP method is used to obtain the current pose of the robot.
  • The step of obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories further includes the step:
  • For each cluster category whose number of matched inlier points does not exceed the preset threshold, the pose of the corresponding central key frame image is taken as the current pose of the robot.
  • the step of performing clustering of the plurality of categories on the first m frames includes:
  • The k-means clustering method is applied to the indices of the first m frames to form k categories, and a ranking of the k cluster categories is obtained;
  • The center value of each cluster category is replaced with the median of that category's key frame indices, and the number of frames falling within each category's range, multiplied by the robot's maximum speed factor, is taken as the category's new range;
  • The k cluster categories with the configured new ranges are matched against the k cluster categories corresponding to the last retrieved image.
  • a robot positioning device including:
  • an image acquisition and similarity measurement module configured to read the current retrieval image of the robot camera and measure the similarity between the current retrieval image and the hybrid visual laser map to find the m most similar frames;
  • the cluster matching module is configured to perform clustering of multiple categories on the first m frame, and match each cluster category with a cluster category corresponding to the last search image to determine whether a new cluster category is generated;
  • a local feature matching module configured to, if a new cluster category is detected, perform local feature matching between the current retrieval image and the central key frame image of each cluster category corresponding to it, and obtain, from the local feature matching result, candidate current poses of the robot equal in number to the categories;
  • a coincidence score module configured to read current scan data of the laser sensor of the robot, obtain a coincidence score of the current scan data and the hybrid visual laser map, and sort the coincidence scores of each cluster category;
  • the pose update issuing module is configured to update the pose information corresponding to the central key frame image of the new cluster category to the current pose of the robot if the category with the highest score is detected as the new cluster category.
  • a computer device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the robot positioning method described above.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the robot positioning method described above.
  • The robot positioning method and device obtain the current retrieval image through the robot camera and measure its similarity against the hybrid visual laser map to find the m most similar frames; cluster matching on these frames judges whether a new cluster category is generated. When a new cluster category is detected, local feature matching is performed between the current retrieval image and the central key frame image of each corresponding cluster category, and candidate current poses of the robot equal in number to the categories are obtained from the matching result. The current scan data of the robot's laser sensor is then acquired, and the coincidence scores of the current scan data against the hybrid visual laser map are sorted; when the highest-scoring category is a new cluster category, the pose information corresponding to the central key frame image of the new cluster category is updated as the robot's current pose, thereby achieving accurate positioning.
  • The robot positioning method and device in the embodiments of the invention combine the visual feature map and the laser plane map and adopt a cluster matching and coincidence scoring mechanism, increasing the robot's recognition rate of the environment and thereby improving positioning accuracy.
  • FIG. 1 is an explanatory diagram of an application scenario according to an embodiment of the present invention.
  • FIG. 2 is a first schematic flowchart of an embodiment of a robot positioning method according to the present invention
  • FIG. 3 is a second schematic flowchart of an embodiment of a robot positioning method according to the present invention.
  • FIG. 4 is a third schematic flowchart of an embodiment of a robot positioning method according to the present invention.
  • FIG. 5 is a first schematic flowchart of a method for constructing a hybrid visual laser map offline in a robot positioning method according to the present invention
  • FIG. 6 is a second schematic flowchart of a method for constructing a hybrid visual laser map offline in a robot positioning method according to the present invention
  • FIG. 7 is a fourth schematic flowchart of an embodiment of a robot positioning method according to the present invention.
  • FIG. 8 is a schematic flowchart of a method for performing clustering of multiple categories on a previous m frame in a robot positioning method according to the present invention
  • FIG. 9 is a first schematic structural view of an embodiment of a robot positioning device according to the present invention.
  • FIG. 10 is a second schematic structural diagram of an embodiment of a robot positioning device according to the present invention.
  • Robot positioning refers to the process in which the robot acquires environmental information through perception and determines the position of itself and the target through relevant information processing.
  • When the robot works in an unknown environment, it needs to determine its own position so that path planning can be performed according to the target position during actual operation; robot positioning is therefore extremely important.
  • For example, as shown in FIG. 1, when a robot in an unknown environment needs to operate on target A, the robot must first determine its own position in the environment, then plan a path according to its own pose and the pose of target A, move to the position of target A, and operate on target A.
  • This example is only an example of an application scenario of the robot positioning, but does not limit the application range of the robot positioning method and device proposed in the embodiment of the present invention.
  • An embodiment of the present invention provides a robot positioning method. As shown in FIG. 2, the robot positioning method includes:
  • S20: reading the current retrieval image of the robot camera, measuring its similarity against the hybrid visual laser map, and finding the m most similar frames;
  • S30: clustering the first m frames into multiple categories and matching each cluster category against the cluster categories corresponding to the last retrieval image to judge whether a new cluster category is generated;
  • S40: if a new cluster category is detected, performing local feature matching between the current retrieval image and the central key frame image of each corresponding cluster category, and obtaining from the matching result candidate current poses of the robot equal in number to the categories;
  • S50: reading the current scan data of the robot's laser sensor, obtaining the coincidence score of the current scan data against the hybrid visual laser map, and sorting the coincidence scores of the cluster categories;
  • S60: if the highest-scoring category is detected to be a new cluster category, updating the pose information corresponding to the central key frame image of the new cluster category as the robot's current pose.
  • The hybrid visual laser map is a map comprising a visual feature map, a laser plane map, and the correspondence between them.
  • The value of the parameter m for the first m frames is obtained through experiment; with the chosen m, both the positioning accuracy and the speed of the robot are good.
  • The cluster categories corresponding to the last retrieved image are the categories generated by clustering the m most similar frames found when, at the previous localization during the robot's movement, the retrieval image from the robot camera was measured for similarity against the hybrid visual laser map.
  • the new cluster category refers to a category different from the cluster category corresponding to the last search image.
  • the coincidence score is a score of the degree of coincidence of the scan data and the scan data stored in the hybrid visual laser map, and the more coincident data points, the higher the coincidence score.
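  • For illustration only, the following minimal sketch (not taken from the patent) shows one way such a coincidence score could be computed: the current scan's points are transformed into the map frame at a candidate pose, and the fraction that lands on occupied cells of an occupancy grid is returned. The grid resolution, origin, and occupancy threshold are assumptions.

```python
import numpy as np

def coincidence_score(scan_xy, pose, occ_grid, resolution=0.05, origin=(0.0, 0.0)):
    """Score how well a 2D laser scan (N x 2, robot frame) overlaps an
    occupancy grid when placed at pose = (x, y, theta) in the map frame."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    pts = scan_xy @ R.T + np.array([x, y])            # scan points in map frame
    cols = ((pts[:, 0] - origin[0]) / resolution).astype(int)
    rows = ((pts[:, 1] - origin[1]) / resolution).astype(int)
    inside = (rows >= 0) & (rows < occ_grid.shape[0]) & \
             (cols >= 0) & (cols < occ_grid.shape[1])
    # a point "coincides" when it falls on a cell the map marks as occupied
    hits = occ_grid[rows[inside], cols[inside]] > 0.5
    return hits.sum() / max(len(scan_xy), 1)          # fraction of coincident points
```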
  • the hybrid visual laser map is pre-established.
  • the visual feature map in the hybrid visual laser map refers to the map including the visual features.
  • The laser plane map in the hybrid visual laser map is a map containing scan data, constructed using the laser sensor.
  • Specifically, the robot reads the current retrieval image of the robot camera; the retrieval image may include a key frame color image and a key frame depth image. The current retrieval image is then measured for similarity against the hybrid visual laser map to find the m most similar frames, which are clustered and matched against the cluster categories corresponding to the last retrieval image.
  • When a new cluster category is detected, local feature matching is performed between the current retrieval image and the central key frame image of each corresponding cluster category, yielding candidate current poses of the robot equal in number to the categories. For example, if k cluster categories were generated by the preceding clustering, local feature matching yields k candidates for the robot's current pose.
  • Optionally, the hybrid visual laser map also contains a one-to-one correspondence between the visual features in the visual feature map and the pose information in the laser plane map, so the candidate poses of the robot can be obtained quickly by local feature matching on the current retrieval image.
  • After local feature matching, the current scan data of the robot's laser sensor is read, the coincidence score between the current scan data and the scan data stored in the hybrid visual laser map is obtained, and the coincidence scores of the cluster categories are sorted. When the highest-scoring category is detected to be a new cluster category, the pose information corresponding to the central key frame image of the new cluster category is updated as the robot's current pose.
  • an ORB local visual feature extraction and matching method may be adopted, or a local feature extraction and matching method such as SIFT, SURF, or the like may be adopted.
  • ORB (ORiented Brief) is a local visual feature extraction and matching method that randomly selects point pairs near a visual feature point, combines comparisons of the gray values of these pairs into a binary string, uses this binary string as the feature point's descriptor, and matches according to that descriptor.
  • SIFT (Scale-Invariant Feature Transform) is a descriptor used in image processing; it is scale-invariant, can detect key points in an image, and is a local feature descriptor.
  • SURF (Speeded Up Robust Features) is a robust image recognition and description algorithm, an accelerated variant of SIFT.
  • Optionally, the process of measuring the similarity between the current retrieval image and the hybrid visual laser map may include: extracting the visual features of the current retrieval image and measuring their similarity against the visual features in the map; the Euclidean distance between the visual feature vector of the current retrieval image and the visual feature vector of each visual feature in the hybrid visual laser map is computed one by one to find the m most similar frames.
  • The Gist global description operator is a global scene descriptor that describes image features based on the spatial envelope.
  • Optionally, after step S60 (if the highest-scoring category is a new cluster category, the pose information corresponding to the central key frame image of that category is updated as the robot's current pose), the process jumps back to step S20 for the next localization.
  • the robot uses the timing and distance strategy for the next positioning.
  • Optionally, the first m frames are clustered and the generated cluster categories are ranked; when the highest-scoring category is also the highest-ranked new cluster category, the pose information corresponding to the central key frame image of that category is updated as the robot's current pose.
  • The ranking order of the cluster categories represents the robot's degree of trust in the visual-feature match between each category and the hybrid visual laser map, while the coincidence score represents the robot's degree of trust in the laser match between each category and the laser plane map in the hybrid visual laser map. Selecting as the robot's current pose the pose information corresponding to the central key frame image of the category that both ranks highest and scores highest achieves localization and greatly improves the robot's positioning accuracy.
  • The robot positioning method provided in this embodiment uses cluster matching to judge whether a new cluster category has been generated; when one has, local feature matching between the current retrieval image and the central key frame image of each cluster category yields several candidates for the robot's current pose, which are then scored by the coincidence of the current scan data with the laser plane map. When the highest-scoring category is a new cluster category, the robot's current pose is updated.
  • This combination of cluster matching and a coincidence scoring mechanism greatly improves the robot's recognition rate of the surrounding environment during localization, thereby achieving accurate positioning. A sketch of the resulting online loop follows.
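  • The sketch below strings steps S20 through S70 together into one online loop. It is a schematic under stated assumptions, not the patent's implementation: every helper name (find_top_m, cluster_and_match, match_central_keyframe, coincidence_score_for, predict_pose) is a placeholder for an operation described in the text.

```python
def localize_once(camera_image, scan, robot, hybrid_map, prev_categories, m=10, k=3):
    # S20: similarity measurement against the hybrid visual laser map
    top_m = find_top_m(camera_image, hybrid_map, m)
    # S30: cluster the top-m frames and match against the previous categories
    categories, has_new = cluster_and_match(top_m, prev_categories, k)
    if has_new:
        # S40: local feature matching against each category's central key frame
        poses = [match_central_keyframe(camera_image, c, hybrid_map) for c in categories]
        # S50: score each candidate pose by laser scan coincidence, then compare
        scores = [coincidence_score_for(scan, p, hybrid_map) for p in poses]
        best = max(range(len(categories)), key=scores.__getitem__)
        # S60: adopt the pose only if the best-scoring category is a new one
        if categories[best].is_new:
            robot.pose = poses[best]
    else:
        # S70: fall back to odometry plus scan matching (described below)
        robot.pose = predict_pose(robot, scan)
    return categories
```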
  • the method further includes the steps of:
  • the pose of the robot at the next moment refers to the pose corresponding to the next possible position of the robot.
  • the adjacent two frames of scan data refer to the current scan data of the robot and the scan data stored at the time of the last positioning.
  • Specifically, the robot predicts a first predicted pose from the odometry information and stores the current scan data; it then retrieves the scan data stored at the previous localization and obtains a second predicted pose from the current scan data and the previously stored scan data. The first predicted pose and the second predicted pose are passed through a positioning algorithm to obtain the robot's pose at the next moment, and this computed next-moment pose is taken as the robot's current pose.
  • the positioning algorithm can adopt Rao-Blackwellised particle filter algorithm, extended Kalman filter algorithm and the like.
  • The Rao-Blackwellised particle filter algorithm is an algorithm that improves estimation accuracy by introducing marginalization.
  • the Extended Kalman Filter algorithm is a high-efficiency recursive filter algorithm.
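  • As a deliberately simplified illustration of combining the two predictions, the sketch below fuses the odometry-based pose with the scan-matching pose using a fixed-gain Kalman-style update; a real Rao-Blackwellised particle filter or EKF would propagate full covariances, which are replaced here by assumed constant variances.

```python
import numpy as np

def fuse_poses(odom_pose, scan_pose, odom_var=0.04, scan_var=0.01):
    """Fuse the first predicted pose (from odometry) with the second
    (from matching adjacent scans), weighting each by its assumed variance."""
    odom_pose = np.asarray(odom_pose, dtype=float)   # (x, y, theta)
    scan_pose = np.asarray(scan_pose, dtype=float)
    gain = odom_var / (odom_var + scan_var)          # scalar Kalman-style gain
    diff = scan_pose - odom_pose
    diff[2] = np.arctan2(np.sin(diff[2]), np.cos(diff[2]))  # wrap angle difference
    fused = odom_pose + gain * diff
    fused[2] = np.arctan2(np.sin(fused[2]), np.cos(fused[2]))  # wrap heading
    return fused
```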
  • The robot odometry information is obtained through the robot's odometer, for example a wheel odometer.
  • Optionally, in step S70: if no new cluster category is detected, the current scan data is stored, the robot's pose at the next moment is obtained according to the odometry information, the two adjacent frames of scan data, and the positioning algorithm, and the process returns to step S20 for the next localization.
  • In one embodiment, before reading the current retrieval image of the robot camera, the method further includes the step:
  • S10 Using a laser sensor and a robot camera to construct a hybrid visual laser map offline; wherein the hybrid visual laser map is a map corresponding to the visual feature of the visual feature map and the pose information of the laser plane map.
  • the hybrid visual laser map includes a laser plane map and a visual feature map, and a one-to-one correspondence between the pose information in the laser plane map and the visual features in the visual feature map. Specifically, before the robot starts positioning, the laser plane map and the visual feature map are constructed offline, and the pose information of the laser plane map and the visual features of the visual feature map are bundled and stored at the same time to generate a hybrid visual laser map.
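  • One plausible way to realize this bundling, shown below as an assumption rather than the patent's stated data layout, is to store for each key frame the global visual descriptor together with the laser-map pose and scan, so that a visual match immediately yields laser-map pose information.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KeyFrameEntry:
    index: int           # key frame index along the mapping trajectory
    gist: np.ndarray     # global visual descriptor of the key frame image
    pose: np.ndarray     # (x, y, theta) in the laser plane map
    scan: np.ndarray     # N x 2 laser scan stored with this key frame

@dataclass
class HybridVisualLaserMap:
    grid: np.ndarray                                   # laser plane map (occupancy grid)
    keyframes: list[KeyFrameEntry] = field(default_factory=list)  # visual feature map
```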
  • In one embodiment, the steps of constructing the hybrid visual laser map offline using the laser sensor and the robot camera include:
  • S11 acquiring robot odometry information;
  • S12 controlling the laser sensor to scan the surrounding environment, obtaining current scan data, and reading the key frame image of the robot camera;
  • S13 initializing the position of the robot and storing the current scan data, obtaining the predicted position of the robot at the next moment according to the odometry information, and applying the positioning algorithm to the two adjacent frames of scan data and the odometry information to obtain the robot's pose information;
  • S14 controlling the robot to repeatedly move, and constructing a laser plane map according to the scan data collected by the laser sensor;
  • S15 extracting visual features of the key frame image, obtaining a visual feature map, and jointly obtaining a hybrid visual laser map corresponding to the visual feature map and the laser plane map according to the visual feature and the pose information.
  • the adjacent two frames of scan data are the same as those in the above embodiment, and are not described herein.
  • After the hybrid visual laser map has been constructed, localization proceeds as described above: the current retrieval image of the robot camera is read, its visual features are extracted and measured for similarity against the key frame visual features of the visual feature map in the hybrid visual laser map, and the m most similar frames are found; the first m frames are clustered and matched against the cluster categories corresponding to the last retrieval image; local feature matching between the current retrieval image and the central key frame image of each corresponding cluster category yields candidate current poses of the robot equal in number to the categories; the current scan data of the robot's laser sensor is then read, the coincidence score of the current scan data against the scan data stored in the laser plane map of the hybrid visual laser map is obtained, and the coincidence scores of the cluster categories are sorted; when the highest-scoring category is detected to be a new cluster category, the pose information corresponding to the central key frame image of the new cluster category is updated as the robot's current pose.
  • Optionally, the robot's laser plane map can also be obtained using a graph-optimization-based localization and mapping method.
  • In one embodiment, the step of extracting the visual features of the key frame image includes:
  • Using the Gist global description operator to extract and store the visual features of the key frame images collected by the robot camera. This reduces the storage space occupied, and storing the key frame images' visual features with a global descriptor preserves the completeness of the visual features of the surrounding environment, which helps improve the efficiency and stability of the subsequent matching computation.
  • In one embodiment, the step of obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories includes the step:
  • S42 For each cluster category in which the number of matched inlier points exceeds a preset threshold, the PnP method is used to obtain the current pose of the robot.
  • the preset threshold is obtained through experiments.
  • The PnP algorithm is an algorithm that, given multiple pairs of matched 3D and 2D points and with the camera intrinsics known or unknown, solves for the camera extrinsics by minimizing the reprojection error. Specifically, when the matched feature points between the visual features of the current retrieval image and the visual features of a cluster category's central key frame image exceed the preset threshold, the transformation between the camera pose of the current retrieval image and the camera pose of the similar key frame image in the visual feature map is solved, and the robot's current pose is then obtained.
  • In this case the current retrieval image matches the visual features in the hybrid visual laser map closely, so using the PnP method to compute the robot's current pose, i.e., refining the pose corresponding to that cluster category, further improves positioning accuracy.
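  • A minimal sketch of this PnP step with OpenCV follows. It assumes the camera intrinsic matrix K is known and that matched 3D map points and 2D image points have already been collected for one cluster category, i.e., the inlier-thresholded situation described above; the inlier count of 10 is an assumed threshold, not a value from the patent.

```python
import numpy as np
import cv2

def pose_from_pnp(object_pts, image_pts, K, dist_coeffs=None):
    """object_pts: N x 3 map points; image_pts: N x 2 points in the current
    retrieval image. Returns the camera pose (R, t) or None on failure."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts.astype(np.float64), image_pts.astype(np.float64),
        K, dist_coeffs, reprojectionError=3.0)
    if not ok or inliers is None or len(inliers) < 10:
        return None
    R, _ = cv2.Rodrigues(rvec)     # rotation vector -> rotation matrix
    return R, tvec
```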
  • In one embodiment, the step of obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories further includes the step:
  • For each cluster category whose number of matched inlier points does not exceed the preset threshold, the pose of the corresponding central key frame image is used as the robot's current pose.
  • In this case the current retrieval image matches the hybrid visual laser map poorly, so the pose of the central key frame image corresponding to the cluster category is used as the robot's current pose.
  • the step of performing clustering of multiple categories on the first m frames includes:
  • The k-means clustering method is a representative prototype-based objective-function clustering method.
  • It is a distance-based clustering algorithm that uses the Euclidean distance as its similarity measure: the closer two objects are, the greater their similarity.
  • the definitions of the last search image and the like are the same as those in the above embodiment, and will not be described herein.
  • the parameter k is obtained experimentally and is preset according to the application scenario and the like.
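  • The sketch below illustrates this clustering step on key frame indices with scikit-learn's KMeans, including the median replacement and the speed-factor range from the claims; max_speed_factor is an assumed tuning parameter, k is preset as described above, and ranking by category size is one plausible reading of the ranking step.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_top_m(frame_indices, k=3, max_speed_factor=1.5):
    """frame_indices: indices of the m most similar key frames (1-D array)."""
    X = np.asarray(frame_indices, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    categories = []
    for c in range(k):
        members = X[labels == c].ravel()
        center = np.median(members)             # median replaces the mean center
        span = len(members) * max_speed_factor  # category's new range, per the claims
        categories.append({"center": center, "range": span, "size": len(members)})
    # one plausible ranking: by how many of the top-m frames fall into each category
    categories.sort(key=lambda cat: cat["size"], reverse=True)
    return categories
```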
  • the robot positioning device includes:
  • the image acquisition and similarity measurement module 20 is configured to read the current search image of the robot camera, and perform similarity measurement on the current search image and the hybrid visual laser map to find the first m frame with the highest similarity in the current search image;
  • the cluster matching module 30 is configured to perform clustering of multiple categories on the previous m frames, and match each cluster category with the cluster category corresponding to the last search image to determine whether a new cluster category is generated;
  • the local feature matching module 40 is configured to, if a new cluster category is detected, perform local feature matching between the current retrieval image and the central key frame image of each cluster category corresponding to it, and obtain, from the local feature matching result, candidate current poses of the robot equal in number to the categories;
  • the coincidence score module 50 is configured to read current scan data of the laser sensor of the robot, obtain a coincidence score of the current scan data and the hybrid visual laser map, and sort the coincidence scores of each cluster category;
  • the pose update issuing module 60 is configured to update the pose information corresponding to the central keyframe image of the new cluster category to the current pose of the robot if the category with the highest score is detected as the new cluster category.
  • Specifically, the image acquisition and similarity measurement module 20 reads the current retrieval image of the robot camera and measures its similarity against the hybrid visual laser map to find the m most similar frames; the cluster matching module 30 clusters the first m frames into multiple categories and matches each cluster category against the cluster categories corresponding to the last retrieval image; if a new cluster category is detected, the local feature matching module 40 performs local feature matching between the current retrieval image and the central key frame image of each corresponding cluster category and obtains, from the local feature matching result, candidate current poses of the robot equal in number to the categories; the coincidence score module 50 reads the current scan data of the robot's laser sensor, obtains the coincidence score of the current scan data against the laser plane map, and sorts the coincidence scores of the cluster categories; then, if the pose update issuing module 60 detects that the highest-scoring category is a new cluster category, it updates the pose information corresponding to the central key frame image of the new cluster category as the robot's current pose.
  • the robot positioning device further includes:
  • the pose issuing module 70 is configured to, when no new cluster category is detected, store the current scan data and obtain the robot's pose at the next moment according to the odometry information, the two adjacent frames of scan data, and the positioning algorithm.
  • the robot positioning device further includes:
  • the hybrid visual laser map offline building module 10 is configured to construct the hybrid visual laser map offline using the laser sensor and the robot camera; the hybrid visual laser map is a map in which the visual features of the visual feature map correspond one-to-one with the pose information of the laser plane map.
  • each unit module in the embodiment of the robot positioning device of the present invention can implement the method steps in the foregoing method embodiments, and details are not described herein.
  • a computer device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the robot positioning method described above.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the robot positioning method described above.


Abstract

Provided are a robot positioning method and device. The robot positioning method comprises: obtaining a current retrieval image; measuring the similarity between the current retrieval image and a hybrid visual laser map; performing cluster matching on the m most similar frames; when a new cluster category is detected, performing local feature matching between the current retrieval image and the central key frame images of the cluster categories corresponding to it; obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories; sorting the coincidence scores of the current scan data against the hybrid visual laser map; and, when the highest-scoring category is detected to be a new cluster category, updating the pose information corresponding to the central key frame image of the new cluster category as the robot's current pose. The robot positioning method provided by the embodiments of the invention has a high environment recognition rate and high positioning accuracy.

Description

Robot positioning method and device

Technical field

The invention relates to the field of robot positioning, and in particular to a robot positioning method and device.

Background art

In recent years, robots, especially autonomous mobile robots, have become an important research field. Robots can replace people in complex or high-risk jobs. During operation, a robot will often have to work in an unknown environment, where its localization and map construction capability is particularly important.

At present, the following two methods are traditionally used for robot localization and map construction. The first uses a laser sensor to obtain relatively accurate distance information, mainly building an occupancy grid map with the laser for localization, which makes path planning and navigation easy. However, a laser perceives little information about the environment: only planar information can be obtained, the environment is poorly discriminated, and the error when matching the robot's initial position is large.

The other is to use a vision sensor to recognize scene locations. Vision-sensor-based methods collect rich environmental information, which helps with information processing of dynamic scenes, and they perform better than lasers in loop detection and data association; however, current visual positioning technology is not very mature: it is computationally expensive, and its positioning accuracy is inferior to lasers.

To address the shortcomings of these two approaches, a method that fuses vision and laser for joint localization has been proposed. In the map construction phase, a laser is used to build a plane laser map while a three-dimensional reconstructed map is built with the robot camera, and the scales of the two maps are unified. During the robot's initial localization, feature points are extracted from the environment image captured by the robot camera, a correspondence is established between these two-dimensional feature points and the three-dimensional reconstructed map, the position in the three-dimensional reconstructed map of the robot that captured the current image is solved, and the robot's position in the plane laser map is then solved through the correspondence between the plane laser map and the three-dimensional reconstructed map.

However, during implementation the inventors found that the traditional technology has at least the following technical problem: it obtains the robot's position only through feature point extraction from the environment image and the correspondence between the laser plane map and the three-dimensional reconstructed map, so the positioning accuracy is low.
Summary of the invention

Based on this, it is necessary to provide a robot positioning method and device to address the problem of low robot positioning accuracy.

In one aspect, an embodiment of the present invention provides a robot positioning method, including:

reading the current retrieval image of the robot camera, measuring the similarity between the current retrieval image and a hybrid visual laser map, and finding the m most similar frames;

clustering the first m frames into multiple categories and matching each cluster category against the cluster categories corresponding to the last retrieval image to judge whether a new cluster category is generated;

if a new cluster category is detected, performing local feature matching between the current retrieval image and the central key frame image of each cluster category corresponding to it, and obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories;

reading the current scan data of the robot's laser sensor, obtaining the coincidence score of the current scan data against the hybrid visual laser map, and sorting the coincidence scores of the cluster categories;

if the highest-scoring category is detected to be a new cluster category, updating the pose information corresponding to the central key frame image of the new cluster category as the robot's current pose.
In one embodiment, after the step of judging whether a new cluster category is generated, the method further includes the step:

if no new cluster category is detected, storing the current scan data, obtaining the robot's pose at the next moment according to the robot odometry information, the two adjacent frames of scan data, and the positioning algorithm, and updating the pose at the next moment as the current pose of the robot.

In one embodiment, before reading the current retrieval image of the robot camera, the method further includes the step:

constructing the hybrid visual laser map offline using the laser sensor and the robot camera; the hybrid visual laser map is a map in which the visual features of the visual feature map correspond one-to-one with the pose information of the laser plane map.
In one embodiment, the steps of constructing the hybrid visual laser map offline using the laser sensor and the robot camera include:

acquiring robot odometry information;

controlling the laser sensor to scan the surrounding environment to obtain the current scan data, while reading the key frame image of the robot camera;

initializing the position of the robot and storing the current scan data, obtaining the predicted position of the robot at the next moment according to the odometry information, and applying the positioning algorithm to the two adjacent frames of scan data and the odometry information to obtain the robot's pose information;

controlling the robot to move repeatedly and constructing the laser plane map from the scan data collected by the laser sensor;

while constructing the laser plane map, extracting the visual features of the key frame images to obtain the visual feature map, and jointly obtaining, from the visual features and the pose information, the hybrid visual laser map in which the visual feature map and the laser plane map correspond one-to-one.
In one embodiment, the step of extracting the visual features of the key frame image includes:

extracting and storing the visual features of the key frame image through the Gist global description operator.

In one embodiment, the step of obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories includes the step:

for each cluster category whose number of matched inlier points exceeds a preset threshold, using the PnP method to obtain the current pose of the robot.

In one embodiment, the step of obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories further includes the step:

for each cluster category whose number of matched inlier points does not exceed the preset threshold, taking the pose of the corresponding central key frame image as the robot's current pose.
In one embodiment, the step of clustering the first m frames into multiple categories includes:

applying k-means clustering to the indices of the first m frames to form k categories and obtain a ranking of the k cluster categories;

replacing the center value of each cluster category with the median of that category's key frame indices, and taking the number of frames falling within each category's range, multiplied by the robot's maximum speed factor, as the category's new range;

matching the k cluster categories with the configured new ranges against the k cluster categories corresponding to the last retrieval image.
In another aspect, an embodiment of the present invention further provides a robot positioning device, including:

an image acquisition and similarity measurement module, configured to read the current retrieval image of the robot camera, measure the similarity between the current retrieval image and the hybrid visual laser map, and find the m most similar frames;

a cluster matching module, configured to cluster the first m frames into multiple categories and match each cluster category against the cluster categories corresponding to the last retrieval image to judge whether a new cluster category is generated;

a local feature matching module, configured to, if a new cluster category is detected, perform local feature matching between the current retrieval image and the central key frame image of each cluster category corresponding to it, and obtain, from the local feature matching result, candidate current poses of the robot equal in number to the categories;

a coincidence score module, configured to read the current scan data of the robot's laser sensor, obtain the coincidence score of the current scan data against the hybrid visual laser map, and sort the coincidence scores of the cluster categories;

a pose update issuing module, configured to, if the highest-scoring category is detected to be a new cluster category, update the pose information corresponding to the central key frame image of the new cluster category as the robot's current pose.
A computer device comprises a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the robot positioning method described above.

A computer readable storage medium has stored thereon a computer program that, when executed by a processor, implements the steps of the robot positioning method described above.

The robot positioning method and device above obtain the current retrieval image through the robot camera and measure its similarity against the hybrid visual laser map to find the m most similar frames; cluster matching on the first m frames judges whether a new cluster category is generated; when a new cluster category is detected, local feature matching is performed between the current retrieval image and the central key frame image of each corresponding cluster category, and candidate current poses of the robot equal in number to the categories are obtained from the matching result; the current scan data of the robot's laser sensor is then acquired, and the coincidence scores of the current scan data against the hybrid visual laser map are sorted; when the highest-scoring category is detected to be a new cluster category, the pose information corresponding to the central key frame image of the new cluster category is updated as the robot's current pose, thereby achieving accurate positioning. The robot positioning method and device in the embodiments of the invention combine the visual feature map and the laser plane map and adopt a cluster matching and coincidence scoring mechanism, increasing the robot's recognition rate of the environment and thereby improving positioning accuracy.
DRAWINGS

FIG. 1 is an explanatory diagram of an application scenario according to an embodiment of the present invention;

FIG. 2 is a first schematic flowchart of an embodiment of a robot positioning method according to the present invention;

FIG. 3 is a second schematic flowchart of an embodiment of a robot positioning method according to the present invention;

FIG. 4 is a third schematic flowchart of an embodiment of a robot positioning method according to the present invention;

FIG. 5 is a first schematic flowchart of a method for constructing a hybrid visual laser map offline in a robot positioning method according to the present invention;

FIG. 6 is a second schematic flowchart of a method for constructing a hybrid visual laser map offline in a robot positioning method according to the present invention;

FIG. 7 is a fourth schematic flowchart of an embodiment of a robot positioning method according to the present invention;

FIG. 8 is a schematic flowchart of a method for clustering the first m frames into multiple categories in a robot positioning method according to the present invention;

FIG. 9 is a first schematic structural diagram of an embodiment of a robot positioning device according to the present invention;

FIG. 10 is a second schematic structural diagram of an embodiment of a robot positioning device according to the present invention.
Detailed description

In order to facilitate understanding of the present invention, the present invention will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are given. However, the invention may be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of the invention will be thorough and comprehensive.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the present invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.

To better explain the technical solutions of the embodiments of the present invention, an application scenario of these technical solutions is described as follows:

Robot positioning refers to the process in which a robot acquires environmental information through perception and determines its own pose and that of a target through the relevant information processing. When a robot works in an unknown environment, it needs to determine its own position so that path planning can be performed according to the target position during actual operation; robot positioning is therefore extremely important. For example, as shown in FIG. 1, when a robot in an unknown environment needs to operate on target A, the robot must first determine its own position in the environment, then plan a path according to its own pose and the pose of target A, move to the position of target A, and operate on target A. This example is merely one application scenario of robot positioning and does not limit the scope of application of the robot positioning method and device proposed in the embodiments of the present invention.
An embodiment of the present invention provides a robot positioning method. As shown in FIG. 2, the robot positioning method includes:
S20: reading the current retrieval image of the robot camera, measuring the similarity between the current retrieval image and the hybrid visual laser map, and finding the m most similar frames;

S30: clustering the first m frames into multiple categories and matching each cluster category against the cluster categories corresponding to the last retrieval image to judge whether a new cluster category is generated;

S40: if a new cluster category is detected, performing local feature matching between the current retrieval image and the central key frame image of each cluster category corresponding to it, and obtaining, from the local feature matching result, candidate current poses of the robot equal in number to the categories;

S50: reading the current scan data of the robot's laser sensor, obtaining the coincidence score of the current scan data against the hybrid visual laser map, and sorting the coincidence scores of the cluster categories;

S60: if the highest-scoring category is detected to be a new cluster category, updating the pose information corresponding to the central key frame image of the new cluster category as the robot's current pose.
Here, the hybrid visual laser map is a map comprising a visual feature map, a laser plane map, and the correspondence between them. The value of the parameter m is obtained through experiment; with the chosen m, both the positioning accuracy and the speed of the robot are good. The cluster categories corresponding to the last retrieval image are the categories generated by clustering the m most similar frames found when, at the previous localization during the robot's movement, the retrieval image from the robot camera was measured for similarity against the hybrid visual laser map. A new cluster category is a category different from the cluster categories corresponding to the last retrieval image. The coincidence score measures the degree of coincidence between the scan data and the scan data stored in the hybrid visual laser map: the more coincident data points, the higher the score. The hybrid visual laser map is pre-established; the visual feature map within it is a map containing visual features, and the laser plane map within it is a map containing scan data, constructed using the laser sensor.
Specifically, the robot reads the current retrieval image of the robot camera; the retrieval image may include a key frame color image and a key frame depth image. The current retrieval image is then measured for similarity against the hybrid visual laser map to find the m most similar frames, which are clustered and matched against the cluster categories corresponding to the last retrieval image. When a new cluster category is detected, local feature matching is performed between the current retrieval image and the central key frame image of each corresponding cluster category, yielding candidate current poses of the robot equal in number to the categories; for example, if k cluster categories were generated by the preceding clustering, local feature matching yields k candidates for the robot's current pose. Optionally, the hybrid visual laser map also contains a one-to-one correspondence between the visual features in the visual feature map and the pose information in the laser plane map, so the candidate poses can be obtained quickly by local feature matching on the current retrieval image. After local feature matching, the current scan data of the robot's laser sensor is read, the coincidence score between the current scan data and the scan data in the hybrid visual laser map is obtained, and the coincidence scores of the cluster categories are sorted. When the highest-scoring category is detected to be a new cluster category, the pose information corresponding to the central key frame image of the new cluster category is updated as the robot's current pose.
Optionally, local feature matching may use ORB local visual feature extraction and matching, or other local feature extraction and matching methods such as SIFT or SURF.
ORB (Oriented FAST and Rotated BRIEF) extracts and matches local visual features mainly by randomly selecting point pairs near a visual feature point, comparing the gray values within each pair, combining the comparison results into a binary string that serves as the descriptor of the feature point, and matching according to this descriptor. SIFT (Scale-Invariant Feature Transform) is a descriptor used in image processing; it is scale-invariant, detects keypoints in images, and is a local feature descriptor. SURF (Speeded Up Robust Features) is a robust image recognition and description algorithm that accelerates the SIFT approach.
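As an illustration only, here is a brief OpenCV sketch of ORB extraction and binary-descriptor matching between the current retrieved image and one cluster's central key-frame image; the library choice, file names, and parameter values are assumptions, since this disclosure does not prescribe a particular implementation.

```python
import cv2

# File names are placeholders for the current retrieved image and a
# cluster's central key-frame image.
query = cv2.imread("current_retrieval.png", cv2.IMREAD_GRAYSCALE)
keyframe = cv2.imread("center_keyframe.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp_q, des_q = orb.detectAndCompute(query, None)
kp_k, des_k = orb.detectAndCompute(keyframe, None)

# ORB descriptors are binary strings, so Hamming distance is the
# appropriate metric; cross-checking discards one-sided matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_k), key=lambda m: m.distance)
print(f"{len(matches)} putative ORB matches")
```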
Optionally, measuring the similarity between the current retrieved image and the hybrid visual laser map may include:
extracting the visual features of the current retrieved image and measuring their similarity against the visual features in the hybrid visual laser map; the Euclidean distance between the visual feature vector of the current retrieved image and each visual feature vector in the hybrid visual laser map is computed one by one, and the top m most similar frames are selected.
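A sketch of this retrieval step, assuming every map key frame already stores a fixed-length Gist feature vector; the array shapes and the value of m are illustrative.

```python
import numpy as np

def top_m_similar(query_gist, map_gists, m=10):
    """Return indices of the m key frames closest to the query in
    Euclidean distance (smallest distance = highest similarity).

    query_gist: (D,) Gist descriptor of the current retrieved image
    map_gists:  (K, D) Gist descriptors of the map key frames
    """
    dists = np.linalg.norm(map_gists - query_gist, axis=1)  # one distance per key frame
    return np.argsort(dists)[:m]
```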
Optionally, the current retrieved image of the robot camera is read and its visual features are extracted with the Gist global descriptor, a global scene descriptor that characterizes an image by its spatial envelope.
Optionally, as shown in FIG. 2, after step S60 (if the highest-scoring category is a new cluster category, updating the pose information corresponding to the central key-frame image of that category as the robot's current pose), the method jumps to step S20 for the next positioning. Optionally, the robot triggers the next positioning with a fixed-time or fixed-distance policy.
Optionally, the top m frames are clustered and the generated cluster categories are ranked; when the highest-scoring category is also the top-ranked new cluster category, the pose information corresponding to the central key-frame image of that category is taken as the robot's current pose. The ranking of the cluster categories reflects how much the robot trusts the match between each category and the visual features in the hybrid visual laser map, while the coincidence score reflects how much it trusts the match between each category and the laser plane map in the hybrid visual laser map. Choosing the central key-frame pose of the category that is both top-ranked and highest-scoring as the current pose substantially improves positioning accuracy.
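A compact sketch of this selection rule; the candidate record layout (rank, new-cluster flag, coincidence score, pose) is an illustrative assumption about how the two trust signals could be combined in code.

```python
def select_pose(candidates):
    """candidates: list of dicts with keys 'rank' (0 = top-ranked cluster),
    'is_new', 'score' (coincidence score), and 'pose'."""
    best = max(candidates, key=lambda c: c["score"])
    # Update the pose only when the highest-scoring category is also the
    # top-ranked new cluster category; otherwise leave the estimate alone.
    if best["is_new"] and best["rank"] == 0:
        return best["pose"]
    return None  # caller keeps the previous pose
```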
The robot positioning method provided in this embodiment uses cluster matching to decide whether a new cluster category has been generated. When a new cluster category is generated, local feature matching is performed between the current retrieved image and the central key-frame image of each cluster category, yielding multiple candidate current poses from the matching result; the coincidence score between the current scan data and the laser plane map is then used, and the robot's current pose is updated when the highest-scoring cluster is a new one. Combining cluster matching with the coincidence-score mechanism greatly improves the robot's recognition of its surroundings during positioning and thus enables accurate localization.
In one embodiment, as shown in FIG. 3, the step of determining whether a new cluster category is generated is followed by:

S70: if no new cluster category is detected, storing the current scan data, obtaining the robot's pose at the next moment from the robot odometry information, the two adjacent frames of scan data, and a positioning algorithm, and updating the robot's current pose to that next-moment pose.
Here, the robot's pose at the next moment is the pose corresponding to the robot's next likely position. The two adjacent frames of scan data are the robot's current scan data and the scan data stored at the previous positioning. Specifically, when no new cluster category is detected, the robot predicts a first pose from the odometry information, stores the current scan data, retrieves the scan data stored at the previous positioning, and derives a second predicted pose from the current scan data and the previously stored scan data. The first and second predicted poses are fed into a positioning algorithm to obtain the pose at the next moment, which is taken as the robot's current pose. The positioning algorithm may be a Rao-Blackwellised particle filter, an extended Kalman filter, or the like; the Rao-Blackwellised particle filter improves estimation accuracy by analytically marginalizing part of the state, and the extended Kalman filter is an efficient recursive filter. Optionally, the robot odometry information is obtained from the robot's odometer, for example a wheel odometer.
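As a simplified sketch of this fusion, the following treats the odometry-predicted pose as a prior and the scan-match pose as a direct measurement in a per-axis Kalman-style update; the disclosure itself names a Rao-Blackwellised particle filter or an extended Kalman filter, and the diagonal variances here are illustrative assumptions.

```python
import numpy as np

def fuse_poses(odom_pose, odom_var, scan_pose, scan_var):
    """Fuse two pose estimates (x, y, theta) with per-axis variances.

    odom_pose: first predicted pose, from the odometry information
    scan_pose: second predicted pose, from matching the adjacent scans
    odom_var, scan_var: (3,) variances expressing trust in each estimate
    """
    odom_pose = np.asarray(odom_pose, dtype=float)
    innovation = np.asarray(scan_pose, dtype=float) - odom_pose
    innovation[2] = (innovation[2] + np.pi) % (2 * np.pi) - np.pi  # wrap angle
    gain = odom_var / (odom_var + scan_var)   # per-axis Kalman gain
    fused = odom_pose + gain * innovation
    fused[2] = (fused[2] + np.pi) % (2 * np.pi) - np.pi
    return fused

# Illustrative numbers only: trust the scan match more than odometry.
pose = fuse_poses([1.0, 2.0, 0.10], np.array([0.04, 0.04, 0.010]),
                  [1.1, 2.1, 0.12], np.array([0.01, 0.01, 0.005]))
```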
Optionally, after step S70 (if no new cluster category is detected, storing the current scan data and obtaining the robot's next-moment pose from the odometry information, the two adjacent frames of scan data, and the positioning algorithm), the method jumps to step S20 for the next positioning.
In one embodiment, as shown in FIG. 4, before reading the current retrieved image of the robot camera, the method further includes:

S10: constructing the hybrid visual laser map offline using the laser sensor and the robot camera, where the hybrid visual laser map is a map in which the visual features of the visual feature map correspond one-to-one with the pose information of the laser plane map.
The hybrid visual laser map includes the laser plane map, the visual feature map, and the one-to-one correspondence between the pose information in the laser plane map and the visual features in the visual feature map. Specifically, before the robot starts positioning, the laser plane map and the visual feature map are built offline, and the pose information of the laser plane map and the visual features of the visual feature map captured at the same moment are bundled and stored together, producing the hybrid visual laser map.
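One plausible representation of this bundled storage is a list of key-frame records, each pairing a Gist vector with the laser-map pose captured at the same moment; the field names and types below are illustrative assumptions.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KeyframeRecord:
    """One hybrid-map entry: the visual feature and the laser-map pose
    captured at the same moment, bundled together."""
    gist: np.ndarray   # Gist global descriptor of the key frame
    pose: tuple        # (x, y, theta) in the laser plane map
    frame_index: int   # position of the key frame in the sequence

@dataclass
class HybridMap:
    occupancy_grid: np.ndarray                      # the laser plane map
    keyframes: list = field(default_factory=list)   # the visual feature map

    def add(self, gist, pose, frame_index):
        # Store the same-moment pose and visual feature as one record,
        # preserving the one-to-one correspondence described above.
        self.keyframes.append(KeyframeRecord(gist, pose, frame_index))
```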
In one embodiment, as shown in FIG. 5, constructing the hybrid visual laser map offline using the laser sensor and the robot camera includes:
S11: acquiring the robot odometry information;

S12: controlling the laser sensor to scan the surroundings to obtain the current scan data while reading the key-frame images of the robot camera;

S13: initializing the robot's position and storing the current scan data, obtaining the robot's predicted position at the next moment from the odometry information, and applying a positioning algorithm to the two adjacent frames of scan data and the odometry information to obtain the robot's pose information;

S14: controlling the robot to move repeatedly and building the laser plane map from the scan data collected by the laser sensor;

S15: extracting the visual features of the key-frame images to obtain the visual feature map, and combining the visual features with the pose information to obtain the hybrid visual laser map in which the visual feature map and the laser plane map correspond one-to-one.
The two adjacent frames of scan data have the same meaning as in the embodiments above and are not repeated here. Specifically, during positioning the robot reads the current retrieved image of the robot camera, extracts its visual features, measures their similarity against the key-frame visual features of the visual feature map in the hybrid visual laser map, finds the top m most similar frames, clusters them, and matches the resulting categories against the cluster categories corresponding to the last retrieved image. When a new cluster category is detected, local feature matching is performed between the current retrieved image and the central key-frame image of each cluster category, yielding as many candidate current poses as there are categories. After local feature matching, the current scan data of the robot's laser sensor is read, the coincidence score between the current scan data and the scan data stored in the laser plane map of the hybrid visual laser map is computed, and the coincidence scores of the cluster categories are sorted; if the highest-scoring category is a new cluster category, the pose information corresponding to its central key-frame image is taken as the robot's current pose. Optionally, the laser plane map may also be obtained with a graph-optimization-based localization and mapping method.
In one embodiment, as shown in FIG. 6, extracting the visual features of the key-frame images includes:

S151: extracting and storing the visual features of the key-frame images with the Gist global descriptor.
Specifically, when the hybrid visual laser map is built offline, extracting and storing the visual features of the key-frame images captured by the robot camera with the Gist global descriptor reduces the required storage space; because the descriptor is global, it preserves the integrity of the visual features of the surroundings, which improves the efficiency and stability of later matching.
In one embodiment, as shown in FIG. 7, obtaining from the local feature matching result a number of candidate current poses equal to the number of categories includes:

S42: for each cluster category whose number of matched inliers exceeds a preset threshold, obtaining the robot's current pose with the PnP method.
The preset threshold is determined experimentally. The PnP (Perspective-n-Point) algorithm solves for the camera's extrinsic parameters from multiple 3D-2D point correspondences, with known or unknown camera intrinsics, by minimizing the reprojection error. Specifically, when the number of matched inliers between the visual features of the current retrieved image and those of a cluster category's central key-frame image exceeds the preset threshold, the transform between the camera pose of the current retrieved image and the camera pose of the similar key-frame image in the visual feature map is solved, giving the robot's current pose. In this case the current retrieved image matches the visual features in the hybrid visual laser map well, so the PnP solution is used to update the pose associated with that cluster category, further improving positioning accuracy.
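For illustration, a sketch of the PnP step using OpenCV's RANSAC variant, assuming the matched map features carry 3D coordinates and the camera intrinsic matrix is known; every input below is a placeholder.

```python
import cv2
import numpy as np

def pnp_pose(points_3d, points_2d, camera_matrix):
    """Solve the camera pose from 3D-2D correspondences by minimizing
    reprojection error (with RANSAC rejecting outlier matches).

    points_3d: (N, 3) matched feature positions from the map
    points_2d: (N, 2) corresponding pixels in the current retrieved image
    camera_matrix: (3, 3) intrinsic matrix, assumed known here
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, None)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return rot, tvec              # pose of the camera w.r.t. the map points
```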
In one embodiment, obtaining from the local feature matching result a number of candidate current poses equal to the number of categories further includes:

S43: for each cluster category whose number of matched inliers does not exceed the preset threshold, taking the pose of its corresponding central key-frame image as the robot's current pose.

Specifically, when the number of matched inliers for a cluster category does not exceed the preset threshold, the pose of the central key-frame image of that category is used as the robot's current pose. In this case the current retrieved image matches the hybrid visual laser map poorly, so the central key-frame pose of the category is adopted instead.
In one embodiment, as shown in FIG. 8, clustering the top m frames into multiple categories includes:
S31: clustering the indices of the top m frames into k categories with the k-means method, obtaining a ranking of the k cluster categories;

S32: replacing the center value of each cluster category with the median of that category's key-frame indices, and taking the number of frames falling within each category's range as a base, multiplied by the robot's maximum speed factor, as the category's new range;

S33: matching the k cluster categories with their new ranges against the k cluster categories corresponding to the last retrieved image.
The k-means clustering method is a representative prototype-based objective-function clustering method. It is a distance-based algorithm that uses Euclidean distance as the similarity measure: the closer two objects are, the more similar they are considered. The definitions of the last retrieved image and so on are the same as in the embodiments above and are not repeated here. The parameter k is determined experimentally and preset according to the application scenario.
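A sketch of steps S31 and S32 using scikit-learn's k-means on the retrieved frame indices; the values of k and the maximum speed factor, and the choice to rank categories by member count, are illustrative assumptions consistent with the experimentally preset parameters mentioned above.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_top_frames(top_m_indices, k=3, max_speed_factor=1.5):
    """Cluster the top-m frame indices into k categories; for each, take
    the median index as the center and widen the range by the number of
    member frames times the robot's maximum speed factor."""
    idx = np.asarray(top_m_indices, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(idx)
    categories = []
    for c in range(k):
        members = idx[labels == c].ravel()
        center = float(np.median(members))      # center -> median index
        half = len(members) * max_speed_factor  # new range of the category
        categories.append({"center": center,
                           "lo": center - half, "hi": center + half,
                           "size": len(members)})
    # One plausible ranking: larger categories are trusted more.
    return sorted(categories, key=lambda cat: cat["size"], reverse=True)
```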
Another aspect of the embodiments of the present invention provides a robot positioning device. As shown in FIG. 9, the robot positioning device includes:

an image acquisition and similarity measurement module 20, configured to read the current retrieved image of the robot camera, measure the similarity between the current retrieved image and the hybrid visual laser map, and find the top m most similar frames for the current retrieved image;

a cluster matching module 30, configured to cluster the top m frames into multiple categories, match each cluster category against the cluster categories corresponding to the last retrieved image, and determine whether a new cluster category is generated;

a local feature matching module 40, configured to, if a new cluster category is detected, perform local feature matching between the current retrieved image and the central key-frame image of each cluster category, and obtain from the matching result a number of candidate current poses equal to the number of categories;

a coincidence score module 50, configured to read the current scan data of the robot's laser sensor, obtain the coincidence score between the current scan data and the hybrid visual laser map, and sort the coincidence scores of the cluster categories;

a pose update and publishing module 60, configured to, if the highest-scoring category is a new cluster category, update the pose information corresponding to the central key-frame image of that category as the robot's current pose.
The explanations of the hybrid visual laser map, the parameter m, the cluster categories corresponding to the last retrieved image, and so on are the same as in the method embodiments above and are not repeated here.
Specifically, the image acquisition and similarity measurement module 20 reads the current retrieved image of the robot camera, measures its similarity against the hybrid visual laser map, and finds the top m most similar frames; the cluster matching module 30 clusters the top m frames into multiple categories and matches each category against the cluster categories corresponding to the last retrieved image; when a new cluster category is detected, the local feature matching module 40 performs local feature matching between the current retrieved image and the central key-frame image of each cluster category and obtains candidate current poses equal in number to the categories; the coincidence score module 50 reads the current scan data of the robot's laser sensor, obtains the coincidence score between the current scan data and the laser plane map, and sorts the coincidence scores of the categories; finally, when the highest-scoring category is a new cluster category, the pose update and publishing module 60 updates the pose information corresponding to that category's central key-frame image as the robot's current pose.
In one embodiment, as shown in FIG. 10, the robot positioning device further includes:

a pose publishing module 70, configured to, when no new cluster category is detected, store the current scan data and obtain the robot's next-moment pose from the robot odometry information, the two adjacent frames of scan data, and the positioning algorithm.

In one embodiment, as shown in FIG. 10, the robot positioning device further includes:

a hybrid visual laser map offline construction module 10, configured to construct the hybrid visual laser map offline using the laser sensor and the robot camera, where the hybrid visual laser map is a map in which the visual features of the visual feature map correspond one-to-one with the pose information of the laser plane map.
It should be noted that each unit module in the device embodiments of the present invention can implement the method steps of the method embodiments above, which are not repeated here.
A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the robot positioning method above are implemented.

A computer-readable storage medium stores a computer program that, when executed by a processor, implements the steps of the robot positioning method above.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these features are described; however, as long as a combination is not contradictory, it should be considered within the scope of this specification. Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be carried out by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the methods above. Such storage media include ROM/RAM, magnetic disks, optical discs, and the like.
The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present invention, all of which fall within its scope of protection. The protection scope of the present invention shall therefore be subject to the appended claims.

Claims (10)

1. A robot positioning method, comprising:

reading a current retrieved image of a robot camera, measuring the similarity between the current retrieved image and a hybrid visual laser map, and finding the top m most similar frames for the current retrieved image;

clustering the top m frames into multiple categories, matching each cluster category against the cluster categories corresponding to the last retrieved image, and determining whether a new cluster category is generated;

if a new cluster category is detected, performing local feature matching between the current retrieved image and the central key-frame image of each of the cluster categories corresponding to the current retrieved image, and obtaining from the local feature matching result a number of candidate current poses of the robot equal to the number of categories;

reading current scan data of a laser sensor of the robot, obtaining a coincidence score between the current scan data and the hybrid visual laser map, and sorting the coincidence scores of the cluster categories;

if the highest-scoring category is a new cluster category, updating the pose information corresponding to the central key-frame image of the new cluster category as the current pose of the robot.
2. The robot positioning method according to claim 1, wherein the step of determining whether a new cluster category is generated is followed by:

if no new cluster category is detected, storing the current scan data, obtaining the robot's pose at the next moment from the robot odometry information, two adjacent frames of scan data, and a positioning algorithm, and updating the current pose of the robot to the next-moment pose.
3. The robot positioning method according to claim 1 or 2, further comprising, before reading the current retrieved image of the robot camera:

constructing the hybrid visual laser map offline using the laser sensor and the robot camera, wherein the hybrid visual laser map is a map in which the visual features of a visual feature map correspond one-to-one with the pose information of a laser plane map.
4. The robot positioning method according to claim 3, wherein constructing the hybrid visual laser map offline using the laser sensor and the robot camera comprises:

acquiring robot odometry information;

controlling the laser sensor to scan the surroundings to obtain the current scan data while reading key-frame images of the robot camera;

initializing the position of the robot and storing the current scan data, obtaining a predicted position of the robot at the next moment from the robot odometry information, and applying a positioning algorithm to two adjacent frames of scan data and the robot odometry information to obtain pose information of the robot;

controlling the robot to move repeatedly and constructing the laser plane map from the scan data collected by the laser sensor;

while constructing the laser plane map, extracting visual features of the key-frame images to obtain the visual feature map, and combining the visual features with the pose information to obtain the hybrid visual laser map in which the visual feature map and the laser plane map correspond one-to-one.
5. The robot positioning method according to claim 4, wherein extracting the visual features of the key-frame images comprises:

extracting and storing the visual features of the key-frame images with a Gist global descriptor.
6. The robot positioning method according to claim 1, wherein obtaining from the local feature matching result the candidate current poses of the robot comprises:

for each cluster category whose number of matched inliers exceeds a preset threshold, obtaining the current pose of the robot with the PnP method.
7. The robot positioning method according to claim 1, wherein obtaining from the local feature matching result the candidate current poses of the robot further comprises:

for each cluster category whose number of matched inliers does not exceed the preset threshold, taking the pose of its corresponding central key-frame image as the current pose of the robot.
8. The robot positioning method according to claim 1, wherein clustering the top m frames into multiple categories comprises:

clustering the indices of the top m frames into k categories with a k-means clustering method, obtaining a ranking of the k cluster categories;

replacing the center value of each cluster category with the median of that category's key-frame indices, and taking the number of frames falling within the range of each cluster category as a base, multiplied by the maximum speed factor of the robot, as the new range of the category;

matching the k cluster categories with the new ranges against the k cluster categories corresponding to the last retrieved image.
9. A robot positioning device, comprising:

an image acquisition and similarity measurement module, configured to read a current retrieved image of a robot camera, measure the similarity between the current retrieved image and a hybrid visual laser map, and find the top m most similar frames for the current retrieved image;

a cluster matching module, configured to cluster the top m frames into multiple categories, match each cluster category against the cluster categories corresponding to the last retrieved image, and determine whether a new cluster category is generated;

a local feature matching module, configured to, if a new cluster category is detected, perform local feature matching between the current retrieved image and the central key-frame image of each of the cluster categories corresponding to the current retrieved image, and obtain from the local feature matching result a number of candidate current poses of the robot equal to the number of categories;

a coincidence score module, configured to read current scan data of a laser sensor of the robot, obtain a coincidence score between the current scan data and the hybrid visual laser map, and sort the coincidence scores of the cluster categories;

a pose update and publishing module, configured to, if the highest-scoring category is a new cluster category, update the pose information corresponding to the central key-frame image of the new cluster category as the current pose of the robot.
10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2018/077582 2018-01-16 2018-02-28 Robot positioning method and device WO2019140745A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810041205.2A CN108256574B (en) 2018-01-16 2018-01-16 Robot positioning method and device
CN201810041205.2 2018-01-16

Publications (1)

Publication Number Publication Date
WO2019140745A1 true WO2019140745A1 (en) 2019-07-25

Family

ID=62741434

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077582 WO2019140745A1 (en) 2018-01-16 2018-02-28 Robot positioning method and device

Country Status (2)

Country Link
CN (1) CN108256574B (en)
WO (1) WO2019140745A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020014864A1 (en) * 2018-07-17 2020-01-23 深圳市大疆创新科技有限公司 Pose determination method and device, and computer readable storage medium
KR102392100B1 (en) * 2018-07-19 2022-04-27 우이시 테크놀로지스 (베이징) 리미티드. Methods, devices, systems and storage media for storing and loading visual localization maps
CN111209353B (en) 2018-11-21 2024-06-14 驭势科技(北京)有限公司 Visual positioning map loading method, device, system and storage medium
CN111381589A (en) * 2018-12-29 2020-07-07 沈阳新松机器人自动化股份有限公司 Robot path planning method
CN110686687B (en) * 2019-10-31 2021-11-09 珠海市一微半导体有限公司 Method for constructing map by visual robot, robot and chip
CN111223145A (en) * 2020-01-03 2020-06-02 上海有个机器人有限公司 Data processing method, system, service device and storage medium thereof
CN111337943B (en) * 2020-02-26 2022-04-05 同济大学 Mobile robot positioning method based on visual guidance laser repositioning
CN112596064B (en) * 2020-11-30 2024-03-08 中科院软件研究所南京软件技术研究院 Laser and vision integrated global positioning method for indoor robot
CN113031588B (en) * 2021-02-02 2023-11-07 广东柔乐电器有限公司 Mall robot navigation system
CN113011359B (en) * 2021-03-26 2023-10-24 浙江大学 Method for simultaneously detecting plane structure and generating plane description based on image and application
CN113010724A (en) * 2021-04-29 2021-06-22 山东新一代信息产业技术研究院有限公司 Robot map selection method and system based on visual feature point matching
CN113269803B (en) * 2021-06-09 2023-01-13 中国科学院自动化研究所 Scanning positioning method, system and equipment based on 2D laser and depth image fusion
CN117115238B (en) * 2023-04-12 2024-06-25 荣耀终端有限公司 Pose determining method, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100040279A1 (en) * 2008-08-12 2010-02-18 Samsung Electronics Co., Ltd Method and apparatus to build 3-dimensional grid map and method and apparatus to control automatic traveling apparatus using the same
US20140350839A1 (en) * 2013-05-23 2014-11-27 Irobot Corporation Simultaneous Localization And Mapping For A Mobile Robot
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
CN106153048A (en) * 2016-08-11 2016-11-23 广东技术师范学院 A kind of robot chamber inner position based on multisensor and Mapping System
CN106940186A (en) * 2017-02-16 2017-07-11 华中科技大学 A kind of robot autonomous localization and air navigation aid and system
CN107357286A (en) * 2016-05-09 2017-11-17 两只蚂蚁公司 Vision positioning guider and its method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7043055B1 (en) * 1999-10-29 2006-05-09 Cognex Corporation Method and apparatus for locating objects using universal alignment targets
CN101920498A (en) * 2009-06-16 2010-12-22 泰怡凯电器(苏州)有限公司 Device for realizing simultaneous positioning and map building of indoor service robot and robot
CN105866782B (en) * 2016-04-04 2018-08-17 上海大学 A kind of moving object detection system and method based on laser radar

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220198688A1 (en) * 2019-04-17 2022-06-23 Megvii (Beijing) Technology Co., Ltd. Laser coarse registration method, device, mobile terminal and storage medium
CN112393719A (en) * 2019-08-12 2021-02-23 科沃斯商用机器人有限公司 Grid semantic map generation method and device and storage equipment
CN112219087A (en) * 2019-08-30 2021-01-12 深圳市大疆创新科技有限公司 Pose prediction method, map construction method, movable platform and storage medium
CN110673608A (en) * 2019-09-26 2020-01-10 福建首松智能科技有限公司 Robot navigation method
CN110728721A (en) * 2019-10-21 2020-01-24 北京百度网讯科技有限公司 Method, device and equipment for acquiring external parameters
CN110728721B (en) * 2019-10-21 2022-11-01 北京百度网讯科技有限公司 Method, device and equipment for acquiring external parameters
CN111060888A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 Mobile robot repositioning method fusing ICP and likelihood domain model
CN111161334A (en) * 2019-12-31 2020-05-15 南通大学 Semantic map construction method based on deep learning
CN111222514A (en) * 2019-12-31 2020-06-02 西安航天华迅科技有限公司 Local map optimization method based on visual positioning
CN111060888B (en) * 2019-12-31 2023-04-07 芜湖哈特机器人产业技术研究院有限公司 Mobile robot repositioning method fusing ICP and likelihood domain model
CN111222514B (en) * 2019-12-31 2023-06-27 上海星思半导体有限责任公司 Local map optimization method based on visual positioning
CN111275763A (en) * 2020-01-20 2020-06-12 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN111275763B (en) * 2020-01-20 2023-10-13 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN113256715B (en) * 2020-02-12 2024-04-05 北京京东乾石科技有限公司 Positioning method and device for robot
CN113256715A (en) * 2020-02-12 2021-08-13 北京京东乾石科技有限公司 Robot positioning method and device
CN111444853B (en) * 2020-03-27 2023-04-07 长安大学 Loop detection method of visual SLAM
CN111444853A (en) * 2020-03-27 2020-07-24 长安大学 Loop detection method of visual S L AM
CN111461141B (en) * 2020-03-30 2023-08-29 歌尔科技有限公司 Equipment pose calculating method and device
CN111461141A (en) * 2020-03-30 2020-07-28 歌尔科技有限公司 Equipment pose calculation method device and equipment
CN111538855B (en) * 2020-04-29 2024-03-08 浙江商汤科技开发有限公司 Visual positioning method and device, electronic equipment and storage medium
CN111538855A (en) * 2020-04-29 2020-08-14 浙江商汤科技开发有限公司 Visual positioning method and device, electronic equipment and storage medium
CN113739785A (en) * 2020-05-29 2021-12-03 杭州海康机器人技术有限公司 Robot positioning method and device and storage medium
CN111780744A (en) * 2020-06-24 2020-10-16 浙江大华技术股份有限公司 Mobile robot hybrid navigation method, equipment and storage device
CN111780744B (en) * 2020-06-24 2023-12-29 浙江华睿科技股份有限公司 Mobile robot hybrid navigation method, equipment and storage device
CN111862214B (en) * 2020-07-29 2023-08-25 上海高仙自动化科技发展有限公司 Computer equipment positioning method, device, computer equipment and storage medium
CN111862214A (en) * 2020-07-29 2020-10-30 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium
CN111986313A (en) * 2020-08-21 2020-11-24 浙江商汤科技开发有限公司 Loop detection method and device, electronic equipment and storage medium
CN114199243B (en) * 2020-09-18 2024-05-24 浙江舜宇智能光学技术有限公司 Pose estimation and motion planning method and device for robot and robot
CN114199243A (en) * 2020-09-18 2022-03-18 浙江舜宇智能光学技术有限公司 Pose estimation and motion planning method and device for robot and robot
CN112162294A (en) * 2020-10-10 2021-01-01 北京布科思科技有限公司 Robot structure detection method based on laser sensor
CN112162294B (en) * 2020-10-10 2023-12-15 北京布科思科技有限公司 Robot structure detection method based on laser sensor
CN114415648A (en) * 2020-10-10 2022-04-29 杭州海康威视数字技术股份有限公司 Method, device and equipment for determining ground plane
CN113189613A (en) * 2021-01-25 2021-07-30 广东工业大学 Robot positioning method based on particle filtering
CN112965076A (en) * 2021-01-28 2021-06-15 上海思岚科技有限公司 Multi-radar positioning system and method for robot
CN112965076B (en) * 2021-01-28 2024-05-24 上海思岚科技有限公司 Multi-radar positioning system and method for robot
CN112966616A (en) * 2021-03-11 2021-06-15 深圳市无限动力发展有限公司 Visual repositioning method, device, equipment and medium based on clustering
CN113011517A (en) * 2021-03-30 2021-06-22 上海商汤临港智能科技有限公司 Positioning result detection method and device, electronic equipment and storage medium
CN113777615B (en) * 2021-07-19 2024-03-29 派特纳(上海)机器人科技有限公司 Positioning method and system of indoor robot and cleaning robot
CN113777615A (en) * 2021-07-19 2021-12-10 派特纳(上海)机器人科技有限公司 Positioning method and system of indoor robot and cleaning robot
CN116680431A (en) * 2022-11-29 2023-09-01 荣耀终端有限公司 Visual positioning method, electronic equipment, medium and product

Also Published As

Publication number Publication date
CN108256574A (en) 2018-07-06
CN108256574B (en) 2020-08-11

Similar Documents

Publication Publication Date Title
WO2019140745A1 (en) Robot positioning method and device
CN111652934B (en) Positioning method, map construction method, device, equipment and storage medium
Zhu et al. Gosmatch: Graph-of-semantics matching for detecting loop closures in 3d lidar data
Rios-Cabrera et al. Discriminatively trained templates for 3d object detection: A real time scalable approach
Atanasov et al. Nonmyopic view planning for active object classification and pose estimation
Himstedt et al. Large scale place recognition in 2D LIDAR scans using geometrical landmark relations
JP5800494B2 (en) Specific area selection device, specific area selection method, and program
Su et al. Global localization of a mobile robot using lidar and visual features
JP2012033022A (en) Change area detection device and method in space
CN102236794A (en) Recognition and pose determination of 3D objects in 3D scenes
US11501462B2 (en) Multi-view three-dimensional positioning
CN104615998B (en) A kind of vehicle retrieval method based on various visual angles
Atanasov et al. Hypothesis testing framework for active object detection
Krajník et al. Image features and seasons revisited
CN110146080B (en) SLAM loop detection method and device based on mobile robot
GB2599947A (en) Visual-inertial localisation in an existing map
WO2023173950A1 (en) Obstacle detection method, mobile robot, and machine readable storage medium
Vieriu et al. Facial expression recognition under a wide range of head poses
CN111239763A (en) Object positioning method and device, storage medium and processor
Sahin et al. Recovering 6D object pose: A review and multi-modal analysis
WO2022079258A1 (en) Visual-inertial localisation in an existing map
Zhou et al. Robust global localization by using global visual features and range finders data
Gupta et al. Effectively Detecting Loop Closures using Point Cloud Density Maps
KR20220055072A (en) Method for indoor localization using deep learning
Chai et al. ORB-SHOT SLAM: trajectory correction by 3D loop closing based on bag-of-visual-words (BoVW) model for RGB-D visual SLAM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18901185; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 07.12.2020))
122 Ep: pct application non-entry in european phase (Ref document number: 18901185; Country of ref document: EP; Kind code of ref document: A1)