CN113552586B - Mobile robot positioning method and mobile robot - Google Patents
- Publication number: CN113552586B (application CN202010269105A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G01S17/89 — Lidar systems specially adapted for specific applications, for mapping or imaging
Abstract
The invention discloses a positioning method for a mobile robot. The method collects the scan frame of the current radar scan, extracts feature information from the current scan frame, and matches the extracted feature information against a feature library stored during the mobile robot's mapping process. The feature library stores, for each scan frame acquired during mapping, the feature information extracted from it together with the local position information of that scan frame; the type of feature information extracted during positioning is the same as the type stored in the library. A matched scan frame is determined from the matched feature information; the local position information of that scan frame is then determined, and the current global position information of the mobile robot is obtained based on the local position information. The invention avoids a global search-and-match of the mobile robot over the global map during positioning, shortens positioning time, improves positioning accuracy, and reduces the complexity of existing positioning algorithms.
Description
Technical Field
The invention relates to the field of machine vision, in particular to a positioning method of a mobile robot.
Background
With the development of simultaneous localization and mapping (SLAM) technology, more and more mobile robots with mapping and localization capabilities, such as cleaning robots, are being deployed. Such mobile robots can sense the surrounding environment and build a map of it based on the sensed information. However, as the scale of the constructed map grows, it becomes important for a mobile robot to recover its own position in the map after a "kidnapping" event.
Taking a mobile robot using a lidar as an example, the common approach is to match the radar scan data acquired after the robot is "kidnapped" against the map and take the position with the highest matching score as the repositioned location. Such algorithms typically require a full global search, so positioning takes a long time.
For example, a geometry-based global search algorithm can be used to reposition the mobile robot after "kidnapping": a response function over geometric features is evaluated, and the position with the maximum response value is taken as the position of the mobile robot. This algorithm extracts global geometric features, and the geometric feature extraction takes a long time.
As another example, the "kidnapping" problem of mobile robots can be solved with particle filtering over obstacle features, determining the optimal position as the robot's current position over multiple acquisitions. This algorithm is sensitive to the number of particles: a large particle count requires a large amount of memory to store the particles, which is a heavy cost for embedded applications.
For another example, a repositioning method combining a radar and a camera selects candidates from the lidar repositioning results and determines an optimized positioning result by image matching. This method uses two sensors, the radar and the camera, so its cost is higher, and the image algorithm is affected by illumination conditions, so positioning failures occur easily.
Disclosure of Invention
The invention provides a positioning method of a mobile robot, which is used for reducing the time required by positioning.
The invention provides a positioning method of a mobile robot, which comprises the following steps of,
collecting the scanning frame of the current radar scanning, extracting the characteristic information of the current scanning frame,
matching the extracted scanning frame characteristic information with characteristic information in a characteristic library stored in the mobile robot mapping process; the feature library stores feature information extracted based on the acquired scanning frames and local position information of the scanning frames in the process of mapping, and the type of the extracted scanning frame feature information is the same as the type of the stored feature information;
determining a matched scanning frame according to the matched characteristic information;
determining local position information of the scanning frame according to the matched scanning frame,
and obtaining the current global position information of the mobile robot based on the local position information.
Preferably, the feature library is obtained by the mobile robot during the simultaneous localization and mapping (SLAM) process through the following steps:
acquiring a scan frame,
local position information of the mobile robot is determined from the scan frame,
judging whether the characteristic information of the scanning frame needs to be extracted, if so, extracting the characteristic information of the scanning frame, storing the characteristic information, and storing the local position information determined based on the scanning frame;
judging whether the scanning frame is a key frame or not, if so, constructing map information based on the key frame, and returning to the step of collecting the scanning frame until the map construction is completed.
Preferably, the determining whether the feature information of the scan frame needs to be extracted includes,
judging whether the position deviation between the current local position and the last local position is larger than a set deviation threshold value, judging whether the current position does not store the characteristic information of the scanning frame, if so, judging that the characteristic information of the scanning frame needs to be extracted, and if not, judging that the characteristic information of the scanning frame does not need to be extracted.
Preferably, the determining the local position information of the mobile robot according to the scanning frame further includes recording the local position information with grid coordinates to obtain grid information;
the extracting feature information of the scan frame is stored, and local position information determined based on the scan frame is stored, further comprising,
locally storing the characteristic information of the scanning frame and grid information corresponding to the local position information in the mobile robot, and marking the grids stored with the characteristic information;
and determining whether the characteristic information of the scanning frame is not stored in the current position according to the mark of the grid.
Preferably, the determining the matched scan frame according to the matched characteristic information includes,
taking the scanning frame corresponding to the feature information meeting the matching condition in the feature library as a candidate frame,
determining the best candidate frame from the candidate frame set according to a certain strategy;
and determining local position information of the scanning frame according to the matched scanning frame, wherein the local position information of the best candidate frame is taken as an initial value.
Preferably, the method further comprises loading global map information, and a feature library when the mobile robot triggers repositioning;
the mobile robot is positioned by adopting a laser radar; the characteristic information is the scanning area of a scanning frame;
the method comprises the steps of determining local position information of the mobile robot according to a scanning frame, wherein the step of matching the scanning frame with a currently established map, and determining the current local position information of the mobile robot if the matching is successful;
the obtaining global position information of the mobile robot based on the local position information comprises,
and matching the local position information with the global map to obtain the current global position information of the mobile robot.
Preferably, the scan area of the scan frame is: the average of the scan areas of all scan points in the frame, where the scan area of a scan point is the area of a circle whose radius is the distance between the scan point and the radar origin.
The present invention also provides a method of constructing a map by a mobile robot, the method comprising,
a scan frame of a radar scan is acquired,
local position information of the mobile robot is determined from the scan frame,
extracting characteristic information of the scanning frame, storing the characteristic information, and storing local position information determined based on the scanning frame;
judging whether the scanning frame is a key frame or not, if so, constructing map information based on the key frame, and returning to the step of collecting the scanning frame until the map construction is completed.
The invention provides a mobile robot, which comprises a memory and a processor,
the memory stores instructions executable by the processor to cause the processor to perform the steps of the positioning method and/or the steps of the map construction method described above.
The present invention provides a computer readable storage medium having stored therein a computer program which when executed by a processor implements the steps of the positioning method and/or the steps of the map construction method.
According to the positioning method of the mobile robot, the characteristics of the radar scanning frames acquired during the construction of the map are extracted when the map of the mobile robot is constructed, and the extracted scanning frame characteristics and the local position information determined by the scanning frames are stored, so that the characteristics of the current scanning frames are matched through the stored characteristics after being extracted in the positioning process, and the positions in the global map information are determined after the local map information is obtained, the global searching and matching of the mobile robot to the global map in the positioning process is avoided, the positioning time is shortened, the positioning precision is improved, the algorithm complexity of the existing positioning is reduced, the algorithm implementation is simple, the cost is low, and the expandability and the compatibility are good.
Drawings
Fig. 1 is an example of a map constructed using lidar sensors.
Fig. 2 shows an example of a current scan frame.
Fig. 3 is a schematic flow chart of a mobile robot performing map construction by using SLAM technology.
Fig. 4 is a schematic flow chart of another embodiment of the map construction of the mobile robot by using SLAM technology.
Fig. 5 is a schematic diagram of a grid map of an established map.
Fig. 6 is a schematic flow chart of relocation.
Detailed Description
In order to make the objects, technical means and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings.
According to the repositioning method provided by the invention, radar scan data of the current position is obtained while the mobile robot constructs the map; feature information of the current radar scan data is extracted and stored so that it can be looked up during repositioning. During repositioning, the features of the current radar scan data are matched against the stored feature information to obtain candidate position information, and the optimal position information is selected as the initial repositioning value.
Hereinafter, a relocation method according to an embodiment of the present invention will be described with reference to a mobile robot using a lidar as an example.
In the positioning process of a mobile robot using a lidar, each lidar scan produces a scan frame, so surrounding environment data is collected continuously. Compared with the global map, a single lidar scan frame only yields local information; the robot's position in the global map is obtained by matching this local information against the global map information. Referring to fig. 1, fig. 1 is an example of a map constructed using lidar sensors.
Referring to fig. 3, fig. 3 is a schematic flow chart of a mobile robot performing map construction by using SLAM technology. In the process, the mobile robot builds map information of the mobile robot, and specifically comprises the following steps:
step 301, acquiring radar data to obtain a scanning frame of current radar scanning; as shown in fig. 2, fig. 2 shows an example of a current scan frame, which is two-dimensional data, including scan points therein.
Step 302, matching the current scan frame with the currently established map; if the matching succeeds, obtain the local position of the mobile robot (the current position information); if it fails, return to step 301.
Step 303, judging whether the position deviation between the current position and the last position is larger than the set deviation threshold:
if yes, further judge whether feature information of a scan frame has already been stored for the current position; if it has, execute step 304; if not, extract the feature information of the current scan frame, store the extracted feature information, and store the current position corresponding to the feature information;
if the position deviation is not greater than the set deviation threshold, perform no feature extraction on the scan frame and execute step 304.
Through the above process, it is determined whether the feature information of the current scan frame needs to be extracted.
In this step, low-dimensional features are used to reduce the amount of computation during feature matching, thereby reducing the positioning time. For a two-dimensional lidar scan frame, candidate features include the radar scan area of the scan frame, a histogram of the first-order reciprocal, and so on. Taking the radar scan area as an example, the scan area of any scan point i in a scan frame is:

    s_i = π · r_i²

where r_i is the distance between scan point i and the lidar origin.
The radar scan area of the scan frame is the average of the scan areas of all scan points, expressed mathematically as:

    S = (1/n) · Σ_{i=1..n} π · r_i²

where n is the total number of points included in the scan frame, i.e. the total number of scan points obtained in the current scan.
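As a sketch of the feature computation above (the function name and the list-of-ranges input format are illustrative assumptions, not taken from the patent), the frame feature S can be computed from the measured ranges as follows:

```python
import math

def scan_frame_feature(ranges):
    """Radar scan area of a frame: the mean over all scan points of
    pi * r_i**2, where r_i is the range of scan point i from the
    lidar origin (the formula for S above)."""
    return sum(math.pi * r * r for r in ranges) / len(ranges)
```

Because the feature is a single scalar per frame, matching against the feature library reduces to comparing numbers, which is far cheaper than scan-to-map matching.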
In this step, the positional shift may also be determined based on the inter-frame relative positional shift between the scan frames.
Step 304, judging whether the current scanning frame is a key frame, if yes, determining the pose corresponding to the key frame based on the current scanning frame, converting to the pose under the world coordinate system, thereby obtaining newly added map information, and updating the map.
Step 305, judging whether the mapping is completed, if not, returning to step 301, otherwise, storing map data.
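Steps 301 through 305 can be sketched as a minimal feature-library builder. The data layout, the threshold value, and the use of Euclidean distance for the position deviation are assumptions for illustration; the poses are assumed to come from the scan-to-map matching of step 302:

```python
import math

def scan_frame_feature(ranges):
    # Frame feature: mean of pi * r_i**2 over all scan points (see S above).
    return sum(math.pi * r * r for r in ranges) / len(ranges)

def build_feature_library(scans_with_poses, deviation_threshold=0.5):
    """Sketch of the mapping-time feature extraction (steps 301-305):
    store a (feature, pose) entry only when the robot has moved more
    than `deviation_threshold` since the last stored pose (step 303)."""
    library = []
    last_pose = None
    for ranges, pose in scans_with_poses:
        if last_pose is not None and math.hypot(
                pose[0] - last_pose[0], pose[1] - last_pose[1]) <= deviation_threshold:
            continue  # step 303: position deviation too small, skip extraction
        library.append((scan_frame_feature(ranges), pose))
        last_pose = pose
    return library
```

Because extraction is skipped for small movements, the library stays compact even during long mapping runs.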
Preferably, the mapping process of the mobile robot may be performed at the time of initial construction.
Referring to fig. 4, fig. 4 is a schematic flow chart of another embodiment of the map construction of the mobile robot by using SLAM technology. In order to improve the positioning accuracy, the currently constructed map is rasterized during map construction. The method comprises the following specific steps:
step 401, acquiring radar data to obtain a scanning frame of current radar scanning; this step is the same as step 301;
step 402, matching the current scanning frame with the current established grid map, if the matching is successful, obtaining the local position of the current mobile robot, recording the position by the grid where the position is located, if the matching is unsuccessful, executing step 401,
in the step, the grid map is divided into different blocks according to the set grid size, and the positioning information obtained by successful current matching is converted into coordinates of the grid. As shown in fig. 5, fig. 5 is a schematic diagram of a grid map of an established map. The grid size may be determined based on the required positioning accuracy.
Step 403, determining whether feature information of the current scan frame needs to be extracted; if so, extract the features of the current scan frame, store the extracted features together with the current grid information locally on the mobile robot, and mark the grid as having feature information stored, for example by setting the grid value in the map grid to 1. Fig. 5 shows several grids whose value is set to 1.
In this step, the basis for judging whether the feature information of the current scanning frame needs to be extracted is: if the position deviation between the current position and the last position is larger than a set threshold value and the grid where the current position is located does not have an identification for recording that the grid has stored the extracted characteristic information, the characteristic information of the current scanning frame is judged to be required to be extracted.
The specific extracted feature information is the same as in step 303.
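The grid conversion of step 402 and the marking scheme of steps 403–404 can be sketched as follows; the 5 cm resolution and all names are illustrative assumptions chosen from the required accuracy, not values from the patent:

```python
def pose_to_grid(x, y, resolution=0.05):
    # Step 402: convert a metric position (meters) to grid coordinates.
    # The 5 cm resolution is an assumed example.
    return (int(x // resolution), int(y // resolution))

class FeatureGrid:
    """Step 403's marking scheme: remember which grid cells already have
    scan-frame feature information stored, so extraction is skipped when
    the robot revisits a marked cell."""
    def __init__(self):
        self._marked = set()

    def should_extract(self, cell):
        # Extract only if this cell has no stored feature information yet.
        return cell not in self._marked

    def mark(self, cell):
        # Equivalent to setting the grid value to 1 in the map grid.
        self._marked.add(cell)
```

Marking cells rather than exact positions makes the "already stored here?" check in step 403 a constant-time set lookup.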
Step 404, judging whether the current scanning frame is a key frame, if yes, determining the pose corresponding to the key frame based on the current scanning frame, converting to the pose under the world coordinate system, recording the position information in the pose by using grid coordinates, thereby obtaining newly added map grid information, and updating the current map.
Step 405, judging whether the mapping is completed, if not, returning to step 401, otherwise, storing map data; thereby obtaining a grid map and a feature library for storing the extracted scanned frame feature information.
In this embodiment, the map is constructed and the scan-frame feature information is extracted simultaneously while the mobile robot moves, so the complexity of the mapping process is not increased.
Referring to fig. 6, fig. 6 is a schematic flow chart of relocation. When relocation is triggered, the following steps are performed:
step 601, loading map data stored in the mobile robot local and feature library data,
step 602, collecting current scanning frame, extracting characteristic information of current scanning frame,
in this step, the type of feature information extracted is the same as that extracted when the image is created, for example, the scanning area of the scanning frame is used as the feature information when the image is created, and the scanning area of the scanning frame is also used as the feature information when the image is repositioned.
Step 603, matching the feature information of the current scanning frame with the feature information in the feature library, and taking the scanning frame corresponding to the feature information meeting the matching condition in the feature library as a candidate frame;
in this step, the matching condition may be described in terms of similarity, for example, setting the similarity within a set threshold value range;
in this step, since the feature information of the scan frame and the local position information corresponding to the scan frame, for example, the grid coordinates in the grid map are stored in the feature library, the current local position information of the mobile robot can be determined by matching the candidate frame.
In order to improve the accuracy and speed of positioning, several candidate frames may be kept at this step, i.e. multiple candidate frames are retained during selection, yielding the several positions at which the mobile robot may be located.
Step 604, determining the best candidate frame from the candidate frame set according to a certain strategy, and matching the local position information of the best candidate frame as an initial value with the global map to obtain the current position.
In this step, for the local position corresponding to each candidate frame in the candidate set, the correlation between the scan frame and the map is calculated and used as an evaluation index, and the candidate frame with the best evaluation index is selected as the best candidate frame. The pose of the best candidate frame is then used as an initial value for matching against the global map, yielding the current global pose information of the mobile robot.
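Steps 603 and 604 can be sketched as follows. The relative-difference similarity measure, the tolerance value, and the caller-supplied scoring function are illustrative assumptions; in the patent the score is the scan-to-map correlation:

```python
def find_candidates(current_feature, library, tolerance=0.05):
    # Step 603: a library frame is a candidate when its feature is similar
    # to the current frame's feature. `library` holds (feature, pose) pairs
    # as stored during mapping; relative difference is an assumed metric.
    limit = tolerance * max(abs(current_feature), 1e-9)
    return [pose for feature, pose in library
            if abs(feature - current_feature) <= limit]

def best_candidate(candidates, score_fn):
    # Step 604: evaluate each candidate pose with a caller-supplied score
    # (e.g. scan-to-map correlation) and keep the best as the initial value
    # for the final global-map match.
    return max(candidates, key=score_fn)
```

Because the scalar-feature comparison in `find_candidates` prunes most of the library, the expensive correlation score only needs to be computed for a handful of poses.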
In order to increase the robustness of positioning, the pose of the partial candidate frame with better evaluation index can be preferably selected as an initial value.
In the repositioning process of the embodiment of the invention, the initial position of the mobile robot is obtained by matching the current scanning frame with the characteristic information in the characteristic library, and the characteristic information is low-dimensional information, so that the matching efficiency is high, the matching time is greatly shortened, the time consumption of searching and matching the global map is avoided, and the repositioning efficiency is improved.
It should be appreciated that although fig. 6 illustrates the process with repositioning as an example, the practice is not limited to repositioning; it is applicable to any positioning process of a mobile robot.
The mobile robot comprises a memory and a processor, wherein the memory stores instructions executable by the processor, and the instructions are executed by the processor so that the processor executes the steps of the positioning method.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The embodiment of the invention also provides a computer readable storage medium, wherein the storage medium stores a computer program, and the computer program realizes the following steps when being executed by a processor:
collecting current radar scanning frame, extracting characteristic information of the current scanning frame,
matching the extracted scanning frame characteristic information with characteristic information in a characteristic library stored in the mobile robot mapping process; the feature library stores feature information extracted based on the acquired scanning frames and local position information of the scanning frames in the process of mapping, and the type of the extracted scanning frame feature information is the same as the type of the stored feature information;
determining a matched scanning frame according to the matched characteristic information;
determining local position information of the scanning frame according to the matched scanning frame,
and obtaining the current global position information of the mobile robot based on the local position information.
For the apparatus/network side device/storage medium embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points are referred to in the description of the method embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing description of the preferred embodiments is not intended to limit the invention; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the invention shall fall within its scope.
Claims (9)
1. A positioning method of mobile robot is characterized by comprising collecting the scanning frame of current radar scanning, extracting the characteristic information of the current scanning frame,
matching the extracted scanning frame characteristic information with characteristic information in a characteristic library stored in the mobile robot mapping process; the feature library stores feature information extracted based on the acquired scanning frames and local position information of the scanning frames in the process of mapping, and the type of the extracted scanning frame feature information is the same as the type of the stored feature information;
determining a matched scanning frame according to the matched characteristic information;
determining local position information of the scanning frame according to the matched scanning frame,
obtaining current global position information of the mobile robot based on the local position information;
wherein,
the characteristic information is a scan area of a scan frame,
the scanning area of the scanning frame is as follows: the average value of the scanning areas of all scanning points in the scanning frame, wherein the scanning area of the scanning points is a circular area with the distance between the scanning points and the radar origin as a radius;
the average value of the scanning areas of all scanning points in the scanning frame is calculated as follows:
for any scanning point in any scanning frame, calculating the ratio of the scanning area of the scanning point to the total point included in the scanning frame,
and accumulating all the ratios calculated for the scanning frame to obtain the average value of the scanning areas of all the scanning points in the scanning frame.
2. The positioning method as claimed in claim 1, wherein the feature library is obtained by the mobile robot during the simultaneous localization and mapping (SLAM) process through the following steps:
acquiring a scan frame,
local position information of the mobile robot is determined from the scan frame,
judging whether the characteristic information of the scanning frame needs to be extracted, if so, extracting the characteristic information of the scanning frame, storing the characteristic information, and storing the local position information determined based on the scanning frame;
judging whether the scanning frame is a key frame or not, if so, constructing map information based on the key frame, and returning to the step of collecting the scanning frame until the map construction is completed.
3. The positioning method of claim 2 wherein said determining whether the feature information of the scan frame needs to be extracted comprises,
judging whether the position deviation between the current local position and the last local position is larger than a set deviation threshold value, judging whether the current position does not store the characteristic information of the scanning frame, if so, judging that the characteristic information of the scanning frame needs to be extracted, and if not, judging that the characteristic information of the scanning frame does not need to be extracted.
4. The positioning method of claim 3, wherein determining local position information of the mobile robot from the scan frame further comprises recording the local position information in grid coordinates to obtain grid information;
the extracting and storing feature information of the scan frame, and storing local position information determined based on the scan frame, further comprises,
storing, locally in the mobile robot, the feature information of the scan frame and the grid information corresponding to the local position information, and marking the grid cells in which feature information has been stored;
and determining, from the grid marks, whether feature information of a scan frame has been stored for the current position.
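One way to realize the grid marking of this claim is to key the local store by integer grid coordinates, so that a cell's presence in the store is itself the mark. The 0.05 resolution and all names below are assumptions for illustration:

```python
def to_grid(pos, resolution=0.05):
    # map a metric (x, y) position to integer grid coordinates
    return (int(pos[0] // resolution), int(pos[1] // resolution))

class FeatureStore:
    """Local store keyed by grid cell; a cell's presence in the dict
    serves as the 'mark' that feature information is already saved there."""
    def __init__(self):
        self.cells = {}

    def save(self, pos, feature_info):
        self.cells[to_grid(pos)] = (feature_info, pos)

    def has_features(self, pos):
        return to_grid(pos) in self.cells
```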
5. The positioning method of claim 4, wherein said determining a matched scan frame based on the matched feature information comprises,
taking the scan frames whose feature information in the feature library meets the matching condition as candidate frames,
determining the best candidate frame from the candidate frame set according to a set strategy;
and said determining local position information of the scan frame according to the matched scan frame takes the local position information of the best candidate frame as an initial value.
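The claim leaves the selection strategy open. A plausible concrete choice, shown purely for illustration, is to admit library entries whose feature value lies within a tolerance of the query's and pick the closest one:

```python
def best_candidate(query_feature, feature_library, tol=0.1):
    """feature_library: list of (feature_value, local_position) pairs.
    Returns the local position of the best candidate frame, to be used
    as the initial value for matching, or None if nothing qualifies."""
    candidates = [(abs(feat - query_feature), pos)
                  for feat, pos in feature_library
                  if abs(feat - query_feature) <= tol]
    return min(candidates)[1] if candidates else None
```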
6. The positioning method of claim 5, further comprising, when the mobile robot triggers a relocation, loading the global map information and the feature library;
wherein the mobile robot is positioned by means of a lidar;
the determining local position information of the mobile robot from the scan frame comprises,
matching the scan frame with the currently established map, and if the matching succeeds, determining the current local position information of the mobile robot;
and the obtaining global position information of the mobile robot based on the local position information comprises,
matching the local position information with the global map to obtain the current global position information of the mobile robot.
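The final local-to-global step of this claim amounts to a rigid transform. The sketch below assumes, for illustration only, that the local map's origin sits at `map_origin` with heading `map_heading` in the global frame:

```python
import math

def local_to_global(local_pos, map_origin, map_heading):
    """Re-express a local (x, y) pose in the global map frame by
    rotating it by the local map's heading and translating by its origin."""
    c, s = math.cos(map_heading), math.sin(map_heading)
    gx = map_origin[0] + c * local_pos[0] - s * local_pos[1]
    gy = map_origin[1] + s * local_pos[0] + c * local_pos[1]
    return (gx, gy)
```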
7. A method for constructing a map by a mobile robot, the method comprising,
acquiring a scan frame from a radar scan,
determining local position information of the mobile robot from the scan frame,
extracting and storing feature information of the scan frame, and storing the local position information determined based on the scan frame, to construct a feature library;
judging whether the scan frame is a key frame, and if so, constructing map information based on the key frame; and returning to the step of acquiring a scan frame until the map construction is completed;
wherein,
the feature information is a scan area of the scan frame,
the scan area of the scan frame is the average of the scan areas of all scan points in the scan frame, wherein the scan area of a scan point is the area of a circle whose radius is the distance between the scan point and the radar origin;
the average of the scan areas of all scan points in the scan frame is calculated as follows:
for each scan point in the scan frame, calculating the ratio of the scan area of that point to the total number of points included in the scan frame,
and accumulating all the ratios calculated for the scan frame to obtain the average of the scan areas of all the scan points in the scan frame.
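The scan-area feature defined above reduces to the mean of π·r² over the frame's ranges. A direct transcription, accumulating per-point ratios exactly as the claim describes:

```python
import math

def frame_scan_area(ranges):
    """Scan-area feature of a frame: each scan point contributes the
    area of a circle whose radius is its range from the radar origin;
    each area is divided by the point count and the ratios are summed,
    yielding the mean circle area over the frame."""
    n = len(ranges)
    return sum(math.pi * r * r / n for r in ranges)
```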
8. A mobile robot, characterized by comprising a memory and a processor, wherein
the memory stores instructions executable by the processor to cause the processor to perform the steps of the positioning method of any one of claims 1 to 6 and/or the steps of the map construction method of claim 7.
9. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program which, when executed by a processor, implements the steps of the positioning method according to any one of claims 1 to 6 and/or the steps of the map construction method according to claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010269105.2A CN113552586B (en) | 2020-04-08 | 2020-04-08 | Mobile robot positioning method and mobile robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113552586A CN113552586A (en) | 2021-10-26 |
CN113552586B true CN113552586B (en) | 2024-04-05 |
Family
ID=78129284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010269105.2A Active CN113552586B (en) | 2020-04-08 | 2020-04-08 | Mobile robot positioning method and mobile robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113552586B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106092104A (en) * | 2016-08-26 | 2016-11-09 | 深圳微服机器人科技有限公司 | The method for relocating of a kind of Indoor Robot and device |
CN107561549A (en) * | 2017-08-17 | 2018-01-09 | 广州视源电子科技股份有限公司 | Method for relocating, device, terminal and the storage medium of terminal location |
CN109141393A (en) * | 2018-07-02 | 2019-01-04 | 北京百度网讯科技有限公司 | Method for relocating, equipment and storage medium |
KR20190045006A (en) * | 2017-10-23 | 2019-05-02 | 주식회사 유진로봇 | Method and Apparatus for Localization and Mapping Using LIDAR |
CN110006432A (en) * | 2019-04-15 | 2019-07-12 | 广州高新兴机器人有限公司 | A method of based on the Indoor Robot rapid relocation under geometry prior information |
CN110189373A (en) * | 2019-05-30 | 2019-08-30 | 四川长虹电器股份有限公司 | A kind of fast relocation method and device of view-based access control model semantic information |
CN110376605A (en) * | 2018-09-18 | 2019-10-25 | 北京京东尚科信息技术有限公司 | Map constructing method, air navigation aid and device |
CN110689622A (en) * | 2019-07-05 | 2020-01-14 | 电子科技大学 | Synchronous positioning and composition algorithm based on point cloud segmentation matching closed-loop correction |
CN110686677A (en) * | 2019-10-10 | 2020-01-14 | 东北大学 | Global positioning method based on geometric information |
Non-Patent Citations (2)
Title |
---|
A survey of lidar-based simultaneous localization and mapping methods; Wei Shuangfeng et al.; Application Research of Computers, No. 02, pp. 327-332 *
A survey of laser scan matching methods; Zong Wenpeng et al.; Chinese Optics, No. 06, pp. 914-929 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108256574B (en) | Robot positioning method and device | |
Lim et al. | ERASOR: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3D point cloud map building | |
US6826293B2 (en) | Image processing device, singular spot detection method, and recording medium upon which singular spot detection program is recorded | |
US8768071B2 (en) | Object category recognition methods and robots utilizing the same | |
CN108038139B (en) | Map construction method and device, robot positioning method and device, computer equipment and storage medium | |
CN109033989A (en) | Target identification method, device and storage medium based on three-dimensional point cloud | |
CN111652929A (en) | Visual feature identification and positioning method and system | |
CN109146918B (en) | Self-adaptive related target positioning method based on block | |
CN111726591B (en) | Map updating method, map updating device, storage medium and electronic equipment | |
JP2009163682A (en) | Image discrimination device and program | |
KR101917525B1 (en) | Method and apparatus for identifying string | |
CN115527050A (en) | Image feature matching method, computer device and readable storage medium | |
JP5305031B2 (en) | Feature amount extraction apparatus and method, and position estimation apparatus and method | |
CN114494881A (en) | Method, device and terminal for detecting remote sensing image change based on subdivision grid | |
CN113552586B (en) | Mobile robot positioning method and mobile robot | |
WO2020194079A1 (en) | Method and system for performing localization of an object in a 3d | |
CN115132370A (en) | Flow adjustment auxiliary method and device based on machine vision and deep learning | |
CN110686687B (en) | Method for constructing map by visual robot, robot and chip | |
CN114862953A (en) | Mobile robot repositioning method and device based on visual features and 3D laser | |
JP3700675B2 (en) | Road white line recognition device | |
CN107392209B (en) | Device and method for extracting line segments | |
CN111428565A (en) | Point cloud identification point positioning method and device based on deep learning | |
CN113807137A (en) | Method, device, agricultural machine and medium for identifying center line of planting row | |
Darvishzadeh | Change detection for urban spatial databases using remote sensing and GIS | |
CN116798056B (en) | Form image positioning method, apparatus, device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||