CN113010724A - Robot map selection method and system based on visual feature point matching - Google Patents
- Publication number
- CN113010724A (application number CN202110471649.1A)
- Authority
- CN
- China
- Prior art keywords
- module
- map
- image
- robot
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/587—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
Abstract
The invention discloses a robot map selection method and system based on visual feature point matching, belonging to the field of robot applications. The method comprises the following specific steps: S1, collecting map data and establishing a dictionary at the cloud end; S2, acquiring a frame of image through the robot camera and uploading it to a cloud service; S3, extracting image key points; S4, calculating descriptors; S5, iterating over all map files for feature point matching; S6, repeating steps S2-S5 until the corresponding map is matched; S7, issuing the matched map file to the robot end for loading. By exploiting the advantages of cloud computing, the method uploads the currently scanned video to the cloud frame by frame, analyzes its feature points at the cloud end, matches them against all known map files in the cloud database, and feeds the matched map back to the robot, so that the robot can automatically select and load the map of the current scene, overcoming the drawback that the robot must switch maps manually across different indoor scenes.
Description
Technical Field
The invention discloses a robot map selection method and system based on visual feature point matching, and relates to the technical field of robot application.
Background
With the continuous development of robot and autonomous driving technology, robots are applied in more and more fields; their application scenarios are ubiquitous, from outdoor patrol robots to indoor delivery and service robots.
Robot navigation and positioning rely on a map constructed in advance. An outdoor robot can localize itself accurately via GPS (Global Positioning System) and thereby determine its position in the map. However, when a robot enters an indoor scene, the surrounding building blocks GPS signals, so conventional mapping and positioning methods fail. Different indoor environments correspond to different map files. A robot working in a single fixed scene can start working once its map has been configured manually; but for robots working across multiple indoor scenes, switching maps manually every time is time-consuming and labor-intensive, and hardly reflects the intelligence of the robot. How to enable a robot to automatically load the corresponding map file in a known indoor environment has therefore become an urgent problem;
therefore, the invention provides a robot map selection method and system based on visual feature point matching to solve the problems.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a robot map selection method and system based on visual feature point matching. The adopted technical scheme is as follows: a robot map selection method based on visual feature point matching comprises the following specific steps:
S1, collecting map data and establishing a dictionary at the cloud end;
S2, acquiring a frame of image through the robot camera and uploading it to a cloud service;
S3, extracting image key points;
S4, calculating descriptors;
S5, iterating over all map files for feature point matching;
S6, repeating steps S2-S5 until the corresponding map is matched;
S7, sending the matched map file to the robot end for loading.
In S3, the cloud service takes the ORB feature points of each received frame as image key points; the specific steps comprise:
S301, selecting a pixel p in the image and setting its brightness as Ip;
S302, setting a threshold T for Ip;
S303, selecting pixel points on a circle centered at pixel p;
S304, judging from these whether p is a feature point;
S305, cyclically executing operations S301 to S304 for every pixel to find all the feature points in the picture.
In S4, a descriptor is calculated for each image key point using the BRIEF feature description method.
The specific steps of S5, iterating over all map files for feature point matching, include:
S501, matching the feature points against the map files in the map database;
S502, generating similarity arrays between every map file and the feature points of the frame using a fast nearest neighbor algorithm, and sorting them in descending order.
The specific steps of S6, repeating steps S2-S5 until the corresponding map is matched, include:
S601, if the cloud matches a corresponding map file, issuing the map to the robot;
S602, if the cloud does not match a corresponding map file, the robot reads the next frame of image, uploads it to the cloud, and steps S2-S5 are repeated.
A robot map selection system based on visual feature point matching specifically comprises an acquisition module, an uploading module, an extraction module, a calculation module, a matching module A, a matching module B and a loading module:
an acquisition module: collecting map data and establishing a dictionary at the cloud end;
an uploading module: acquiring a frame of image through the robot camera and uploading it to a cloud service;
an extraction module: extracting image key points;
a calculation module: calculating descriptors;
a matching module A: iterating over all map files for feature point matching;
a matching module B: repeating the uploading, extraction, calculation and matching-A modules until the corresponding map is matched;
a loading module: sending the matched map file to the robot end for loading.
The extraction-module cloud service takes the ORB feature points of each received frame as image key points, and specifically comprises a selection module A, a setting module, a selection module B, a judgment module and a repetition module A:
a selection module A: selecting a pixel p in the image and setting its brightness as Ip;
a setting module: setting a threshold T for Ip;
a selection module B: selecting pixel points on a circle centered at pixel p;
a judgment module: judging whether pixel p is a feature point;
a repetition module A: cyclically executing the operations of selection module A, the setting module, selection module B and the judgment module on every pixel to find all the feature points in the picture.
The calculation module calculates a descriptor for each image keypoint using a BRIEF feature description method.
The matching module A specifically comprises a matching module C and a sorting module:
a matching module C: matching the feature points against the map files in the map database;
a sorting module: generating similarity arrays between every map file and the feature points of the frame using a fast nearest neighbor algorithm, and sorting them in descending order.
The matching module B specifically comprises a sending module and a repetition module B:
a sending module: if the cloud matches a corresponding map file, issuing the map to the robot;
a repetition module B: if the cloud does not match a corresponding map file, the robot reads the next frame of image, uploads it to the cloud, and the operations of the uploading, extraction, calculation and matching-A modules are repeated.
The beneficial effects of the invention are as follows: by exploiting the advantages of cloud computing, the currently scanned video is uploaded to the cloud frame by frame, its feature points are analyzed at the cloud end and matched against all known map files in the cloud database, and the matched map is fed back to the robot. The robot thus automatically selects and loads the map of the current scene, overcoming the drawback that a robot must switch maps manually across different indoor scenes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of the method of the present invention; fig. 2 is a schematic diagram of the system of the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
The first embodiment is as follows:
a robot map selection method based on visual feature point matching comprises the following specific steps:
S1, collecting map data and establishing a dictionary at the cloud end;
S2, acquiring a frame of image through the robot camera and uploading it to a cloud service;
S3, extracting image key points;
S4, calculating descriptors;
S5, iterating over all map files for feature point matching;
S6, repeating steps S2-S5 until the corresponding map is matched;
S7, sending the matched map file to the robot end for loading;
According to S1, map data for all indoor scenes is collected in advance and a dictionary is established at the cloud end. After the robot starts, the current frame image is captured through the camera according to S2 and stored as the file img_1; the capture function interface is Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR). The img_1 file is then sent to the cloud server over a private TCP protocol for further processing;
the cloud service is initialized, mainly the initialization of the ORB instance, Ptr<FeatureDetector> detector = ORB::create(), and of the FLANN matcher, FlannBasedMatcher matcher;
then, image key points are extracted according to S3 using the OpenCV library function detector->detect(img_1, keypoints_1);
the BRIEF descriptor is computed from the key point positions according to S4; the corresponding OpenCV library function is descriptor->compute(img_1, keypoints_1, descriptors_1);
a recursive matching operation between img_1 and the map files is performed with the FLANN algorithm according to S5: matcher.match(descriptors_1, descriptors_2, matches);
then steps S2-S5 are repeated according to S6 until the corresponding map is matched, and finally the matched map file is issued to the robot end for loading according to S7;
Further, the S3 cloud service uses the ORB feature points extracted from each received frame as image key points; the specific steps include:
S301, selecting a pixel p in the image and setting its brightness as Ip;
S302, setting a threshold T for Ip;
S303, selecting pixel points on a circle centered at pixel p;
S304, judging from these whether p is a feature point;
S305, cyclically executing operations S301 to S304 on every pixel to find all the feature points in the picture.
The cloud service extracts the feature points of each received frame; this scheme adopts the ORB feature point (key point plus descriptor) extraction method;
Further, the key point detection steps are as follows: select a pixel p in the image and assume its brightness is Ip; set a threshold T (e.g., 20% of Ip); select the 16 pixel points on a circle of radius 3 centered at pixel p; if the brightness of N consecutive points on that circle is greater than Ip + T or less than Ip - T, pixel p can be considered a feature point (N is typically 12); cycle through these four steps, performing the same operation on every pixel, to find all key points in the picture;
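The corner test above can be sketched in a few lines of plain Python (an illustrative stand-in for the OpenCV ORB detector the scheme actually uses; the function names and the image-as-list-of-lists representation are assumptions):

```python
# Offsets of the 16 pixels on a Bresenham circle of radius 3 around p.
CIRCLE16 = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
            (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_feature_point(img, x, y, n=12, t_ratio=0.20):
    """FAST-style test: pixel (x, y) is a feature point if N consecutive
    circle pixels are all brighter than Ip+T or all darker than Ip-T."""
    ip = img[y][x]
    t = t_ratio * ip                       # threshold T, e.g. 20% of Ip
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE16]
    for sign in (+1, -1):                  # check "brighter", then "darker"
        run = 0
        for v in ring * 2:                 # walk the ring twice so runs may wrap
            run = run + 1 if sign * (v - ip) > t else 0
            if run >= n:
                return True
    return False

def detect_keypoints(img, border=3):
    """Apply the test to every pixel far enough from the image border."""
    h, w = len(img), len(img[0])
    return [(x, y) for y in range(border, h - border)
                   for x in range(border, w - border)
                   if is_feature_point(img, x, y)]
```

A bright isolated pixel on a dark background passes the test (all 16 ring pixels are darker than Ip - T), while a uniform image yields no key points.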
Further, in S4 a descriptor is calculated for each image key point using the BRIEF feature description method. After the key points are extracted, the descriptor of each point is computed with the BRIEF description modified by ORB. The descriptor vector consists of 0s and 1s: take two random pixels (say p and q) near the key point; if p > q, take 1; otherwise take 0;
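The descriptor construction above can be sketched as follows (a simplified pure-Python illustration; real ORB uses 256 learned, rotation-compensated pairs rather than the random 128-bit pattern assumed here, and all names are illustrative):

```python
import random

def make_test_pairs(n_bits=128, patch=4, seed=42):
    """Fix one random pattern of pixel-pair offsets, so every descriptor
    is computed with the same comparisons."""
    rng = random.Random(seed)
    pick = lambda: (rng.randint(-patch, patch), rng.randint(-patch, patch))
    return [(pick(), pick()) for _ in range(n_bits)]

def brief_descriptor(img, x, y, pairs):
    """BRIEF bit for each offset pair (p, q) near the key point (x, y):
    1 if intensity at p exceeds intensity at q, else 0."""
    bits = []
    for (dx1, dy1), (dx2, dy2) in pairs:
        p = img[y + dy1][x + dx1]
        q = img[y + dy2][x + dx2]
        bits.append(1 if p > q else 0)
    return bits

def hamming(d1, d2):
    """Binary descriptors are compared by Hamming distance."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))
```

Two descriptors of the same patch have Hamming distance 0, which is what makes the later matching stage a nearest-neighbour search over bit vectors.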
Further, the specific steps of S5, iterating over all map files for feature point matching, include:
S501, matching the feature points against the map files in the map database;
S502, generating similarity arrays between every map file and the feature points of the frame using a fast nearest neighbor algorithm, and sorting them in descending order;
After the feature points of each frame are extracted, they are matched against all map files in the map database. The fast nearest neighbor (FLANN) algorithm generates a similarity array between every map file and the feature points of the frame, sorted in descending order; when the similarity between a map file and the frame's feature points exceeds 90%, that map file is considered the map of the environment in which the current frame was captured;
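The matching and ranking stage above can be sketched as follows (a brute-force Hamming-distance matcher standing in for FLANN; the similarity measure, the max_dist cutoff, and all names are illustrative assumptions):

```python
def hamming(d1, d2):
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def match_ratio(frame_desc, map_desc, max_dist=10):
    """Fraction of frame descriptors whose nearest map descriptor lies
    within max_dist Hamming distance (a stand-in similarity measure)."""
    if not frame_desc or not map_desc:
        return 0.0
    good = sum(1 for d in frame_desc
               if min(hamming(d, m) for m in map_desc) <= max_dist)
    return good / len(frame_desc)

def rank_maps(frame_desc, map_db, threshold=0.90):
    """Return (map_name, similarity) pairs in descending order, plus the
    best map if its similarity exceeds the 90% threshold, else None."""
    scores = sorted(((name, match_ratio(frame_desc, descs))
                     for name, descs in map_db.items()),
                    key=lambda kv: kv[1], reverse=True)
    best = scores[0] if scores and scores[0][1] > threshold else None
    return scores, best
```

The descending sort mirrors S502, and the 90% cutoff mirrors the acceptance rule stated above.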
Still further, the specific steps of S6, repeating steps S2-S5 until the corresponding map is matched, include:
S601, if the cloud matches a corresponding map file, issuing the map to the robot;
S602, if the cloud does not match a corresponding map file, the robot reads the next frame of image, uploads it to the cloud, and steps S2-S5 are repeated;
If the cloud matches a corresponding map file, the map is issued to the robot, which loads and uses it;
if the cloud does not match a map file for the current frame, the robot reads the next frame of image, uploads it to the cloud, and repeats the previous steps S2 to S5, and so on.
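The overall S2-S7 loop described above can be sketched as follows (the camera, feature-extraction, and matching functions are hypothetical stand-ins passed in as parameters, and the max_frames cap is an assumption not present in the patent):

```python
def select_map(read_frame, extract_features, rank_maps, max_frames=100):
    """Repeat S2-S5 until a map matches (S6), then return it (S7)."""
    for _ in range(max_frames):              # give up after max_frames tries
        frame = read_frame()                 # S2: grab one camera frame
        if frame is None:                    # camera exhausted -> no map
            break
        features = extract_features(frame)   # S3 + S4: key points + descriptors
        matched = rank_maps(features)        # S5: match against all map files
        if matched is not None:              # S6: a map matched -> stop looping
            return matched                   # S7: map issued to the robot
    return None
```

In use, the three callables would wrap the camera driver, the cloud ORB service, and the FLANN ranking; here a simulated frame stream suffices to exercise the control flow.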
The second embodiment is as follows:
A robot map selection system based on visual feature point matching specifically comprises an acquisition module, an uploading module, an extraction module, a calculation module, a matching module A, a matching module B and a loading module:
an acquisition module: collecting map data and establishing a dictionary at the cloud end;
an uploading module: acquiring a frame of image through the robot camera and uploading it to a cloud service;
an extraction module: extracting image key points;
a calculation module: calculating descriptors;
a matching module A: iterating over all map files for feature point matching;
a matching module B: repeating the uploading, extraction, calculation and matching-A modules until the corresponding map is matched;
a loading module: sending the matched map file to the robot end for loading;
The acquisition module collects map data for all indoor scenes in advance and establishes a dictionary at the cloud end. After the robot starts, the uploading module captures the current frame image through the camera and stores it as the file img_1; the capture function interface is Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR). The img_1 file is then sent to the cloud server over a private TCP protocol for further processing;
the cloud service is initialized, mainly the initialization of the ORB instance, Ptr<FeatureDetector> detector = ORB::create(), and of the FLANN matcher, FlannBasedMatcher matcher;
then, the extraction module extracts image key points using the OpenCV library function detector->detect(img_1, keypoints_1);
the calculation module computes the BRIEF descriptor from the key point positions; the corresponding OpenCV library function is descriptor->compute(img_1, keypoints_1, descriptors_1);
matching module A performs a recursive matching operation between img_1 and the map files using the FLANN algorithm: matcher.match(descriptors_1, descriptors_2, matches);
matching module B then repeats the operations of the uploading, extraction, calculation and matching-A modules until the corresponding map is matched, and finally the loading module sends the matched map file to the robot end for loading;
Further, the extraction-module cloud service takes the ORB feature points of each received frame as image key points, and specifically comprises a selection module A, a setting module, a selection module B, a judgment module and a repetition module A:
a selection module A: selecting a pixel p in the image and setting its brightness as Ip;
a setting module: setting a threshold T for Ip;
a selection module B: selecting pixel points on a circle centered at pixel p;
a judgment module: judging whether pixel p is a feature point;
a repetition module A: cyclically executing the operations of selection module A, the setting module, selection module B and the judgment module on every pixel to find all the feature points in the picture;
further, the calculation module calculates a descriptor of each image key point by using a BRIEF feature description method;
Further, the matching module A specifically comprises a matching module C and a sorting module:
a matching module C: matching the feature points against the map files in the map database;
a sorting module: generating similarity arrays between every map file and the feature points of the frame using a fast nearest neighbor algorithm, and sorting them in descending order.
Further, the matching module B specifically comprises a sending module and a repetition module B:
a sending module: if the cloud matches a corresponding map file, issuing the map to the robot;
a repetition module B: if the cloud does not match a corresponding map file, the robot reads the next frame of image, uploads it to the cloud, and the operations of the uploading, extraction, calculation and matching-A modules are repeated.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A robot map selection method based on visual feature point matching is characterized by comprising the following specific steps:
S1, collecting map data and establishing a dictionary at the cloud end;
S2, acquiring a frame of image through the robot camera and uploading it to a cloud service;
S3, extracting image key points;
S4, calculating descriptors;
S5, iterating over all map files for feature point matching;
S6, repeating steps S2-S5 until the corresponding map is matched;
S7, sending the matched map file to the robot end for loading.
2. The method as claimed in claim 1, wherein the S3 cloud service extracts the ORB feature points of each received frame as image key points, the specific steps comprising:
S301, selecting a pixel p in the image and setting its brightness as Ip;
S302, setting a threshold T for Ip;
S303, selecting pixel points on a circle centered at pixel p;
S304, judging from these whether p is a feature point;
S305, cyclically executing operations S301 to S304 for every pixel to find all the feature points in the picture.
3. The method according to claim 2, wherein in S4 a descriptor is calculated for each image key point using the BRIEF feature description method.
4. The method as claimed in claim 3, wherein the specific steps of S5, iterating over all map files for feature point matching, comprise:
S501, matching the feature points against the map files in the map database;
S502, generating similarity arrays between every map file and the feature points of the frame using a fast nearest neighbor algorithm, and sorting them in descending order.
5. The method as claimed in claim 4, wherein the specific steps of S6, repeating steps S2-S5 until the corresponding map is matched, comprise:
S601, if the cloud matches a corresponding map file, issuing the map to the robot;
S602, if the cloud does not match a corresponding map file, the robot reads the next frame of image, uploads it to the cloud, and steps S2-S5 are repeated.
6. A robot map selection system based on visual feature point matching, characterized by specifically comprising an acquisition module, an uploading module, an extraction module, a calculation module, a matching module A, a matching module B and a loading module:
an acquisition module: collecting map data and establishing a dictionary at the cloud end;
an uploading module: acquiring a frame of image through the robot camera and uploading it to a cloud service;
an extraction module: extracting image key points;
a calculation module: calculating descriptors;
a matching module A: iterating over all map files for feature point matching;
a matching module B: repeating the uploading, extraction, calculation and matching-A modules until the corresponding map is matched;
a loading module: sending the matched map file to the robot end for loading.
7. The system as claimed in claim 6, wherein the extraction-module cloud service uses the ORB feature points extracted from each received frame as image key points, and specifically comprises a selection module A, a setting module, a selection module B, a judgment module and a repetition module A:
a selection module A: selecting a pixel p in the image and setting its brightness as Ip;
a setting module: setting a threshold T for Ip;
a selection module B: selecting pixel points on a circle centered at pixel p;
a judgment module: judging whether pixel p is a feature point;
a repetition module A: cyclically executing the operations of selection module A, the setting module, selection module B and the judgment module on every pixel to find all the feature points in the picture.
8. The system according to claim 7, wherein said computation module computes its descriptor for each image keypoint using the BRIEF feature description method.
9. The system of claim 8, wherein the matching module A specifically comprises a matching module C and a sorting module:
a matching module C: matching the feature points against the map files in the map database;
a sorting module: generating similarity arrays between every map file and the feature points of the frame using a fast nearest neighbor algorithm, and sorting them in descending order.
10. The system as claimed in claim 9, wherein the matching module B specifically comprises a sending module and a repetition module B:
a sending module: if the cloud matches a corresponding map file, issuing the map to the robot;
a repetition module B: if the cloud does not match a corresponding map file, the robot reads the next frame of image, uploads it to the cloud, and the operations of the uploading, extraction, calculation and matching-A modules are repeated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110471649.1A CN113010724A (en) | 2021-04-29 | 2021-04-29 | Robot map selection method and system based on visual feature point matching |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113010724A true CN113010724A (en) | 2021-06-22 |
Family
ID=76380431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110471649.1A Pending CN113010724A (en) | 2021-04-29 | 2021-04-29 | Robot map selection method and system based on visual feature point matching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113010724A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116030213A (en) * | 2023-03-30 | 2023-04-28 | 千巡科技(深圳)有限公司 | Multi-machine cloud edge collaborative map creation and dynamic digital twin method and system |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103278170A (en) * | 2013-05-16 | 2013-09-04 | 东南大学 | Mobile robot cascading map building method based on remarkable scenic spot detection |
CN107436148A (en) * | 2016-05-25 | 2017-12-05 | 深圳市朗驰欣创科技股份有限公司 | A kind of robot navigation method and device based on more maps |
CN107680133A (en) * | 2017-09-15 | 2018-02-09 | 重庆邮电大学 | A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm |
CN108256574A (en) * | 2018-01-16 | 2018-07-06 | 广东省智能制造研究所 | Robot localization method and device |
CN109073390A (en) * | 2018-07-23 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | A kind of localization method and device, electronic equipment and readable storage medium storing program for executing |
CN109459048A (en) * | 2019-01-07 | 2019-03-12 | 上海岚豹智能科技有限公司 | Map loading method and equipment for robot |
CN109544636A (en) * | 2018-10-10 | 2019-03-29 | 广州大学 | A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method |
CN110849367A (en) * | 2019-10-08 | 2020-02-28 | 杭州电子科技大学 | Indoor positioning and navigation method based on visual SLAM fused with UWB |
CN110967009A (en) * | 2019-11-27 | 2020-04-07 | 云南电网有限责任公司电力科学研究院 | Navigation positioning and map construction method and device for transformer substation inspection robot |
CN112525206A (en) * | 2019-09-17 | 2021-03-19 | 隆博科技(常熟)有限公司 | Navigation method based on multi-map switching |
CN112666943A (en) * | 2020-12-17 | 2021-04-16 | 珠海市一微半导体有限公司 | Cleaning map storage method and system for intelligent terminal, cleaning robot and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210622 |