CN113390409A - Method for realizing SLAM technology through robot whole-course autonomous exploration navigation

Method for realizing SLAM technology through robot whole-course autonomous exploration navigation

Info

Publication number
CN113390409A
CN113390409A (application CN202110780670.XA)
Authority
CN
China
Prior art keywords
robot
image
navigation
point cloud
images
Legal status
Pending
Application number
CN202110780670.XA
Other languages
Chinese (zh)
Inventor
李盛前
Current Assignee
Guangdong Mechanical and Electrical College
Original Assignee
Guangdong Mechanical and Electrical College
Application filed by Guangdong Mechanical and Electrical College
Priority to CN202110780670.XA
Publication of CN113390409A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3833: Creation or updating of map data characterised by the source of data
    • G01C21/3837: Data obtained from a single source
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a method for realizing SLAM technology through robot whole-course autonomous exploration navigation. After the robot has walked a certain distance, it autonomously rotates one full turn in place; the panoramic visual images acquired by a depth camera are processed with an image-stitching technique to obtain a panorama, and a three-dimensional point cloud map of the robot's working environment is derived from the depth images corresponding to that panorama. The three-dimensional point cloud map is then sent to the move_base navigation module, which randomly generates a target point. After the robot automatically navigates to the target point, it stops and rotates one full turn in place to obtain the panorama of that point, which is stitched to the previously obtained imagery through feature-point matching to generate an expanded, up-to-date three-dimensional point cloud map. This work flow is repeated until the three-dimensional point cloud map of the whole working environment is complete. The invention lowers the technical demands that robot applications place on operators; the process is simple and the degree of autonomy is high.

Description

Method for realizing SLAM technology through robot whole-course autonomous exploration navigation
Technical Field
The invention relates to the technical field of robot autonomous navigation and vision, and in particular to a method for realizing SLAM technology through robot whole-course autonomous exploration navigation.
Background
With the development of science and technology, social progress, and rapid economic growth, living standards keep rising, and applying robot technology in daily life, industry, agriculture, national defense and other fields to take over human work has become an inevitable trend. At the same time, labor shortages and the rising cost of human labor have pushed production toward greater automation and intelligence, which has driven the rapid development of robots, service robots in particular; a variety of service robots are now applied in different fields. Autonomous navigation is the core technology of robot autonomy and an important yardstick of a robot's degree of intelligence, so solving the autonomous navigation problem is a main research direction in robotics. Although many autonomous navigation methods exist, the most common approach is to build a map with SLAM technology and then control the robot according to that map to complete autonomous navigation. In existing SLAM practice, however, every time a robot arrives in an unfamiliar environment, a technician must manually drive it through the whole working environment once to generate an environment map before the robot can navigate autonomously according to that map. This is cumbersome, demands technical skill, and makes the first SLAM step hard to complete for non-technical operators, which greatly limits the application and development of robots. If the robot could explore the environment by itself, the map could be built conveniently and quickly, so a method that realizes SLAM through autonomous robot exploration is very important. Therefore, in order to solve these problems, lower the barrier to applying robot autonomous navigation, and raise the robot's degree of intelligence, it is necessary to design an autonomous navigation method that effectively overcomes these drawbacks.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method for realizing SLAM technology through robot whole-course autonomous exploration navigation.
In order to achieve the above purpose, the technical scheme provided by the invention is as follows:
A method for realizing SLAM technology through robot whole-course autonomous exploration navigation comprises the following steps (a code sketch of the overall loop follows the list):
s1, when the robot starts to rotate a circle in situ in a working environment, a depth camera is used for collecting images, and the collected images are spliced to obtain a 360-degree panoramic visual image of the initial position of the robot;
s2, generating a three-dimensional point cloud picture in the visual field range of the initial position of the robot according to the 360-degree panoramic visual image of the initial position of the robot obtained in the step S1;
s3, sending the latest three-dimensional point cloud map to a Move _ base navigation module in the ROS system, and automatically and randomly generating a navigation target point by the Move _ base navigation module according to the point cloud data;
s4, the robot autonomously judges that the robot navigates to a target point;
s5, after the robot reaches a target point, the robot rotates a circle in situ at the target point, the depth camera is used for collecting images, and the collected images are spliced to obtain a 360-degree panoramic visual image of the target point;
s6, splicing the 360-degree panoramic visual image at the target point obtained in the step S5 with the 360-degree panoramic visual image obtained last time to obtain an expanded 360-degree panoramic visual image;
s7, generating an expanded latest three-dimensional point cloud picture according to the expanded 360-degree panoramic visual image obtained in the step S6;
and S8, repeating the steps S3 to S7 until the establishment of the three-dimensional point cloud picture of the whole working environment is completed.
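As a compact illustration of how steps S1 to S8 fit together, the following Python skeleton sketches the exploration loop. It is only a sketch: the Robot class and the helpers panorama_to_cloud and sample_goal are hypothetical placeholders standing in for the stitching, back-projection, and move_base routines described below.

```python
import random

class Robot:
    """Stand-in for the real platform; replace the stubs with hardware calls."""
    def rotate_and_capture(self):
        # S1/S5: spin one full turn in place and return the stitched panorama
        return {"rgb": None, "depth": None}

    def navigate_to(self, goal):
        # S4: delegate to the move_base action server (sketched later)
        return True

def panorama_to_cloud(panorama):
    # S2/S7: back-project panorama depth pixels to 3-D points (sketched later)
    return []

def sample_goal(cloud):
    # S3: pick a random obstacle-free target inside the mapped region,
    # or None once no unexplored free space remains
    return (random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))

def explore(robot, max_rounds=50):
    global_cloud = []
    for _ in range(max_rounds):                  # S8: repeat S3 to S7
        pano = robot.rotate_and_capture()        # S1/S5
        global_cloud += panorama_to_cloud(pano)  # S2/S7: grow the map
        goal = sample_goal(global_cloud)         # S3
        if goal is None:                         # whole environment mapped
            break
        robot.navigate_to(goal)                  # S4
    return global_cloud
```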
Further, in steps S1 and S5, after the images are captured by the depth camera, the captured images are stitched by a stitching algorithm to obtain a 360-degree panoramic visual image of the robot's position.
Further, the stitching algorithm comprises the following steps (an OpenCV-based sketch follows the list):
A1, obtain the frames to be stitched, grouping each pair of adjacent frames that share an overlapping region;
A2, convert the RGB data of the two images in each group into YUV space (according to the image type) to obtain the luminance matrix Y;
A3, extract feature points from the luminance matrix Y of the two images in each group using N feature extraction operators;
A4, screen the feature points and discard unsuitable ones;
A5, build a grid density model and compute the optimal grid density for the feature point distribution;
A6, select the optimal feature points grid by grid;
A7, compute the homography transformation matrix registering the adjacent images;
A8, stitch the frames with the homography matrix and fuse adjacent images with a gradual-in, gradual-out image fusion algorithm to eliminate stitching seams.
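A minimal two-frame sketch of steps A2, A3, A7 and the warp of A8 is given below, using OpenCV's ORB detector and a RANSAC homography as stand-ins for the N feature operators and the grid-density selection of A3 to A6, which are not specified further here; the gradual-in, gradual-out fusion of A8 is sketched separately in the detailed description.

```python
import cv2
import numpy as np

def register_pair(img1, img2):
    """Estimate the homography mapping img2 into img1's frame (steps A2-A7)."""
    # A2: work on the luminance (Y) channel of each RGB frame
    y1 = cv2.cvtColor(img1, cv2.COLOR_BGR2YUV)[:, :, 0]
    y2 = cv2.cvtColor(img2, cv2.COLOR_BGR2YUV)[:, :, 0]

    # A3/A4: detect features on Y and keep only cross-checked matches
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(y1, None)
    k2, d2 = orb.detectAndCompute(y2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    # A7: homography from RANSAC-filtered correspondences (img2 -> img1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def stitch_pair(img1, img2):
    """A8 (warp only): place img2 on a canvas aligned with img1."""
    h, w = img1.shape[:2]
    H = register_pair(img1, img2)
    canvas = cv2.warpPerspective(img2, H, (2 * w, h))  # assumes rightward overlap
    canvas[:, :w] = img1   # hard seam here; the fade fusion is sketched further below
    return canvas
```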
Further, in steps S2 and S7, the corresponding three-dimensional point cloud map is generated from the 360-degree panoramic visual image using the correspondence between the depth map of the depth camera and the RGB image.
Further, when the corresponding three-dimensional point cloud map is generated from the 360-degree panoramic visual image using the correspondence between the depth map and the RGB image of the depth camera, the conversion process is as follows: for any pixel (u, v) in the image, combining that pixel's depth value with the intrinsic parameters of the depth camera yields the corresponding three-dimensional space coordinate (x, y, z) through the conversion formulas:
x = (u - cx) · z / fx
y = (v - cy) · z / fy
d = z · s
In the above formulas, fx and fy denote the focal lengths of the camera along the x-axis and y-axis respectively, cx and cy denote the coordinates of the aperture center of the depth camera, s denotes the scaling factor of the depth map, and d denotes the depth data.
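These formulas translate directly into a few lines of NumPy. The function below is a sketch under the assumption that depth holds the raw values d and that s is the depth scale; s = 1000 is merely a common choice for millimetre depth maps, not a value given here.

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy, s=1000.0):
    """Back-project an HxW raw depth image into an Nx3 point cloud."""
    v, u = np.indices(depth.shape)        # pixel coordinates (v = row, u = col)
    z = depth.astype(np.float32) / s      # from d = z * s
    x = (u - cx) * z / fx                 # x = (u - cx) * z / fx
    y = (v - cy) * z / fy                 # y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]       # drop pixels with no depth reading
```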
Further, in step S3, the latest three-dimensional point cloud map data are sent to the move_base navigation module in the ROS system, and the move_base navigation module randomly generates a navigation target position in an obstacle-free place according to the three-dimensional point cloud map, so that the robot has a navigation target point.
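As a rough illustration of such random target generation (the sampling rule itself is left to the navigation module), one can draw candidate positions inside the mapped area and keep the first one with no obstacle point nearby. The clearance radius, the height band used to classify obstacle points, and the assumption that the cloud is expressed in a ground-aligned map frame are all illustrative choices, not details from the text.

```python
import numpy as np

def sample_goal(cloud, clearance=0.4, tries=100, rng=None):
    """cloud: Nx3 points in a ground-aligned map frame; returns (x, y) or None."""
    rng = rng or np.random.default_rng()
    # treat points between 5 cm and 1.5 m above the ground as obstacles
    obstacles = cloud[(cloud[:, 2] > 0.05) & (cloud[:, 2] < 1.5)][:, :2]
    lo, hi = cloud[:, :2].min(axis=0), cloud[:, :2].max(axis=0)
    for _ in range(tries):
        candidate = rng.uniform(lo, hi)
        if (obstacles.size == 0
                or np.linalg.norm(obstacles - candidate, axis=1).min() > clearance):
            return tuple(candidate)
    return None   # mapped area looks fully blocked; treat exploration as done
```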
Further, in step S4, under the control of the move_base navigation module of the ROS system, the robot performs path planning and obstacle avoidance, navigating autonomously and moving smoothly to the target point.
Further, in step S4, the shortest path from the robot to the target point is found through the A* algorithm.
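The A* step can be illustrated with a textbook grid search; the occupancy grid, 4-connectivity and unit step cost below are generic assumptions, since only the algorithm's name is given here.

```python
import heapq

def astar(grid, start, goal):
    """grid[r][c] truthy = obstacle; start/goal = (row, col); returns a path or None.
    Example: astar([[0, 1], [0, 0]], (0, 0), (1, 1)) -> [(0, 0), (1, 0), (1, 1)]
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    openq = [(h(start), start)]
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while openq:
        _, cur = heapq.heappop(openq)
        if cur in closed:
            continue                       # stale heap entry
        closed.add(cur)
        if cur == goal:                    # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]] and nxt not in closed):
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(openq, (ng + h(nxt), nxt))
    return None
```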
Compared with the prior art, the principle of the scheme is as follows:
After walking a certain distance, the robot automatically rotates one full turn in place. The panoramic visual images collected by the depth camera are processed with an image-stitching technique to obtain a panorama, and a three-dimensional point cloud map of the working environment is derived from the depth images corresponding to the panorama, giving the robot a local map covering a certain field of view around its current position. This local map is sent to the move_base navigation module of the ROS system, which randomly generates a target point on it, and the robot navigates to that point automatically. On arrival, the robot stops and rotates one full turn in place to obtain the panorama of that point, which is stitched to the previously obtained imagery through feature-point matching, so the map keeps overlapping, growing, and extending as the robot moves. Repeating this work flow means that whenever the robot has navigated a certain distance, a known local map already exists within a certain range around it, so its navigation proceeds exactly like traditional navigation over a known environment map. With the depth camera mounted at a certain height, the robot's near-range working environment can be predicted in advance and treated as a known environment, and autonomous navigation is realized from the map information of that known environment; this completes the robot's autonomous exploration and the realization of SLAM technology.
Compared with the prior art, the scheme has the following advantages:
1. The robot autonomously rotates one full turn in place, and a single depth camera suffices for 360-degree panoramic vision, which simplifies the system and reduces its cost.
2. The robot realizes SLAM technology by exploring and navigating autonomously, which lowers the technical demands placed on operators and effectively promotes the rapid development of the robotics field.
3. The method has a simple work flow, a stable algorithm, strong generality, high operating efficiency, and a high autonomous navigation success rate.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required by the embodiments or the prior-art descriptions are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of the method for realizing SLAM technology through robot whole-course autonomous exploration navigation according to the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples:
As shown in Fig. 1, a method for realizing SLAM technology through robot whole-course autonomous exploration navigation includes the following steps:
S1, at its starting point in the working environment, the robot rotates one full turn in place while a depth camera collects images, and the collected images are stitched to obtain a 360-degree panoramic visual image of the robot's initial position;
S2, a three-dimensional point cloud map of the field of view at the initial position is generated from the 360-degree panoramic visual image obtained in step S1;
S3, the latest three-dimensional point cloud map is sent to the move_base navigation module in the ROS system, which randomly generates a navigation target position in an obstacle-free place according to the point cloud data;
S4, under the control of the move_base navigation module of the ROS system, the robot performs path planning and obstacle avoidance, navigating autonomously and moving smoothly to the target point; during path planning, the robot searches for the shortest path from its position to the target point with the A* algorithm (a sketch of sending such a goal to move_base follows this list);
S5, after reaching the target point, the robot rotates one full turn in place while the depth camera collects images, and the collected images are stitched to obtain a 360-degree panoramic visual image of the target point;
S6, the 360-degree panoramic visual image at the target point obtained in step S5 is stitched with the most recently obtained 360-degree panoramic visual image to produce an expanded 360-degree panoramic visual image;
S7, an expanded, up-to-date three-dimensional point cloud map is generated from the expanded 360-degree panoramic visual image obtained in step S6;
S8, steps S3 to S7 are repeated until the three-dimensional point cloud map of the whole working environment is complete.
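For the S3/S4 hand-off, ROS 1 normally delivers goals to move_base through its action interface. The following sketch shows that glue under common assumptions: an action server named 'move_base', a global frame called 'map', and a 120-second timeout (none of these values come from the text).

```python
#!/usr/bin/env python
import actionlib
import rospy
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def navigate_to(x, y):
    """Send one (x, y) goal to move_base and block until it finishes."""
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'       # assumed global frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0      # arrive facing +x
    client.send_goal(goal)
    client.wait_for_result(rospy.Duration(120.0))  # assumed timeout
    return client.get_state() == GoalStatus.SUCCEEDED

if __name__ == '__main__':
    rospy.init_node('exploration_goal_sender')
    navigate_to(1.0, 0.5)
```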
Specifically, in steps S1 and S5 of this embodiment, after the images are captured by the depth camera, the captured images are stitched by a stitching algorithm to obtain a 360-degree panoramic visual image of the robot's position. The stitching algorithm comprises the following steps:
A1, obtain the frames to be stitched, grouping each pair of adjacent frames that share an overlapping region;
A2, convert the RGB data of the two images in each group into YUV space (according to the image type) to obtain the luminance matrix Y;
A3, extract feature points from the luminance matrix Y of the two images in each group using N feature extraction operators;
A4, screen the feature points and discard unsuitable ones;
A5, build a grid density model and compute the optimal grid density for the feature point distribution;
A6, select the optimal feature points grid by grid;
A7, compute the homography transformation matrix registering the adjacent images;
A8, stitch the frames with the homography matrix and fuse adjacent images with a gradual-in, gradual-out image fusion algorithm to eliminate stitching seams (the fusion step is sketched after this list).
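The gradual-in, gradual-out fusion of step A8 amounts to a linear cross-fade over the overlap: the weight of the earlier image falls from 1 to 0 across the overlap columns while the weight of the newly warped image rises correspondingly, which hides the seam. The sketch below assumes a horizontal overlap whose column range (x0, x1) is already known from the registration.

```python
import numpy as np

def fade_blend(left, right, x0, x1):
    """left, right: HxWx3 images on the same canvas; (x0, x1): overlap columns."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    # outside the overlap, take whichever image has valid (non-black) pixels
    out = np.where(left.sum(axis=2, keepdims=True) > 0, left, right)
    # inside the overlap, ramp the left image's weight from 1 down to 0
    w = np.linspace(1.0, 0.0, x1 - x0, dtype=np.float32)[None, :, None]
    out[:, x0:x1] = w * left[:, x0:x1] + (1.0 - w) * right[:, x0:x1]
    return out.astype(np.uint8)
```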
Specifically, in steps S2 and S7 of this embodiment, the corresponding three-dimensional point cloud map is generated from the 360-degree panoramic visual image using the correspondence between the depth map and the RGB image of the depth camera. The conversion process is as follows: for any pixel (u, v) in the image, combining that pixel's depth value with the intrinsic parameters of the depth camera yields the corresponding three-dimensional space coordinate (x, y, z) through the conversion formulas:
x = (u - cx) · z / fx
y = (v - cy) · z / fy
d = z · s
In the above formulas, fx and fy denote the focal lengths of the camera along the x-axis and y-axis respectively, cx and cy denote the coordinates of the aperture center of the depth camera, s denotes the scaling factor of the depth map, and d denotes the depth data.
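The map growth of step S7 can then be illustrated as appending the newly derived points to the global cloud and de-duplicating on a coarse voxel grid so the map stays bounded as panoramas accumulate. The sketch assumes the new points are already registered in the map frame (via the stitching result and robot pose), and the 5 cm voxel size is an arbitrary choice.

```python
import numpy as np

def merge_clouds(global_cloud, new_points, voxel=0.05):
    """Both arguments are Nx3 arrays expressed in the same map frame."""
    merged = (np.vstack([global_cloud, new_points])
              if global_cloud.size else new_points)
    keys = np.floor(merged / voxel).astype(np.int64)      # voxel index per point
    _, keep = np.unique(keys, axis=0, return_index=True)  # one point per voxel
    return merged[np.sort(keep)]
```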
In this embodiment, when the robot starts work in an unfamiliar environment, it carries a depth camera on a platform bracket. After moving a certain distance, the robot autonomously rotates one full turn in place; the panoramic visual images collected by the depth camera are processed with the image-stitching technique to obtain a panorama, and a three-dimensional point cloud map of the working environment is derived from the depth images corresponding to the panorama, so the robot's position has a local map covering a certain field of view. The local map is sent to the move_base navigation module of the ROS system, which randomly generates a target point on it; the robot navigates to the target point automatically, stops there, and rotates one full turn to obtain the panorama of that point, which is stitched to the previously obtained imagery through feature-point matching. The map therefore keeps overlapping, growing, and extending as the robot moves. Repeating this work flow means that whenever the robot has navigated a certain distance, a known local map already exists within a certain range around it, so the navigation proceeds exactly like traditional navigation over a known environment map. This removes the need, whenever the robot is applied in an unfamiliar environment, for a technician to drive it around that environment once to obtain a whole environment map before the robot can navigate autonomously according to the map.
The above-described embodiment is merely a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto; variations based on the shape and principle of the present invention are intended to fall within the protection scope of the present invention.

Claims (8)

1. A method for realizing SLAM technology through robot whole-course autonomous exploration navigation, characterized by comprising the following steps:
S1, at its starting point in the working environment, the robot rotates one full turn in place while a depth camera collects images, and the collected images are stitched to obtain a 360-degree panoramic visual image of the robot's initial position;
S2, a three-dimensional point cloud map of the field of view at the initial position is generated from the 360-degree panoramic visual image obtained in step S1;
S3, the latest three-dimensional point cloud map is sent to the move_base navigation module in the ROS system, which automatically generates a random navigation target point from the point cloud data;
S4, the robot autonomously navigates to the target point;
S5, after reaching the target point, the robot rotates one full turn in place while the depth camera collects images, and the collected images are stitched to obtain a 360-degree panoramic visual image of the target point;
S6, the 360-degree panoramic visual image at the target point obtained in step S5 is stitched with the most recently obtained 360-degree panoramic visual image to produce an expanded 360-degree panoramic visual image;
S7, an expanded, up-to-date three-dimensional point cloud map is generated from the expanded 360-degree panoramic visual image obtained in step S6;
S8, steps S3 to S7 are repeated until the three-dimensional point cloud map of the whole working environment is complete.
2. The method for realizing SLAM technology through whole-course autonomous exploration navigation of a robot as claimed in claim 1, wherein in steps S1 and S5, after the images are collected by the depth camera, the collected images are stitched through a stitching algorithm to obtain a 360-degree panoramic visual image of the robot position.
3. The method for implementing SLAM technology through robot autonomous exploration navigation according to claim 2, wherein the stitching algorithm comprises the following steps:
A1, obtain the frames to be stitched, grouping each pair of adjacent frames that share an overlapping region;
A2, convert the RGB data of the two images in each group into YUV space (according to the image type) to obtain the luminance matrix Y;
A3, extract feature points from the luminance matrix Y of the two images in each group using N feature extraction operators;
A4, screen the feature points and discard unsuitable ones;
A5, build a grid density model and compute the optimal grid density for the feature point distribution;
A6, select the optimal feature points grid by grid;
A7, compute the homography transformation matrix registering the adjacent images;
A8, stitch the frames with the homography matrix and fuse adjacent images with a gradual-in, gradual-out image fusion algorithm to eliminate stitching seams.
4. The method of claim 1, wherein in steps S2 and S7, the 360-degree panoramic visual image is generated into a corresponding three-dimensional point cloud image by using a correspondence between a depth map of a depth camera and an RGB image.
5. The method of claim 4, wherein the 360-degree panoramic visual image is generated into a corresponding three-dimensional point cloud image by using a correspondence between a depth map of a depth camera and an RGB image, wherein a conversion process is as follows:
for any pixel (u, v) in the image, combining that pixel's depth value with the intrinsic parameters of the depth camera yields the corresponding three-dimensional space coordinate (x, y, z) through the conversion formulas:
x = (u - cx) · z / fx
y = (v - cy) · z / fy
d = z · s
In the above formulas, fx and fy denote the focal lengths of the camera along the x-axis and y-axis respectively, cx and cy denote the coordinates of the aperture center of the depth camera, s denotes the scaling factor of the depth map, and d denotes the depth data.
6. The method for realizing SLAM technology through robot whole-course autonomous exploration navigation as claimed in claim 1, wherein in step S3 the latest three-dimensional point cloud map data are sent to the move_base navigation module in the ROS system, and the move_base navigation module randomly generates a navigation target position in an obstacle-free place according to the three-dimensional point cloud map, so that the robot has a navigation target point.
7. The method for realizing SLAM technology through robot whole-course autonomous exploration navigation as claimed in claim 1, wherein in step S4, under the control of the move_base navigation module of the ROS system, the robot performs path planning and obstacle avoidance, navigating autonomously and moving smoothly to the target point.
8. The method of claim 1, wherein in step S4 the shortest path from the robot to the target point is found through the A* algorithm.
CN202110780670.XA (priority 2021-07-09, filed 2021-07-09): Method for realizing SLAM technology through robot whole-course autonomous exploration navigation. Published as CN113390409A; status: pending.

Priority Applications (1)

CN202110780670.XA (priority 2021-07-09, filed 2021-07-09): Method for realizing SLAM technology through robot whole-course autonomous exploration navigation


Publications (1)

CN113390409A (published 2021-09-14)

Family

ID=77625817

Family Applications (1)

CN202110780670.XA (priority 2021-07-09, filed 2021-07-09, status pending, published as CN113390409A): Method for realizing SLAM technology through robot whole-course autonomous exploration navigation

Country Status (1)

CN: CN113390409A

Cited By (1)

* Cited by examiner, † Cited by third party
CN117994309A* (priority 2024-04-07, published 2024-05-07, 绘见科技(深圳)有限公司): SLAM laser point cloud and panoramic image automatic registration method based on large model

Citations (9)

* Cited by examiner, † Cited by third party
CN105537824A* (priority 2016-01-27, published 2016-05-04, 华南理工大学): Automatic welding control method based on hand-eye coordination of mechanical arm
CN105676848A* (priority 2016-03-11, published 2016-06-15, 湖南人工智能科技有限公司): Robot autonomous navigation method based on ROS operating system
CN105843223A* (priority 2016-03-23, published 2016-08-10, 东南大学): Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN106940186A* (priority 2017-02-16, published 2017-07-11, 华中科技大学): Robot autonomous localization and navigation method and system
CN107590827A* (priority 2017-09-15, published 2018-01-16, 重庆邮电大学): Indoor mobile robot visual SLAM method based on Kinect
CN108073167A* (priority 2016-11-10, published 2018-05-25, 深圳灵喵机器人技术有限公司): Positioning and navigation method based on depth camera and laser radar
CN110842940A* (priority 2019-11-19, published 2020-02-28, 广东博智林机器人有限公司): Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN112857390A* (priority 2021-01-14, published 2021-05-28, 江苏智派战线智能科技有限公司): Calculation method applied to intelligent robot moving path
CN113052765A* (priority 2021-04-23, published 2021-06-29, 中国电子科技集团公司第二十八研究所): Panoramic image splicing method based on optimal grid density model




Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-09-14)