CN108052103B - Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer - Google Patents

Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer

Info

Publication number
CN108052103B
Authority
CN
China
Prior art keywords
depth
inspection robot
underground space
plane
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711334617.7A
Other languages
Chinese (zh)
Other versions
CN108052103A (en)
Inventor
朱华
陈常
李振亚
汪雷
杨汪庆
李鹏
赵勇
由韶泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology (CUMT)
Priority to CN201711334617.7A
Publication of CN108052103A
Application granted
Publication of CN108052103B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method for underground space simultaneous positioning and map construction of an inspection robot based on a depth inertia odometer comprises: loosely coupling a depth camera and an inertial measurement unit, acquiring point cloud information from the depth map collected by the depth camera, and extracting plane features; converting the RGB image collected by the depth camera into a gray-level image, fusing the plane features, and optimizing with an iterative closest point algorithm; and loosely coupling the data after the iterative closest point optimization with the inertial measurement unit data, and using loop detection to improve the accuracy of the pose graph, so as to obtain the running trajectory of the inspection robot, a point cloud map and a tree skip-list map, thereby achieving simultaneous positioning and map construction of the inspection robot indoors. The method improves the positioning precision and robustness of the inspection robot in the underground space and achieves positioning and map construction of the inspection robot in the underground space. When the inspection robot works in the underground space, the method adopted by the invention has good robustness under strong rotation.

Description

Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer
Technical Field
The invention relates to the field of simultaneous positioning of inspection robots, and in particular to a method for underground space simultaneous positioning and map construction of an inspection robot based on a depth inertia odometer.
Background
With the progress of science and technology, inspection robots are increasingly applied in fields such as industry and the military. In many cases, the information about an inspection robot's working space is complex and unknown. For the inspection robot to realize functions such as indoor autonomous navigation, target identification and automatic obstacle avoidance, accurate simultaneous positioning is particularly important. Traditional simultaneous positioning methods mostly rely on global satellite systems such as GPS and BeiDou, but an ordinary GPS sensor has low positioning precision and cannot meet the requirement for precise simultaneous positioning of the inspection robot.
Although differential GPS achieves high positioning accuracy outdoors, it is expensive and cannot work in GPS-denied environments such as tunnels, roadways and basements. Such underground spaces receive no sunlight all year round and are poorly illuminated. In terms of visual positioning, using a single camera alone currently yields low positioning accuracy and cannot provide effective positioning for the inspection robot.
With the development of computer vision and image processing technologies, machine vision methods that navigate by perceiving the environment have been widely applied to real-time robot positioning. The principle of visual simultaneous positioning is that a camera mounted on the robot body collects images in real time during motion, relevant information is extracted from the images, the pose and trajectory of the robot are then estimated, and navigation and real-time positioning are finally achieved. However, a vision sensor is easily affected by lighting, and simultaneous positioning is easily lost under strong exposure, low brightness and similar conditions. In addition, a monocular vision sensor alone provides no scale information and cannot perceive the depth of the robot's surroundings, and features are lost when the robot turns in place, which easily causes real-time positioning to fail.
Inertial measurement units were applied to inspection robot positioning relatively early. Inertial positioning calculates the six-degree-of-freedom pose of the carrier from the linear acceleration and rotation angular rate measured by the inertial measurement unit. The angular rate of the carrier, measured by the gyroscope, is mainly used to compute the rotation matrix of the robot and provides the transformation between the carrier coordinate system and the navigation coordinate system; the linear acceleration of the carrier, measured by the accelerometer, is integrated to obtain the velocity and displacement of the robot, and positioning is finally completed by transforming the six-degree-of-freedom information of the robot into the navigation coordinate system. However, an inertial measurement unit alone accumulates large errors on repeated paths and cannot perform effective loop detection. In addition, owing to properties such as random walk of the inertial measurement unit, large lag errors arise when the inspection robot starts or when its acceleration changes sharply.
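The dead-reckoning integration described above can be illustrated with a minimal sketch (function and variable names are illustrative, and the sketch ignores sensor bias, noise and Earth rotation, which are precisely what make the accumulated error grow):

import numpy as np

def so3_exp(omega, dt):
    """Incremental rotation from body angular rate omega (rad/s) over dt, via Rodrigues' formula."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def imu_dead_reckoning(gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """Integrate body-frame angular rates and specific forces into a navigation-frame trajectory."""
    R = np.eye(3)                       # body-to-navigation rotation
    v = np.zeros(3)                     # velocity in navigation frame
    p = np.zeros(3)                     # position in navigation frame
    trajectory = [p.copy()]
    for w, a in zip(gyro, accel):
        R = R @ so3_exp(w, dt)          # attitude update from the gyroscope
        a_nav = R @ a + g               # rotate measured specific force to navigation frame, add gravity
        v = v + a_nav * dt              # first integration: velocity
        p = p + v * dt                  # second integration: displacement
        trajectory.append(p.copy())
    return np.array(trajectory)

Because the position comes from double integration of the acceleration, any bias or random-walk error grows rapidly with time, which is the drift described above.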
Consumer depth cameras, represented by the ASUS Xtion and the Microsoft Kinect, can acquire RGB images and depth maps and are widely used in the field of indoor robots. However, the field of view of a depth camera is generally narrow, so the algorithm's tracking target is easily lost; meanwhile, the depth data contain a great deal of noise, and some of the data are even unusable. Conventional visual feature extraction algorithms are mostly based on pixel differences, but in the depth data measured by a depth camera, points located at corners are not easily recognized. Moreover, under large rotations, a mobile robot relying on a single depth camera for positioning easily loses tracking.
Simultaneous localization and mapping (SLAM) originated in the field of robotics. Although methods using a single sensor require little computation, their positioning accuracy is low and their robustness is weak. SLAM methods fusing multiple sensors have become the mainstream of development, yet an effective SLAM method fusing a depth camera and an inertial measurement unit is still lacking.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a method for underground space simultaneous positioning and map construction of an inspection robot based on a depth inertia odometer. The method improves the positioning precision and robustness of the inspection robot in the underground space, thereby achieving simultaneous positioning and map construction of the inspection robot in the underground space. When the inspection robot works in the underground space, the method adopted by the invention has good robustness under strong rotation.
To achieve this purpose, the invention adopts the following technical scheme:
a method for simultaneously positioning and mapping underground space of an inspection robot based on a depth inertia odometer comprises the following steps:
loosely coupling a depth camera and an inertia measurement unit, acquiring point cloud information through a depth map acquired by the depth camera, and extracting plane features; converting an RGB image collected by a depth camera into a gray level image and fusing plane features, and optimizing by using an iterative closest point algorithm; and loosely coupling the data after the iterative closest point optimization and the inertial measurement unit data, and improving the accuracy of the pose graph by using loop detection to obtain the running track of the inspection robot, a point cloud map and a tree jump table map so as to achieve the effect of simultaneously positioning and map building the inspection robot indoors.
Preferably, the depth camera collects two adjacent frames as a scene S and a model M, and the two sets of matched points are denoted P = {p_i | i = 1, ..., n} and Q = {q_i | i = 1, ..., n},
where p_i and q_i both represent three-dimensional space points, each parameterized by its coordinate vector (x, y, z)^T.
Preferably, the depth camera model is:
z · (u, v, 1)^T = C · (x, y, z)^T,  d = s·z
where (u, v) is the pixel position corresponding to the spatial point (x, y, z)^T, d is the depth value, s is the scaling factor of the depth data, and C is the camera intrinsic parameter matrix.
Preferably, the movement of M to S is described by a rotation R and a translation t, solved using an iterative closest point algorithm:
(R, t) = argmin_{R,t} Σ_{i=1}^{n} || p_i - (R·q_i + t) ||^2
preferably, plane features are extracted from the point cloud obtained from the depth map, describing the plane of the three-dimensional space with four parameters:
p=(a,b,c,d)={x,y,z|ax+by+cz+d=0};
Taking d equal to 0, each fitted plane is projected onto the imaging plane to obtain the imaging positions (u, v) of the plane points, which are solved using the projection equation:
u = fx·x/z + cx,  v = fy·y/z + cy,  d = z·s
where fx, fy, cx, cy are the intrinsic parameters of the depth camera and s is the scaling factor of the depth data.
Preferably, gray-level histogram normalization is performed once on each plane image to enhance its contrast, and then feature points are extracted and the depths of the feature points are calculated:
z = d/s,  x = (u - cx)·z/fx,  y = (v - cy)·z/fy
preferably, too many key frames will cause extra computation for the back-end and loop detection, while too few key frames will cause too much motion between key frames and insufficient feature matching, resulting in easy loss. After extracting the plane of the image, calculating that the relative motion between the plane of the image and the previous key frame exceeds a certain threshold, the image is considered as a new key frame.
Preferably, the threshold is calculated by evaluating the translation and Euler-angle rotation: the relative translation (Δx, Δy, Δz), weighted component-wise by w1, is combined with the relative Euler-angle rotation (α, β, γ), weighted by w2, and the result is compared against the threshold;
w1 = (m, m, m), m ∈ (0.6, 0.7); w2 ∈ (0.95, 1.05).
the invention has the beneficial effects that:
by the method, the positioning precision and the robustness of the inspection robot during operation in the underground space are improved, and the effects of simultaneous positioning and map construction of the inspection robot in the underground space environment are achieved. When the inspection robot works in the underground space, the method adopted by the invention has good robustness under the strong rotation environment.
Drawings
FIG. 1 is a schematic diagram of a process framework of the present invention;
FIG. 2 is a schematic diagram of a grayscale representation of the RGB image captured by the depth camera of the present invention;
FIG. 3 is a schematic view of a depth camera of the present invention acquiring a depth image;
FIG. 4 is a three-dimensional point cloud map of the environment constructed according to the present invention;
FIG. 5 is a three-dimensional tree skip-list map of the environment constructed according to the present invention;
FIG. 6 is the running trajectory of the robot according to the present invention.
Detailed Description
The invention is further described by the following specific embodiments with reference to the attached drawings.
As shown in FIGS. 1 to 6, a method for underground space simultaneous positioning and map construction of an inspection robot based on a depth inertia odometer comprises the following steps: the depth camera and the inertial measurement unit are loosely coupled, point cloud information is obtained from the depth map collected by the depth camera, and plane features are extracted. The RGB image collected by the depth camera is converted to gray scale and fused with the plane features, and the result is optimized with the Iterative Closest Point (ICP) algorithm; the ICP-optimized data are then loosely coupled with the Inertial Measurement Unit (IMU) data, and loop closure detection is used to improve the accuracy of the pose graph, yielding the running trajectory of the inspection robot, a point cloud map and a tree skip-list map.
The depth camera collects two adjacent frames as a scene S and a model M, and the two sets of matched points are denoted P = {p_i | i = 1, ..., n} and Q = {q_i | i = 1, ..., n}, where p_i and q_i both represent three-dimensional space points, each parameterized by its coordinate vector (x, y, z)^T.
The depth camera model is:
z · (u, v, 1)^T = C · (x, y, z)^T,  d = s·z
where (u, v) is the pixel position corresponding to the spatial point (x, y, z)^T, d is the depth value, s is the scaling factor of the depth data, and C is the camera intrinsic parameter matrix.
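A minimal sketch of this camera model follows; the intrinsic values and the depth scaling factor s = 1000 are placeholder assumptions (real values come from calibration and the specific sensor), and depth_to_point_cloud shows how the same model back-projects a depth map into the point cloud used for plane extraction.

import numpy as np

# Hypothetical intrinsics; real values come from camera calibration.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
s = 1000.0                                    # depth scaling factor (raw depth stored as s * z)
C = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point):
    """Project a 3D point (x, y, z) to its pixel position (u, v) and depth value d."""
    x, y, z = point
    u = fx * x / z + cx
    v = fy * y / z + cy
    d = s * z
    return u, v, d

def depth_to_point_cloud(depth):
    """Back-project a depth map (H x W array of raw depth values) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth / s                             # metric depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return cloud[cloud[:, 2] > 0]             # drop pixels with no depth measurement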
The motion from M to S is described by the rotation R and translation t, and is solved by the ICP algorithm:
(R, t) = argmin_{R,t} Σ_{i=1}^{n} || p_i - (R·q_i + t) ||^2
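For already matched point sets P and Q, one standard way to solve this minimisation is the closed-form SVD step sketched below; this is a single alignment step (a full ICP loop would re-establish nearest-neighbour correspondences and iterate), and the function name is illustrative.

import numpy as np

def align_svd(P, Q):
    """Solve argmin_{R,t} sum_i || p_i - (R q_i + t) ||^2 for matched N x 3 arrays P and Q."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_mean, Q - q_mean                       # centre both point sets
    H = Qc.T @ Pc                                         # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])    # guard against reflections
    R = Vt.T @ D @ U.T
    t = p_mean - R @ q_mean
    return R, t

If the estimate is correct, R·q_i + t reproduces p_i up to measurement noise, which is exactly the motion from M to S described above.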
extracting plane features from a point cloud obtained from a depth map, and describing a plane of a three-dimensional space with four parameters:
p=(a,b,c,d)={x,y,z|ax+by+cz+d=0}
Taking d equal to 0, each fitted plane is projected onto the imaging plane to obtain the imaging positions (u, v) of the plane points, which are solved using the projection equation:
u = fx·x/z + cx,  v = fy·y/z + cy,  d = z·s
where fx, fy, cx, cy are the intrinsic parameters of the depth camera and s is the scaling factor of the depth data.
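One plausible realisation of this plane-extraction step is greedy RANSAC fitting on the point cloud, as sketched below; cloud is an N x 3 array such as the one produced by the back-projection sketch above, and the distance threshold, inlier count and iteration numbers are arbitrary assumptions, not values from the patent.

import numpy as np

def fit_plane(points):
    """Least-squares plane (a, b, c, d) with unit normal through an N x 3 point set."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                              # direction of smallest variance
    d = -normal @ centroid
    return np.append(normal, d)                  # satisfies ax + by + cz + d = 0

def extract_planes(cloud, dist_thresh=0.02, min_inliers=500, iters=200, max_planes=4):
    """Greedy RANSAC: repeatedly find the dominant plane and remove its inliers."""
    rng = np.random.default_rng(0)
    planes = []
    remaining = cloud.copy()
    for _ in range(max_planes):
        if len(remaining) < min_inliers:
            break
        best_inliers = None
        for _ in range(iters):
            sample = remaining[rng.choice(len(remaining), 3, replace=False)]
            plane = fit_plane(sample)
            dist = np.abs(remaining @ plane[:3] + plane[3])
            inliers = dist < dist_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers.sum() < min_inliers:
            break
        planes.append(fit_plane(remaining[best_inliers]))   # refit on all inliers
        remaining = remaining[~best_inliers]
    return planes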
Gray-level histogram normalization is performed once on each plane image to enhance its contrast, and then feature points are extracted and their depths are calculated:
z = d/s,  x = (u - cx)·z/fx,  y = (v - cy)·z/fy
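This contrast-enhancement and feature step might be realised with OpenCV as in the following sketch; histogram equalization stands in for the gray-level histogram normalization, ORB is only one possible feature detector, and the function assumes an 8-bit gray plane image that is pixel-aligned with the depth map.

import cv2
import numpy as np

def plane_features_with_depth(gray_plane_img, depth, fx, fy, cx, cy, s):
    """Equalize contrast, detect feature points, and back-project them to 3D via the depth map."""
    equalized = cv2.equalizeHist(gray_plane_img)           # gray-level histogram normalization
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(equalized, None)
    points3d = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        d = depth[v, u]
        if d <= 0:                                          # no depth measurement at this pixel
            continue
        z = d / s                                           # depth of the feature point
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points3d.append((x, y, z))
    return keypoints, descriptors, np.array(points3d)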
camera pose in three-dimensional space, expressed in translation and unit quaternion: x ═ x, y, z, qx,qy,qz,qw};
The set of planes extracted from the frame is P = {P_i}, and each plane comprises its plane parameters and the corresponding feature points.
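The per-frame state described here (pose as translation plus unit quaternion, together with the extracted planes and their feature points) could be carried in a small structure such as the following sketch; the class and field names are illustrative, and the quaternion-to-rotation conversion is the standard unit-quaternion formula.

from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Plane:
    params: np.ndarray          # (a, b, c, d) with ax + by + cz + d = 0
    feature_points: np.ndarray  # N x 3 feature points lying on the plane

@dataclass
class KeyFrame:
    # Pose X = {x, y, z, qx, qy, qz, qw}: translation plus unit quaternion.
    translation: np.ndarray
    quaternion: np.ndarray      # (qx, qy, qz, qw), normalized
    planes: List[Plane] = field(default_factory=list)

    def rotation_matrix(self):
        """Rotation matrix corresponding to the unit quaternion (qx, qy, qz, qw)."""
        qx, qy, qz, qw = self.quaternion / np.linalg.norm(self.quaternion)
        return np.array([
            [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
            [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
            [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
        ])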
Too many key frames bring extra computation to the back end and to loop detection, while too few key frames lead to excessive motion between key frames and too few feature matches, so tracking is easily lost. After the planes of an image have been extracted, if the relative motion computed with respect to the previous key frame exceeds a certain threshold, the image is taken as a new key frame. The threshold is calculated by evaluating the translation and Euler-angle rotation: the relative translation (Δx, Δy, Δz), weighted component-wise by w1, is combined with the relative Euler-angle rotation (α, β, γ), weighted by w2, and the result is compared against the threshold, with w1 = (m, m, m), m ∈ (0.6, 0.7), and w2 ∈ (0.95, 1.05).
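The key-frame test could then look like the sketch below. This is only one plausible form of the criterion: the weighted-sum combination and the threshold value are assumptions, and only the ranges of w1 and w2 come from the text above.

import numpy as np

def is_new_keyframe(delta_t, delta_euler, w1=(0.65, 0.65, 0.65), w2=1.0, threshold=0.3):
    """Decide whether the relative motion since the last key frame exceeds the threshold.

    delta_t:     relative translation (dx, dy, dz) in metres
    delta_euler: relative Euler angles (alpha, beta, gamma) in radians
    w1, w2:      weights; the text gives w1 = (m, m, m) with m in (0.6, 0.7)
                 and w2 in (0.95, 1.05); the threshold value here is arbitrary.
    """
    score = np.linalg.norm(np.asarray(w1) * np.asarray(delta_t)) \
          + w2 * np.linalg.norm(np.asarray(delta_euler))
    return score > threshold

A new key frame is created whenever this test returns True, which keeps the back end and loop detection from being flooded while preventing tracking loss between widely spaced key frames.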
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention in any way. A person skilled in the art may make modifications or changes to the technical content disclosed above to obtain equivalent embodiments. However, any simple modification, equivalent change or variation of the above embodiments made in accordance with the technical essence of the present invention, without departing from it, still falls within the protection scope of the present invention.

Claims (6)

1. A method for simultaneously positioning and mapping underground space of an inspection robot based on a depth inertia odometer is characterized by comprising the following steps:
loosely coupling a depth camera and an inertia measurement unit, acquiring point cloud information through a depth map acquired by the depth camera, and extracting plane features;
converting an RGB image collected by a depth camera into a gray level image and fusing plane features, and optimizing by using an iterative closest point algorithm;
loosely coupling the data after the iterative closest point optimization and the inertial measurement unit data, and improving the accuracy of a pose graph by using loop detection to obtain a running trajectory of the inspection robot, a point cloud map and a tree skip-list map, so as to achieve simultaneous positioning and mapping of the inspection robot in an underground space;
the depth camera collects two adjacent frames as a scene S and a model M, and two groups of matching points are respectively recorded as P ═ Pi1,. n } and Q ═ Q ·i|i=1,...,n};
Wherein p isiAnd q isiAll represent three-dimensional space points and can be parameterized
Figure FDA0002630736360000013
The motion from M to S is described by rotation R and translation t, and the iterative closest point algorithm is used for solving:
(R, t) = argmin_{R,t} Σ_{i=1}^{n} || p_i - (R·q_i + t) ||^2.
2. The inspection robot underground space simultaneous localization and mapping method based on the depth inertia odometer according to claim 1, characterized in that:
the depth camera model is:
z · (u, v, 1)^T = C · (x, y, z)^T,  d = s·z
where (u, v) is the pixel position corresponding to the spatial point (x, y, z)^T, d is the depth value, s is the scaling factor of the depth data, and C is the camera intrinsic parameter matrix.
3. The inspection robot underground space simultaneous localization and mapping method based on the depth inertia odometer according to claim 1, characterized in that:
extracting plane features from a point cloud obtained from a depth map, and describing a plane of a three-dimensional space with four parameters:
p=(a,b,c,d)={x,y,z|ax+by+cz+d=0};
taking d equal to 0, each fitted plane is projected onto the imaging plane to obtain the imaging positions (u, v) of the plane points, which are solved using the projection equation:
u = fx·x/z + cx
v = fy·y/z + cy
d = z·s
where fx, fy, cx, cy are the intrinsic parameters of the depth camera and s is the scaling factor of the depth data.
4. The inspection robot underground space simultaneous localization and mapping method based on the depth inertia odometer according to claim 3, characterized in that:
performing gray level histogram normalization on each plane graph once to enhance the contrast ratio of each plane graph, then extracting feature points and calculating the depths of the feature points:
z = d/s,  x = (u - cx)·z/fx,  y = (v - cy)·z/fy
5. The inspection robot underground space simultaneous localization and mapping method based on the depth inertia odometer according to claim 1, characterized in that:
too many key frames bring extra computation to the back end and loop detection, while too few key frames cause excessive motion between key frames and too few feature matches, so tracking is easily lost; after the planes of an image have been extracted, if the relative motion computed with respect to the previous key frame exceeds a certain threshold, the image is taken as a new key frame.
6. The inspection robot underground space simultaneous localization and mapping method based on the depth inertia odometer according to claim 5, characterized in that:
the threshold is calculated by evaluating the translation and Euler-angle rotation: the relative translation (Δx, Δy, Δz), weighted component-wise by w1, is combined with the relative Euler-angle rotation (α, β, γ), weighted by w2, and the result is compared against the threshold;
w1 = (m, m, m), m ∈ (0.6, 0.7); w2 ∈ (0.95, 1.05).
CN201711334617.7A 2017-12-13 2017-12-13 Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer Active CN108052103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711334617.7A CN108052103B (en) 2017-12-13 2017-12-13 Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711334617.7A CN108052103B (en) 2017-12-13 2017-12-13 Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer

Publications (2)

Publication Number Publication Date
CN108052103A CN108052103A (en) 2018-05-18
CN108052103B (en) 2020-12-04

Family

ID=62132123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711334617.7A Active CN108052103B (en) 2017-12-13 2017-12-13 Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer

Country Status (1)

Country Link
CN (1) CN108052103B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776487A (en) * 2018-08-22 2018-11-09 China University of Mining and Technology A rail-mounted mine inspection robot and its positioning method
US11747825B2 (en) * 2018-10-12 2023-09-05 Boston Dynamics, Inc. Autonomous map traversal with waypoint matching
CN110322511B (en) * 2019-06-28 2021-03-26 华中科技大学 Semantic SLAM method and system based on object and plane features
US11268816B2 (en) 2019-08-06 2022-03-08 Boston Dynamics, Inc. Intermediate waypoint generator
CN110722559A (en) * 2019-10-25 2020-01-24 国网山东省电力公司信息通信公司 Auxiliary inspection positioning method for intelligent inspection robot
CN112258568B (en) * 2020-10-12 2022-07-01 武汉中海庭数据技术有限公司 High-precision map element extraction method and device
CN112697131A (en) * 2020-12-17 2021-04-23 中国矿业大学 Underground mobile equipment positioning method and system based on vision and inertial navigation system
CN114720978A (en) * 2021-01-06 2022-07-08 扬智科技股份有限公司 Method and mobile platform for simultaneous localization and mapping
CN113378694B (en) * 2021-06-08 2023-04-07 北京百度网讯科技有限公司 Method and device for generating target detection and positioning system and target detection and positioning
CN113486854A (en) * 2021-07-29 2021-10-08 北京超维世纪科技有限公司 Recognition detection algorithm for realizing industrial inspection robot based on depth camera
CN114429432B (en) * 2022-04-07 2022-06-21 科大天工智能装备技术(天津)有限公司 Multi-source information layered fusion method and device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104220895B * 2012-05-01 2017-03-01 Intel Corporation Simultaneous localization and mapping using spatial and temporal coherence for indoor location
CN103411621B * 2013-08-09 2016-02-10 Southeast University An optical-flow-field vision/INS integrated navigation method for indoor mobile robots
US9759918B2 * 2014-05-01 2017-09-12 Microsoft Technology Licensing, LLC 3D mapping with flexible camera rig
CN107085422A * 2017-01-04 2017-08-22 Beihang University A remote control system for a multi-functional hexapod robot based on Xtion equipment
CN107063246A * 2017-04-24 2017-08-18 Qilu University of Technology A loosely coupled navigation method combining visual navigation and inertial navigation
CN107160395B (en) * 2017-06-07 2020-10-16 中国人民解放军装甲兵工程学院 Map construction method and robot control system

Also Published As

Publication number Publication date
CN108052103A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN108052103B (en) Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer
CN108717712B (en) Visual inertial navigation SLAM method based on ground plane hypothesis
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
US9888235B2 (en) Image processing method, particularly used in a vision-based localization of a device
JP5832341B2 (en) Movie processing apparatus, movie processing method, and movie processing program
CN114018236B (en) Laser vision strong coupling SLAM method based on self-adaptive factor graph
CN110827353B (en) Robot positioning method based on monocular camera assistance
CN110388919B (en) Three-dimensional model positioning method based on feature map and inertial measurement in augmented reality
Momeni-k et al. Height estimation from a single camera view
CN112179373A (en) Measuring method of visual odometer and visual odometer
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
Horanyi et al. Generalized pose estimation from line correspondences with known vertical direction
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN115147344A (en) Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
Irmisch et al. Simulation framework for a visual-inertial navigation system
CN111862146B (en) Target object positioning method and device
CN112767482B (en) Indoor and outdoor positioning method and system with multi-sensor fusion
Lee et al. Visual odometry for absolute position estimation using template matching on known environment
Hernández et al. Visual SLAM with oriented landmarks and partial odometry
Deng et al. Robust 3D-SLAM with tight RGB-D-inertial fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant