CN112461228A - IMU and vision-based secondary loop detection positioning method in similar environment - Google Patents

Info

Publication number
CN112461228A
CN112461228A (application CN202011206955.4A)
Authority
CN
China
Prior art keywords
imu
image
loop
binocular camera
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011206955.4A
Other languages
Chinese (zh)
Other versions
CN112461228B (en)
Inventor
吕婧
邹霞
岳定春
应旻
涂良辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Hangkong University
Original Assignee
Nanchang Hangkong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Hangkong University filed Critical Nanchang Hangkong University
Priority to CN202011206955.4A priority Critical patent/CN112461228B/en
Publication of CN112461228A publication Critical patent/CN112461228A/en
Application granted granted Critical
Publication of CN112461228B publication Critical patent/CN112461228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a secondary loop detection positioning method based on an IMU and vision in similar environments, which comprises the following steps: step one, calibrating and synchronizing the parameters of a binocular camera and an IMU (inertial measurement unit); step two, extracting image features and matching images; step three, estimating the pose and forming the movement track; step four, loop detection; step five, a secondary loop detection mechanism; and step six, relocation. The method applies a coarse comparison constraint to the pose of the current image frame using IMU pose information: the current position direction of the IMU is compared with the position direction of the preliminary closed-loop image as a direction-consistency pre-judgment. This solves the problem that, when surrounding scenes are similar, image similarity alone may indicate a closed loop even though the current actual IMU position shows the two places cannot be the same, which would otherwise produce a false loop. The direction pre-judgment of the IMU thus prevents positioning errors caused by similar images in similar environments, and the updated high-precision pose after relocation is used to correct the accumulated drift error of the IMU, improving the robustness of the secondary loop detection positioning method.

Description

IMU and vision-based secondary loop detection positioning method in similar environment
Technical Field
The invention relates to loop detection positioning methods, and in particular to a secondary loop detection positioning method based on an IMU and vision in similar environments.
Background
Unmanned aerial vehicles (UAVs) are developing towards intelligent aircraft and intelligent systems, and the intelligence level of a UAV is determined by its autonomous positioning and navigation capability. Fields such as military reconnaissance, power-line inspection, geological prospecting and forest fire prevention all place urgent demands on the autonomous positioning ability of UAVs. In particular, with the appearance of satellite-navigation jamming and spoofing techniques, UAVs are required to possess fully autonomous positioning and navigation capability in large-scale scenes, free of any reliance on satellite navigation signals. SLAM (Simultaneous Localization and Mapping) combining vision and inertia (an IMU) provides new technical support and an effective way to realize autonomous positioning and navigation without GPS signals. Through the complementarity of the two different sensors, the IMU performs error correction and compensation on the visual positioning information, improving the real-time positioning capability of the UAV; however, the IMU itself also accumulates drift error after operating for a long time, and if no effective countermeasure is adopted, positioning will still fail eventually.
Meanwhile, in combined visual-IMU positioning, the existing back-end loop detection positioning methods optimize the poses of the earlier related image frames by judging whether the current position is an environment region visited before. This judgment is prone to error, because a large number of similar environments exist in practice, such as clusters of buildings with similar appearance, undulating dunes and dense forests. In the absence of GPS, the large number of similar images in such environments makes closed-loop detection difficult to decide: an actual loop may be judged not to be a loop, and a non-loop may be judged to be a loop, producing false loops and positioning failure. Therefore, in similar environments without GPS, a key problem to be solved urgently is how to prevent positioning failure caused by false loops arising from large numbers of similar environment images.
Disclosure of Invention
The invention aims to provide a secondary loop detection positioning method based on an IMU and vision in similar environments, which can correct the visual accumulated error and the IMU accumulated drift error and close loops correctly in similar environments without GPS.
In order to achieve the purpose, the invention provides the following technical scheme:
a secondary loop detection positioning method based on IMU and vision in similar environment comprises the following steps:
firstly, calibrating and synchronizing parameters of a binocular camera and an IMU:
unifying the timestamps of the binocular camera and the IMU (inertial measurement unit); calibrating the parameters of the binocular camera and the IMU, including the camera start-up delay, the IMU start-up delay, the accelerometer zero bias and the gyroscope zero bias (zero bias: the non-zero initial value output when the input is zero, i.e. when the IMU is stationary); synchronizing the clocks of the binocular camera and the IMU; and reading the images acquired by the binocular camera together with the IMU pose information;
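The timestamp unification of step one can be sketched as follows: shift each sensor's clock by its calibrated start-up delay onto a common timeline, then linearly interpolate the high-rate IMU stream at each camera frame time. This is a minimal illustrative sketch, not the patent's implementation; the function names, the choice of yaw as the interpolated quantity, and the delay handling are assumptions.

```python
from bisect import bisect_left

def interpolate_imu_pose(imu_times, imu_yaws, t):
    """Linearly interpolate the IMU yaw (an example pose component) at time t."""
    i = bisect_left(imu_times, t)
    if i == 0:
        return imu_yaws[0]
    if i >= len(imu_times):
        return imu_yaws[-1]
    t0, t1 = imu_times[i - 1], imu_times[i]
    w = (t - t0) / (t1 - t0)
    return (1 - w) * imu_yaws[i - 1] + w * imu_yaws[i]

def sync_frames(cam_times, cam_delay, imu_times, imu_delay, imu_yaws):
    """Shift both sensor clocks by their calibrated start-up delays onto a
    common timeline, then sample the IMU pose at each corrected frame time."""
    imu_t = [t - imu_delay for t in imu_times]
    return [interpolate_imu_pose(imu_t, imu_yaws, t - cam_delay)
            for t in cam_times]
```

A frame stamped 1.5 s with a 0.5 s camera start-up delay is thus paired with the IMU pose at 1.0 s on the common timeline.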
step two, image feature extraction and image matching:
detecting corner points in the images acquired by the binocular camera as feature points, describing the image patches around each corner to form feature descriptors, matching the descriptors of two consecutive images using the Hamming distance, and screening the matched feature point pairs to remove mismatches;
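The Hamming-distance matching and mismatch screening of step two can be illustrated with a small pure-Python sketch in which each binary descriptor is stored as an integer. The screening rule follows claim 2 (keep a pair only if its distance is below twice the minimum distance over all matches); the floor of one on the minimum distance is an added guard for exact matches, not from the patent.

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(desc1, desc2):
    """Brute-force Hamming matching: for each descriptor in desc1 take the
    nearest neighbour in desc2, then keep only pairs whose distance is less
    than twice the minimum distance found (with a floor of 1 as a guard)."""
    raw = []
    for i, d1 in enumerate(desc1):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(desc2)),
                      key=lambda x: x[1])
        raw.append((i, j, dist))
    min_dist = min(d for _, _, d in raw)
    return [(i, j) for i, j, d in raw if d < 2 * max(min_dist, 1)]
```

With real 256-bit ORB-style descriptors the same logic applies; only the integer width changes.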
thirdly, pose estimation and movement track formation:
using the 3D information of the matched feature point pairs, calculating by the least-squares method the rotation matrix R and translation vector t that minimize the sum of squared errors of the matched pairs, thereby obtaining the pose of the unmanned aerial vehicle and the track of the moving map points;
when the camera moves too fast, the image blurs and the visual positioning information is lost; in this case the current pose of the binocular camera is obtained by integrating the IMU pose information under the unified timestamp, and the associated map points are updated;
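The least-squares R, t of step three has a standard closed-form solution via SVD (the Kabsch/Umeyama alignment of two 3D point sets). The NumPy sketch below is one way to realize that minimization; it is not necessarily the solver used in the patent.

```python
import numpy as np

def estimate_pose_3d3d(p_cur, p_prev):
    """Closed-form least-squares fit of R, t minimising
    sum_i || p_cur_i - (R @ p_prev_i + t) ||^2 (Kabsch/SVD method)."""
    p_cur, p_prev = np.asarray(p_cur, float), np.asarray(p_prev, float)
    c1, c2 = p_cur.mean(axis=0), p_prev.mean(axis=0)
    q1, q2 = p_cur - c1, p_prev - c2          # centre both point sets
    H = q2.T @ q1                              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c1 - R @ c2
    return R, t
```

For a pure translation the solver should recover the identity rotation and the offset between the two point clouds.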
step four, loop detection:
judging whether the platform has returned to a previously visited position from the similarity of two images, so as to establish the loop detection relation; measuring the similarity between image frames by constructing a tree-structured clustering database to judge whether the positions coincide and a loop closes; clustering the feature points and feature descriptors extracted during image matching into a tree-branch (vocabulary) structure so as to screen image features quickly, output candidate images and reduce the similarity-judgment time; and calculating the class-vector similarity between the current image and non-connected but adjacent keyframes that share common feature classes with it, to judge whether a loop exists;
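For the class-vector similarity of step four, bag-of-words systems commonly score two normalized vectors with an L1 difference (1 means identical, 0 means disjoint). The sketch below assumes that standard score and a hypothetical acceptance threshold; the patent's exact formula is given only as a figure.

```python
def bow_similarity(v1, v2):
    """Standard bag-of-words score s = 1 - 0.5 * | v1/|v1| - v2/|v2| |_1,
    computed on two feature-class vectors of equal length."""
    n1, n2 = sum(map(abs, v1)), sum(map(abs, v2))
    diff = sum(abs(a / n1 - b / n2) for a, b in zip(v1, v2))
    return 1.0 - 0.5 * diff

def is_loop_candidate(v_cur, keyframe_vectors, threshold=0.6):
    """Return indices of keyframes whose class-vector similarity with the
    current image exceeds an illustrative threshold (0.6 is hypothetical)."""
    return [k for k, v in enumerate(keyframe_vectors)
            if bow_similarity(v_cur, v) >= threshold]
```

In a full system the candidates would first be narrowed by walking the vocabulary tree rather than scoring every keyframe.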
step five, a secondary loop detection mechanism:
after the closed loop is confirmed in the previous step, the loop is not closed immediately; instead, a coarse comparison constraint is applied to the pose of the current image frame using the IMU pose information, i.e. the current position direction of the IMU is compared with the position direction of the preliminary closed-loop image as a direction-consistency pre-judgment, confirming the correctness of the loop a second time and forming a secondary detection mechanism;
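The direction-consistency pre-judgment of step five amounts to a cheap gate on the visual loop candidate: compare the IMU's current position and heading with those recorded at the candidate closed-loop image, and reject the loop if they disagree. The distance and yaw thresholds below are illustrative assumptions, not values from the patent.

```python
import math

def direction_consistent(pos_imu, pos_loop, yaw_imu, yaw_loop,
                         max_dist=5.0, max_yaw_deg=30.0):
    """Secondary loop check: accept the visual loop candidate only if the
    IMU's current position and heading roughly agree with those stored at
    the candidate closed-loop image (thresholds are hypothetical)."""
    dist = math.dist(pos_imu, pos_loop)
    # wrap the yaw difference into [-180, 180] before taking its magnitude
    dyaw = abs((yaw_imu - yaw_loop + 180.0) % 360.0 - 180.0)
    return dist <= max_dist and dyaw <= max_yaw_deg
```

A visually convincing match taken 100 m away from the stored loop position is thus rejected, which is exactly the false-loop case the patent targets.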
step six, repositioning:
according to the error result after loop confirmation, distributing the loop error uniformly over all key image frames by global nonlinear optimization, so as to optimize and update the poses and associated map points of all key image frames in the world coordinate system and obtain the relocated pose; at the same time, performing differential processing on the updated current map points to correct the accumulated drift error of the current IMU.
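The uniform distribution of loop error over keyframes in step six can be approximated, for illustration only, by a linear correction along the keyframe chain: frame k of n absorbs a fraction k/n of the measured closure error. The patent's global nonlinear optimization would replace this simple stand-in.

```python
def distribute_loop_error(keyframe_positions, loop_error):
    """Spread the measured loop-closure error uniformly along the keyframe
    chain (a linear stand-in for global nonlinear optimisation): frame k of
    n receives a fraction k/n of the total correction vector."""
    n = len(keyframe_positions) - 1
    corrected = []
    for k, pos in enumerate(keyframe_positions):
        w = k / n if n else 0.0
        corrected.append(tuple(p - w * e for p, e in zip(pos, loop_error)))
    return corrected
```

The first keyframe (the loop anchor) is left untouched, while the last absorbs the full correction, which is the qualitative behaviour a pose-graph relaxation would also show.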
Preferably, the calculation formula of the least square method in the third step is as follows:
$$\min_{R,\,t}\ \frac{1}{2}\sum_{i=1}^{n}\left\|p_i-\left(R\,p_i'+t\right)\right\|^2$$
where $R$ is the rotation matrix, $t$ is the translation vector, $p_i$ is a feature point in the current image, and $p_i'$ is the feature point in the previous image matched with $p_i$.
Preferably, the calculation formula of the similarity of the class vectors in the fourth step is as follows:
$$s(v_1,v_2)=1-\frac{1}{2}\left\|\frac{v_1}{\|v_1\|}-\frac{v_2}{\|v_2\|}\right\|_1$$
where $v_1$ and $v_2$ are the feature class vectors of the two images.
The invention has the beneficial effects that:
1. The visual image acquisition frequency is low while the IMU acquisition frequency is high; after the timestamps are synchronized, in the initial stage when the IMU has not yet accumulated a large drift error, the IMU provides more accurate positioning information than the visual images, and the IMU information can be used to correct the visual accumulated error;
2. By combining the IMU, when the camera moves too fast and image blurring causes the visual positioning information to be lost, the current pose information is obtained by integrating the IMU pose information under the synchronized timestamps and the associated map points are updated, which solves the positioning failure caused by visual image failure;
3. The secondary loop detection positioning method provided by the invention applies a coarse comparison constraint to the pose of the current image frame using IMU pose information, i.e. the current position direction of the IMU is compared with the position direction of the preliminary closed-loop image as a direction-consistency pre-judgment. This solves the problem that similar surrounding scenes, for example buildings of similar appearance, may be judged to be a closed loop from image similarity alone even though the current actual IMU position shows the two places cannot be the same, which would produce a false loop; that is, the direction pre-judgment of the IMU prevents positioning errors caused by similar images in similar environments;
4. The updated high-precision pose after relocation is used to correct the accumulated drift error of the IMU, improving the robustness of the secondary loop detection positioning method based on the IMU and vision.
Drawings
Fig. 1 is a flowchart of a secondary loop detection positioning method based on IMU and vision in a similar environment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention: a secondary loop detection positioning method based on IMU and vision in similar environment comprises the following steps:
firstly, calibrating and synchronizing parameters of a binocular camera and an IMU:
unifying the timestamps of the binocular camera and the IMU (inertial measurement unit); calibrating the parameters of the binocular camera and the IMU, including the camera start-up delay, the IMU start-up delay, the accelerometer zero bias and the gyroscope zero bias (zero bias: the non-zero initial value output when the input is zero, i.e. when the IMU is stationary); synchronizing the clocks of the binocular camera and the IMU; and reading the images acquired by the binocular camera together with the IMU pose information;
step two, image feature extraction and image matching:
detecting corner points in the image as feature points, describing the image patches around each corner to form feature descriptors, matching the descriptors of two consecutive images using the Hamming distance, and screening the matched feature point pairs to remove mismatches;
thirdly, pose estimation and movement track formation:
using the 3D information of the matched feature point pairs, calculating by the least-squares method the rotation matrix R and translation vector t that minimize the sum of squared errors of the matched pairs, thereby obtaining the pose of the binocular camera and the track of the moving map points;
when the camera moves too fast, the image blurs and the visual positioning information is lost; in this case the current pose of the binocular camera is obtained by integrating the IMU pose information under the unified timestamp, and the associated map points are updated;
step four, loop detection:
judging whether the platform has returned to a previously visited position from the similarity of two images, so as to establish the loop detection relation; measuring the similarity between image frames by constructing a tree-structured clustering database to judge whether the positions coincide and a loop closes; clustering the feature points and feature descriptors extracted during image matching into a tree-branch (vocabulary) structure so as to screen image features quickly, output candidate images and reduce the similarity-judgment time; and calculating the similarity between the current image and non-connected but adjacent keyframes that share common feature classes with it, to judge whether a loop exists;
step five, a secondary loop detection mechanism:
after the closed loop is confirmed in the previous step, the loop is not closed immediately; instead, a coarse comparison constraint is applied to the pose of the current image frame using the IMU pose information, i.e. the current position direction of the IMU is compared with the position direction of the preliminary closed-loop image as a direction-consistency pre-judgment, confirming the correctness of the loop a second time and forming a secondary detection mechanism;
step six, repositioning:
according to the error result after loop confirmation, distributing the loop error uniformly over all key image frames by global nonlinear optimization, so as to optimize and update the poses and associated map points of all key image frames in the world coordinate system and obtain the relocated pose; at the same time, performing differential processing on the updated current map points to correct the accumulated drift error of the current IMU.
In this embodiment, the calculation formula of the least square method in step three is as follows:
$$\min_{R,\,t}\ \frac{1}{2}\sum_{i=1}^{n}\left\|p_i-\left(R\,p_i'+t\right)\right\|^2$$
where $R$ is the rotation matrix, $t$ is the translation vector, $p_i$ is a feature point in the current image, and $p_i'$ is the feature point in the previous image matched with $p_i$.
In this embodiment, the formula for calculating the similarity between the class vectors in the fourth step is as follows:
$$s(v_1,v_2)=1-\frac{1}{2}\left\|\frac{v_1}{\|v_1\|}-\frac{v_2}{\|v_2\|}\right\|_1$$
where $v_1$ and $v_2$ are the feature class vectors of the two images.
In summary, embodiments of the present invention provide a secondary loop detection positioning method based on an IMU and vision in similar environments. The method applies a coarse comparison constraint to the pose of the current image frame using IMU pose information, that is, it compares the current position direction of the IMU with the position direction of the preliminary closed-loop image as a direction-consistency pre-judgment. This prevents the situation in which similar surrounding scenes, such as buildings of similar appearance, are judged to be a closed loop from image similarity alone even though the current actual IMU position shows the two places cannot be the same; in other words, the direction pre-judgment of the IMU prevents positioning errors caused by similar images in similar environments. Moreover, the updated high-precision pose after relocation is used to correct the accumulated drift error of the IMU, improving the robustness of the secondary loop detection positioning method based on the IMU and vision.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (4)

1. A secondary loop detection positioning method based on IMU and vision under similar environment is characterized by comprising the following steps:
firstly, calibrating and synchronizing parameters of a binocular camera and an IMU:
unifying the timestamps of the binocular camera and the IMU (inertial measurement unit), calibrating the parameters of the binocular camera and the IMU, including the camera start-up delay, the IMU start-up delay, the accelerometer zero bias and the gyroscope zero bias, synchronizing the clocks of the binocular camera and the IMU, and reading the images acquired by the binocular camera together with the IMU pose information;
step two, image feature extraction and image matching:
detecting corner points in the images acquired by the binocular camera as feature points, describing the image patches around each corner to form feature descriptors, matching the descriptors of two consecutive images using the Hamming distance, and screening the matched feature point pairs to remove mismatches;
thirdly, pose estimation and movement track formation:
using the 3D information of the matched feature point pairs, calculating by the least-squares method the rotation matrix R and translation vector t that minimize the sum of squared errors of the matched pairs, thereby obtaining the pose of the binocular camera and the track of the moving map points;
when the camera moves too fast, the image blurs and the visual positioning information is lost; in this case the current pose of the binocular camera is obtained by integrating the IMU pose information under the unified timestamp, and the associated map points are updated;
step four, loop detection:
judging whether the platform has returned to a previously visited position from the similarity of two images, so as to establish the loop detection relation; measuring the similarity between image frames by constructing a tree-structured clustering database to judge whether the positions coincide and a loop closes; clustering the feature points and feature descriptors extracted during image matching into a tree-branch (vocabulary) structure so as to screen image features quickly, output candidate images and reduce the similarity-judgment time; and calculating the class-vector similarity between the current image and non-connected but adjacent keyframes that share common feature classes with it, to judge whether a loop exists;
step five, a secondary loop detection mechanism:
after the closed loop is confirmed in the previous step, the loop is not closed immediately; instead, a coarse comparison constraint is applied to the pose of the current image frame using the IMU pose information, i.e. the current position direction of the IMU is compared with the position direction of the preliminary closed-loop image as a direction-consistency pre-judgment, confirming the correctness of the loop a second time and forming a secondary detection mechanism;
step six, repositioning:
according to the error result after loop confirmation, distributing the loop error uniformly over all key image frames by global nonlinear optimization, so as to optimize and update the poses and associated map points of all key image frames in the world coordinate system and obtain the relocated pose; at the same time, performing differential processing on the updated current map points to correct the accumulated drift error of the current IMU.
2. The IMU and vision-based secondary loop detection positioning method according to claim 1, wherein the condition for selecting a feature matching point pair in step two is that the Hamming distance between the descriptors is less than twice the minimum distance over all matches.
3. The IMU and vision-based secondary loop detection and localization method according to claim 1, wherein the formula of the least square method in the third step is:
$$\min_{R,\,t}\ \frac{1}{2}\sum_{i=1}^{n}\left\|p_i-\left(R\,p_i'+t\right)\right\|^2$$
where $R$ is the rotation matrix, $t$ is the translation vector, $p_i$ is a feature point in the current image, and $p_i'$ is the feature point in the previous image matched with $p_i$.
4. The method according to claim 1, wherein the formula for calculating the similarity of the class vectors in the fourth step is as follows:
$$s(v_1,v_2)=1-\frac{1}{2}\left\|\frac{v_1}{\|v_1\|}-\frac{v_2}{\|v_2\|}\right\|_1$$
where $v_1$ and $v_2$ are the feature class vectors of the two images.
CN202011206955.4A 2020-11-03 2020-11-03 IMU and vision-based secondary loop detection positioning method in similar environment Active CN112461228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011206955.4A CN112461228B (en) 2020-11-03 2020-11-03 IMU and vision-based secondary loop detection positioning method in similar environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011206955.4A CN112461228B (en) 2020-11-03 2020-11-03 IMU and vision-based secondary loop detection positioning method in similar environment

Publications (2)

Publication Number Publication Date
CN112461228A true CN112461228A (en) 2021-03-09
CN112461228B CN112461228B (en) 2023-05-09

Family

ID=74834896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011206955.4A Active CN112461228B (en) 2020-11-03 2020-11-03 IMU and vision-based secondary loop detection positioning method in similar environment

Country Status (1)

Country Link
CN (1) CN112461228B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506342A (en) * 2021-06-08 2021-10-15 北京理工大学 SLAM omnidirectional loop correction method based on multi-camera panoramic vision
US20220236069A1 (en) * 2021-09-30 2022-07-28 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus for route navigation, electronic device, computer readable medium
CN115631319A (en) * 2022-11-02 2023-01-20 北京科技大学 Loopback detection method based on cross attention network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110044354A (en) * 2019-03-28 2019-07-23 东南大学 A kind of binocular vision indoor positioning and build drawing method and device
CN110533722A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A kind of the robot fast relocation method and system of view-based access control model dictionary
CN110986968A (en) * 2019-10-12 2020-04-10 清华大学 Method and device for real-time global optimization and error loop judgment in three-dimensional reconstruction
CN111060101A (en) * 2018-10-16 2020-04-24 深圳市优必选科技有限公司 Vision-assisted distance SLAM method and device and robot
CN111462231A (en) * 2020-03-11 2020-07-28 华南理工大学 Positioning method based on RGBD sensor and IMU sensor
CN111693047A (en) * 2020-05-08 2020-09-22 中国航空工业集团公司西安航空计算技术研究所 Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN111767905A (en) * 2020-09-01 2020-10-13 南京晓庄学院 Improved image method based on landmark-convolution characteristics

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU X et al.: "Optimized LOAM using ground plane constraints and SegMatch-based loop detection", Sensors *
YU Wei: "Research on SLAM technology based on monocular vision and an inertial measurement unit", China Master's Theses Full-text Database, Information Science and Technology *
YU Yu et al.: "Loop-closure detection method for visual SLAM based on deep learning", Computer Engineering and Design *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506342A (en) * 2021-06-08 2021-10-15 北京理工大学 SLAM omnidirectional loop correction method based on multi-camera panoramic vision
CN113506342B (en) * 2021-06-08 2024-01-02 北京理工大学 SLAM omni-directional loop correction method based on multi-camera panoramic vision
US20220236069A1 (en) * 2021-09-30 2022-07-28 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus for route navigation, electronic device, computer readable medium
CN115631319A (en) * 2022-11-02 2023-01-20 北京科技大学 Loopback detection method based on cross attention network

Also Published As

Publication number Publication date
CN112461228B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN111561923B (en) SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN112461228B (en) IMU and vision-based secondary loop detection positioning method in similar environment
Sola et al. Fusing monocular information in multicamera SLAM
CN105678754B (en) A kind of unmanned plane real-time map method for reconstructing
WO2017164479A1 (en) A device and method for determining a pose of a camera
CN110125928A (en) A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames
Wei et al. GPS and Stereovision‐Based Visual Odometry: Application to Urban Scene Mapping and Intelligent Vehicle Localization
CN105869136A (en) Collaborative visual SLAM method based on multiple cameras
AU2013343222A1 (en) Cloud feature detection
CN108051836A (en) A kind of localization method, device, server and system
US20120218409A1 (en) Methods and apparatus for automated assignment of geodetic coordinates to pixels of images of aerial video
CN110794828A (en) Road sign positioning method fusing semantic information
CN103411587A (en) Positioning and attitude-determining method and system
CN112556719A (en) Visual inertial odometer implementation method based on CNN-EKF
Li et al. Fast vision‐based autonomous detection of moving cooperative target for unmanned aerial vehicle landing
JP2002532770A (en) Method and system for determining a camera pose in relation to an image
Hallquist et al. Single view pose estimation of mobile devices in urban environments
Stow et al. Evaluation of geometric elements of repeat station imaging and registration
CN116007609A (en) Positioning method and computing system for fusion of multispectral image and inertial navigation
Andert et al. On the safe navigation problem for unmanned aircraft: Visual odometry and alignment optimizations for UAV positioning
Sheikh et al. Geodetic alignment of aerial video frames
Fong et al. Computer vision centric hybrid tracking for augmented reality in outdoor urban environments
PERLANT et al. Scene registration in aerial image analysis
CN114037968A (en) Lane line detection method based on depth radar point cloud and image data fusion
Jama et al. Parallel tracking and mapping for controlling vtol airframe

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant