CN110533718A - Monocular-vision artificial-landmark-aided INS navigation and positioning method - Google Patents

Monocular-vision artificial-landmark-aided INS navigation and positioning method

Info

Publication number
CN110533718A
Authority
CN
China
Prior art keywords
camera
road sign
artificial landmark
image
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910722220.8A
Other languages
Chinese (zh)
Inventor
李传立
尚俊娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University
Priority to CN201910722220.8A
Publication of CN110533718A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Navigation (AREA)

Abstract

The present invention is a monocular-vision artificial-landmark-aided INS navigation and positioning method comprising the following steps: design the artificial landmarks; calibrate the scale factor; build a visual landmark library; estimate the camera position and attitude frame-to-frame based on the landmark library; aid INS positioning with the monocular-vision artificial landmarks. The advantage of the invention is that the position and attitude of the camera at the moment the current artificial-landmark image is captured are computed from the known visual landmark library, and the current camera position is fused with the INS data by a Kalman filtering algorithm. This reduces the accumulated error of a conventional INS positioning method and makes positioning more accurate, at low cost and with simple operation.

Description

Monocular-vision artificial-landmark-aided INS navigation and positioning method
Technical field
The invention belongs to the field of positioning and navigation, and in particular relates to a monocular-vision artificial-landmark-aided INS navigation and positioning method.
Background art
Traditional positioning methods mainly include satellite navigation, inertial navigation and the like. Satellite navigation and positioning is widely used and highly accurate, but it is very susceptible to the environment: indoors and in occluded areas there is no satellite signal, or the signal is too weak to be usable. An inertial navigation system is an autonomous navigation system; its advantage is that it provides continuous navigation information with good short-term positioning accuracy and stability, but because the navigation information is produced by integration, the position error grows over time and the system cannot work independently for long periods.
In recent years, visual navigation has received growing attention as a new positioning method. The concept of visual odometry (VO) was proposed by Nistér; it estimates the pose change of the carrier from the change of matched feature-point coordinates between adjacent images. Because VO estimates the camera motion by comparing a series of consecutive frames, error accumulation is unavoidable. Vision-based simultaneous localization and mapping (VSLAM) consists of a VO front end, back-end optimization, loop closing and mapping, and plays an important role in autonomous robot localization and path planning. Compared with VO, however, the computational load of VSLAM is much larger; it places very high demands on the computing capability of the data-processing terminal, and real-time localization and mapping are difficult to achieve.
Summary of the invention
The problem mainly solved by the present invention is that the position error of traditional indoor positioning methods grows with time; it proposes a monocular-vision artificial-landmark-aided INS navigation and positioning method that is low-cost, simple to operate and highly accurate.
The technical solution adopted by the present invention to solve this problem is a monocular-vision artificial-landmark-aided INS navigation and positioning method comprising the following steps:
S1: design the artificial landmarks;
S2: calibrate the scale factor;
S3: build the visual landmark library;
S4: estimate the camera position and attitude from monocular vision based on the landmark library;
S5: aid INS positioning with the monocular-vision artificial landmarks.
In the present invention, artificial landmarks are placed at known locations and a visual landmark library is built. From the relationship between the library image and the currently captured image, together with the known camera position and attitude recorded when the library image was taken, the current position and attitude of the camera are obtained. The camera position is then used to correct the INS position data, so that the INS error accumulation is reduced and positioning becomes more accurate.
As a preferred embodiment, in step S1 the artificial landmark has outer dimension R, its centre carries a QR code used as the landmark label, and the region between the QR code and the edge of the artificial landmark carries identification patterns. The identification patterns give the artificial landmark enough ORB feature points.
As a preferred embodiment, calibrating the scale factor in step S2 comprises the following steps:
S21: shoot the artificial landmark head-on with the camera from a known distance D;
S22: obtain the pixel size r of the artificial landmark;
S23: according to the pinhole imaging principle, obtain the scale factor f
f = D*r/R.
As a preferred embodiment, building the visual landmark library in step S3 comprises the following steps:
S31: arrange a number of artificial landmarks along the moving route of the camera-equipped carrier;
S32: shoot each artificial landmark head-on with the camera from a known position, and record the camera position and attitude;
S33: detect the landmark region and its pixel size in the captured image, obtain the camera-to-landmark distance from the scale factor, and fill in the non-landmark region of the captured image;
S34: store the camera position, the camera attitude, the camera-to-landmark distance and the filled image together in the visual landmark library.
As a preferred embodiment, the frame-to-frame pose estimation of the camera position and attitude based on the landmark library in step S4 comprises the following steps:
S41: obtain the camera-to-landmark distance from the pixel size of the artificial landmark in the camera image;
S42: identify the QR code of the artificial landmark in the captured image, and look up the data of the corresponding artificial landmark in the visual landmark library;
S43: fill in the non-landmark region of the captured image, and perform feature extraction and matching against the corresponding library image;
S44: from the pixel coordinates of the matched feature points and the camera-to-landmark distance, obtain the 3D point set P = {p_1, ..., p_n} of the matched feature points of the library image in the camera coordinate system, and the 3D point set P' = {p'_1, ..., p'_n} of the matched feature points of the current image in the camera coordinate system;
S45: the matched 3D points are related by a Euclidean transformation, so that
p_i = R p'_i + t;
this equation is solved to obtain the rotation matrix R and the translation vector t;
S46: combine the rotation matrix R and the translation vector t with the camera position and attitude recorded for the known library image to obtain the position and attitude of the camera when the current image was captured.
As a preferred embodiment, the 3D point sets of the matched points in step S44 are obtained as follows: a pixel coordinate system is set up on the camera image plane, and a 3D camera coordinate system is set up with the camera optical centre as origin; given the coordinates (u, v) of a feature point p' of the image in the pixel coordinate system, the coordinates (X, Y, Z) of the corresponding 3D feature point p in the camera coordinate system are obtained as follows.
Let (c_x, c_y) be the pixel coordinates of the projection of the optical centre onto the image plane, and set up an image coordinate system with this projection as origin; the coordinates of the feature point p' in the image coordinate system are (x', y'). From the relationship between the image coordinate system and the pixel coordinate system,
u = α x' + c_x, v = β y' + c_y,
and at the same time, by the pinhole projection,
x' = f_1 X/Z, y' = f_1 Y/Z.
Finally,
X = (u - c_x) Z/f_x, Y = (v - c_y) Z/f_y,
with Z given by the camera-to-landmark distance, where f_1 is the camera focal length, α f_1 = f_x, β f_1 = f_y, α is the scaling factor from the image-coordinate abscissa of p' to its pixel-coordinate abscissa, and β is the scaling factor from the image-coordinate ordinate of p' to its pixel-coordinate ordinate.
As a preferred embodiment, the rotation matrix R and the translation vector t in step S45 are obtained by singular value decomposition: compute the centroid positions p and p' of the matched points in the current image and the library image, then the de-centred coordinates of each point
q_i = p_i - p, q'_i = p'_i - p'.
Compute W = Σ q_i q'_i^T and its singular value decomposition W = U Σ V^T, which gives the rotation matrix
R = U V^T.
The translation vector t is then obtained from the rotation matrix R as
t* = p - R p'.
As a preferred embodiment, the monocular-vision artificial-landmark-aided INS positioning in S5 is as follows: take the INS dead-reckoning formula as the prediction model and establish the state equation
X_k = A X_{k-1} + w(k).
Take the monocular-vision artificial-landmark positioning as the observation model and establish the observation equation
Y_k = H X_k + v(k),
where X_k and X_{k-1} are the state vectors of the trolley at adjacent time instants, X_k = [x_k, y_k, z_k], with (x_k, y_k, z_k) the coordinates of the trolley at the current instant in the 3D navigation coordinate system, i.e. the coordinate system of the space in which the carrier moves; A is the state-transition matrix; w(k) is the system noise; Y_k is the observation, i.e. the artificial-landmark vision positioning result; H is the observation matrix; and v(k) is the observation noise. The state update equation is then
X_k = X_{k|k-1} + K_k (Y_k - H X_{k|k-1}),
where K_k is the Kalman gain, X_{k|k-1} = A X_{k-1} is the state estimate at time k obtained from the state equation, Y_k is the observation at time k obtained from the observation equation, and the updated X_k is the fused location information.
The advantage of the invention is that the position and attitude of the camera at the moment the current artificial-landmark image is captured are computed from the known visual landmark library, and the current camera position is fused with the INS data by a Kalman filtering algorithm; this reduces the accumulated error of a conventional INS positioning method and makes positioning more accurate, low-cost and simple to operate.
Description of the drawings
Fig. 1 is a flow diagram of the invention.
Fig. 2 is a structural schematic diagram of an artificial landmark of the invention.
Fig. 3 is a flow diagram of the scale-factor calibration of the invention.
Fig. 4 is a flow diagram of building the visual landmark library of the invention.
Fig. 5 is a flow diagram of the frame-to-frame pose estimation of the camera position and attitude based on the landmark library of the invention.
Fig. 6 is the camera coordinate system model of the invention.
Fig. 7 is the inter-frame motion estimation diagram of the invention.
Specific embodiment
The technical solution of the present invention is further described below with reference to an embodiment and the accompanying drawings.
Embodiment:
The monocular-vision artificial-landmark-aided INS navigation and positioning method of this embodiment, as shown in Fig. 1, comprises the following steps:
S1: design the artificial landmarks. As shown in Fig. 2, an artificial landmark is a circle of known outer radius R, so that the circular landmark is easy to detect with the Hough circle transform. A QR code carrying the landmark label is placed at the centre of the circle to identify the landmark. Various identification patterns are drawn in the circular region around the QR code so that the landmark region has enough ORB feature points for the later feature extraction and matching; different landmarks only need a different label QR code in the central region.
S2: calibrate the scale factor, as shown in Fig. 3, comprising the following steps:
S21: shoot the artificial landmark head-on with the camera from a known distance D;
S22: detect the pixel radius of the circular landmark in the captured image with the Hough circle transform, obtaining the pixel radius r of the artificial landmark, whose real radius R is known;
S23: according to the pinhole imaging principle,
R/r = D/f,
so the scale factor f is obtained as
f = D*r/R.
With this scale factor, when the detected pixel radius of an artificial landmark is r', the camera-to-landmark distance is
D = f*R/r'.
S3: build the visual landmark library, as shown in Fig. 4, comprising the following steps:
S31: mount the artificial landmarks on the walls along the moving route of the camera-equipped trolley, at the same height above the ground as the camera;
S32: shoot each artificial landmark head-on with the camera from a known position, and record the camera position and attitude;
S33: detect the landmark region and its pixel radius in the captured image with the Hough circle transform, obtain the camera-to-landmark distance from the scale factor of S23, and fill the non-landmark region of the captured image with black;
S34: store the camera position, the camera attitude, the camera-to-landmark distance and the filled image together in the visual landmark library.
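A minimal sketch of one library entry for steps S32-S34, assuming the circle has already been detected as above; the dictionary layout and field names are illustrative choices, not prescribed by the patent.

```python
import cv2
import numpy as np

landmark_library = {}    # landmark id -> library entry

def add_library_entry(landmark_id, image, circle, cam_position, cam_attitude,
                      f_scale, real_radius):
    """S32-S34: record pose, distance and the black-filled landmark image in the library."""
    x, y, r = circle                                     # pixel centre and radius from the Hough detection
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (int(x), int(y)), int(r), 255, -1)  # keep only the circular landmark region
    filled = cv2.bitwise_and(image, image, mask=mask)    # non-landmark pixels become black
    distance = f_scale * real_radius / r                 # D = f * R / r from the scale factor
    landmark_library[landmark_id] = {
        "position": np.asarray(cam_position, dtype=float),  # camera position at capture time
        "attitude": np.asarray(cam_attitude, dtype=float),  # camera attitude at capture time
        "distance": float(distance),
        "image": filled,
    }
```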
S4: estimate the camera position and attitude frame-to-frame based on the landmark library, as shown in Fig. 5, comprising the following steps:
S41: obtain the camera-to-landmark distance from the pixel size of the artificial landmark in the camera image;
S42: identify the QR code of the artificial landmark in the captured image, and look up the data of the corresponding artificial landmark in the visual landmark library;
S43: fill the non-landmark region of the captured image with black, and perform ORB feature extraction and matching against the corresponding library image.
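A minimal sketch of the ORB extraction and matching of step S43, assuming OpenCV's ORB implementation and brute-force Hamming matching with cross-checking; the feature count and the match cap are assumed values.

```python
import cv2

def match_orb(current_filled, library_image, max_matches=100):
    """S43: ORB feature extraction on both black-filled images and brute-force Hamming matching."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_lib, des_lib = orb.detectAndCompute(library_image, None)
    kp_cur, des_cur = orb.detectAndCompute(current_filled, None)
    if des_lib is None or des_cur is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_lib, des_cur), key=lambda m: m.distance)[:max_matches]
    # pairs of (library-image pixel coordinate, current-image pixel coordinate)
    return [(kp_lib[m.queryIdx].pt, kp_cur[m.trainIdx].pt) for m in matches]
```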
S44: from the pixel coordinates of the matched feature points and the camera-to-landmark distance, obtain the 3D point set P = {p_1, ..., p_n} of the matched feature points of the library image in the camera coordinate system and the 3D point set P' = {p'_1, ..., p'_n} of the matched feature points of the current image in the camera coordinate system. The 3D point sets are obtained as follows: as shown in Fig. 6, a pixel coordinate system o''-uv is set up on the camera image plane and a 3D camera coordinate system o-xyz is set up with the camera optical centre as origin; given the coordinates (u, v) of a feature point p' of the image in the pixel coordinate system, the coordinates (X, Y, Z) of the corresponding 3D feature point p in the camera coordinate system are obtained as follows.
Let (c_x, c_y) be the pixel coordinates of the projection of the optical centre onto the image plane, and set up an image coordinate system o'-x'y' with this projection as origin; the coordinates of the feature point p' in the image coordinate system are (x', y'). From the relationship between the image coordinate system and the pixel coordinate system,
u = α x' + c_x, v = β y' + c_y,
and at the same time, by the pinhole projection,
x' = f_1 X/Z, y' = f_1 Y/Z.
Finally,
X = (u - c_x) Z/f_x, Y = (v - c_y) Z/f_y,
with Z given by the camera-to-landmark distance, where f_1 is the camera focal length, α f_1 = f_x, β f_1 = f_y, α is the scaling factor from the image-coordinate abscissa of p' to its pixel-coordinate abscissa, and β is the scaling factor from the image-coordinate ordinate of p' to its pixel-coordinate ordinate.
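A minimal sketch of this back-projection, assuming the intrinsics f_x, f_y, c_x, c_y are known and that, as in the patent, the depth Z of every matched point is approximated by the camera-to-landmark distance; the function and variable names are illustrative.

```python
import numpy as np

def backproject(pixel_points, distance, fx, fy, cx, cy):
    """Pinhole back-projection of pixel coordinates (u, v) into the camera frame.

    Uses X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, with Z set to the
    camera-to-landmark distance obtained from the scale factor.
    """
    points = []
    for u, v in pixel_points:
        Z = distance
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
        points.append([X, Y, Z])
    return np.asarray(points)        # shape (n, 3), one row per matched feature point
```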
S45: the matched 3D points are related by a Euclidean transformation, so that
p_i = R p'_i + t.
This equation is solved by singular value decomposition to obtain the rotation matrix R and the translation vector t, as follows:
compute the centroid positions p and p' of the matched points in the current image and the library image, then the de-centred coordinates of each point
q_i = p_i - p, q'_i = p'_i - p'.
Compute W = Σ q_i q'_i^T and its singular value decomposition W = U Σ V^T, which gives the rotation matrix
R = U V^T.
The translation vector t is then obtained from the rotation matrix R as
t* = p - R p'.
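A minimal sketch of the SVD solution of p_i = R p'_i + t for the two matched point sets, written with NumPy; the reflection guard is a standard safeguard added here, not a step stated in the patent.

```python
import numpy as np

def solve_rt(P, P_prime):
    """Solve p_i = R p'_i + t in the least-squares sense for matched (n, 3) point sets."""
    p_bar = P.mean(axis=0)                    # centroid of the library-image points
    p_bar_prime = P_prime.mean(axis=0)        # centroid of the current-image points
    Q = P - p_bar                             # de-centred coordinates q_i
    Q_prime = P_prime - p_bar_prime           # de-centred coordinates q'_i
    W = Q.T @ Q_prime                         # W = sum_i q_i q'_i^T
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt
    if np.linalg.det(R) < 0:                  # reflection guard: force a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    t = p_bar - R @ p_bar_prime               # t* = p_bar - R p_bar'
    return R, t
```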
S46: as shown in Fig. 7, the rotation matrix R and the translation vector t obtained by the singular value decomposition (SVD) method are combined with the camera position and attitude recorded for the known library image to obtain the position and attitude of the camera when the current image was captured.
S5: aid INS positioning with the monocular-vision artificial landmarks. The INS/odometer is used as the primary navigation means of the trolley, and the positioning based on the visual artificial landmarks is used as the aid to correct the errors of the INS/odometer position data. Because the attitude solved from vision is more accurate, the heading angle obtained from vision is used directly to correct the accumulated heading error of the inertial navigation, and the position information is then fused with a Kalman filter: take the INS dead-reckoning formula as the prediction model and establish the state equation
X_k = A X_{k-1} + w(k).
Take the monocular-vision artificial-landmark positioning as the observation model and establish the observation equation
Y_k = H X_k + v(k),
where X_k and X_{k-1} are the state vectors of the trolley at adjacent time instants, X_k = [x_k, y_k, z_k], with (x_k, y_k, z_k) the coordinates of the trolley at the current instant in the 3D navigation coordinate system, i.e. the coordinate system of the space in which the trolley moves; A is the state-transition matrix; w(k) is the system noise; Y_k is the observation, i.e. the artificial-landmark vision positioning result; H is the observation matrix; and v(k) is the observation noise.
The state update equation is obtained as
X_k = X_{k|k-1} + K_k (Y_k - H X_{k|k-1}),
where K_k is the Kalman gain, X_{k|k-1} = A X_{k-1} is the state estimate at time k obtained from the state equation, Y_k is the observation at time k obtained from the observation equation, and the updated X_k is the fused location information.
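A minimal sketch of the loosely coupled fusion, assuming a three-dimensional position state, that the INS/odometer dead-reckoning displacement enters the prediction as an additive increment, and placeholder noise covariances; none of these numerical choices come from the patent.

```python
import numpy as np

class VisionInsFilter:
    """Kalman fusion: INS/odometer dead reckoning predicts, the vision landmark fix corrects."""

    def __init__(self, x0, p0=1.0, q=0.01, r=0.05):
        self.x = np.asarray(x0, dtype=float)   # state X_k = [x_k, y_k, z_k]
        self.P = p0 * np.eye(3)
        self.A = np.eye(3)                     # state-transition matrix
        self.H = np.eye(3)                     # the vision fix observes the position directly
        self.Q = q * np.eye(3)                 # system noise covariance (placeholder)
        self.R = r * np.eye(3)                 # observation noise covariance (placeholder)

    def predict(self, ins_delta):
        """Prediction step: X_{k|k-1} = A X_{k-1} plus the dead-reckoning displacement increment."""
        self.x = self.A @ self.x + np.asarray(ins_delta, dtype=float)
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x

    def update(self, vision_position):
        """Update step: X_k = X_{k|k-1} + K_k (Y_k - H X_{k|k-1})."""
        innovation = np.asarray(vision_position, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x                          # fused location information
```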
The specific embodiment described herein merely illustrates the spirit of the invention. Those skilled in the art to which the invention belongs may make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the invention or going beyond the scope defined by the appended claims.

Claims (8)

1. A monocular-vision artificial-landmark-aided INS navigation and positioning method, characterized by comprising the following steps:
S1: design the artificial landmarks;
S2: calibrate the scale factor;
S3: build the visual landmark library;
S4: estimate the camera position and attitude frame-to-frame based on the landmark library;
S5: aid INS positioning with the monocular-vision artificial landmarks.
2. The monocular-vision artificial-landmark-aided INS navigation and positioning method according to claim 1, characterized in that: in step S1 the artificial landmark has outer dimension R, its centre carries a QR code used as the landmark label, and the region between the QR code and the edge of the artificial landmark carries identification patterns.
3. The monocular-vision artificial-landmark-aided INS navigation and positioning method according to claim 1, characterized in that: calibrating the scale factor in step S2 comprises the following steps:
S21: shoot the artificial landmark head-on with the camera from a known distance D;
S22: obtain the pixel size r of the artificial landmark;
S23: according to the pinhole imaging principle, obtain the scale factor f
f = D*r/R.
4. The monocular-vision artificial-landmark-aided INS navigation and positioning method according to claim 1, characterized in that: building the visual landmark library in step S3 comprises the following steps:
S31: arrange a number of artificial landmarks along the moving route of the camera-equipped carrier;
S32: shoot each artificial landmark head-on with the camera from a known position, and record the camera position and attitude;
S33: detect the landmark region and its pixel size in the captured image, obtain the camera-to-landmark distance from the scale factor, and fill in the non-landmark region of the captured image;
S34: store the camera position, the camera attitude, the camera-to-landmark distance and the filled image together in the visual landmark library.
5. The monocular-vision artificial-landmark-aided INS navigation and positioning method according to claim 1, characterized in that: the frame-to-frame pose estimation of the camera position and attitude based on the landmark library in step S4 comprises the following steps:
S41: obtain the camera-to-landmark distance from the pixel size of the artificial landmark in the camera image;
S42: identify the QR code of the artificial landmark in the captured image, and look up the data of the corresponding artificial landmark in the visual landmark library;
S43: fill in the non-landmark region of the captured image, and perform feature extraction and matching against the corresponding library image;
S44: from the pixel coordinates of the matched feature points and the camera-to-landmark distance, obtain the 3D point set P = {p_1, ..., p_n} of the matched feature points of the library image in the camera coordinate system and the 3D point set P' = {p'_1, ..., p'_n} of the matched feature points of the current image in the camera coordinate system;
S45: the matched 3D points are related by a Euclidean transformation, so that
p_i = R p'_i + t;
this equation is solved to obtain the rotation matrix R and the translation vector t;
S46: combine the rotation matrix R and the translation vector t with the camera position and attitude recorded for the known library image to obtain the position and attitude of the camera when the current image was captured.
6. The monocular-vision artificial-landmark-aided INS navigation and positioning method according to claim 5, characterized in that: the 3D point sets of the matched points in step S44 are obtained as follows: a pixel coordinate system is set up on the camera image plane and a 3D camera coordinate system is set up with the camera optical centre as origin; given the coordinates (u, v) of a feature point p' of the image in the pixel coordinate system, the coordinates (X, Y, Z) of the corresponding 3D feature point p in the camera coordinate system are obtained from
u = α x' + c_x, v = β y' + c_y and x' = f_1 X/Z, y' = f_1 Y/Z,
giving
X = (u - c_x) Z/f_x, Y = (v - c_y) Z/f_y,
where (c_x, c_y) are the pixel coordinates of the projection of the optical centre onto the image plane, the image coordinate system has this projection as origin, (x', y') are the coordinates of p' in the image coordinate system, f_1 is the camera focal length, α f_1 = f_x, β f_1 = f_y, α is the scaling factor from the image-coordinate abscissa of p' to its pixel-coordinate abscissa, and β is the scaling factor from the image-coordinate ordinate of p' to its pixel-coordinate ordinate.
7. The monocular-vision artificial-landmark-aided INS navigation and positioning method according to claim 5, characterized in that: the rotation matrix R and the translation vector t in step S45 are obtained by singular value decomposition: compute the centroid positions p and p' of the matched points in the current image and the library image, then the de-centred coordinates of each point
q_i = p_i - p, q'_i = p'_i - p';
compute W = Σ q_i q'_i^T and its singular value decomposition W = U Σ V^T, which gives the rotation matrix R = U V^T; the translation vector t is then obtained from the rotation matrix R as
t* = p - R p'.
8. The monocular-vision artificial-landmark-aided INS navigation and positioning method according to claim 1, characterized in that: the monocular-vision artificial-landmark-aided INS positioning in S5 is as follows: take the INS dead-reckoning formula as the prediction model and establish the state equation
X_k = A X_{k-1} + w(k);
take the monocular-vision artificial-landmark positioning as the observation model and establish the observation equation
Y_k = H X_k + v(k),
where X_k and X_{k-1} are the state vectors of the trolley at adjacent time instants, X_k = [x_k, y_k, z_k], with (x_k, y_k, z_k) the coordinates of the trolley at the current instant in the 3D navigation coordinate system, i.e. the coordinate system of the space in which the carrier moves; A is the state-transition matrix; w(k) is the system noise; Y_k is the observation, i.e. the artificial-landmark vision positioning result; H is the observation matrix; v(k) is the observation noise; the state update equation is obtained as
X_k = X_{k|k-1} + K_k (Y_k - H X_{k|k-1}),
where K_k is the Kalman gain, X_{k|k-1} = A X_{k-1} is the state estimate at time k obtained from the state equation, Y_k is the observation at time k obtained from the observation equation, and the updated X_k is the fused location information.
CN201910722220.8A 2019-08-06 2019-08-06 Monocular-vision artificial-landmark-aided INS navigation and positioning method Pending CN110533718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910722220.8A CN110533718A (en) 2019-08-06 2019-08-06 Monocular-vision artificial-landmark-aided INS navigation and positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910722220.8A CN110533718A (en) 2019-08-06 2019-08-06 Monocular-vision artificial-landmark-aided INS navigation and positioning method

Publications (1)

Publication Number Publication Date
CN110533718A (en) 2019-12-03

Family

ID=68661438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910722220.8A Pending CN110533718A (en) 2019-08-06 2019-08-06 Monocular-vision artificial-landmark-aided INS navigation and positioning method

Country Status (1)

Country Link
CN (1) CN110533718A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739092A (en) * 2020-06-12 2020-10-02 广东博智林机器人有限公司 Hanging basket, detection robot, detection control system and detection method
CN111951262A (en) * 2020-08-25 2020-11-17 杭州易现先进科技有限公司 Method, device and system for correcting VIO error and electronic device
CN111964673A (en) * 2020-08-25 2020-11-20 一汽解放汽车有限公司 Unmanned vehicle positioning system
CN112945227A (en) * 2021-02-01 2021-06-11 北京嘀嘀无限科技发展有限公司 Positioning method and device
CN113340313A (en) * 2020-02-18 2021-09-03 北京四维图新科技股份有限公司 Navigation map parameter determination method and device
CN113403942A (en) * 2021-07-07 2021-09-17 西北工业大学 Label-assisted bridge detection unmanned aerial vehicle visual navigation method
CN113566809A (en) * 2021-06-29 2021-10-29 陕西省引汉济渭工程建设有限公司 Artificial road sign assisted water delivery tunnel detection robot navigation positioning device and implementation method
CN114415221A (en) * 2021-12-29 2022-04-29 中国科学院沈阳自动化研究所 Underwater robot polar region under-ice vision aided navigation method based on image code road sign

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
CN108827250A (en) * 2018-05-07 2018-11-16 深圳市三宝创新智能有限公司 A kind of robot monocular vision ranging technology method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
CN108827250A (en) * 2018-05-07 2018-11-16 深圳市三宝创新智能有限公司 A kind of robot monocular vision ranging technology method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
么娆: "Real-Time Image Processing and Navigation-Positioning Technology for Ultrasound-Guided Robot Systems", 31 October 2013 *
陈明芽 et al.: "Mobile robot localization method aided by monocular-vision natural landmarks", Journal of Zhejiang University *
黄露 et al.: "Mobile robot navigation method aided by monocular-vision artificial landmarks", Computer Systems & Applications *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113340313A (en) * 2020-02-18 2021-09-03 北京四维图新科技股份有限公司 Navigation map parameter determination method and device
CN113340313B (en) * 2020-02-18 2024-04-16 北京四维图新科技股份有限公司 Navigation map parameter determining method and device
CN111739092A (en) * 2020-06-12 2020-10-02 广东博智林机器人有限公司 Hanging basket, detection robot, detection control system and detection method
CN111951262A (en) * 2020-08-25 2020-11-17 杭州易现先进科技有限公司 Method, device and system for correcting VIO error and electronic device
CN111964673A (en) * 2020-08-25 2020-11-20 一汽解放汽车有限公司 Unmanned vehicle positioning system
CN111951262B (en) * 2020-08-25 2024-03-12 杭州易现先进科技有限公司 VIO error correction method, device, system and electronic device
CN112945227A (en) * 2021-02-01 2021-06-11 北京嘀嘀无限科技发展有限公司 Positioning method and device
CN113566809A (en) * 2021-06-29 2021-10-29 陕西省引汉济渭工程建设有限公司 Artificial road sign assisted water delivery tunnel detection robot navigation positioning device and implementation method
CN113403942A (en) * 2021-07-07 2021-09-17 西北工业大学 Label-assisted bridge detection unmanned aerial vehicle visual navigation method
CN114415221A (en) * 2021-12-29 2022-04-29 中国科学院沈阳自动化研究所 Underwater robot polar region under-ice vision aided navigation method based on image code road sign

Similar Documents

Publication Publication Date Title
CN110533718A (en) Monocular-vision artificial-landmark-aided INS navigation and positioning method
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN108253963B (en) Robot active disturbance rejection positioning method and positioning system based on multi-sensor fusion
CN109631887B (en) Inertial navigation high-precision positioning method based on binocular, acceleration and gyroscope
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
CN104732518B (en) A kind of PTAM improved methods based on intelligent robot terrain surface specifications
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
Nistér et al. Visual odometry for ground vehicle applications
CN109544636A (en) A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN109676604A (en) Robot non-plane motion localization method and its motion locating system
CN109540126A (en) A kind of inertia visual combination air navigation aid based on optical flow method
CN108090958A (en) A kind of robot synchronously positions and map constructing method and system
CN108759833A (en) A kind of intelligent vehicle localization method based on priori map
CN108226938A (en) A kind of alignment system and method for AGV trolleies
CN113706626B (en) Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
CN110865650B (en) Unmanned aerial vehicle pose self-adaptive estimation method based on active vision
CN106295512B (en) Vision data base construction method and indoor orientation method in more correction lines room based on mark
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN111784775B (en) Identification-assisted visual inertia augmented reality registration method
CN109443348A (en) It is a kind of based on the underground garage warehouse compartment tracking for looking around vision and inertial navigation fusion
CN108549376A (en) A kind of navigation locating method and system based on beacon
CN108481327A (en) A kind of positioning device, localization method and the robot of enhancing vision
CN110108269A (en) AGV localization method based on Fusion
CN113763548B (en) Vision-laser radar coupling-based lean texture tunnel modeling method and system
CN108519102A (en) A kind of binocular vision speedometer calculation method based on reprojection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191203