CN104121902A - Implementation method of indoor robot visual odometer based on Xtion camera - Google Patents
- Publication number
- CN104121902A (application CN201410301943.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
Abstract
The invention relates to an implementation method of an indoor robot visual odometer based on an Xtion camera. The method comprises the following steps: first, the Xtion camera installed on the robot acquires information from the field of view in front of the robot, obtaining the RGB (Red, Green and Blue) information and three-dimensional coordinate information of spatial points; second, coarse feature point matching is performed on the acquired sequential images using an SIFT (Scale Invariant Feature Transform) feature matching algorithm combined with the PFH (Point Feature Histogram) three-dimensional features of the spatial points, and wrongly matched points from the coarse matching are rejected with RANSAC (Random Sample Consensus) to obtain accurately matched points; finally, a system of equations is established and the motion parameters of the robot are solved by the least squares method. Because the Xtion camera acquires the information, the three-dimensional information of spatial points is obtained directly, and feature extraction and matching exploit both the texture features and the three-dimensional features of the spatial points, so the efficiency and accuracy of robot positioning are markedly improved.
Description
Technical field
The present invention relates to the field of autonomous navigation for indoor mobile robots, and in particular to a visual odometry implementation method applied to wheel-driven mobile robots navigating autonomously in indoor environments.
Background technology
In indoor mobile robot research, obtaining the robot's high-precision motion parameters in real time is extremely important, as it underlies navigation, obstacle avoidance, path planning, and similar tasks. Because the wheels may slip when the robot encounters an obstacle during travel, and suffer long-term wear, devices such as photoelectric encoders on speed-measuring motors cannot accurately determine the robot's displacement. GPS positioning offers low resolution and weak indoor signals, and is therefore unsuitable for indoor mobile robots.
Visual odometry determines the robot's position and orientation by collecting and analyzing an associated image sequence; it compensates for the above problems and improves navigation accuracy when the robot moves in any manner over any surface. Traditional visual odometry acquires images with a monocular, binocular, or omnidirectional camera, obtains the three-dimensional information of spatial points through coordinate-system conversion, and derives the robot's motion parameters by extracting and matching features between two frames using the images' RGB information. When recovering the three-dimensional information of spatial points, lens errors and coordinate-system conversion make the computation complicated, inefficient, and imprecise. During feature extraction and matching, only the RGB information of the images is used and the three-dimensional information is discarded, so mismatches frequently occur.
Summary of the invention
The object of this invention is to provide an indoor mobile robot visual odometry implementation method based on an Xtion camera. Using the Xtion camera, the method directly obtains the three-dimensional information and RGB information of spatial points, and performs matching using both the texture features and the three-dimensional features extracted from the spatial points, thereby significantly improving the efficiency and precision of robot positioning.
To achieve the above object, the present invention adopts the following scheme. The method comprises the following steps:
Step S01: acquire information from the field of view in front of the robot through the Xtion camera mounted on the robot, obtaining the RGB information and three-dimensional coordinate information of spatial points;
Step S02: use the SIFT feature matching algorithm combined with the PFH three-dimensional features of spatial points to match the feature points collected by the Xtion between two adjacent images;
Step S03: use RANSAC to reject the wrongly matched points in the matching;
Step S04: obtain the matched points P_pi and P_ci of the two adjacent frames, where subscript p denotes the previous frame image, c denotes the following frame image, and i indexes the matched points; establish the system of equations P_ci = R·P_pi + T, where R is the rotation matrix and T is the translation vector; the motion parameters of the robot can then be solved by the least squares method.
Further, the feature point matching specifically comprises the following steps:
1) Feature point detection: local extrema are detected simultaneously in the two-dimensional image plane and in DoG scale space as feature points. The DoG operator is defined as the difference of Gaussian kernels at two different scales, as shown in formula (1):
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ) (1)
where G(x, y, σ) is a variable-scale Gaussian function,
(x, y) are the spatial coordinates, σ is the scale coordinate, I(x, y) is the original image, and L(x, y, σ) is the scale space.
When detecting scale-space extrema, each pixel must be compared with its 8 neighbors at the same scale and the 9 × 2 pixels at the corresponding positions in the two adjacent scales, 26 pixels in total, to guarantee that local extrema are detected in both scale space and the two-dimensional image space;
2) Feature point description: feature vectors are established for each feature point, comprising the SIFT feature vector and the PFH three-dimensional feature vector;
SIFT feature vector: in the basic scheme each key point is formed from 2 × 2 = 4 seed points, each seed point carrying 8 orientation values; here each key point is described with 4 × 4 = 16 seed points, producing 16 × 8 = 128 values, so a 128-dimensional SIFT feature vector is established for each key point;
PFH three-dimensional feature vector: use a KD-tree to find the K points nearest to each key point, and obtain their normal vectors n in the spatial coordinate system. For the key point and its K nearest points, take all pairwise combinations P_i, P_j; by comparing <n_i, P_j - P_i> and <n_j, P_i - P_j>, set the source point P_s and the target point P_t, where n is the normal vector: if the former is larger, P_s = P_i and P_t = P_j; otherwise P_s = P_j and P_t = P_i;
Based on the above points, establish a local coordinate system;
In this coordinate system, compute three parameters (α, φ, θ);
Each parameter's range is divided into 5 sub-ranges, building a histogram with 5³ = 125 bins; for every point pair, the three parameters (α, φ, θ) increment the corresponding histogram bin by one; after accumulating over all point pairs, the histogram is normalized, yielding a 125-dimensional PFH three-dimensional feature vector;
3) Feature matching to obtain candidate match points: after the SIFT feature vectors and PFH three-dimensional feature vectors of the two frames have been generated, the Euclidean distance between key-point feature vectors is adopted as the similarity measure for key points in the two frames. For each key point, take the two key points in the other frame with the smallest Euclidean distances; if the nearest distance divided by the second-nearest distance is less than a given ratio threshold, the pair of points is accepted as a match;
4) Mismatch elimination: the RANSAC algorithm is adopted to remove outliers. RANSAC repeatedly draws random samples of the data and estimates the parameters: an initial parameter model is fitted first, then all data are classified according to the estimated parameters; data within the specified error range are called inliers, the rest outliers. After repeated iterations, the optimal model parameters are obtained.
Further, the method by which the least squares method solves the motion parameters of the robot is as follows:
Find the least-squares solution that minimizes ||P_ci - (R·P_pi + T)||², where P_pi and P_ci are the matched points of the previous frame and the following frame of two adjacent images in the sequence, and i indexes the corresponding matches. Form the model's cross-covariance matrix Σ_cp, then perform a singular value decomposition Σ_cp = U·D·V, where U and V are unitary matrices and D = diag(d_i), with d_1 ≥ d_2 ≥ … ≥ d_m ≥ 0 the singular values and d_i the non-zero singular values; denote by S the diagonal sign-correction matrix. Finally the rotation matrix is solved as R = U·S·V, and substituting R back into the original equation yields the translation vector T.
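The SVD-based least-squares solution described above can be sketched in a few lines of NumPy. This is an illustrative implementation of the standard Arun/Kabsch procedure for rigid alignment, not code taken from the patent; the function name `fit_rigid` and the sign-correction matrix `D` are my own choices:

```python
import numpy as np

def fit_rigid(P_prev, P_cur):
    """Least-squares R, T such that P_cur ≈ R @ P_prev + T,
    via SVD of the cross-covariance matrix of the centered point sets."""
    cp, cc = P_prev.mean(axis=0), P_cur.mean(axis=0)
    H = (P_prev - cp).T @ (P_cur - cc)        # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    # sign correction guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = cc - R @ cp                           # substitute R back for T
    return R, T
```

Given exact correspondences, the recovered R and T match the generating transform to machine precision; with noisy matches the result is the least-squares optimum.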
Compared with the prior art, the beneficial effects of the invention are as follows:
1. The Xtion camera directly acquires the RGB information and three-dimensional information of spatial points, avoiding the repeated coordinate-system transformations and large amount of computation that traditional visual odometry needs to solve the three-dimensional coordinates of spatial points, thereby significantly improving computing speed and precision.
2. The texture features and three-dimensional features of spatial points are fully exploited to match the sequence images, remedying the defect that traditional visual odometry relies only on the RGB information of spatial points and making the matching results more accurate.
Brief description of the drawings
Fig. 1 is overview flow chart of the present invention.
Detailed description of the embodiments
The present invention is described further below in conjunction with the drawings and an embodiment.
As shown in Fig. 1, the method of this embodiment first acquires information from the field of view in front of the robot through the Xtion camera mounted on the robot, obtaining the RGB information and three-dimensional coordinate information of spatial points. Then, based on the SIFT feature matching algorithm combined with the PFH (Point Feature Histogram) three-dimensional features of the spatial points, coarse feature point matching is performed on the acquired sequence images, and RANSAC (Random Sample Consensus) rejects the wrongly matched points of the coarse matching to obtain accurately matched points. Finally, a system of equations is established and the motion parameters of the robot are solved by the least squares method.
Specifically, the Xtion camera is placed at the front of the robot. As the robot moves, the camera records the information of 640 × 480 points in front of the robot; the resulting two-dimensional image is shown in Fig. 2 and the three-dimensional point cloud in Fig. 3. The information obtained comprises the three-dimensional coordinates and the RGB color value of each spatial point.
Using the SIFT feature matching algorithm combined with the three-dimensional features of the spatial points, coarse feature point matches between the two images are obtained, and RANSAC (Random Sample Consensus) rejects the wrongly matched points of the coarse matching to obtain accurately matched points. The concrete steps are as follows:
1) Feature point detection: local extrema are detected simultaneously in the two-dimensional image plane and in the DoG (Difference-of-Gaussian) scale space as feature points. The DoG operator is defined as the difference of Gaussian kernels at two different scales, as shown in formula (1):
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ) (1)
where G(x, y, σ) is a variable-scale Gaussian function,
(x, y) are the spatial coordinates, σ is the scale coordinate, I(x, y) is the original image, and L(x, y, σ) is the scale space.
When detecting scale-space extrema, each pixel must be compared with its 8 neighbors at the same scale and the 9 × 2 pixels at the corresponding positions in the two adjacent scales, 26 pixels in total, to guarantee that local extrema are detected in both scale space and the two-dimensional image space.
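The two parts of this step, building the DoG stack of formula (1) and the 26-neighbor extremum test, can be sketched as follows. This is a minimal illustration, not the patent's code; the function names `dog_pyramid` and `is_local_extremum` and the scale parameters (σ = 1.6, k = √2) are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, sigma=1.6, k=2 ** 0.5, levels=4):
    """Stack of DoG images D(x, y, σ) = L(x, y, kσ) - L(x, y, σ)
    for successive scales σ, kσ, k²σ, ..."""
    blurred = [gaussian_filter(image.astype(float), sigma * k ** i)
               for i in range(levels + 1)]
    return np.stack([blurred[i + 1] - blurred[i] for i in range(levels)])

def is_local_extremum(dog, s, y, x):
    """Compare a pixel with its 26 neighbors: 8 in the same DoG level
    and 9 each in the levels above and below."""
    patch = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    centre = dog[s, y, x]
    return centre == patch.max() or centre == patch.min()
```

A constant image yields an all-zero DoG stack, since blurring leaves it unchanged at every scale.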
2) Feature point description: feature vectors are established for each feature point, comprising the SIFT feature vector and the PFH three-dimensional feature vector.
SIFT feature vector: in the basic scheme each key point is formed from 2 × 2 = 4 seed points, each seed point carrying 8 orientation values. To enhance matching stability, each key point is instead described with 4 × 4 = 16 seed points, producing 16 × 8 = 128 values, so a 128-dimensional SIFT feature vector is finally established for each key point.
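The 4 × 4 seed points × 8 orientations = 128-value layout can be illustrated with a toy sketch. Real SIFT adds Gaussian weighting, trilinear interpolation, and contrast clipping, all omitted here; `sift_descriptor` is a name I chose for illustration:

```python
import numpy as np

def sift_descriptor(grad_mag, grad_ori):
    """Toy 128-D descriptor: a 16x16 patch of gradient magnitudes and
    orientations (radians) is split into 4x4 = 16 seed cells; each cell
    accumulates an 8-bin orientation histogram weighted by magnitude,
    giving 16 * 8 = 128 values, L2-normalized at the end."""
    desc = np.zeros((4, 4, 8))
    for cy in range(4):
        for cx in range(4):
            cell_m = grad_mag[cy*4:(cy+1)*4, cx*4:(cx+1)*4]
            cell_o = grad_ori[cy*4:(cy+1)*4, cx*4:(cx+1)*4]
            bins = ((cell_o % (2*np.pi)) / (2*np.pi) * 8).astype(int) % 8
            for b, m in zip(bins.ravel(), cell_m.ravel()):
                desc[cy, cx, b] += m
    v = desc.ravel()
    n = np.linalg.norm(v)
    return v / n if n else v
```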
PFH three-dimensional feature vector: use a KD-tree to find the K points nearest to each key point, and obtain their normal vectors n in the spatial coordinate system. For the key point and its K nearest points, take all pairwise combinations P_i, P_j; by comparing <n_i, P_j - P_i> and <n_j, P_i - P_j>, set the source point P_s and the target point P_t, where n is the normal vector: if the former is larger, P_s = P_i and P_t = P_j; otherwise P_s = P_j and P_t = P_i.
Based on the above points, establish a local coordinate system;
In this coordinate system, compute three parameters (α, φ, θ);
Each parameter's range is divided into 5 sub-ranges, building a histogram with 5³ = 125 bins; for every point pair, the three parameters (α, φ, θ) increment the corresponding histogram bin by one; after accumulating over all point pairs, the histogram is normalized, yielding a 125-dimensional PFH three-dimensional feature vector.
3) Feature matching to obtain candidate match points: after the SIFT feature vectors and PFH three-dimensional feature vectors of the two frames have been generated, the Euclidean distance between key-point feature vectors is adopted as the similarity measure for key points in the two frames. For each key point, take the two key points in the other frame with the smallest Euclidean distances; if the nearest distance divided by the second-nearest distance is less than a given ratio threshold, the pair of points is accepted as a match.
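The nearest/second-nearest ratio test can be sketched as follows (illustrative only; `ratio=0.8` is an assumed threshold, not one specified by the patent):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """For each descriptor in frame A, find its two nearest neighbours in
    frame B by Euclidean distance; accept the match only when the nearest
    distance is below `ratio` times the second-nearest distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```

An ambiguous key point, whose two closest candidates are nearly equidistant, is rejected rather than matched.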
4) Mismatch elimination: the RANSAC algorithm is adopted here to remove outliers. RANSAC repeatedly draws random samples of the data and estimates the parameters: an initial parameter model is fitted first, then all data are classified according to the estimated parameters; data within the specified error range are called inliers, the rest outliers. After repeated iterations, the optimal model parameters are obtained.
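A minimal sketch of RANSAC over 3-D point correspondences as described above: sample a minimal set (3 correspondences suffice for a rigid transform), fit a model, classify the remaining points as inliers or outliers by the error tolerance, keep the largest consensus set, and refit on its inliers. The iteration count, tolerance, and function names are my assumptions:

```python
import numpy as np

def fit_rigid(P, Q):
    # least-squares R, T with Q ≈ R P + T (SVD-based, sign-corrected)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, tol=0.05, seed=0):
    """Repeatedly fit a rigid model to 3 random correspondences, classify
    the rest by reprojection error, keep the model with the largest
    consensus set, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, T = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + T), axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    R, T = fit_rigid(P[best], Q[best])
    return R, T, best
```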
Obtain the matched points P_pi and P_ci of the two adjacent frames, where i indexes the matched points. Establish the system of equations P_ci = R·P_pi + T, where R is the rotation matrix and T is the translation vector; the motion parameters of the robot are then solved by the least squares method. Fig. 4 and Fig. 5 show the rotation-translation transformation between the two adjacent frames; the solved transformation results are (unit: m):
In this example, according to the odometer record, the camera actually rotated 5° and moved x = 0.03 m, y = 0.04 m, z = 0.30 m. According to the measurement results, the maximum absolute error is 0.016 m and the relative error is 5.7%.
Claims (3)
1. An indoor robot visual odometry implementation method based on an Xtion camera, characterized by comprising the following steps:
Step S01: acquire information from the field of view in front of the robot through the Xtion camera mounted on the robot, obtaining the RGB information and three-dimensional coordinate information of spatial points;
Step S02: use the SIFT feature matching algorithm combined with the PFH three-dimensional features of spatial points to match the feature points collected by the Xtion between two adjacent images;
Step S03: use RANSAC to reject the wrongly matched points in the matching;
Step S04: obtain the matched points P_pi and P_ci of the two adjacent frames, where superscript p denotes the previous frame image, c denotes the following frame image, and i indexes the matched points; establish the system of equations P_ci = R·P_pi + T, where R is the rotation matrix and T is the translation vector; the motion parameters of the robot can then be solved by the least squares method.
2. The indoor robot visual odometry implementation method based on an Xtion camera according to claim 1, characterized in that the feature point matching specifically comprises the following steps:
1) Feature point detection: local extrema are detected simultaneously in the two-dimensional image plane and in DoG scale space as feature points. The DoG operator is defined as the difference of Gaussian kernels at two different scales, as shown in formula (1):
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ) (1)
where G(x, y, σ) is a variable-scale Gaussian function,
(x, y) are the spatial coordinates, σ is the scale coordinate, I(x, y) is the original image, and L(x, y, σ) is the scale space.
When detecting scale-space extrema, each pixel must be compared with its 8 neighbors at the same scale and the 9 × 2 pixels at the corresponding positions in the two adjacent scales, 26 pixels in total, to guarantee that local extrema are detected in both scale space and the two-dimensional image space;
2) Feature point description: feature vectors are established for each feature point, comprising the SIFT feature vector and the PFH three-dimensional feature vector;
SIFT feature vector: in the basic scheme each key point is formed from 2 × 2 = 4 seed points, each seed point carrying 8 orientation values; here each key point is described with 4 × 4 = 16 seed points, producing 16 × 8 = 128 values, so a 128-dimensional SIFT feature vector is established for each key point;
PFH three-dimensional feature vector: use a KD-tree to find the K points nearest to each key point, and obtain their normal vectors n in the spatial coordinate system. For the key point and its K nearest points, take all pairwise combinations P_i, P_j; by comparing <n_i, P_j - P_i> and <n_j, P_i - P_j>, set the source point P_s and the target point P_t, where n_i and n_j are the normal vectors of P_i and P_j respectively: if the former is larger, denote P_s = P_i and P_t = P_j; otherwise denote P_s = P_j and P_t = P_i;
Based on the above points, establish a local coordinate system;
In this coordinate system, compute three parameters (α, φ, θ);
Each parameter's range is divided into 5 sub-ranges, building a histogram with 5³ = 125 bins; for every point pair, the three parameters (α, φ, θ) increment the corresponding histogram bin by one; after accumulating over all point pairs, the histogram is normalized, yielding a 125-dimensional PFH three-dimensional feature vector;
3) Feature matching to obtain candidate match points: after the SIFT feature vectors and PFH three-dimensional feature vectors of the two frames have been generated, the Euclidean distance between key-point feature vectors is adopted as the similarity measure for key points in the two frames. For each key point, take the two key points in the other frame with the smallest Euclidean distances; if the nearest distance divided by the second-nearest distance is less than a given ratio threshold, the pair of points is accepted as a match;
4) Mismatch elimination: the RANSAC algorithm is adopted to remove outliers. RANSAC repeatedly draws random samples of the data and estimates the parameters: an initial parameter model is fitted first, then all data are classified according to the estimated parameters; data within the specified error range are called inliers, the rest outliers. After repeated iterations, the optimal model parameters are obtained.
3. The indoor robot visual odometry implementation method based on an Xtion camera according to claim 1, characterized in that the method by which the least squares method solves the motion parameters of the robot is as follows:
Find the least-squares solution that minimizes ||P_ci - (R·P_pi + T)||², where P_pi and P_ci are the matched points of the previous frame and the following frame of two adjacent images in the sequence, and i indexes the corresponding matches. Form the model's cross-covariance matrix Σ_cp, then perform a singular value decomposition Σ_cp = U·D·V, where U and V are unitary matrices and D = diag(d_i), with d_1 ≥ d_2 ≥ … ≥ d_m ≥ 0 the singular values and d_i the non-zero singular values; denote by S the diagonal sign-correction matrix. Finally the rotation matrix is solved as R = U·S·V, and substituting R back into the original equation yields the translation vector T.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410301943.8A CN104121902B (en) | 2014-06-28 | 2014-06-28 | Implementation method of indoor robot visual odometer based on Xtion camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104121902A true CN104121902A (en) | 2014-10-29 |
CN104121902B CN104121902B (en) | 2017-01-25 |
Family
ID=51767415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410301943.8A Expired - Fee Related CN104121902B (en) | 2014-06-28 | 2014-06-28 | Implementation method of indoor robot visual odometer based on Xtion camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104121902B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105719272A (en) * | 2014-12-05 | 2016-06-29 | 航天信息股份有限公司 | Image characteristic point matching method for maintaining space structure |
CN105938619A (en) * | 2016-04-11 | 2016-09-14 | 中国矿业大学 | Visual odometer realization method based on fusion of RGB and depth information |
CN105955260A (en) * | 2016-05-03 | 2016-09-21 | 大族激光科技产业集团股份有限公司 | Mobile robot position perception method and device |
CN106052674A (en) * | 2016-05-20 | 2016-10-26 | 青岛克路德机器人有限公司 | Indoor robot SLAM method and system |
CN106813672A (en) * | 2017-01-22 | 2017-06-09 | 深圳悉罗机器人有限公司 | The air navigation aid and mobile robot of mobile robot |
CN107025668A (en) * | 2017-03-30 | 2017-08-08 | 华南理工大学 | A kind of design method of the visual odometry based on depth camera |
CN107633543A (en) * | 2017-08-21 | 2018-01-26 | 浙江工商大学 | Consider the stripe shape corresponding method of local topology |
CN107993287A (en) * | 2017-12-01 | 2018-05-04 | 大唐国信滨海海上风力发电有限公司 | A kind of auto-initiation method of target following |
CN108122412A (en) * | 2016-11-26 | 2018-06-05 | 沈阳新松机器人自动化股份有限公司 | The method disorderly stopped for supervisory-controlled robot detection vehicle |
CN108426566A (en) * | 2018-02-28 | 2018-08-21 | 中国计量大学 | A kind of method for positioning mobile robot based on multiple-camera |
CN109407665A (en) * | 2018-09-28 | 2019-03-01 | 浙江大学 | A kind of unmanned dispensing vehicle of small semiautomatic and Distribution path planing method |
CN109871024A (en) * | 2019-01-04 | 2019-06-11 | 中国计量大学 | A kind of UAV position and orientation estimation method based on lightweight visual odometry |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11899469B2 (en) | 2021-08-24 | 2024-02-13 | Honeywell International Inc. | Method and system of integrity monitoring for visual odometry |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101141633A (en) * | 2007-08-28 | 2008-03-12 | 湖南大学 | Moving object detecting and tracing method in complex scene |
CN101359366A (en) * | 2008-07-28 | 2009-02-04 | 同济大学 | Pattern matching recognition system and implementing method thereof |
CN102692236A (en) * | 2012-05-16 | 2012-09-26 | 浙江大学 | Visual milemeter method based on RGB-D camera |
- 2014-06-28: CN application CN201410301943.8A filed; granted as CN104121902B (status: not active, Expired - Fee Related)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101141633A (en) * | 2007-08-28 | 2008-03-12 | 湖南大学 | Moving object detecting and tracing method in complex scene |
CN101359366A (en) * | 2008-07-28 | 2009-02-04 | 同济大学 | Pattern matching recognition system and implementing method thereof |
CN102692236A (en) * | 2012-05-16 | 2012-09-26 | 浙江大学 | Visual milemeter method based on RGB-D camera |
Non-Patent Citations (2)
Title |
---|
Lü Qiang et al., "Implementation of a monocular visual odometer based on SIFT feature extraction in navigation ***", 《传感技术学报》 (Chinese Journal of Sensors and Actuators) * |
Yang Hong et al., "Three-dimensional indoor map building for a mobile robot based on the Kinect sensor", 《东南大学学报(自然科学版)》 (Journal of Southeast University, Natural Science Edition) * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105719272A (en) * | 2014-12-05 | 2016-06-29 | 航天信息股份有限公司 | Image characteristic point matching method for maintaining space structure |
CN105719272B (en) * | 2014-12-05 | 2020-07-10 | 航天信息股份有限公司 | Image feature point matching method for keeping space structure |
CN105938619A (en) * | 2016-04-11 | 2016-09-14 | 中国矿业大学 | Visual odometer realization method based on fusion of RGB and depth information |
CN105955260A (en) * | 2016-05-03 | 2016-09-21 | 大族激光科技产业集团股份有限公司 | Mobile robot position perception method and device |
CN106052674A (en) * | 2016-05-20 | 2016-10-26 | 青岛克路德机器人有限公司 | Indoor robot SLAM method and system |
CN106052674B (en) * | 2016-05-20 | 2019-07-26 | 青岛克路德机器人有限公司 | A kind of SLAM method and system of Indoor Robot |
CN108122412A (en) * | 2016-11-26 | 2018-06-05 | 沈阳新松机器人自动化股份有限公司 | The method disorderly stopped for supervisory-controlled robot detection vehicle |
CN108122412B (en) * | 2016-11-26 | 2021-03-16 | 沈阳新松机器人自动化股份有限公司 | Method for monitoring robot to detect vehicle disorderly stop |
CN106813672A (en) * | 2017-01-22 | 2017-06-09 | 深圳悉罗机器人有限公司 | The air navigation aid and mobile robot of mobile robot |
CN107025668A (en) * | 2017-03-30 | 2017-08-08 | 华南理工大学 | A kind of design method of the visual odometry based on depth camera |
CN107025668B (en) * | 2017-03-30 | 2020-08-18 | 华南理工大学 | Design method of visual odometer based on depth camera |
CN107633543A (en) * | 2017-08-21 | 2018-01-26 | 浙江工商大学 | Consider the stripe shape corresponding method of local topology |
CN107633543B (en) * | 2017-08-21 | 2020-12-08 | 浙江工商大学 | Line shape corresponding method considering local topological structure |
CN107993287A (en) * | 2017-12-01 | 2018-05-04 | 大唐国信滨海海上风力发电有限公司 | A kind of auto-initiation method of target following |
CN108426566A (en) * | 2018-02-28 | 2018-08-21 | 中国计量大学 | A kind of method for positioning mobile robot based on multiple-camera |
CN108426566B (en) * | 2018-02-28 | 2020-09-01 | 中国计量大学 | Mobile robot positioning method based on multiple cameras |
CN109407665A (en) * | 2018-09-28 | 2019-03-01 | 浙江大学 | A kind of unmanned dispensing vehicle of small semiautomatic and Distribution path planing method |
CN109871024A (en) * | 2019-01-04 | 2019-06-11 | 中国计量大学 | A kind of UAV position and orientation estimation method based on lightweight visual odometry |
Also Published As
Publication number | Publication date |
---|---|
CN104121902B (en) | 2017-01-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170125 |