CN109974743A - RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization - Google Patents

RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization Download PDF

Info

Publication number
CN109974743A
CN109974743A (application CN201910195323.3A)
Authority
CN
China
Prior art keywords
frame
pose
matching
point
gms
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910195323.3A
Other languages
Chinese (zh)
Other versions
CN109974743B (en)
Inventor
陈佩
谢晓明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201910195323.3A priority Critical patent/CN109974743B/en
Publication of CN109974743A publication Critical patent/CN109974743A/en
Application granted granted Critical
Publication of CN109974743B publication Critical patent/CN109974743B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of computer vision and relates to an RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization. Instead of the distance-threshold + RANSAC (random sample consensus) scheme generally used in the prior art, the invention rejects mismatches with the GMS (grid-based motion statistics) algorithm, which still filters out a sufficient number of correct matching point pairs when the relative motion between images or the brightness change is large, improving the robustness of the system. The invention further reduces the accumulated error of pose estimation with sliding-window pose-graph optimization; compared with prior-art schemes that maintain a local map or design more complex objective functions, it achieves higher real-time performance while still guaranteeing the accuracy of the visual odometry.

Description

RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization
Technical field
The invention belongs to the field of computer vision and relates in particular to an RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization.
Background technique
Visual odometry estimates a robot's pose in real time by analyzing sequences of associated images with machine-vision techniques. It overcomes shortcomings of the traditional wheel odometer, provides more accurate positioning, and can operate in environments that GPS (the Global Positioning System) cannot cover, such as indoor environments and space exploration. Visual odometry has therefore attracted wide attention and application in mobile-robot localization and navigation.
At present, the two mainstream approaches to visual odometry are the feature-point method and the direct method. The feature-point method estimates the relative pose between image frames mainly through three steps: feature extraction, feature matching, and minimization of the reprojection error. As the earliest visual-odometry solution, it has long been regarded as the mainstream approach; it is stable, insensitive to dynamic objects, and comparatively mature, but it still has problems. Its feature extraction and matching steps are relatively time-consuming and produce mismatches, and when images suffer from motion blur, poor illumination, or heavily repeated or missing texture, the accuracy of the feature-point method is strongly affected.
The direct method rests on the brightness-constancy assumption: corresponding pixels in two frames should have the same brightness value. Under this assumption, a photometric error is built directly from pixel brightness through the camera model, and the inter-frame pose is estimated by minimizing this photometric error. Depending on how many pixels are used, the direct method divides into dense and semi-dense variants. The dense direct method computes the photometric error over all pixels of the image and is therefore computationally expensive; the semi-dense variant uses only pixels carrying sufficient gradient information, which preserves the accuracy of relative pose estimation while keeping a degree of real-time performance. The direct method obtains robust and accurate pose estimates when the relative camera motion is small, and because it makes full use of the image information it maintains good accuracy under motion blur and repeated or missing texture. Its main weakness is that brightness constancy is a rather strong assumption: it holds when the brightness difference between frames is small, but it is likely violated when the difference is large, in which case the accuracy of a direct-method visual odometry drops substantially.
When a visual odometry is implemented with either the feature-point method or the direct method, one normally does not estimate relative poses only between consecutive pairs of frames in the image sequence; some technical means is usually employed to reduce the accumulated error. These means mainly include maintaining a local map and designing more complex photometric error terms. Maintaining a local map requires inserting new map points into the map and deleting old ones, and both operations increase the computational load and lower the real-time performance of the visual odometry. A more complex photometric error term can effectively reduce the accumulated error, but it also demands more computation when the error is minimized, again reducing the real-time performance of the system.
Summary of the invention
To overcome at least one of the above drawbacks of the prior art, the present invention provides an RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization. It rejects mismatches with the GMS (grid-based motion statistics) algorithm, which still filters out a sufficient number of correct matching point pairs when the inter-frame motion or brightness change is large, improving the robustness of the system; and it reduces the accumulated error of pose estimation with sliding-window pose-graph optimization, achieving higher real-time performance.
To solve the above technical problems, the invention adopts the following technical solution: an RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization, comprising the following steps:
Step 1. Read the first RGB frame from the RGB-D camera as the reference frame and the first depth image as the depth information of the reference frame; extract feature points from the reference frame and compute ORB feature descriptors. The extracted feature points are the pixel positions of FAST (Features from Accelerated Segment Test) corners in the image. The ORB descriptor compares the brightness of the corner with the brightness of 128 surrounding pixels, recording 1 where the pixel is brighter than the keypoint and 0 otherwise, finally producing a 128-dimensional binary vector as the descriptor of that keypoint.
Step 2. Read the next RGB frame as the current frame and the next depth image as the depth information of the current frame; extract feature points from the current frame and compute ORB feature descriptors.
Step 3. Perform preliminary feature matching between the points extracted from the reference frame and the current frame. Using the Hamming distance as the similarity measure between two feature points, compute the Hamming distance between each feature point of the reference frame and every feature point of the current frame one by one, and choose the feature point with the smallest Hamming distance as the match, producing one matching point pair.
Step 4. Reject mismatches among the feature matches obtained in the previous step with the GMS (grid-based motion statistics) algorithm. GMS builds on motion smoothness and makes the following assumption: if a feature point p1 on the first image matches point p2 on the second image and the match is correct, then the matches of the feature points inside the 3×3 grid cells centered on p1 will, with high probability, fall inside the 3×3 grid cells centered on p2 in the second image. Based on this assumption, both images are divided into grids and the number of matches in corresponding grid regions is counted; a match is accepted as correct if this count exceeds a threshold T, otherwise it is rejected as a mismatch, where T is computed as follows:

T = α·√n
where n is the mean number of feature points per grid cell; α is set to 6 in this scheme. Feature matching based on the GMS (grid-based motion statistics) algorithm is one of the key techniques of the invention, with the following effects:
1. Compared with the commonly used RANSAC (random sample consensus) algorithm, GMS (grid-based motion statistics) still filters out a sufficient number of correct matches when the inter-frame motion or the inter-frame brightness change is relatively large, which to a certain extent guarantees the accuracy of the subsequent pose computation;
2. The algorithm rejects mismatches on a statistical basis and therefore has high real-time performance.
Step 5. Step S3 yields 2D-2D matching point pairs between the reference frame and the current frame. In this step, the feature points of the reference frame that survived the GMS screening are projected into 3D space using the camera projection model and the depth information of the reference frame, giving the 3D spatial coordinates of the feature points and thereby converting the 2D-2D matching pairs into 3D-2D matching pairs. The camera projection model is computed as:
P = d·K⁻¹·p
where p is the pixel coordinate of the feature point, K the camera intrinsic matrix, d the depth of the feature point, and P its 3D spatial coordinate.
Step 6. Minimize the reprojection error. The objective function of the reprojection-error minimization is:
ε* = argmin_ε ‖π(T(P; ε)) − p‖²
where ε is the relative pose between the reference frame and the current frame to be estimated, T denotes the pose transformation from the reference frame to the current frame, and π denotes the projection model of the camera, i.e., the transformation projecting 3D space onto the image.
Step 7. Minimize the photometric error. The objective function of the photometric-error minimization is:
ε* = argmin_ε ‖I₂(π(T(P; ε))) − I₁(p)‖²

where I₁(·) and I₂(·) denote the pixel brightness of the reference frame and the current frame, respectively.
Step 8. Sliding-window pose-graph optimization. Choose the poses of the current frame and the frames immediately preceding it as the poses to be iteratively optimized inside the window. The invention represents this pose-optimization problem as a graph in which every vertex is the pose of a frame and every edge is the relative pose between two frames; the error of an edge is computed as follows:

e_ij = ln(Tij⁻¹ · Ti⁻¹ · Tj)∨

where Tij denotes the relative motion from frame j to frame i, Ti and Tj denote the poses of frames i and j, and ∨ maps the se(3) matrix logarithm to its 6-vector. The poses of the image frames outside the window are still kept in the graph, but during iteration they are marginalized and not updated.
Sliding-window optimization is another key technique of this scheme, with the following effects:
1. It provides global information for the pose computation of the current frame, effectively reducing the accumulated error and improving the accuracy of the visual odometry;
2. For the poses of the image frames outside the window, this scheme keeps them in the graph but marginalizes them and no longer updates them in the iterations. This keeps the pose graph at a fixed scale, reduces the number of iterations, and improves the real-time performance of the visual odometry.
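The fixed-size window maintenance described above can be sketched with a bounded deque. This is an illustrative sketch, not the patented implementation: the 10-frame size comes from the embodiment, and poses that fall out of the deque stay in the trajectory but are treated as fixed (marginalized) by the optimizer.

```python
from collections import deque

WINDOW_SIZE = 10                     # current frame + 9 predecessors (embodiment)

window = deque(maxlen=WINDOW_SIZE)   # poses re-optimized at every iteration
trajectory = []                      # all poses; those outside the window stay fixed

def push_pose(pose):
    trajectory.append(pose)          # kept in the graph permanently
    window.append(pose)              # oldest pose drops out (is marginalized)

for frame_id in range(15):
    push_pose(frame_id)              # stand-in for a 4x4 pose matrix
```

After 15 frames, only the last 10 poses remain in the optimization window while all 15 stay in the trajectory, which is exactly the bounded-scale behavior the scheme relies on for real-time performance.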
S9. Take the current frame as the new reference frame and its depth information as the depth information of the reference frame, then return to step S2.
Compared with the prior art, the beneficial effects are:
1. Pairwise, structure-less relative pose estimation between frames removes the need to build and maintain a local map, improving the real-time performance of the visual odometry;
2. Feature matching based on the GMS (grid-based motion statistics) algorithm guarantees that a sufficient number of matching point pairs can still be filtered out for the subsequent pose computation when the inter-frame motion or brightness change is large, improving the robustness of the visual odometry;
3. Sliding-window pose-graph optimization applies nonlinear optimization to the estimated poses, improving the accuracy of the visual odometry; at the same time, constraining the scale of the pose graph through the window size reduces the number of iterations of the nonlinear optimization and guarantees the real-time performance of the visual odometry.
Detailed description of the invention
Fig. 1 is a flow chart of the method of the invention.
Fig. 2 is a schematic diagram of the sliding-window pose-graph optimization in an embodiment of the invention.
Fig. 3 is a schematic diagram of the structure of the graph used by the invention.
Specific embodiment
The accompanying drawings are for illustration only and shall not be construed as limiting the invention. To better illustrate the embodiment, some components in the drawings are omitted, enlarged, or reduced and do not represent the size of the actual product; for those skilled in the art, the omission of some known structures and their descriptions in the drawings is understandable. The positional relationships depicted in the drawings are descriptive only and shall not be construed as limiting the invention.
Embodiment 1:
As shown in Fig. 1, an RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization comprises the following steps:
Step 1. Read the first RGB frame from the RGB-D camera as the reference frame and the first depth image as the depth information of the reference frame; extract feature points from the reference frame and compute ORB feature descriptors. The extracted feature points are the pixel positions of FAST (Features from Accelerated Segment Test) corners in the image. The ORB descriptor compares the brightness of the corner with the brightness of 128 surrounding pixels, recording 1 where the pixel is brighter than the keypoint and 0 otherwise, finally producing a 128-dimensional binary vector as the descriptor of that keypoint.
Step 2. Read the next RGB frame as the current frame and the next depth image as the depth information of the current frame; extract feature points from the current frame and compute ORB feature descriptors.
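The 128-bit descriptor of steps 1-2 can be sketched as a center-comparison binary test. This is a simplified, BRIEF-style sketch with illustrative random offsets, not the exact ORB sampling pattern (standard ORB uses learned point pairs and rotation compensation):

```python
import numpy as np

def binary_descriptor(image, x, y, offsets):
    """Patent-style descriptor: compare the keypoint's brightness with
    128 surrounding pixels, emitting 1 where the neighbour is brighter."""
    center = int(image[y, x])
    bits = [1 if int(image[y + dy, x + dx]) > center else 0
            for dx, dy in offsets]
    return np.array(bits, dtype=np.uint8)

# 128 offsets inside a window around the corner (illustrative layout)
rng = np.random.default_rng(0)
offsets = [tuple(o) for o in rng.integers(-8, 9, size=(128, 2))]

img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
desc = binary_descriptor(img, 32, 32, offsets)   # 128-dimensional binary vector
```

In practice one would use an off-the-shelf detector/descriptor (e.g. OpenCV's ORB) rather than hand-rolled sampling; the sketch only shows the bit-test idea the patent describes.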
Step 3. Perform preliminary feature matching between the points extracted from the reference frame and the current frame. Using the Hamming distance as the similarity measure between two feature points, compute the Hamming distance between each feature point of the reference frame and every feature point of the current frame one by one, and choose the feature point with the smallest Hamming distance as the match, producing one matching point pair.
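The nearest-neighbour search of step 3 can be sketched as a brute-force Hamming matcher; this is an illustrative sketch (a real system would typically use a library matcher such as OpenCV's brute-force Hamming matcher):

```python
import numpy as np

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

def match_nearest(desc_ref, desc_cur):
    """For each reference descriptor, pick the current-frame descriptor
    with the smallest Hamming distance (one preliminary match per point)."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = [hamming(d, c) for c in desc_cur]
        j = int(np.argmin(dists))
        matches.append((i, j, dists[j]))
    return matches

# toy 4-bit descriptors, just to exercise the matcher
desc_ref = np.array([[0, 0, 1, 1], [1, 1, 0, 0]], dtype=np.uint8)
desc_cur = np.array([[0, 0, 1, 0], [1, 1, 1, 0]], dtype=np.uint8)
matches = match_nearest(desc_ref, desc_cur)   # [(0, 0, 1), (1, 1, 1)]
```

These preliminary matches still contain mismatches, which is exactly what the GMS screening of step 4 removes.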
Step 4. Reject mismatches among the feature matches obtained in the previous step with the GMS (grid-based motion statistics) algorithm. GMS builds on motion smoothness and makes the following assumption: if a feature point P1 on the first image matches point P2 on the second image and the match is correct, then the matches of the feature points inside the 3×3 grid cells centered on P1 will, with high probability, fall inside the 3×3 grid cells centered on P2 in the second image. Based on this assumption, both images are divided into grids and the number of matches in corresponding grid regions is counted; a match is accepted as correct if this count exceeds a threshold T, otherwise it is rejected as a mismatch, where T is computed as follows:

T = α·√n
where n is the mean number of feature points per grid cell; α is set to 6 in this scheme.
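Step 4's acceptance rule reduces to a simple statistical test using the GMS threshold T = α·√n (α = 6 in this scheme), as in Bian et al.'s GMS formulation:

```python
import math

ALPHA = 6  # value used in this scheme

def gms_threshold(n, alpha=ALPHA):
    """T = alpha * sqrt(n), n = mean number of features per grid cell."""
    return alpha * math.sqrt(n)

def accept_match(region_votes, n, alpha=ALPHA):
    """Keep a match as correct when the matches supporting its 3x3 grid
    neighbourhood outnumber the threshold T, else reject as mismatch."""
    return region_votes > gms_threshold(n, alpha)

print(gms_threshold(4))       # 12.0
print(accept_match(20, 4))    # True:  20 > 12
print(accept_match(10, 4))    # False: 10 < 12
```

Recent OpenCV-contrib builds ship a complete GMS matcher (`cv2.xfeatures2d.matchGMS`), which handles the grid division and vote counting internally and may be preferable in practice.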
Step 5. Step S3 yields 2D-2D matching point pairs between the reference frame and the current frame. In this step, the feature points of the reference frame that survived the GMS screening are projected into 3D space using the camera projection model and the depth information of the reference frame, giving the 3D spatial coordinates of the feature points and thereby converting the 2D-2D matching pairs into 3D-2D matching pairs. The camera projection model is computed as:
P = d·K⁻¹·p
where p is the pixel coordinate of the feature point, K the camera intrinsic matrix, d the depth of the feature point, and P its 3D spatial coordinate.
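The back-projection P = d·K⁻¹·p of step 5 is a one-liner in NumPy; the intrinsic matrix K below uses illustrative values in the range typical of RGB-D sensors, not calibration data from the patent:

```python
import numpy as np

def back_project(p_pixel, depth, K):
    """Lift a 2-D feature to 3-D: P = d * K^{-1} * p (p homogeneous)."""
    u, v = p_pixel
    p_h = np.array([u, v, 1.0])              # homogeneous pixel coordinate
    return depth * (np.linalg.inv(K) @ p_h)

K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])        # illustrative pinhole intrinsics

# a feature at the principal point with depth 2 m lies on the optical axis
P = back_project((319.5, 239.5), 2.0, K)     # -> [0., 0., 2.]
```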
Step 6. Minimize the reprojection error. The objective function of the reprojection-error minimization is:
ε* = argmin_ε ‖π(T(P; ε)) − p‖²
where ε is the relative pose between the reference frame and the current frame to be estimated, T denotes the pose transformation from the reference frame to the current frame, and π denotes the projection model of the camera, i.e., the transformation projecting 3D space onto the image;
A least-squares objective on the reprojection error is built according to the above formula and iteratively optimized with the LM (Levenberg-Marquardt) algorithm, yielding a preliminary inter-frame relative pose.
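The residual that the LM iteration of step 6 drives toward zero can be sketched as follows. This is an illustrative sketch: the pose is parameterized directly as (R, t) for brevity rather than the minimal parameterization ε the patent optimizes, and the LM solver itself would come from a library such as g2o or Ceres:

```python
import numpy as np

def project(P, K):
    """pi: perspective projection of a 3-D point onto the image."""
    x = K @ P
    return x[:2] / x[2]

def reprojection_residuals(points3d, points2d, R, t, K):
    """Stacked residuals pi(R*P + t) - p of the step-6 objective."""
    return np.concatenate([project(R @ P + t, K) - p
                           for P, p in zip(points3d, points2d)])

K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
pts3d = [np.array([0.1, -0.2, 2.0]), np.array([-0.3, 0.1, 1.5])]
pts2d = [project(P, K) for P in pts3d]   # observations under the true pose

# with the true (identity) pose, every residual is exactly zero
res = reprojection_residuals(pts3d, pts2d, np.eye(3), np.zeros(3), K)
```

An LM solver repeatedly linearizes these residuals and updates the pose until the squared norm stops decreasing.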
Step 7. Minimize the photometric error. The objective function of the photometric-error minimization is:
ε* = argmin_ε ‖I₂(π(T(P; ε))) − I₁(p)‖²

where I₁(·) and I₂(·) denote the pixel brightness of the reference frame and the current frame, respectively;
A least-squares objective on the photometric error is built according to the above formula; the preliminary pose obtained in step 6 serves as the initial value of the iterative optimization in this step, which is again carried out with the LM (Levenberg-Marquardt) algorithm and yields the twice-optimized inter-frame relative pose.
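Step 7's photometric term compares brightness at the reprojected pixel in the current frame against brightness at p in the reference frame. A minimal nearest-neighbour sketch (illustrative only; real implementations interpolate sub-pixel brightness and sum the residual over many points):

```python
import numpy as np

def project(P, K):
    """pi: perspective projection of a 3-D point onto the image."""
    x = K @ P
    return x[:2] / x[2]

def photometric_residual(I_ref, I_cur, p_ref, P, R, t, K):
    """I2(pi(T(P; eps))) - I1(p): warped current-frame brightness minus
    reference-frame brightness (rounded to the nearest pixel here)."""
    u, v = np.round(project(R @ P + t, K)).astype(int)
    return float(I_cur[v, u]) - float(I_ref[p_ref[1], p_ref[0]])

I = np.arange(25, dtype=np.float64).reshape(5, 5)   # toy image
K = np.eye(3)                                       # toy intrinsics
P = np.array([2.0, 3.0, 1.0])                       # projects to pixel (2, 3)

# identical frames and the true (identity) pose give zero residual
r = photometric_residual(I, I, (2, 3), P, np.eye(3), np.zeros(3), K)
```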
Step 8. As shown in Fig. 2, sliding-window pose-graph optimization chooses the poses of the current frame and the 9 preceding frames, 10 frames in total, as the poses to be iteratively optimized inside the window. As shown in Fig. 3, the invention represents this pose-optimization problem as a graph in which every vertex is the pose of a frame and every edge is the relative pose between two frames; the error of an edge is computed as follows:

e_ij = ln(Tij⁻¹ · Ti⁻¹ · Tj)∨

where Tij denotes the relative motion from frame j to frame i, Ti and Tj denote the poses of frames i and j, and ∨ maps the se(3) matrix logarithm to its 6-vector. The poses of the image frames outside the window are still kept in the graph, but during iteration they are marginalized and not updated.
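The pose-graph edge error of step 8 measures the mismatch between the measured relative motion Tij and the motion implied by the vertex estimates Ti, Tj. A minimal check with 4×4 homogeneous matrices (returning the matrix discrepancy rather than the se(3) logarithm, for brevity):

```python
import numpy as np

def translation(x):
    """Homogeneous 4x4 transform for a pure translation along x."""
    T = np.eye(4)
    T[0, 3] = x
    return T

def edge_error(T_ij, T_i, T_j):
    """Discrepancy T_ij^{-1} * T_i^{-1} * T_j - I: zero exactly when the
    edge measurement agrees with the two vertex poses."""
    E = np.linalg.inv(T_ij) @ np.linalg.inv(T_i) @ T_j
    return E - np.eye(4)

T_i, T_j = translation(1.0), translation(3.0)
T_ij = np.linalg.inv(T_i) @ T_j     # consistent measurement: frame j seen from i
err = edge_error(T_ij, T_i, T_j)    # -> all zeros
```

The optimizer adjusts the in-window vertices to drive all such edge errors toward zero, while out-of-window vertices enter the errors as fixed constants.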
The pose graph for the window optimization is built with the g2o library: the block solver, the linear-equation solver, the iterative optimization algorithm, and the number of iterations are set, and after initialization the solver's optimization interface is called to perform the pose-graph optimization.
Step 9. Store the relative pose of the reference frame and the current frame after the pose-graph optimization.
Step 10. Take the current frame as the new reference frame and return to step 2.
Obviously, the above embodiment is merely an example given to clearly illustrate the invention and is not a limitation on its embodiments. Those of ordinary skill in the art may make other variations or changes on the basis of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims of the invention.

Claims (8)

1. An RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization, characterized by comprising the following steps:
S1. Read the first RGB frame from the RGB-D camera as the reference frame and the first depth image as the depth information of the reference frame; extract feature points from the reference frame and compute ORB feature descriptors;
S2. Read the next RGB frame as the current frame and the next depth image as the depth information of the current frame; extract feature points from the current frame and compute ORB feature descriptors;
S3. Perform preliminary feature matching between the points extracted from the reference frame and the current frame;
S4. Reject mismatches among the feature matches obtained in the previous step with the GMS algorithm;
S5. Step S3 yields 2D-2D matching point pairs between the reference frame and the current frame; in this step, the feature points of the reference frame that survived the screening are projected into 3D space using the camera projection model and the depth information of the reference frame, giving the 3D spatial coordinates of the feature points and thereby converting the 2D-2D matching pairs into 3D-2D matching pairs;
S6. Minimize the reprojection error;
S7. Minimize the photometric error;
S8. Perform sliding-window pose-graph optimization, choosing the poses of the current frame and the preceding frames as the poses to be iteratively optimized inside the window;
S9. Take the current frame as the new reference frame and its depth information as the depth information of the reference frame, then return to step S2.
2. The RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization of claim 1, characterized in that the feature points extracted in step S1 are the pixel positions of FAST (Features from Accelerated Segment Test) corners in the image; the ORB descriptor compares the brightness of the corner with the brightness of 128 surrounding pixels, recording 1 where the pixel is brighter than the keypoint and 0 otherwise, finally producing a 128-dimensional binary vector as the descriptor of that keypoint.
3. The RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization of claim 2, characterized in that step S3 specifically comprises: using the Hamming distance as the similarity measure between two feature points, computing the Hamming distance between each feature point of the reference frame and every feature point of the current frame one by one, and choosing the feature point with the smallest Hamming distance as the match, producing one matching point pair.
4. The RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization of claim 3, characterized in that the GMS algorithm builds on motion smoothness and makes the following assumption: if a feature point P1 on the first image matches point P2 on the second image and the match is correct, then the matches of the feature points inside the 3×3 grid cells centered on P1 will, with high probability, fall inside the 3×3 grid cells centered on P2 in the second image; based on this assumption, both images are divided into grids and the number of matches in corresponding grid regions is counted; a match is accepted as correct if this count exceeds a threshold T, otherwise it is rejected as a mismatch, where T is computed as follows:

T = α·√n
where n is the mean number of feature points per grid cell and α is a customizable parameter.
5. The RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization of claim 4, characterized in that the camera projection model is computed as:
P = d·K⁻¹·p
where p is the pixel coordinate of the feature point, K the camera intrinsic matrix, d the depth of the feature point, and P its 3D spatial coordinate.
6. The RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization of claim 4, characterized in that the objective function of the reprojection-error minimization is:

ε* = argmin_ε ‖π(T(P; ε)) − p‖²

where ε is the relative pose between the reference frame and the current frame to be estimated, T denotes the pose transformation from the reference frame to the current frame, and π denotes the projection model of the camera, i.e., the transformation projecting 3D space onto the image.
7. The RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization of claim 4, characterized in that the objective function of the photometric-error minimization is:

ε* = argmin_ε ‖I₂(π(T(P; ε))) − I₁(p)‖²

where I₁(·) and I₂(·) denote the pixel brightness of the reference frame and the current frame, respectively.
8. The RGB-D visual odometry based on GMS feature matching and sliding-window pose-graph optimization of claim 4, characterized in that the pose-optimization problem is represented as a graph in which every vertex is the pose of a frame and every edge is the relative pose between two frames; the error of an edge is computed as follows:

e_ij = ln(Tij⁻¹ · Ti⁻¹ · Tj)∨

where Tij denotes the relative motion from frame j to frame i, and Ti, Tj denote the poses of frames i and j; the poses of the image frames outside the window are still kept in the graph, but during iteration they are marginalized and not updated.
CN201910195323.3A 2019-03-14 2019-03-14 Visual odometer based on GMS feature matching and sliding window pose graph optimization Expired - Fee Related CN109974743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910195323.3A CN109974743B (en) 2019-03-14 2019-03-14 Visual odometer based on GMS feature matching and sliding window pose graph optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910195323.3A CN109974743B (en) 2019-03-14 2019-03-14 Visual odometer based on GMS feature matching and sliding window pose graph optimization

Publications (2)

Publication Number Publication Date
CN109974743A true CN109974743A (en) 2019-07-05
CN109974743B CN109974743B (en) 2021-01-01

Family

ID=67078903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910195323.3A Expired - Fee Related CN109974743B (en) 2019-03-14 2019-03-14 Visual odometer based on GMS feature matching and sliding window pose graph optimization

Country Status (1)

Country Link
CN (1) CN109974743B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838145A (en) * 2019-10-09 2020-02-25 Xi'an University of Technology Visual positioning and mapping method for indoor dynamic scenes
CN111047620A (en) * 2019-11-15 2020-04-21 Guangdong University of Technology Unmanned aerial vehicle visual odometry method based on deep point-line features
CN111144441A (en) * 2019-12-03 2020-05-12 Southeast University DSO photometric parameter estimation method and device based on feature matching
CN111161318A (en) * 2019-12-30 2020-05-15 Guangdong University of Technology Dynamic scene SLAM method based on the YOLO algorithm and GMS feature matching
CN111462190A (en) * 2020-04-20 2020-07-28 Hisense Group Co., Ltd. Intelligent refrigerator and food material input method
CN111724439A (en) * 2019-11-29 2020-09-29 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Visual positioning method and device in dynamic scenes
CN112418288A (en) * 2020-11-17 2021-02-26 Wuhan University Dynamic visual SLAM method based on GMS and motion detection
CN115115708A (en) * 2022-08-22 2022-09-27 Honor Device Co., Ltd. Image pose calculation method and system
US11899469B2 (en) 2021-08-24 2024-02-13 Honeywell International Inc. Method and system of integrity monitoring for visual odometry

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105938619A (en) * 2016-04-11 2016-09-14 China University of Mining and Technology Visual odometry implementation method based on fusion of RGB and depth information
CN106556412A (en) * 2016-11-01 2017-04-05 Harbin Engineering University RGB-D visual odometry method considering surface constraints in indoor environments
CN107025668A (en) * 2017-03-30 2017-08-08 South China University of Technology Design method of a visual odometry based on a depth camera
CN108537848A (en) * 2018-04-19 2018-09-14 Beijing University of Technology Two-stage pose optimization estimation method for indoor scene reconstruction
US20190018423A1 (en) * 2017-07-12 2019-01-17 Mitsubishi Electric Research Laboratories, Inc. Barcode: Global Binary Patterns for Fast Visual Inference


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIANG FENGZHUO et al.: "Monocular visual odometry method with scale recovery", Journal of Geomatics Science and Technology *
ZHU QIGUANG et al.: "Hybrid semi-dense visual odometry algorithm for mobile robots", Chinese Journal of Scientific Instrument *
LI QI et al.: "Binocular visual odometry considering feature mismatching", Industrial Control Computer *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838145A (en) * 2019-10-09 2020-02-25 Xi'an University of Technology Visual positioning and mapping method for indoor dynamic scenes
CN111047620A (en) * 2019-11-15 2020-04-21 Guangdong University of Technology Unmanned aerial vehicle visual odometry method based on deep point-line features
CN111724439A (en) * 2019-11-29 2020-09-29 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Visual positioning method and device in dynamic scenes
CN111724439B (en) * 2019-11-29 2024-05-17 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Visual positioning method and device in dynamic scenes
CN111144441B (en) * 2019-12-03 2023-08-08 Southeast University DSO photometric parameter estimation method and device based on feature matching
CN111144441A (en) * 2019-12-03 2020-05-12 Southeast University DSO photometric parameter estimation method and device based on feature matching
CN111161318A (en) * 2019-12-30 2020-05-15 Guangdong University of Technology Dynamic scene SLAM method based on the YOLO algorithm and GMS feature matching
CN111462190A (en) * 2020-04-20 2020-07-28 Hisense Group Co., Ltd. Intelligent refrigerator and food material input method
CN111462190B (en) * 2020-04-20 2023-11-17 Hisense Group Co., Ltd. Intelligent refrigerator and food material input method
CN112418288A (en) * 2020-11-17 2021-02-26 Wuhan University Dynamic visual SLAM method based on GMS and motion detection
CN112418288B (en) * 2020-11-17 2023-02-03 Wuhan University Dynamic visual SLAM method based on GMS and motion detection
US11899469B2 2021-08-24 2024-02-13 Honeywell International Inc. Method and system of integrity monitoring for visual odometry
CN115115708B (en) * 2022-08-22 2023-01-17 Honor Device Co., Ltd. Image pose calculation method and system
CN115115708A (en) * 2022-08-22 2022-09-27 Honor Device Co., Ltd. Image pose calculation method and system

Also Published As

Publication number Publication date
CN109974743B (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN109974743A (en) A kind of RGB-D visual odometry optimized based on GMS characteristic matching and sliding window pose figure
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN108596974B (en) Dynamic scene robot positioning and mapping system and method
US9942535B2 (en) Method for 3D scene structure modeling and camera registration from single image
CN108682027A VSLAM implementation method and system based on point-line feature fusion
CN108256504A Three-dimensional dynamic gesture recognition method based on deep learning
CN108898676B Method and system for detecting collision and occlusion between virtual and real objects
CN106780592A Kinect depth reconstruction algorithm based on camera motion and image shading
CN111882602B Visual odometry implementation method based on ORB feature points and GMS matching filter
Li et al. Large-scale, real-time 3D scene reconstruction using visual and IMU sensors
CN110688905A (en) Three-dimensional object detection and tracking method based on key frame
CN112053447A (en) Augmented reality three-dimensional registration method and device
CN112446882A (en) Robust visual SLAM method based on deep learning in dynamic scene
CN110070578B (en) Loop detection method
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN114708293A (en) Robot motion estimation method based on deep learning point-line feature and IMU tight coupling
CN108961385A SLAM mapping method and device
Xu et al. Crosspatch-based rolling label expansion for dense stereo matching
CN111462132A (en) Video object segmentation method and system based on deep learning
Wang et al. Improving RGB-D SLAM accuracy in dynamic environments based on semantic and geometric constraints
CN114612525A (en) Robot RGB-D SLAM method based on grid segmentation and double-map coupling
Zhang et al. A robust visual odometry based on RGB-D camera in dynamic indoor environments
Min et al. Coeb-slam: A robust vslam in dynamic environments combined object detection, epipolar geometry constraint, and blur filtering
CN114707611B (en) Mobile robot map construction method, storage medium and equipment based on graph neural network feature extraction and matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210101