CN110060277A - A multi-feature fusion visual SLAM method - Google Patents

A multi-feature fusion visual SLAM method

Info

Publication number
CN110060277A
CN110060277A
Authority
CN
China
Prior art keywords
feature
point
line
characteristic
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910357796.9A
Other languages
Chinese (zh)
Inventor
尤波
梁强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN201910357796.9A
Publication of CN110060277A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The multi-feature fusion visual SLAM method relates to the field of robot visual localization and mapping. The invention discloses a multi-feature fusion visual SLAM method based on a depth camera, which makes full use of the point and line features extracted from images and constructs plane features from those point and line features, so as to solve the visual localization problem when pure point features fail. Point features are extracted with an adaptive threshold method to obtain a more uniform distribution of points; line features are extracted, short segments are deleted and split segments are merged, improving the accuracy of line feature matching; the point and line features are used for inter-frame pose estimation and the construction of the local map; plane features are computed with a minimal parameterization to reduce the amount of computation. By constructing a back-projection error function that fuses the features, the point, line and plane features are tightly coupled, and a global map is built for global pose optimization. The present invention is a visual SLAM method with high precision, good real-time performance and strong robustness, and solves the problem that point-feature-based visual SLAM degrades in accuracy or even fails in low-texture environments.

Description

A multi-feature fusion visual SLAM method
Technical field:
The invention belongs to the technical field of robot simultaneous localization and mapping, and in particular relates to a multi-feature fusion SLAM (simultaneous localization and mapping) method.
Background art:
With the development of visual SLAM technology, inter-frame optimization and the graph-optimization framework have become the mainstream framework for the visual SLAM problem. The graph-optimization framework introduces motion estimation and bundle adjustment into visual SLAM: the robot pose and the surrounding environment features are solved as a global optimization problem; features extracted from the images are tracked, an error function is established, and either a linearized problem is constructed under linear assumptions or nonlinear optimization is carried out directly, so that the robot pose that minimizes the error function is obtained while the landmarks are optimized at the same time. Earlier structure-from-motion (SfM) pipelines consumed too much time in feature extraction, matching and the subsequent optimization, so they could only perform offline pose optimization and three-dimensional reconstruction and could not accomplish real-time online self-localization and map building. In recent years, with the discovery of the sparsity of bundle adjustment and the upgrading of computer hardware, the time consumed by each stage has been greatly reduced, so visual SLAM based on the graph-optimization framework can achieve real-time localization and mapping.
Visual SLAM is broadly divided into two classes: direct methods based on gray values and indirect methods based on features. Direct methods perform well in some cases; they need no feature extraction and matching, only the tracking of gray gradients, which saves a large amount of computation time, and they can enhance the continuity of visual SLAM in regions with few features. At the same time, however, because direct methods use only the pixel gradients for inter-frame matching and motion tracking, they are more strongly affected by illumination intensity, noise interference and similar disturbances. Indirect methods based on features do occupy part of the computing resources, but with the improvement of hardware the occupied resources have dropped substantially; the extracted features are hand-crafted, have relatively stable characteristics, and support feature tracking, mapping and loop-closure detection, so the whole process of robot simultaneous localization and mapping can be completed. Feature-based visual SLAM methods have therefore increasingly become the mainstream of SLAM research.
Summary of the invention:
To solve the problems that the point features computed by existing point-feature-based visual SLAM algorithms cannot be tracked stably in low-texture environments, that the constraints generated from point features weaken, and that an accurate environment map cannot be constructed, the present invention provides a multi-feature fusion visual SLAM method.
The SLAM problem is first modeled as follows: the motion of the robot from time t to t+1 is represented by a six-dimensional vector ζ_{t,t+1} on SE(3) (an element of the Lie algebra se(3)), and Ω_{ζ_{t,t+1}} denotes the covariance matrix of the motion estimate, approximated by the inverse of the Hessian matrix of the loss function at the last iteration.
The technical solution adopted by the invention is a visual SLAM method based on the fusion of point, line and plane features of images, characterized by comprising the following steps:
Step 1, the depth camera is calibrated to determine the camera intrinsics;
Step 2, Gaussian filtering is applied to the video stream captured by the camera on the mobile robot platform, to reduce the influence of noise on the subsequent steps;
Step 3, feature extraction is performed on the corrected images, detecting the point and line features of each frame online. Specifically: FAST corner features are extracted from each frame and described with the rBRIEF descriptor; line features are detected with the LSD algorithm and described with the LBD descriptor; and the geometric information provided by the line segments is used to filter out segments whose direction and length differ and whose endpoint distance is large;
Step 4, plane features are constructed from the extracted point and line features; candidate plane features are constructed in the following three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines;
Step 5, the point and line features of the current frame and the previous frame are matched to obtain point and line feature matches, establishing the geometric correspondence between frames;
Step 6, the features are back-projected according to the inter-frame correspondences obtained in step 5, and the back-projection error of the features is minimized with the Gauss-Newton method; to handle outliers, a two-step minimization with a pseudo-Huber cost function is executed, yielding the incremental motion between two consecutive frames;
Step 7, the selection of keyframes is carried out. The uncertainty of the covariance matrix is converted to an entropy by
H(ζ) = 3(1 + log(2π)) + 0.5·log(|Ω_ζ|),
where Ω denotes the covariance matrix of the incremental motion, and the ratio
α = H(ζ_{i,i+u}) / H(ζ_{i,i+1})
is calculated. If α < 0.9 and the current frame shares more than 300 features with the previous keyframe, the image is added to the keyframe list; otherwise return to step 2 and continue processing the input image data;
Step 8, a local map is built based on the graph idea, composed of all keyframes related to the current frame and the features observed from those keyframes. ψ is defined as the variable in se(3), including the incremental motion between every two consecutive frames, the space coordinates of each point feature, the space coordinates of the line feature endpoints, and the minimal representation vectors of the planes; the distance between the observed data and the landmark projections is then minimized to construct an accurate local map;
Step 9, it is judged whether the robot trajectory forms a closed loop. If so, the local maps at both ends are corrected, again using the graph-optimization idea: the robot poses represented by the keyframes serve as nodes, and the constraint relations between the local maps at both ends serve as edges for the trajectory correction. Otherwise, the presence of a stop signal is checked; if no stop signal has been received, return to step 2 and continue the visual SLAM process, otherwise go to step 10;
Step 10, stop motion.
Further, the implementation method of step 1:
Checkerboard images of a fixed size are captured in advance from multiple viewpoints with the camera; Zhang Zhengyou's camera calibration method is then applied to the captured checkerboard images to compute the camera intrinsics, yielding the calibration result.
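The patent gives no source code; purely as an illustrative sketch, the following Python snippet shows how such a Zhang-style calibration could be performed with OpenCV. The checkerboard pattern size, square edge length and function name are assumptions, not values fixed by the patent.

```python
import glob

import cv2
import numpy as np

def calibrate_depth_camera(images_glob, pattern=(9, 6), square=0.025):
    """Zhang's calibration via OpenCV: detect checkerboard corners in several
    views and solve for the intrinsic matrix K and distortion coefficients.
    Pattern size (9x6 inner corners) and square edge (0.025 m) are assumed."""
    # 3D corner coordinates of the board in its own plane (z = 0).
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(images_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(obj)
            img_pts.append(corners)
            size = gray.shape[::-1]

    # Solve for the intrinsics from all successfully detected views.
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```

The returned matrix K provides the intrinsics used by the projection steps later in the method.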
Further, the implementation of step 3:
Step 3.1, the FAST corner features of each frame are extracted with the M16 template shown in Fig. 2: if there are 12 contiguous pixels on the circle whose gray values are all greater than I_p + T or all less than I_p - T, the center pixel is a feature point. An adaptive threshold T is computed first from the gray values on the circle, where Ī denotes the average gray value of all pixels on the current circle. The gray values of the four points numbered 1, 5, 9 and 13 are first compared with T; only when at least three of them satisfy the above condition are the remaining 12 points examined. The direction of the feature point is determined by the gray-centroid method: the centroid position C is computed in a neighborhood S, whose moments are defined as
m_pq = Σ_{x,y∈S} x^p · y^q · I(x,y),
the centroid is
C = (m_10/m_00, m_01/m_00),
and the direction of the feature point is
θ = atan2(m_01, m_10).
The extracted corner features are described by the rBRIEF descriptor, which has rotational invariance; an illustrative sketch of this computation follows.
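As a sketch only (the patent's exact threshold formula is not reproduced in the text above), the Python fragment below illustrates an adaptive FAST threshold computed from the circle pixels together with the gray-centroid orientation. The gain k and the spread-based stand-in formula are assumptions.

```python
import numpy as np

# Pixel offsets of the 16-point Bresenham circle of radius 3 (the M16
# template), numbered 1..16 starting from the top and going clockwise.
M16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
       (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def adaptive_threshold(img, x, y, k=0.35):
    """Adaptive FAST threshold from the circle around (x, y). The patent
    derives T from the 10 brightest and 10 darkest circle pixels and the mean
    circle intensity; since the exact formula is not in the text, this
    spread-based variant with an assumed gain k is only a stand-in."""
    ring = np.sort([float(img[y + dy, x + dx]) for dx, dy in M16])
    return k * (ring[-10:].mean() - ring[:10].mean())

def orientation(img, x, y, r=3):
    """Gray-centroid direction: theta = atan2(m01, m10), with moments
    m_pq = sum over the circular neighborhood S of x^p * y^q * I(x, y)."""
    m01 = m10 = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx * dx + dy * dy <= r * r:            # circular neighborhood S
                m10 += dx * float(img[y + dy, x + dx])
                m01 += dy * float(img[y + dy, x + dx])
    return np.arctan2(m01, m10)
```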
Step 3.2, line segment features are detected in the image with the LSD algorithm, and segments that may belong to the same straight line are merged: for any two line features, the angle between them and the sum of the distances from the midpoint of each segment to the other segment are computed; if these are smaller than the thresholds T_l,ang and T_l,dis respectively, the two segments are considered the same line segment and are merged. The LBD algorithm is then applied to the optimized line features to obtain the line feature descriptors.
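A minimal sketch of the merge test of step 3.2, assuming segment endpoints given as 2D numpy vectors; the threshold values T_l,ang and T_l,dis are left open by the patent, so the defaults below are assumptions.

```python
import numpy as np

def point_to_line_distance(p, a, b):
    """Distance from point p to the infinite line through endpoints a and b."""
    d = b - a
    ap = p - a
    return abs(d[0] * ap[1] - d[1] * ap[0]) / np.linalg.norm(d)

def should_merge(a1, a2, b1, b2, t_ang=np.deg2rad(3.0), t_dis=2.0):
    """Merge test: the angle between the two LSD segments and the sum of the
    midpoint-to-segment distances must both fall below T_l,ang and T_l,dis.
    The default thresholds here are assumed values for illustration."""
    da, db = a2 - a1, b2 - b1
    cos_ang = abs(np.dot(da, db)) / (np.linalg.norm(da) * np.linalg.norm(db))
    angle = np.arccos(np.clip(cos_ang, 0.0, 1.0))
    d_sum = (point_to_line_distance((a1 + a2) / 2.0, b1, b2) +
             point_to_line_distance((b1 + b2) / 2.0, a1, a2))
    return angle < t_ang and d_sum < t_dis
```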
Further, the implementation method of step 4:
Step 4.1, the space coordinates of the point and line features extracted in step 3 are solved; in particular, for a line feature the equation of the space line it belongs to is computed after its space coordinates are obtained. The features are then divided into two sets: point features are assigned to the set S_p and line features to the set S_l;
Step 4.2, candidate plane features are constructed in the following three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines; and the general equation of each candidate plane is computed: p_1·x + p_2·y + p_3·z = p_4;
Step 4.3, the coefficients of the plane equation are normalized:
||[p_1, p_2, p_3, p_4]^T|| = 1,
so that the plane is represented by only three quantities: p = [p_1, p_2, p_3]^T.
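A minimal sketch of the three-point construction and the minimal plane representation of step 4, assuming the feature points are 3D numpy vectors; the sign convention for p_4 is an assumption the patent does not state.

```python
import numpy as np

def plane_from_points(x1, x2, x3):
    """Fit p1*x + p2*y + p3*z = p4 through three coplanar feature points,
    normalize ||[p1, p2, p3, p4]|| = 1, and keep only p = [p1, p2, p3];
    |p4| = sqrt(1 - ||p||^2) is then implicit in the minimal representation."""
    n = np.cross(x2 - x1, x3 - x1)            # normal direction [p1, p2, p3]
    coeffs = np.append(n, np.dot(n, x1))      # general equation coefficients
    coeffs /= np.linalg.norm(coeffs)          # normalization of step 4.3
    if coeffs[3] < 0:                         # assumed sign fix so p4 >= 0
        coeffs = -coeffs
    return coeffs[:3]                         # three-quantity representation
```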
Further, the implementation method of step 5:
Step 5.1, the point, line and plane features obtained in steps 3 and 4 are matched. For point features, the matches are sorted by descriptor distance in ascending order and the top 60 percent are taken as the best point matches; for line features, the matches are likewise sorted by descriptor distance in ascending order and the top 50 percent are taken as the best matches; for plane features, a similarity measure is calculated, sorted in ascending order, and the top 3-5 are taken as the best matches;
Step 5.2, RANSAC is applied to the obtained matches to reject erroneous feature matches.
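A sketch of the ranked matching and outlier rejection of step 5, in Python with OpenCV. The patent does not name the geometric model used inside RANSAC; the fundamental-matrix model below is an assumption, as are the helper names.

```python
import cv2
import numpy as np

def keep_best(matches, ratio):
    """Sort matches by ascending descriptor distance and keep the best
    fraction (0.6 for point features, 0.5 for line features in step 5.1)."""
    matches = sorted(matches, key=lambda m: m.distance)
    return matches[:int(len(matches) * ratio)]

def ransac_inliers(pts_prev, pts_cur, thresh=1.0):
    """Reject remaining mismatches with RANSAC (step 5.2); needs >= 8 point
    pairs of shape (N, 2). The fundamental-matrix model is an assumption."""
    _, mask = cv2.findFundamentalMat(pts_prev, pts_cur, cv2.FM_RANSAC,
                                     thresh, 0.99)
    return mask.ravel().astype(bool)

# Usage with ORB point features and Hamming brute-force matching:
# matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# best = keep_best(matcher.match(desc_prev, desc_cur), 0.6)
```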
Further, the implementation method of step 6:
Step 6.1, three-dimensional reconstruction is performed from the matched features to obtain their three-dimensional coordinates; re-projection is then carried out according to the geometric relationship between adjacent frames, the re-projection errors of the features are computed, and the projection errors of the points and lines are minimized with the Gauss-Newton method;
Step 6.2, using the pseudo-Huber loss function, a single two-step minimization is executed, finally yielding the incremental motion between two consecutive keyframes.
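The pseudo-Huber kernel itself is standard; the sketch below shows the cost and the iteratively-reweighted weight a Gauss-Newton two-step scheme would use (first solve with all matches, then down-weight or drop large residuals and solve again). The constant delta and this surrounding scheme are assumptions, since the patent fixes neither.

```python
import numpy as np

def pseudo_huber(r, delta=1.0):
    """Pseudo-Huber cost rho(r) = 2*delta^2*(sqrt(1 + (r/delta)^2) - 1) and
    the IRLS weight rho'(r)/(2r) = 1/sqrt(1 + (r/delta)^2) used inside
    Gauss-Newton. delta is an assumed tuning constant."""
    s = np.sqrt(1.0 + (r / delta) ** 2)
    return 2.0 * delta ** 2 * (s - 1.0), 1.0 / s
```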
Further, the implementation method of step 7:
Step 7.1, the uncertainty of the covariance matrix is converted to the scalar form of an entropy:
H(ζ) = 3(1 + log(2π)) + 0.5·log(|Ω_ζ|),
where Ω denotes the covariance matrix of the incremental motion. For a given input frame, the entropy of the estimate between the previous keyframe i and the current frame i+u is checked, and its ratio to that of the first frame i+1 after the keyframe is calculated:
α = H(ζ_{i,i+u}) / H(ζ_{i,i+1}).
If the value of α is below a threshold, defined as 0.9 in this method, and the number of coincident features between the current frame and the previous keyframe exceeds 300, frame i+u is inserted into the system as a new keyframe;
Step 7.2, since ζ_{t,t+1} is only an estimate between adjacent frames, a series of the above estimates is composed through a first-order Gaussian propagation to obtain the covariance between two non-adjacent frames.
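A small sketch of the keyframe test of step 7.1 in Python. The explicit ratio formula is reconstructed from the surrounding text, so treat it, the function names, and the slogdet-based entropy as assumptions.

```python
import numpy as np

def motion_entropy(omega):
    """H(zeta) = 3*(1 + log(2*pi)) + 0.5*log(|Omega|): differential entropy
    of a six-dimensional Gaussian motion estimate with covariance Omega."""
    _, logdet = np.linalg.slogdet(omega)      # numerically stable log-det
    return 3.0 * (1.0 + np.log(2.0 * np.pi)) + 0.5 * logdet

def accept_keyframe(omega_i_iu, omega_i_i1, n_shared,
                    alpha_max=0.9, min_shared=300):
    """Keyframe test: alpha = H(zeta_{i,i+u}) / H(zeta_{i,i+1}), accept when
    alpha < 0.9 and more than 300 features are shared with the keyframe."""
    alpha = motion_entropy(omega_i_iu) / motion_entropy(omega_i_i1)
    return alpha < alpha_max and n_shared > min_shared
```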
Further, the implementation method of step 8:
Step 8.1, the estimates of the relative pose changes between the current frame and all previous keyframes are refined and data association is performed; the incremental motion obtained in step 6 serves as the initial input of the Gauss-Newton optimization method;
Step 8.2, ψ is defined as the variable in se(3), including the incremental motion between every two consecutive frames, the space coordinates of each point feature, the space coordinates of the line feature endpoints, and the minimal representation vectors of the planes, and the re-projection error, a weighted sum of the squared point, line and plane projection errors over the local map, is minimized. K_l, P_l, L_l and M_l denote the local map (keyframe) set, the point feature set, the line feature set and the plane feature set respectively, and Ω denotes the covariance matrix of the incremental motion. The point feature projection error e_ij is the two-dimensional distance observed for the j-th map point in the i-th frame, computed according to the following formula:
e_ij = x_ij - π(ζ_iw, X_wj),
where π denotes the mapping from se(3) to SE(3) and X_wj is the three-dimensional space coordinate of the point feature. The projection error of a line feature differs from that of a point feature: the endpoints of the three-dimensional line segment are projected onto the image plane, and their distances to the corresponding infinite straight line on the image plane serve as the error function; e_ik is calculated as
e_ik = ( l_ik^T · π(ζ_iw, P), l_ik^T · π(ζ_iw, Q) )^T,
where P and Q are the endpoint coordinates of the line feature and l_ik represents the equation of the straight line in the image plane, provided by step 3. The projection error e_im of a plane feature is computed according to the logarithmic map of quaternions:
e_im = log{ q[T(ζ_iw)^T p(ζ_im)]^{-1} q(ζ_iw) },
where p(ζ_im) denotes the plane, provided by step 4, q(·) is a quaternion, and T(ζ_iw) is the homogeneous form of a transformation matrix. Iterative updates are carried out with the Levenberg-Marquardt optimization method to obtain the optimal incremental motion.
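As an illustration, the sketch below evaluates the point residual e_ij and a two-component line residual under the usual pinhole model. The stacking of the two endpoint distances into one vector and the normalization of l_ik are reconstructions, not formulas stated verbatim in the patent.

```python
import numpy as np

def project(K, T_iw, X_w):
    """pi(zeta_iw, X_w): transform a world point by the 4x4 pose T_iw
    recovered from zeta_iw, then apply the pinhole intrinsics K."""
    X_c = T_iw[:3, :3] @ X_w + T_iw[:3, 3]
    u = K @ (X_c / X_c[2])
    return u[:2]

def point_error(x_ij, K, T_iw, X_wj):
    """e_ij = x_ij - pi(zeta_iw, X_wj): 2D point re-projection residual."""
    return x_ij - project(K, T_iw, X_wj)

def line_error(l_ik, K, T_iw, P_w, Q_w):
    """Signed distances of both projected segment endpoints to the infinite
    image line l_ik = (a, b, c), assumed normalized so a^2 + b^2 = 1."""
    residuals = []
    for X_w in (P_w, Q_w):
        u = np.append(project(K, T_iw, X_w), 1.0)   # homogeneous pixel
        residuals.append(float(l_ik @ u))           # point-to-line distance
    return np.array(residuals)
```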
Further, the implementation method of step 9:
Step 9.1, BRIEF binary descriptors are computed for the extracted line features to obtain the line feature descriptors; together with the point feature descriptors they form the visual word vector of the current frame, which is matched against the visual bag-of-words model built offline in advance;
Step 9.2, if the matching succeeds, the two local maps containing the two keyframes are optimized and merged, and the local maps associated with these two local maps are updated. Based on the graph-optimization framework, the inter-frame incremental motions serve as the nodes of the graph and the edges are constructed as in step 8; iterative updates are then performed according to an update rule in which the log and exp functions denote the logarithmic map from the Lie group to the Lie algebra and the exponential map from the Lie algebra to the Lie group, respectively (see the sketch after step 9.3);
Step 9.3, if the matching fails, the presence of a stop signal is checked; if no stop signal has been received, return to step 2 and continue running, otherwise go to step 10.
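A compact sketch of the manifold update named in step 9.2, using scipy for the rotation part. The translation handling is a small-motion approximation of the exact SE(3) exponential, and the left-composition convention is an assumption; the patent only names the log and exp maps.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def exp_se3(zeta):
    """Approximate exponential map from a 6-vector zeta = (t, w) to a 4x4
    pose: exact for the rotation (via the rotation vector w), while the
    translation t is applied directly (the exact SE(3) exponential would
    multiply t by the left Jacobian of SO(3))."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(zeta[3:]).as_matrix()
    T[:3, 3] = zeta[:3]
    return T

def log_se3(T):
    """Inverse of the map above: pose matrix back to the 6-vector."""
    zeta = np.empty(6)
    zeta[3:] = Rotation.from_matrix(T[:3, :3]).as_rotvec()
    zeta[:3] = T[:3, 3]
    return zeta

def graph_update(zeta, delta):
    """One iterative update of the loop-closure graph optimization:
    zeta <- log(exp(delta) * exp(zeta)); left composition is assumed."""
    return log_se3(exp_se3(delta) @ exp_se3(zeta))
```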
The invention has the following beneficial effects:
1. Tightly coupling multiple features effectively alleviates problems such as the sharp drop in accuracy and the system failure of feature-point-based systems in low-texture environments;
2. In indoor structured environments the method achieves good localization of the mobile platform and builds surrounding-environment maps that carry structural information; it obtains high-precision results on a variety of public experimental datasets, can map the pose of the mobile platform and the surrounding environment in real time and efficiently using the matched point, line and plane features, and performs loop-closure detection, making full use of the point and line features to reduce the accumulated error.
Brief description of the drawings:
Fig. 1 is a flow diagram of the multi-feature fusion visual SLAM method described in the specific embodiment;
Fig. 2 is a schematic diagram of the M16 template used for FAST features in the SLAM method of the invention;
Fig. 3 is the probabilistic graph model, based on the graph-optimization idea, used by the SLAM method of the invention;
Fig. 4 is a schematic diagram of the plane features used in the SLAM method of the invention;
Fig. 5 is a schematic diagram of the point and line feature re-projection in the SLAM method of the invention.
Specific embodiment:
The present invention is described further below with reference to the accompanying drawings and a specific embodiment.
The technical solution adopted by the invention is a multi-feature fusion visual SLAM method; the specific implementation process is shown in Fig. 1 and mainly comprises the following steps:
Step 1, the depth camera is calibrated to determine the camera intrinsics;
Step 2, Gaussian filtering is applied to the video stream captured by the camera on the mobile robot platform, to reduce the influence of noise on the subsequent steps;
Step 3, feature extraction is performed on the corrected images, detecting the point and line features of each frame online. Specifically: FAST corner features are extracted from each frame and described with the rBRIEF descriptor; line features are detected with the LSD algorithm and described with the LBD descriptor; and the geometric information provided by the line segments is used to filter out segments whose direction and length differ and whose endpoint distance is large;
Step 4, plane features are constructed from the extracted point and line features; candidate plane features are constructed in the following three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines;
Step 5, the point and line features of the current frame and the previous frame are matched to obtain point and line feature matches, establishing the geometric correspondence between frames;
Step 6, the features are back-projected according to the inter-frame correspondences obtained in step 5, and the back-projection error of the features is minimized with the Gauss-Newton method; to handle outliers, a two-step minimization with a pseudo-Huber cost function is executed, yielding the incremental motion between two consecutive frames;
Step 7, the selection of keyframes is carried out. The uncertainty of the covariance matrix is converted to an entropy by
H(ζ) = 3(1 + log(2π)) + 0.5·log(|Ω_ζ|),
and the ratio
α = H(ζ_{i,i+u}) / H(ζ_{i,i+1})
is calculated. If α < 0.9 and the current frame shares more than 300 features with the previous keyframe, the image is added to the keyframe list; otherwise return to step 2 and continue processing the input image data;
Step 8, a local map is built based on the graph idea, composed of all keyframes related to the current frame and the features observed from those keyframes. ψ is defined as the variable in se(3), including the incremental motion between every two consecutive frames, the space coordinates of each point feature, the space coordinates of the line feature endpoints, and the minimal representation vectors of the planes; the distance between the observed data and the landmark projections is then minimized to construct an accurate local map;
Step 9, it is judged whether the robot trajectory forms a closed loop. If so, the local maps at both ends are corrected, again using the graph-optimization idea: the robot poses represented by the keyframes serve as nodes, and the constraint relations between the local maps at both ends serve as edges for the trajectory correction. Otherwise, the presence of a stop signal is checked; if no stop signal has been received, return to step 2 and continue the visual SLAM process, otherwise go to step 10;
Step 10, stop motion.
Further, the implementation method of step 1:
Checkerboard images of a fixed size are captured in advance from multiple viewpoints with the camera; Zhang Zhengyou's camera calibration method is then applied to the captured checkerboard images to compute the camera intrinsics, yielding the calibration result.
Further, the implementation of step 3:
Step 3.1, the FAST corner features of each frame are extracted with the M16 template shown in Fig. 2: if there are 12 contiguous pixels on the circle whose gray values are all greater than I_p + T or all less than I_p - T, the center pixel is a feature point. An adaptive threshold T is computed first from the gray values on the circle, where Ī denotes the average gray value of all pixels on the current circle. The gray values of the four points numbered 1, 5, 9 and 13 are first compared with T; only when at least three of them satisfy the above condition are the remaining 12 points examined. The direction of the feature point is determined by the gray-centroid method: the centroid position C is computed in a neighborhood S, whose moments are defined as
m_pq = Σ_{x,y∈S} x^p · y^q · I(x,y),
the centroid is
C = (m_10/m_00, m_01/m_00),
and the direction of the feature point is
θ = atan2(m_01, m_10).
The extracted corner features are described by the rBRIEF descriptor, which has rotational invariance.
Step 3.2, line segment features are detected in the image with the LSD algorithm, and segments that may belong to the same straight line are merged: for any two line features, the angle between them and the sum of the distances from the midpoint of each segment to the other segment are computed; if these are smaller than the thresholds T_l,ang and T_l,dis respectively, the two segments are considered the same line segment and are merged. The LBD algorithm is then applied to the optimized line features to obtain the line feature descriptors.
Further, the implementation method of step 4:
Step 4.1, the space coordinates of the point and line features extracted in step 3 are solved; in particular, for a line feature the equation of the space line it belongs to is computed after its space coordinates are obtained. The features are then divided into two sets: point features are assigned to the set S_p and line features to the set S_l;
Step 4.2, candidate plane features are constructed in the following three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines; and the general equation of each candidate plane is computed: p_1·x + p_2·y + p_3·z = p_4;
Step 4.3, the coefficients of the plane equation are normalized:
||[p_1, p_2, p_3, p_4]^T|| = 1,
so that the plane is represented by only three quantities: p = [p_1, p_2, p_3]^T.
Further, the implementation method of step 5:
Step 5.1, the point, line and plane features obtained in steps 3 and 4 are matched. For point features, the matches are sorted by descriptor distance in ascending order and the top 60 percent are taken as the best point matches; for line features, the matches are likewise sorted by descriptor distance in ascending order and the top 50 percent are taken as the best matches; for plane features, a similarity measure is calculated, sorted in ascending order, and the top 3-5 are taken as the best matches;
Step 5.2, RANSAC is applied to the obtained matches to reject erroneous feature matches.
Further, the implementation method of step 6:
Step 6.1, three-dimensional reconstruction is performed from the matched features to obtain their three-dimensional coordinates; re-projection is then carried out according to the geometric relationship between adjacent frames, the re-projection errors of the features are computed, and the projection errors of the points and lines are minimized with the Gauss-Newton method;
Step 6.2, using the pseudo-Huber loss function, a single two-step minimization is executed, finally yielding the incremental motion between two consecutive keyframes.
Further, the implementation method of step 7:
Step 7.1, the uncertainty of the covariance matrix is converted to the scalar form of an entropy:
H(ζ) = 3(1 + log(2π)) + 0.5·log(|Ω_ζ|).
For a given input frame, the entropy of the estimate between the previous keyframe i and the current frame i+u is checked, and its ratio to that of the first frame i+1 after the keyframe is calculated:
α = H(ζ_{i,i+u}) / H(ζ_{i,i+1}).
If the value of α is below a threshold, defined as 0.9 in this method, and the number of coincident features between the current frame and the previous keyframe exceeds 300, frame i+u is inserted into the system as a new keyframe;
Step 7.2, since ζ_{t,t+1} is only an estimate between adjacent frames, a series of the above estimates is composed through a first-order Gaussian propagation to obtain the covariance between two non-adjacent frames.
Further, the implementation method of step 8:
Step 8.1, the estimates of the relative pose changes between the current frame and all previous keyframes are refined and data association is performed; the incremental motion obtained in step 6 serves as the initial input of the Gauss-Newton optimization method;
Step 8.2, ψ is defined as the variable in se(3), including the incremental motion between every two consecutive frames, the space coordinates of each point feature, the space coordinates of the line feature endpoints, and the minimal representation vectors of the planes, and the re-projection error, a weighted sum of the squared point, line and plane projection errors over the local map, is minimized. K_l, P_l, L_l and M_l denote the local map set, the point feature set, the line feature set and the plane feature set respectively. The point feature projection error e_ij is the two-dimensional distance observed for the j-th map point in the i-th frame, computed according to the following formula:
e_ij = x_ij - π(ζ_iw, X_wj),
where π denotes the mapping from se(3) to SE(3) and X_wj is the three-dimensional space coordinate of the point feature. The projection error of a line feature differs from that of a point feature: the endpoints of the three-dimensional line segment are projected onto the image plane, and their distances to the corresponding infinite straight line on the image plane serve as the error function; e_ik is calculated as
e_ik = ( l_ik^T · π(ζ_iw, P), l_ik^T · π(ζ_iw, Q) )^T,
where P and Q are the endpoint coordinates of the line feature and l_ik represents the equation of the straight line in the image plane, provided by step 3. The projection error e_im of a plane feature is computed according to the logarithmic map of quaternions:
e_im = log{ q[T(ζ_iw)^T p(ζ_im)]^{-1} q(ζ_iw) },
where p(ζ_im) denotes the plane, provided by step 4, q(·) is a quaternion, and T(ζ_iw) is the homogeneous form of a transformation matrix. Iterative updates are carried out with the Levenberg-Marquardt optimization method to obtain the optimal incremental motion.
Further, the implementation method of step 9:
Step 9.1, BRIEF binary descriptors are computed for the extracted line features to obtain the line feature descriptors; together with the point feature descriptors they form the visual word vector of the current frame, which is matched against the visual bag-of-words model built offline in advance;
Step 9.2, if the matching succeeds, the two local maps containing the two keyframes are optimized and merged, and the local maps associated with these two local maps are updated. Based on the graph-optimization framework, the inter-frame incremental motions serve as the nodes of the graph and the edges are constructed as in step 8; iterative updates are then performed according to an update rule in which the log and exp functions denote the logarithmic map from the Lie group to the Lie algebra and the exponential map from the Lie algebra to the Lie group, respectively;
Step 9.3, if the matching fails, the presence of a stop signal is checked; if no stop signal has been received, return to step 2 and continue running, otherwise go to step 10.

Claims (5)

1. A multi-feature fusion visual SLAM method, characterized by the implementation of a visual odometry front end and a graph-optimization back end, with the following main steps:
In the visual odometry front end, the acquired images are first preprocessed to reduce the influence of noise in the initial images on feature detection; FAST feature points are extracted from the preprocessed images with an adaptive threshold method, the 500 feature points with the highest Harris response scores are selected, and rBRIEF descriptors are computed for them; the line features of the image are extracted with the LSD algorithm, line segments shorter than 20 pixels are filtered out, and a rule is laid down by which two line segment features that may belong to the same segment are merged; candidate plane features are constructed by the formulated method and represented with the minimal plane representation; the features of two adjacent frames are matched to obtain the inter-frame pose change, and it is judged whether the current frame is a keyframe; if it is a keyframe it is saved, and the local map is constructed;
In the graph-optimization back end, the constructed local map is optimized: the local poses are optimized with the Levenberg-Marquardt algorithm to solve for the optimal pose; it is judged whether the current incoming frame can form a loop closure, and if so the local maps at both ends are corrected, with the robot poses represented by the keyframes as nodes and the constraint relations between the local maps at both ends as edges for the trajectory correction.
2. The multi-feature fusion visual SLAM method according to claim 1, characterized in that a new method is used to detect FAST features:
The pixels around a feature point are selected with the M16 template, whose radius is 3 pixels; the 10 pixels with the largest gray values and the 10 pixels with the smallest gray values on the current circle are selected to calculate the threshold T, where I_i denotes the average gray value of all pixels on the current circle. The pixels numbered 1, 5, 9 and 13 among the 16 points are first compared with T; if the gray values of three or more of them are greater than T, the relationship between the remaining points and T is judged, and if a point is greater than T, it is a feature point.
3. The multi-feature fusion visual SLAM method according to claim 1, characterized in that the minimal plane representation is used to represent the extracted plane features:
Using the point features obtained in claim 2 and the line features extracted with the LSD algorithm, the point and line features are classified: point features are assigned to the set S_p and line features to the set S_l; in particular, for a line feature the equation of the space line it belongs to is computed after its space coordinates are obtained. Candidate plane features are constructed in the following three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines; the general equation of each candidate plane is computed, p_1·x + p_2·y + p_3·z = p_4, and the coefficients of the plane equation are normalized, ||[p_1, p_2, p_3, p_4]^T|| = 1, so that the plane is represented by three quantities: p = [p_1, p_2, p_3]^T.
4. The multi-feature fusion visual SLAM method according to claim 1, characterized in that the re-projection errors of the point, line and plane features are fully used to complete the optimization of the local map:
The estimates of the relative pose changes between the current frame and all previous keyframes are refined and data association is performed; the incremental motion obtained in step 6 serves as the initial input of the Gauss-Newton optimization method; ψ is defined as the variable in se(3), including the incremental motion between every two consecutive frames, the space coordinates of each point feature, the space coordinates of the line feature endpoints, and the minimal representation vectors of the planes, and the re-projection error, a weighted sum of the squared point, line and plane projection errors over the local map, is minimized.
K_l, P_l, L_l and M_l denote the local map set, the point feature set, the line feature set and the plane feature set respectively, and Ω denotes the covariance matrix of the incremental motion; the point feature projection error e_ij is the two-dimensional distance observed for the j-th map point in the i-th frame, computed according to the following formula:
e_ij = x_ij - π(ζ_iw, X_wj),
where π denotes the mapping from se(3) to SE(3), ζ_iw is the six-dimensional vector in se(3) space, X_wj is the three-dimensional space coordinate of the point feature, and x_ij is the image coordinate. The projection error of a line feature differs from that of a point feature: the endpoints of the three-dimensional line segment are projected onto the image plane, and their distances to the corresponding infinite straight line on the image plane serve as the error function; e_ik is calculated as
e_ik = ( l_ik^T · π(ζ_iw, P), l_ik^T · π(ζ_iw, Q) )^T,
where P and Q are the endpoint coordinates of the line feature and l_ik represents the equation of the straight line in the image plane, provided by step 3. The projection error e_im of a plane feature is computed according to the logarithmic map of quaternions:
e_im = log{ q[T(ζ_iw)^T p(ζ_im)]^{-1} q(ζ_iw) },
where p(ζ_im) denotes the plane, provided by step 4, q(·) is a quaternion, and T(ζ_iw) is the homogeneous form of a transformation matrix. Iterative updates are performed with the Levenberg-Marquardt optimization method to obtain the optimal incremental motion.
5. The multi-feature fusion visual SLAM method according to claim 1, characterized in that a bag-of-words model based on point and line features is built offline in advance, and a graph-optimization model is constructed upon a successful match:
BRIEF binary descriptors are computed for the extracted line features to obtain the line feature descriptors; together with the point feature descriptors they form the visual word vector of the current frame, which is matched against the visual bag-of-words model built offline in advance. If the matching succeeds, the two local maps containing the two keyframes are optimized and merged, and the local maps associated with these two local maps are updated; based on the graph-optimization framework, the inter-frame incremental motions serve as the nodes of the graph and the edges are constructed as in step 8; iterative updates are performed according to the update rule in which the log and exp functions denote the logarithmic map from the Lie group to the Lie algebra and the exponential map from the Lie algebra to the Lie group, respectively.
CN201910357796.9A 2019-04-30 2019-04-30 A multi-feature fusion visual SLAM method Pending CN110060277A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910357796.9A CN110060277A (en) A multi-feature fusion visual SLAM method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910357796.9A CN110060277A (en) A multi-feature fusion visual SLAM method

Publications (1)

Publication Number Publication Date
CN110060277A true CN110060277A (en) 2019-07-26

Family

ID=67321742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910357796.9A Pending CN110060277A (en) A multi-feature fusion visual SLAM method

Country Status (1)

Country Link
CN (1) CN110060277A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150103183A1 (en) * 2013-10-10 2015-04-16 Nvidia Corporation Method and apparatus for device orientation tracking using a visual gyroscope
CN104851094A (en) * 2015-05-14 2015-08-19 西安电子科技大学 Improved method of RGB-D-based SLAM algorithm
US20180173991A1 (en) * 2016-02-24 2018-06-21 Soinn Holdings Llc Feature value extraction method and feature value extraction apparatus
CN109544636A (en) * 2018-10-10 2019-03-29 广州大学 A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN109523589A (en) * 2018-11-13 2019-03-26 浙江工业大学 A kind of design method of more robust visual odometry

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DYLAN THOMAS et al.: "A monocular SLAM method for satellite proximity operations", 2016 American Control Conference (ACC) *
仇翔 et al.: "Implementation of visual odometry based on an RGB-D camera" (基于RGB-D相机的视觉里程计实现), Journal of Zhejiang University of Technology (浙江工业大学学报) *
彭天博 et al.: "A method for enhancing the practicality of indoor visual odometry" (增强室内视觉里程计实用性的方法), Pattern Recognition and Artificial Intelligence (模式识别与人工智能) *
林志诚 et al.: "Research on image matching and camera pose solving in the visual SLAM process of mobile robots" (移动机器人视觉SLAM过程中图像匹配及相机位姿求解的研究), Machine Design and Manufacturing Engineering (机械设计与制造工程) *
梁强: "Research on an indoor positioning system based on a depth camera" (基于深度相机的室内定位系统研究), China Master's Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 信息科技辑) *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599569A (en) * 2019-09-16 2019-12-20 上海市刑事科学技术研究院 Method for generating two-dimensional plane graph inside building, storage device and terminal
CN110631586A (en) * 2019-09-26 2019-12-31 珠海市一微半导体有限公司 Map construction method based on visual SLAM, navigation system and device
CN110807799B (en) * 2019-09-29 2023-05-30 哈尔滨工程大学 Line feature visual odometer method combined with depth map inference
CN110807799A (en) * 2019-09-29 2020-02-18 哈尔滨工程大学 Line feature visual odometer method combining depth map inference
CN110926405A (en) * 2019-12-04 2020-03-27 中科新松有限公司 ARV attitude measurement method based on monocular vision vanishing point detection
CN110926405B (en) * 2019-12-04 2022-02-22 中科新松有限公司 ARV attitude measurement method based on monocular vision vanishing point detection
WO2021135645A1 (en) * 2019-12-31 2021-07-08 歌尔股份有限公司 Map updating method and device
CN111275764A (en) * 2020-02-12 2020-06-12 南开大学 Depth camera visual mileage measurement method based on line segment shadow
CN111275764B (en) * 2020-02-12 2023-05-16 南开大学 Depth camera visual mileage measurement method based on line segment shadows
CN111310772B (en) * 2020-03-16 2023-04-21 上海交通大学 Point line characteristic selection method and system for binocular vision SLAM
CN111310772A (en) * 2020-03-16 2020-06-19 上海交通大学 Point line feature selection method and system for binocular vision SLAM
CN111461141B (en) * 2020-03-30 2023-08-29 歌尔科技有限公司 Equipment pose calculating method and device
CN111461141A (en) * 2020-03-30 2020-07-28 歌尔科技有限公司 Equipment pose calculation method device and equipment
CN111369594A (en) * 2020-03-31 2020-07-03 北京旋极信息技术股份有限公司 Method, device, computer storage medium and terminal for realizing target tracking
CN111693047A (en) * 2020-05-08 2020-09-22 中国航空工业集团公司西安航空计算技术研究所 Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN111693047B (en) * 2020-05-08 2022-07-05 中国航空工业集团公司西安航空计算技术研究所 Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN111899334B (en) * 2020-07-28 2023-04-18 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics
CN111899334A (en) * 2020-07-28 2020-11-06 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics
CN112114966A (en) * 2020-09-15 2020-12-22 杭州未名信科科技有限公司 Light beam adjustment calculation method of visual SLAM
CN112631271A (en) * 2020-10-09 2021-04-09 南京凌华微电子科技有限公司 Map generation method based on robot perception fusion
CN112258391A (en) * 2020-10-12 2021-01-22 武汉中海庭数据技术有限公司 Fragmented map splicing method based on road traffic marking
CN112305558B (en) * 2020-10-22 2023-08-01 中国人民解放军战略支援部队信息工程大学 Mobile robot track determination method and device using laser point cloud data
CN112305558A (en) * 2020-10-22 2021-02-02 中国人民解放军战略支援部队信息工程大学 Mobile robot track determination method and device by using laser point cloud data
CN112419497A (en) * 2020-11-13 2021-02-26 天津大学 Monocular vision-based SLAM method combining feature method and direct method
CN112381890A (en) * 2020-11-27 2021-02-19 上海工程技术大学 RGB-D vision SLAM method based on dotted line characteristics
CN112381890B (en) * 2020-11-27 2022-08-02 上海工程技术大学 RGB-D vision SLAM method based on dotted line characteristics
CN112945233B (en) * 2021-01-15 2024-02-20 北京理工大学 Global drift-free autonomous robot simultaneous positioning and map construction method
CN112945233A (en) * 2021-01-15 2021-06-11 北京理工大学 Global drift-free autonomous robot simultaneous positioning and map building method
CN112880687B (en) * 2021-01-21 2024-05-17 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN112880687A (en) * 2021-01-21 2021-06-01 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN112802196A (en) * 2021-02-01 2021-05-14 北京理工大学 Binocular inertia simultaneous positioning and map construction method based on dotted line feature fusion
CN112802196B (en) * 2021-02-01 2022-10-21 北京理工大学 Binocular inertia simultaneous positioning and map construction method based on dotted line feature fusion
CN112950709A (en) * 2021-02-21 2021-06-11 深圳市优必选科技股份有限公司 Pose prediction method, pose prediction device and robot
CN112950709B (en) * 2021-02-21 2023-10-24 深圳市优必选科技股份有限公司 Pose prediction method, pose prediction device and robot
CN113608236A (en) * 2021-08-03 2021-11-05 哈尔滨智兀科技有限公司 Mine robot positioning and image building method based on laser radar and binocular camera
CN113763481A (en) * 2021-08-16 2021-12-07 北京易航远智科技有限公司 Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene
CN113763481B (en) * 2021-08-16 2024-04-05 北京易航远智科技有限公司 Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene
CN114119805A (en) * 2021-10-28 2022-03-01 北京理工大学 Semantic map building SLAM method for point-line-surface fusion
CN114119805B (en) * 2021-10-28 2024-06-04 北京理工大学 Semantic mapping SLAM method for point-line-plane fusion
CN114202035A (en) * 2021-12-16 2022-03-18 成都理工大学 Multi-feature fusion large-scale network community detection algorithm
CN115578620A (en) * 2022-10-28 2023-01-06 北京理工大学 Point-line-surface multi-dimensional feature-visible light fusion slam method
CN115578620B (en) * 2022-10-28 2023-07-18 北京理工大学 Point-line-plane multidimensional feature-visible light fusion slam method

Similar Documents

Publication Publication Date Title
CN110060277A (en) A multi-feature fusion visual SLAM method
CN106055091B (en) A kind of hand gestures estimation method based on depth information and correcting mode
CN108682027A (en) VSLAM realization method and systems based on point, line Fusion Features
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN111563442A (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN108564616A (en) Method for reconstructing three-dimensional scene in the rooms RGB-D of fast robust
CN109544636A (en) A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
KR101257207B1 (en) Method, apparatus and computer-readable recording medium for head tracking
CN107563323A (en) A kind of video human face characteristic point positioning method
CN109460267A (en) Mobile robot offline map saves and real-time method for relocating
CN112446882A (en) Robust visual SLAM method based on deep learning in dynamic scene
CN111476089B (en) Pedestrian detection method, system and terminal for multi-mode information fusion in image
CN105760898A (en) Vision mapping method based on mixed group regression method
CN111368759A (en) Monocular vision-based semantic map construction system for mobile robot
CN114677323A (en) Semantic vision SLAM positioning method based on target detection in indoor dynamic scene
CN116449384A (en) Radar inertial tight coupling positioning mapping method based on solid-state laser radar
CN110070578A (en) A kind of winding detection method
CN111027586A (en) Target tracking method based on novel response map fusion
Zhang Multiple moving objects detection and tracking based on optical flow in polar-log images
CN104778670A (en) Fractal-wavelet self-adaption image denoising method based on multivariate statistical model
Yu et al. Drso-slam: A dynamic rgb-d slam algorithm for indoor dynamic scenes
CN112432653B (en) Monocular vision inertial odometer method based on dotted line characteristics
Xu et al. 3D joints estimation of the human body in single-frame point cloud
CN115855018A (en) Improved synchronous positioning and mapping method based on point-line comprehensive characteristics
CN106558065A (en) The real-time vision tracking to target is realized based on color of image and texture analysiss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190726)