CN113450412B - Visual SLAM method based on linear features


Info

Publication number
CN113450412B
CN113450412B (application CN202110798911.3A)
Authority
CN
China
Prior art keywords: straight line, line, point, features, linear
Prior art date
Legal status: Active
Application number
CN202110798911.3A
Other languages
Chinese (zh)
Other versions
CN113450412A (en)
Inventor
蒋朝阳
王慷
郑晓妮
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Priority to CN202110798911.3A
Publication of CN113450412A
Application granted
Publication of CN113450412B
Legal status: Active (Current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a visual SLAM method based on line features, comprising the following steps: S1, constructing a new type of point feature from the intersections of feature lines; S2, re-extracting point features from the line features and introducing a line feature description method; and S3, matching against the line feature bag of words at the back end according to the new line feature description method to determine a unique line in space. The invention can greatly improve the accuracy of feature extraction; the line features are not affected by factors such as illumination, so the system has good stability and high accuracy.

Description

Visual SLAM method based on linear features
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a visual SLAM method based on linear features.
Background Art
The feature matching methods commonly used for images are point matching, line matching, and surface matching. In image matching, point features are the most common choice because feature points are easy to extract in many types of images. However, compared with feature lines, feature points have three disadvantages. First, feature points are easily affected by environmental noise and have poor stability, whereas feature lines are strongly resistant to environmental noise and highly stable; in addition, compared with feature point extraction and localization, feature lines can be localized more accurately, down to sub-pixel level. Second, feature lines are common in both real and man-made scenes and carry important mid-level information in the image; higher-level image information can be constructed from several feature lines containing such mid-level information, and feature lines have more stable geometric characteristics that effectively handle partial shading or partial occlusion. For example, when a feature point is occluded, the coordinates of the corresponding three-dimensional point cannot be recovered by forward intersection, which directly degrades three-dimensional reconstruction. Third, in low-texture images feature points are difficult to extract, which makes point matching difficult, while feature lines can still be extracted, so line matching is better suited to low-texture images. Surface features are described by attributes such as area, perimeter, centroid, and three-dimensional standard moments, and compared with feature lines they are harder to extract and describe.
Disclosure of Invention
Aiming at the problems of feature-point-based visual SLAM, namely difficult feature point extraction, sensitivity to illumination, and mismatching, the invention introduces line features. Line features, with their stable characteristics, provide a basis for localization and mapping in visual SLAM; at the same time, a new point feature extraction method is introduced by exploiting the fact that two intersecting lines define a point.
A visual SLAM method based on straight line features comprises the following steps:
S1, constructing a new type of point feature from feature line intersections
In each image frame, line features are extracted using the LSD line detection method;
first, whether two lines are coplanar is judged, and for coplanar lines the intersection point is computed and a descriptor of the intersection point is calculated; in the front-end odometry and back-end optimization processes, the visual constraint weight of the intersection point features is increased, and the weight of feature points extracted as ordinary point features is reduced;
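The two geometric tests used in S1 can be sketched as follows. This is a minimal illustration, not part of the original disclosure: the coplanarity tolerance and the line and point representations are assumptions.

```python
import numpy as np

def lines_coplanar(p1, d1, p2, d2, tol=1e-3):
    """Two 3D lines (point p, unit direction d) are coplanar iff the scalar
    triple product (p2 - p1) . (d1 x d2) vanishes (up to noise)."""
    return abs(np.dot(p2 - p1, np.cross(d1, d2))) < tol

def intersect_image_lines(a1, a2, b1, b2):
    """Intersection of two image lines, each given by two endpoints (x, y).
    In homogeneous coordinates the intersection is the cross product of the
    two line vectors, which are themselves cross products of the endpoints."""
    la = np.cross([*a1, 1.0], [*a2, 1.0])   # line through a1, a2
    lb = np.cross([*b1, 1.0], [*b2, 1.0])   # line through b1, b2
    x = np.cross(la, lb)                    # homogeneous intersection point
    if abs(x[2]) < 1e-9:                    # (nearly) parallel lines
        return None
    return x[:2] / x[2]
```

A descriptor can then be computed at the returned pixel in the same way as for any other keypoint.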
S2, re-extracting point features from line features and introducing a line feature description method
After a line is detected and identified, feature points on the line are extracted by traversal and descriptors are computed for these feature points (a sketch of this extraction follows this step);
when two adjacent frames jointly observe a line and share the same feature points in the jointly observed region, the two observations can be regarded as the same line in space;
meanwhile, in order to perform loop closure detection and line matching for the next frame, the new feature points of the current frame's line are added to the line bag-of-words library, and in the later line matching stage the LBD descriptor and point features of the current frame's line are matched against the lines in the bag-of-words library to achieve bag-of-words matching;
in order to fully characterize a line in space, feature points outside the jointly observed region also need to be extracted in the current frame, and the feature points obtained from all currently observed regions are updated into the features of the corresponding line, providing constraints for subsequent steps such as relocalization and loop closure detection.
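A minimal sketch of re-extracting point features along a detected segment. The fixed sampling spacing and the use of ORB as the point descriptor are assumptions made for illustration; the patent does not name a specific point descriptor.

```python
import cv2
import numpy as np

def points_on_segment(img_gray, p_start, p_end, step_px=10, patch_size=31):
    """Sample keypoints at a fixed spacing along a line segment and compute
    ORB descriptors for them (invalid samples are dropped by orb.compute)."""
    p_start = np.asarray(p_start, dtype=float)
    p_end = np.asarray(p_end, dtype=float)
    n = max(int(np.linalg.norm(p_end - p_start) // step_px), 1)
    samples = [p_start + t * (p_end - p_start) for t in np.linspace(0.0, 1.0, n + 1)]
    keypoints = [cv2.KeyPoint(float(x), float(y), float(patch_size)) for x, y in samples]
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.compute(img_gray, keypoints)
    return keypoints, descriptors
```

The resulting point descriptors, together with the segment's LBD descriptor, are what later enter the line bag of words.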
S3, matching against the line feature bag of words at the back end according to the new line feature description method to determine a unique line in space
The LBD descriptor is built on a line segment support region (LSR) composed of several mutually parallel bands, and defines two directions d_L and d_⊥ to achieve rotational invariance, where d_L is the direction of the line segment and d_⊥ is the direction perpendicular to d_L in the clockwise sense; the number of bands is m, their width is w, and their length equals the length of the line segment;
the LBD algorithm introduces a global Gaussian weight function to reduce the weight of rows in the LSR that lie far from the center row along d_⊥; meanwhile, a local Gaussian weight function is introduced to weaken the boundary effect and avoid abrupt changes of the descriptor between bands;
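For reference, the usual form of these two weight functions in the LBD literature is given below; the patent does not restate them, so the constants are assumptions taken from the standard LBD formulation, with d_i the distance of row i from the center row of the LSR and d_k the distance of row k from the center row of the band under consideration:

f_g(i) = \frac{1}{\sqrt{2\pi}\,\sigma_g}\exp\left(-\frac{d_i^2}{2\sigma_g^2}\right), \quad \sigma_g = \tfrac{1}{2}(m\,w - 1) \quad \text{(global weight)}

f_l(k) = \frac{1}{\sqrt{2\pi}\,\sigma_l}\exp\left(-\frac{d_k^2}{2\sigma_l^2}\right), \quad \sigma_l = w \quad \text{(local weight)}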
After the LBD descriptor of a line feature has been computed, the feature points on the line are extracted by traversal and descriptors of the corresponding feature points are calculated;
line features in the line feature bag of words are matched using both the LBD descriptor and the point features; if a line is matched, the current feature points are added to the corresponding line feature so as to gradually complete its point features; if no line is matched, the line is considered newly observed and the current feature line is added to the line feature bag of words, completing the update of the bag; once a line is matched, the change of pose can be computed from the line reprojection error.
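A compact sketch of this back-end matching flow, under the assumption that each bag entry stores one binary LBD descriptor plus the binary descriptors of the points re-extracted on the line; the distance thresholds are illustrative, not taken from the patent.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_line_against_bag(lbd, point_descs, bag,
                           lbd_thresh=40, point_thresh=40, min_shared=3):
    """bag: list of dicts {'lbd': uint8 array, 'points': [uint8 arrays]}."""
    for entry in bag:
        if hamming(lbd, entry['lbd']) > lbd_thresh:
            continue
        shared = sum(1 for p in point_descs
                     if entry['points'] and
                        min(hamming(p, q) for q in entry['points']) < point_thresh)
        if shared >= min_shared:              # same spatial line: extend its point set
            entry['points'].extend(point_descs)
            return entry
    new_entry = {'lbd': lbd, 'points': list(point_descs)}   # newly observed line
    bag.append(new_entry)
    return new_entry
```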
The invention has the following beneficial effects:
1. The introduction of line features can greatly improve the accuracy of feature extraction; line features are not affected by factors such as illumination, so the system has good stability and high accuracy.
2. Two lines form a point feature; matching the stability of the line features, the point features extracted in this way are correspondingly more stable, and because their matching relies on the line features, the probability of mismatching is correspondingly reduced. Meanwhile, in back-end global optimization, because these point features are highly credible, the newly constructed feature points are given a kernel function whose cost grows faster with increasing error than the kernel function used for ordinary point features (see the sketch after this list).
3. Point features are re-extracted from line features, and a new line feature description method is introduced.
4. For line feature bag-of-words matching, line features are introduced into the matching bag of words on top of the original point features, which increases the confidence of loop closure detection.
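One possible reading of beneficial effect 2 above, stated as an assumption rather than a definitive implementation: intersection-derived points are trusted more, so their robust kernel down-weights large residuals less aggressively than the kernel used for ordinary point features, i.e. their cost keeps growing faster as the residual grows.

```python
def huber(residual, delta):
    """Standard Huber kernel: quadratic near zero, linear for large residuals."""
    r = abs(residual)
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

def point_cost(residual, is_intersection_point):
    # Illustrative deltas only: a larger delta keeps the cost quadratic for
    # longer, so intersection points contribute a faster-growing error term.
    delta = 3.0 if is_intersection_point else 1.0
    return huber(residual, delta)
```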
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram illustrating the construction of new type point features according to an embodiment;
FIG. 3 is a diagram illustrating feature point-based line matching according to an embodiment;
FIG. 4 is a schematic diagram of an embodiment of extracting point features in a currently observed straight line;
FIG. 5 is a schematic view of an example LBD descriptor;
FIG. 6 is a re-projection of a straight line feature of an embodiment.
Detailed Description
The specific technical scheme of the invention is explained below with reference to the accompanying drawings.
The complete technical scheme provided by the invention is implemented through the following three steps in sequence:
as shown in fig. 1, a visual SLAM method based on straight line features includes the following steps:
s1, constructing new type point characteristics by characteristic straight line intersection points
As shown in fig. 2, in each frame image, a line feature is extracted by using the LSD line detection method. Due to the stability of the straight line detection, the point feature constructed by the intersection point of the straight lines cannot be influenced by illumination and other factors. Meanwhile, for the specificity of the characteristic points and the difference of point characteristics formed by non-coplanar straight lines due to the change of observation angles, whether the two straight lines are coplanar or not is judged firstly, and after an intersection point is calculated by utilizing the coplanar straight lines, a descriptor of the intersection point is calculated. In the front-end odometer and rear-end optimization processes, due to the stability of the straight line intersection point feature, the visual constraint weight of the intersection point feature is increased, and the feature point weight extracted through the common point feature is reduced, so that the SLAM positioning accuracy is improved.
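A minimal sketch of this weighting idea: intersection-derived points enter the visual residual with a larger weight than ordinary point features. The weight values and the simple squared-error form are illustrative assumptions; in practice this weighting would sit inside the front-end odometry and back-end optimization cost.

```python
import numpy as np

W_INTERSECTION = 2.0   # increased weight for intersection-point features
W_ORDINARY = 0.5       # reduced weight for ordinary point features

def weighted_reprojection_cost(observations):
    """observations: iterable of (observed_uv, projected_uv, is_intersection)."""
    cost = 0.0
    for obs_uv, proj_uv, is_intersection in observations:
        e = np.asarray(obs_uv, dtype=float) - np.asarray(proj_uv, dtype=float)
        w = W_INTERSECTION if is_intersection else W_ORDINARY
        cost += w * float(e @ e)
    return cost
```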
S2, re-extracting point features from line features and introducing a new line feature description method
As shown in fig. 3, LBD-based line feature description methods all assume that a complete line segment is observed; when the current frame observes only part of a line, an LBD-based description cannot match the line feature correctly and reliably. Therefore, after a line is detected and identified, feature points on the line are extracted by traversal and descriptors are computed for them. Because the agent moves continuously, the observed lines are also continuous, so when two adjacent frames jointly observe a line and share the same feature points in the jointly observed region, the observations can be regarded as the same line in space. Meanwhile, in order to perform loop closure detection and line matching for the next frame, the new feature points of the current frame's line are added to the line bag-of-words library, and in the later line matching stage the LBD descriptor and point features of the current frame's line are matched against the lines in the bag-of-words library, so that bag-of-words matching can be achieved.
In order to fully characterize a line in space, feature points outside the jointly observed region also need to be extracted in the current frame, and the feature points obtained from all currently observed regions are updated into the features of the corresponding line, providing constraints for subsequent steps such as relocalization and loop closure detection, as shown in fig. 4.
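A sketch of how a spatial line could accumulate point features from every region observed so far, so that later relocalization and loop closure can match against the full set. The class name, thresholds, and binary-descriptor representation are illustrative assumptions.

```python
import numpy as np

def hamming(a, b):
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

class LineLandmark:
    def __init__(self, lbd_descriptor):
        self.lbd = lbd_descriptor      # binary LBD descriptor of the line
        self.point_descs = []          # descriptors of points seen on the line so far

    def shares_points_with(self, new_descs, dist_thresh=40, min_shared=3):
        """Same-line test for a segment co-observed by an adjacent frame."""
        if not self.point_descs:
            return False
        shared = sum(1 for p in new_descs
                     if min(hamming(p, q) for q in self.point_descs) < dist_thresh)
        return shared >= min_shared

    def update(self, new_descs, dist_thresh=40):
        """Fold in points from newly observed regions of the line (outside the overlap)."""
        for p in new_descs:
            if not self.point_descs or \
               min(hamming(p, q) for q in self.point_descs) >= dist_thresh:
                self.point_descs.append(p)
```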
In back-end optimization, line features also need to be matched; point feature matching is already quite mature and needs no special treatment in the design. For the line features in each frame, their LBD descriptors are computed. LBD, an improvement of the MSLD algorithm, describes the local appearance of line segments and combines geometric constraints to achieve line segment matching. Compared with MSLD, the algorithm introduces global and local Gaussian weight coefficients, computes faster, and is more robust.
S3, matching against the line feature bag of words at the back end according to the new line feature description method to determine a unique line in space
As shown in FIG. 5, the LBD descriptor is built on a line segment support region (LSR) composed of several mutually parallel bands, and defines two directions d_L and d_⊥ to achieve rotational invariance, where d_L is the direction of the line segment and d_⊥ is the direction perpendicular to d_L in the clockwise sense. Let m be the number of bands, w their width, and their length equal to the length of the line segment. Considering that pixel gradients farther from the center have less influence on the descriptor, the LBD algorithm introduces a global Gaussian weight function to reduce the weight of rows in the LSR that lie far from the center row along d_⊥. Meanwhile, a local Gaussian weight function is introduced to weaken the boundary effect and avoid abrupt changes of the descriptor between bands.
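A simplified numpy sketch of an LBD-style band descriptor over the LSR, showing where the global and local Gaussian weights enter. It follows the general LBD construction (m bands of width w, gradients expressed in the rotated d_L / d_⊥ frame, four gradient sums per band) but omits the neighbouring-band terms and the mean/standard-deviation stacking of the full algorithm; that simplification is an assumption made for brevity.

```python
import numpy as np

def lbd_like_descriptor(grad_dL, grad_dperp, m=9, w=7):
    """grad_dL, grad_dperp: (m*w, L) gradient components inside the LSR,
    already rotated into the line frame; rows are stacked along d_perp."""
    rows = m * w
    center = (rows - 1) / 2.0
    sigma_g = 0.5 * (rows - 1)
    # global Gaussian: down-weight rows far from the LSR center row
    global_w = np.exp(-((np.arange(rows) - center) ** 2) / (2.0 * sigma_g ** 2))
    sigma_l = float(w)
    desc = []
    for j in range(m):
        r0, r1 = j * w, (j + 1) * w
        band_center = (r0 + r1 - 1) / 2.0
        # local Gaussian: soften the band boundaries
        local_w = np.exp(-((np.arange(r0, r1) - band_center) ** 2) / (2.0 * sigma_l ** 2))
        wgt = (global_w[r0:r1] * local_w)[:, None]
        gL, gP = grad_dL[r0:r1] * wgt, grad_dperp[r0:r1] * wgt
        # the four classic gradient sums of the band
        desc.extend([np.sum(gP[gP > 0]), -np.sum(gP[gP < 0]),
                     np.sum(gL[gL > 0]), -np.sum(gL[gL < 0])])
    desc = np.asarray(desc, dtype=float)
    return desc / (np.linalg.norm(desc) + 1e-12)
```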
After the LBD descriptor of a line feature has been computed, the feature points on the line are extracted by traversal and descriptors of the corresponding feature points are calculated. That is, a line has not only an LBD descriptor but also feature points on it, with different feature points in different regions of the line. Line features in the line feature bag of words are matched using both the LBD descriptor and the point features; if a line is matched, the current feature points are added to the corresponding line feature so as to gradually complete its point features. If no line is matched, the line is considered newly observed and the current feature line is added to the line feature bag of words, completing the update of the bag. Once a line is matched, the change of pose can be computed from the line reprojection error, as shown in fig. 6.
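A sketch of the line reprojection residual used once a line has been matched: project the 3D endpoints of the matched spatial line with the current pose estimate and measure their signed distances to the observed 2D line in its normalized form. The pose parametrization and the optimizer that minimizes this residual are omitted; variable names are illustrative.

```python
import numpy as np

def line_reprojection_error(P_w_start, P_w_end, R_cw, t_cw, K, obs_p1, obs_p2):
    """R_cw, t_cw: world-to-camera rotation and translation; K: camera intrinsics;
    obs_p1, obs_p2: endpoints (u, v) of the detected 2D line in the image."""
    # normalized homogeneous line through the observed endpoints
    l = np.cross([*obs_p1, 1.0], [*obs_p2, 1.0])
    l = l / np.linalg.norm(l[:2])
    errs = []
    for Pw in (P_w_start, P_w_end):
        Pc = R_cw @ np.asarray(Pw, dtype=float) + t_cw    # world -> camera
        uvw = K @ Pc                                      # camera -> pixels
        uv = uvw[:2] / uvw[2]
        errs.append(float(l @ np.array([uv[0], uv[1], 1.0])))  # point-to-line distance
    return np.array(errs)
```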
The invention has the following features:
1. In the global optimization process, the kernel function error of the newly constructed feature points grows larger with increasing error than the kernel function error of ordinary point features.
2. For a line feature, the point features on it are extracted and described as part of the line feature; whether lines in two image frames are the same line is determined by matching these point features.
3. In the back-end optimization process, after the LBD descriptor of a line feature has been computed, the feature points on the line are extracted by traversal and descriptors of the corresponding feature points are calculated. That is, a line has not only an LBD descriptor but also feature points on it, with different feature points in different regions of the line. Line features in the line feature bag of words are matched using both the LBD descriptor and the point features.

Claims (2)

1. A visual SLAM method based on straight line features is characterized by comprising the following steps:
S1, constructing a new type of point feature from feature line intersections
in each image frame, extracting line features using the LSD line detection method;
first, judging whether two lines are coplanar, and for coplanar lines computing the intersection point and then calculating a descriptor of the intersection point; in the front-end odometry and back-end optimization processes, increasing the visual constraint weight of the intersection point features and reducing the weight of feature points extracted as ordinary point features;
S2, re-extracting point features from line features and introducing a line feature description method
after detecting and identifying a line, extracting feature points on the line by traversal and computing descriptors for these feature points;
when two adjacent frames jointly observe a line and share the same feature points in the jointly observed region, regarding the two observations as the same line in space;
meanwhile, in order to perform loop closure detection and line matching for the next frame, adding the new feature points of the current frame's line to the line bag-of-words library, and in the later line matching stage matching the LBD descriptor and point features of the current frame's line against the point features and line features in the line bag-of-words library to achieve bag-of-words matching;
S3, matching against the line feature bag of words at the back end according to the new line feature description method to determine a unique line in space
the LBD descriptor is built on a line segment support region (LSR) composed of several mutually parallel bands, and defines two directions d_L and d_⊥ to achieve rotational invariance, where d_L is the direction of the line segment and d_⊥ is the direction perpendicular to d_L in the clockwise sense; the number of bands is m, their width is w, and their length equals the length of the line segment;
the LBD algorithm introduces a global Gaussian weight function to reduce the weight of rows in the LSR that lie far from the center row along d_⊥; meanwhile, a local Gaussian weight function is introduced to weaken the boundary effect and avoid abrupt changes of the descriptor between bands;
after the LBD descriptor of a line feature has been computed, extracting the feature points on the line by traversal and calculating descriptors of the corresponding feature points;
matching line features in the line feature bag of words using both the LBD descriptor and the point features; if a line is matched, adding the current feature points to the corresponding line feature so as to gradually complete its point features; if no line is matched, considering the line as newly observed and adding the current feature line to the line feature bag of words to complete the update of the bag of words; once a line is matched, computing the change of pose from the line reprojection error.
2. The visual SLAM method based on line features of claim 1, wherein in S2, in order to fully characterize a line in space, feature points outside the jointly observed region are extracted in the current frame, and the feature points obtained from all currently observed regions are updated into the features of the corresponding line, so as to provide constraints for subsequent relocalization and loop closure detection.
CN202110798911.3A 2021-07-15 2021-07-15 Visual SLAM method based on linear features Active CN113450412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110798911.3A CN113450412B (en) 2021-07-15 2021-07-15 Visual SLAM method based on linear features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110798911.3A CN113450412B (en) 2021-07-15 2021-07-15 Visual SLAM method based on linear features

Publications (2)

Publication Number Publication Date
CN113450412A (en) 2021-09-28
CN113450412B (en) 2022-06-03

Family

ID=77816285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110798911.3A Active CN113450412B (en) 2021-07-15 2021-07-15 Visual SLAM method based on linear features

Country Status (1)

Country Link
CN (1) CN113450412B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114800504A (en) * 2022-04-26 2022-07-29 平安普惠企业管理有限公司 Robot posture analysis method, device, equipment and storage medium
CN117671022B (en) * 2023-11-02 2024-07-12 武汉大学 Mobile robot vision positioning system and method in indoor weak texture environment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909877B (en) * 2016-12-13 2020-04-14 浙江大学 Visual simultaneous mapping and positioning method based on dotted line comprehensive characteristics
CN108682027A (en) * 2018-05-11 2018-10-19 北京华捷艾米科技有限公司 VSLAM realization method and systems based on point, line Fusion Features
CN112085790A (en) * 2020-08-14 2020-12-15 香港理工大学深圳研究院 Point-line combined multi-camera visual SLAM method, equipment and storage medium
CN112396595B (en) * 2020-11-27 2023-01-24 广东电网有限责任公司肇庆供电局 Semantic SLAM method based on point-line characteristics in dynamic environment
CN112509044A (en) * 2020-12-02 2021-03-16 重庆邮电大学 Binocular vision SLAM method based on dotted line feature fusion

Also Published As

Publication number Publication date
CN113450412A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN108564616B (en) Fast robust RGB-D indoor three-dimensional scene reconstruction method
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
CN113450412B (en) Visual SLAM method based on linear features
CN110060277A (en) A kind of vision SLAM method of multiple features fusion
Lieb et al. Adaptive Road Following using Self-Supervised Learning and Reverse Optical Flow.
CN108682027A (en) VSLAM realization method and systems based on point, line Fusion Features
CN109974743B (en) Visual odometer based on GMS feature matching and sliding window pose graph optimization
CN115097442B (en) Water surface environment map construction method based on millimeter wave radar
CN112163622B (en) Global and local fusion constrained aviation wide-baseline stereopair line segment matching method
CN104036524A (en) Fast target tracking method with improved SIFT algorithm
CN107403451B (en) Self-adaptive binary characteristic monocular vision odometer method, computer and robot
CN108597009A (en) A method of objective detection is carried out based on direction angle information
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN112037268B (en) Environment sensing method based on probability transfer model in dynamic scene
CN112364865A (en) Method for detecting small moving target in complex scene
CN108765440B (en) Line-guided superpixel coastline extraction method of single-polarized SAR image
Min et al. Coeb-slam: A robust vslam in dynamic environments combined object detection, epipolar geometry constraint, and blur filtering
CN117710806A (en) Semantic visual SLAM method and system based on semantic segmentation and optical flow
CN115717887B (en) Star point rapid extraction method based on gray distribution histogram
CN112053385A (en) Remote sensing video shielding target tracking method based on deep reinforcement learning
CN114283199B (en) Dynamic scene-oriented dotted line fusion semantic SLAM method
Yang et al. Ground plane matters: Picking up ground plane prior in monocular 3d object detection
CN106558065A (en) The real-time vision tracking to target is realized based on color of image and texture analysiss
CN111709997B (en) SLAM implementation method and system based on point and plane characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant