CN109829459B - Visual positioning method based on improved RANSAC

Info

Publication number
CN109829459B
CN109829459B · CN201910052684.2A
Authority
CN
China
Prior art keywords: data, weight, sample, zero, matching
Prior art date
Legal status: Active
Application number
CN201910052684.2A
Other languages
Chinese (zh)
Other versions
CN109829459A (en)
Inventor
陈文
张毅
李奎
魏新
刘想德
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201910052684.2A
Publication of CN109829459A
Application granted
Publication of CN109829459B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a visual positioning method based on improved RANSAC, which comprises the following steps: S1, coarsely matching the acquired image data to obtain a data set D; S2, randomly extracting n data points from the data set D; S3, estimating model parameters from the n data points to obtain a model M; S4, updating the weights of the sample set D according to the loss function and completing the initial sample grouping; S5, pre-judging the randomly extracted initial sample, and returning to step S2 to restart for any model that does not meet the pre-judgment condition; S6, updating the weights of the sample set D according to the loss function and completing the update of the sample groups; S7, if the sample set D is larger than the currently recorded optimal sample B_D, setting B_D = D and recording the model parameters; if the iteration count exceeds a set threshold k, exiting the algorithm, otherwise returning to repeat steps S2-S6; and S8, estimating the three-dimensional pose information of the camera from the obtained model parameters.

Description

Visual positioning method based on improved RANSAC
Technical Field
The invention belongs to the technical field of robot simultaneous localization and mapping (SLAM), and relates to a visual positioning method based on improved RANSAC.
Background
At present, robot visual positioning is a research and development hotspot. Good image registration is the premise and key of robot visual positioning, and improving registration speed while ensuring registration accuracy is also of great importance. Commonly used feature-based image matching methods mainly comprise three stages: feature extraction, feature matching, and elimination of mismatched point pairs. Among feature extraction methods, the ORB feature is currently a very representative real-time image feature. In feature matching, for binary descriptors such as BRIEF (Binary Robust Independent Elementary Features), the Hamming distance is often used as the metric.
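For readers unfamiliar with this pipeline, the following is a minimal Python sketch of ORB extraction with Hamming-distance matching using OpenCV's standard API; the file names and parameter values are placeholder assumptions, not taken from the patent.

```python
import cv2

# Illustrative sketch: ORB keypoints with binary (BRIEF-style) descriptors,
# matched by Hamming distance. File names and parameters are placeholders.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)           # ORB detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared with the Hamming distance
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```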
In recent years, no matter what image matching algorithm is adopted, mismatched points are always produced by illumination, imaging angle, geometric deformation, ground-feature changes, and the like. In image matching technology, high-precision matching results can therefore be obtained only by studying both feature extraction and feature matching techniques and mismatch detection techniques.
Disclosure of Invention
In view of the above, the present invention aims to provide a visual positioning method based on improved RANSAC, which combines the conventional ORB algorithm with the RANSAC (Random Sample Consensus) algorithm to improve feature matching: coarse matching is first performed using a feature point distance, angle, and rotation consistency method, and fine matching is then performed using a Gaussian function as the loss function together with a weight-based sample classification method.
In order to achieve the purpose, the invention provides the following technical scheme:
a vision positioning method based on improved RANSAC specifically comprises the following steps:
s1: roughly matching the acquired image data according to the consistency of the feature points to obtain a data set D;
s2: randomly extracting n data points from the data set D, wherein n is the minimum number of data points suitable for the model, and the minimum sample is marked as Ik
S3: estimating model parameters by using the n data to obtain a model M;
s4: updating the weight of the sample set D according to the loss function, and completing initial sample grouping;
s5: considering the pre-judgment of the randomly extracted initial sample, returning to step S2 to restart for the model that does not satisfy the pre-judgment condition (the number of mismatching points obtained by the test is too large);
s6: updating the weight of the sample set D according to the loss function, and completing the updating of the sample group; data weightI iAll data points greater than zero are the "interior points", the data weight IiAll data points less than zeroIs the "outer point";
s7: if the sample set D is larger than the currently recorded optimal sample B _ D, recording the model parameters, wherein B _ D is D; if the iteration number exceeds a set threshold k, exiting the algorithm, otherwise returning to repeat the steps S2-S6;
s8: and estimating the three-dimensional pose information of the camera according to the obtained model parameters.
Further, in step S1, when the viewing angle is unchanged, the distance between two matched points on the normalized planar-region image is consistent, and this distance does not change with rotation or translation of the image; the rotation angle of the main direction of a matched point is consistent with the rotation angle of the corresponding image; and the included angle between any straight lines on an image is consistent with the corresponding included angle on the matched image.
Further, in step S4, a Gaussian function is introduced to describe the degree to which the data match the model, so that the judgment criterion can also handle data lying between "interior point" and "exterior point"; it is specifically expressed as follows:
[Equations defining K(x, ε) and loss(e), rendered as images in the original publication]
wherein: k (x, epsilon) represents the degree of correlation between the data and the model, epsilon represents the error smaller than a set threshold, x represents the independent variable of the function K, loss (e) represents the loss function, IiRepresenting the weight of the data points conforming to the estimation model, and the initial value weight of each data point is zero.
Further, in step S6, the grouping is updated together with the weights of the data set D: according to whether the data point weight I_i is equal to zero, greater than zero, or less than zero, the points are divided into three groups. If I_i is equal to zero, the point is assigned to the uncertain group Φ_0; if I_i is less than zero, it is assigned to the "exterior point" group; if I_i is greater than zero, it is assigned to the "interior point" group (the group symbols and the weight update expression were rendered as images in the original publication). The weights are updated in real time at every iteration.
the invention has the beneficial effects that: in the rough matching stage of image feature matching, the method for consistency of the image feature points is added, so that the defect that the RANSAC algorithm is trapped in local optimum at the initial stage is overcome. In the fine matching stage, a Gaussian function is used as a threshold, data are classified according to the weight, secondary optimization is carried out, and the matching accuracy of the feature points is improved. And c, coarse matching and fine matching are combined, so that the image matching accuracy is effectively improved, and the algorithm time consumption is reduced.
Drawings
In order to make the purpose, technical scheme, and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a flow chart of a coarse match culling algorithm.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of the method of the present invention, and as shown in the figure, the method specifically includes the following steps:
s1: roughly matching the acquired image data according to the consistency of the feature points to obtain a data set D;
s2: randomly extracting n data points from the data set D, wherein n is the minimum number of data points suitable for the model, and the minimum sample is recorded asI k
S3: estimating model parameters by using the n data to obtain a model M;
s4: updating the weight of the sample set D according to the loss function, and completing initial sample grouping;
s5: considering the pre-judgment of the randomly extracted initial sample, returning to step S2 to restart for the model that does not satisfy the pre-judgment condition (the number of mismatching points obtained by the test is too large);
s6: updating the weight of the sample set D according to the loss function, and completing the updating of the sample group; data weightI iAll data points greater than zero are the "interior points", the data weight IiAll data points less than zero are "outliers";
s7: if the sample set D is larger than the currently recorded optimal sample B _ D, recording the model parameters, wherein B _ D is D; if the iteration number exceeds a set threshold k, exiting the algorithm, otherwise returning to repeat the steps S2-S6;
s8: and estimating the three-dimensional pose information of the camera according to the obtained model parameters.
In step S1, when the viewing angle is unchanged, the distance between two matched points on the normalized planar-region image is consistent, and this distance does not change with rotation or translation of the image; the rotation angle of the main direction of a matched point is consistent with the rotation angle of the corresponding image; and the included angle between any straight lines on an image is consistent with the corresponding included angle on the matched image. By utilizing these properties, mismatched feature point pairs can be quickly eliminated; the design flow of the coarse matching algorithm is shown in FIG. 2.
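A hedged sketch of the pairwise consistency test described above follows; the keypoint representation, the tolerance values, and the decision rule are illustrative assumptions chosen to show distance and rotation consistency checking, not the patent's exact procedure (which also covers the angle between straight lines).

```python
import numpy as np

def coarse_consistent(kp1a, kp1b, kp2a, kp2b, dist_tol=0.05, ang_tol=5.0):
    """Check two candidate matches (kp1a<->kp2a, kp1b<->kp2b) for the
    distance and rotation consistency described above.

    Each keypoint is (x, y, angle_deg); the tolerances are illustrative.
    """
    d1 = np.hypot(kp1a[0] - kp1b[0], kp1a[1] - kp1b[1])      # distance in image 1
    d2 = np.hypot(kp2a[0] - kp2b[0], kp2a[1] - kp2b[1])      # distance in image 2
    dist_ok = abs(d1 - d2) <= dist_tol * max(d1, d2, 1e-9)   # condition 1

    rot_a = (kp2a[2] - kp1a[2]) % 360.0   # rotation implied by match a
    rot_b = (kp2b[2] - kp1b[2]) % 360.0   # rotation implied by match b
    diff = min(abs(rot_a - rot_b), 360.0 - abs(rot_a - rot_b))
    rot_ok = diff <= ang_tol              # condition 2: consistent main-direction rotation

    return dist_ok and rot_ok
```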
In step S4, a Gaussian function is introduced to describe the degree to which the data match the model, so that the judgment criterion can also handle data lying between "interior point" and "exterior point"; it is specifically expressed as follows:
[Equations defining K(x, ε) and loss(e), rendered as images in the original publication]
wherein: k (x, epsilon) represents the degree of correlation between the data and the model, epsilon represents the error smaller than a set threshold, x represents the independent variable of the function K, loss (e) represents the loss function, IiRepresenting the weight of the data points conforming to the estimation model, and the initial value weight of each data point is zero.
In step S6, the grouping is updated together with the weights of the data set D: according to whether the data point weight I_i is equal to zero, greater than zero, or less than zero, the points are divided into three groups. If I_i is equal to zero, the point is assigned to the uncertain group Φ_0; if I_i is less than zero, it is assigned to the "exterior point" group; if I_i is greater than zero, it is assigned to the "interior point" group (the group symbols and the weight update expression were rendered as images in the original publication). The weights are updated in real time at every iteration.
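The grouping rule itself is fully specified by the text, so the sketch below simply partitions point indices by the sign of the accumulated weight I_i; only the helper name group_by_weight is an invention of this illustration.

```python
import numpy as np

def group_by_weight(weights):
    """Partition point indices by the sign of the accumulated weight I_i:
    I_i > 0 -> "interior point" group; I_i == 0 -> uncertain group Phi_0;
    I_i < 0 -> "exterior point" group."""
    interior = np.flatnonzero(weights > 0)
    uncertain = np.flatnonzero(weights == 0)
    exterior = np.flatnonzero(weights < 0)
    return interior, uncertain, exterior
```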
finally, it is noted that the above-mentioned preferred embodiments illustrate rather than limit the invention, and that, although the invention has been described in detail with reference to the above-mentioned preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims.

Claims (1)

1. A visual positioning method based on improved RANSAC, characterized in that: the method combines the traditional ORB algorithm with the RANSAC (Random Sample Consensus) algorithm to improve feature matching; coarse matching is first performed using a feature point distance, angle, and rotation consistency method, and fine matching is then performed using a Gaussian function as the loss function together with a weight-based sample classification method;
the method specifically comprises the following steps:
s1: roughly matching the acquired image data according to the consistency of the feature points to obtain a data set D;
s2: randomly extracting n data points from the data set D, wherein n is the minimum number of data points suitable for the model, and the minimum sample is marked as Ik
S3: estimating model parameters by using the n data to obtain a model M;
s4: updating the weight of the sample set D according to the loss function, and completing initial sample grouping;
s5: considering the pre-judgment for the randomly extracted initial sample, and returning to step S2 to restart for the model not meeting the pre-judgment condition;
s6: updating the weight of the sample set D according to the loss function, and completing the updating of the sample group; data weight IiAll data points greater than zero are the "interior points", the data weight IiAll data points less than zero are "outliers";
s7: if the sample set D is larger than the currently recorded optimal sample B _ D, recording the model parameters, wherein B _ D is D; if the iteration number exceeds a set threshold k, exiting the algorithm, otherwise returning to repeat the steps S2-S6;
s8: estimating three-dimensional pose information of the camera according to the obtained model parameters;
in step S1, the rough matching algorithm is:
condition 1: under the condition that the visual angle is not changed, the distances between two matched points on the normalized plane area image are consistent, and the distances between the two points cannot change along with the rotation and translation of the image;
condition 2: the rotating angle of the main direction of the matching point is consistent with the rotating angle of the corresponding image;
condition 3: the included angle between any straight lines on the image is consistent with the included angle of the matched image;
if the condition 1, the condition 2 or the condition 3 is satisfied, correctly matching the point pairs;
if the conditions 1 to 3 are not satisfied, mismatching the point pairs;
in step S4, a Gaussian function is introduced to describe the degree to which the data match the model, so that the judgment criterion can also handle data lying between "interior point" and "exterior point"; it is specifically expressed as follows:
[Equations defining K(x, ε) and loss(e), rendered as images in the original publication]
wherein: k (x, epsilon) represents the degree of correlation between the data and the model, epsilon represents the error smaller than a set threshold value, x represents the independent variable of the function K, loss (e) represents a loss function, IiRepresenting the weight of the data points conforming to the estimation model, wherein the weight of the initial value of each data point is zero;
in step S6, the grouping is updated together with the weights of the data set D: according to whether the data point weight I_i is equal to zero, greater than zero, or less than zero, the points are divided into three groups; if I_i is equal to zero, the point is assigned to the uncertain group Φ_0; if I_i is less than zero, it is assigned to the "exterior point" group; if I_i is greater than zero, it is assigned to the "interior point" group (the group symbols and the weight update expression were rendered as images in the original publication); the weights are updated in real time at every iteration.
CN201910052684.2A 2019-01-21 2019-01-21 Visual positioning method based on improved RANSAC Active CN109829459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910052684.2A 2019-01-21 2019-01-21 Visual positioning method based on improved RANSAC


Publications (2)

Publication Number Publication Date
CN109829459A CN109829459A (en) 2019-05-31
CN109829459B (en) 2022-05-17

Family

ID=66860394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910052684.2A Active CN109829459B (en) 2019-01-21 2019-01-21 Visual positioning method based on improved RANSAC

Country Status (1)

Country Link
CN (1) CN109829459B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368893B (en) * 2020-02-27 2023-07-25 Oppo广东移动通信有限公司 Image recognition method, device, electronic equipment and storage medium
CN112084855B (en) * 2020-08-04 2022-05-20 西安交通大学 Outlier elimination method for video stream based on improved RANSAC method
CN112907633B (en) * 2021-03-17 2023-12-01 中国科学院空天信息创新研究院 Dynamic feature point identification method and application thereof


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239899A (en) * 2014-09-10 2014-12-24 国家电网公司 Electric transmission line spacer identification method for unmanned aerial vehicle inspection
CN106815833A (en) * 2016-12-23 2017-06-09 华中科技大学 A kind of matching process suitable for IC package equipment deformable object
WO2018142228A2 (en) * 2017-01-19 2018-08-09 Mindmaze Holding Sa Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location including for at least one of a virtual and augmented reality system
CN107273659A (en) * 2017-05-17 2017-10-20 中国科学院光电技术研究所 RANSAC algorithm-based improved track prediction method for space debris photoelectric tracking
CN107967457A (en) * 2017-11-27 2018-04-27 全球能源互联网研究院有限公司 A kind of place identification for adapting to visual signature change and relative positioning method and system
CN108010045A (en) * 2017-12-08 2018-05-08 福州大学 Visual pattern characteristic point error hiding method of purification based on ORB
CN108154118A (en) * 2017-12-25 2018-06-12 北京航空航天大学 A kind of target detection system and method based on adaptive combined filter with multistage detection
CN109086795A (en) * 2018-06-27 2018-12-25 上海理工大学 A kind of accurate elimination method of image mismatch

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Feature Extraction using ORB-RANSAC for Face Recognition; Vinay A. et al.; Procedia Computer Science; 2015; pp. 174-184 *
A scene feature matching method for visual VSLAM under dynamic illumination (一种动态光照下视觉VSLAM中的场景特征匹配方法); Zhang Huili et al.; Electronic Design Engineering (电子设计工程); 2018; vol. 26, no. 24; pp. 1-5 *
Research on image feature point matching algorithms (图像特征点匹配算法的研究); Lin Ting; Modern Computer (现代计算机); Apr. 2016; no. 10; pp. 30-34 *
Visual SLAM for mobile robots based on graph optimization (基于图优化的移动机器人视觉SLAM); Zhang Yi et al.; CAAI Transactions on Intelligent Systems (智能系统学报); Apr. 2018; vol. 13, no. 2; pp. 290-295 *

Also Published As

Publication number Publication date
CN109829459A (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN109829459B (en) Visual positioning method based on improved RANSAC
CN110097093A (en) A kind of heterologous accurate matching of image method
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN104167003A (en) Method for fast registering remote-sensing image
CN108022262A (en) A kind of point cloud registration method based on neighborhood of a point center of gravity vector characteristics
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
CN110399840B (en) Rapid lawn semantic segmentation and boundary detection method
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN105160686B (en) A kind of low latitude various visual angles Remote Sensing Images Matching Method based on improvement SIFT operators
CN110222661B (en) Feature extraction method for moving target identification and tracking
Wang et al. An overview of 3d object detection
CN113628263A (en) Point cloud registration method based on local curvature and neighbor characteristics thereof
CN110111375B (en) Image matching gross error elimination method and device under Delaunay triangulation network constraint
CN110490915B (en) Point cloud registration method based on convolution-limited Boltzmann machine
CN113408584A (en) RGB-D multi-modal feature fusion 3D target detection method
CN112396655B (en) Point cloud data-based ship target 6D pose estimation method
CN104851095A (en) Workpiece image sparse stereo matching method based on improved-type shape context
CN116449384A (en) Radar inertial tight coupling positioning mapping method based on solid-state laser radar
CN110766782A (en) Large-scale construction scene real-time reconstruction method based on multi-unmanned aerial vehicle visual cooperation
CN110910349A (en) Wind turbine state acquisition method based on aerial photography vision
CN110895683A (en) Kinect-based single-viewpoint gesture and posture recognition method
CN115471682A (en) Image matching method based on SIFT fusion ResNet50
CN112581368A (en) Multi-robot grid map splicing method based on optimal map matching
CN116309847A (en) Stacked workpiece pose estimation method based on combination of two-dimensional image and three-dimensional point cloud
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant