CN112435296B - Image matching method for VSLAM indoor high-precision positioning


Info

Publication number
CN112435296B
CN112435296B (application CN202011380479.8A)
Authority
CN
China
Prior art keywords
value
matching
illumination
image
alpha
Prior art date
Legal status
Active
Application number
CN202011380479.8A
Other languages
Chinese (zh)
Other versions
CN112435296A (en)
Inventor
刘伟伟
唐蕾
刘婷婷
Current Assignee
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority application: CN202011380479.8A
Publication of CN112435296A
Application granted
Publication of CN112435296B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G06T2207/30244 Camera pose


Abstract

The invention relates to an image matching method for VSLAM indoor high-precision positioning. By adding an illumination suppression factor to the matching similarity criterion, repeatable key points can be better detected under drastic illumination changes, giving the method illumination robustness and improving image matching accuracy. In addition, the invention jointly uses cosine similarity and the Euclidean metric, so that both the direction and the absolute value of key points are compared during matching, further improving matching accuracy.

Description

Image matching method for VSLAM indoor high-precision positioning
Technical Field
The invention relates to an image matching method for VSLAM indoor high-precision positioning.
Background
A visual SLAM system is mainly applied to robot visual positioning. Its workflow includes front-end image acquisition, back-end matching optimization, and loop-closure detection. Image matching comprises: extracting feature points from each frame of image, then matching images using the feature-point coordinates and descriptors to obtain the camera's motion trajectory. Obtaining the motion trajectory is a key step: the matching relationship between the current frame and adjacent frames is computed, and the relative spatial relationship between the two scenes is solved.
On the other hand, in image matching, the Euclidean distance (or an improved variant of it) is a common similarity criterion. The Euclidean distance measures the absolute difference in real distance between two points in an m-dimensional space, while cosine similarity measures the difference in direction in that space but is insensitive to absolute values.
Disclosure of Invention
The inventors have found that, during depth-camera photography or video recording, illumination conditions change dynamically due to the day-night cycle, weather changes, and the unpredictability of the lighting environment. Under drastic illumination changes, the extraction accuracy of image feature points suffers, causing problems in the loop-closure detection and back-end optimization stages: when feature-based recognition and matching is performed on images taken under two vastly different illumination conditions, positioning failure or reduced positioning accuracy readily occurs.
In addition, if image matching considers only the numerical difference of feature points, or only the difference of their direction vectors, matching accuracy is affected.
To address these problems, the invention provides an image matching method for VSLAM indoor high-precision positioning that has good illumination robustness and high image matching accuracy.
The technical scheme adopted by the invention is as follows:
An image matching method for VSLAM indoor high-precision positioning comprises the following steps:
Step one, extract feature points from each frame of image;
Step two, establish a matching similarity criterion that combines the Euclidean metric and the cosine similarity metric and adds an illumination suppression factor; the matching similarity criterion function E is:
In formula (1), α and β are value coefficients with range (0, 1); sqrt denotes the positive square root; γ(t) is the illumination suppression factor with range (0, 1); w_i is a weighting coefficient with range (0, 1); x_i1 denotes the i-th dimensional coordinate of the first feature point and x_i2 that of the second feature point, i = 1, 2, …, n;
The illumination suppression factor γ(t) is expressed as:
In formula (2), η is the suppression coefficient with range (0, 1); ΔI(t) is the change in the light source's luminous intensity over time; r is the distance from the camera to the light source;
Step three, compute the matching similarity value E between adjacent frame images using the criterion function, sort the computed similarity values, and select the frame with the smallest E value as the matching frame;
Step four, quantitatively estimate the inter-frame camera motion from the motion-vector changes between adjacent matching frames to obtain the camera's position and attitude.
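Formula (1) appears only as an image in the original publication and is not reproduced in this text, so its exact form is unknown. The following is a minimal Python sketch of one plausible reading, assuming that E scales a weighted blend of the Euclidean distance and the cosine dissimilarity by the illumination suppression factor γ(t); the function names, the blend, and the default parameter values are assumptions, not the patent's actual formula:

```python
import math

def match_similarity(p1, p2, alpha, beta, gamma_t, w):
    """One plausible reading of criterion (1): an illumination-suppressed
    blend of a weighted Euclidean distance and cosine dissimilarity.
    p1, p2: n-dimensional feature descriptors; w: weights in (0, 1).
    This combination is an assumption; the patent's image-only formula
    may differ."""
    d = math.sqrt(sum(wi * (a - b) ** 2 for wi, a, b in zip(w, p1, p2)))
    dot = sum(a * b for a, b in zip(p1, p2))
    norm = math.sqrt(sum(a * a for a in p1)) * math.sqrt(sum(b * b for b in p2))
    cos = dot / norm if norm else 0.0
    return gamma_t * (beta * d + alpha * (1.0 - cos))

def best_match(query, candidates, alpha=0.1, beta=0.9, gamma_t=0.5, w=None):
    """Step three: score every candidate frame descriptor and return the
    index of the one with the smallest E value."""
    w = w or [1.0] * len(query)
    scored = [(match_similarity(query, c, alpha, beta, gamma_t, w), i)
              for i, c in enumerate(candidates)]
    return min(scored)[1]
```

An identical descriptor yields d = 0 and cosine similarity 1, so its E is (numerically) zero and it wins the selection in step three.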
Further, in step three, when the image positioning error exceeds 1 m, it is judged that the illumination conditions have changed drastically, and the value step of the illumination suppression factor γ(t) is dynamically adjusted within (0, 1); otherwise, the step sizes of α and β are adjusted within (0, 1) to enumerate the possible values of the α and β parameters, and all combinations are traversed to return the optimal parameter values.
Further, the initial value of β is set to 0.9 and that of α to 0.1; alternatively, β is initialized to 0.1 and α to 0.9.
The invention has the beneficial effects that:
By adding an illumination suppression factor to the matching similarity criterion, the invention can better detect repeatable key points under drastic illumination changes, achieves illumination robustness, and improves image matching accuracy. In addition, the invention jointly uses cosine similarity and the Euclidean metric, measuring the difference between key points in both direction and absolute value (difference in space and in distance) during matching, further improving matching accuracy.
Drawings
FIG. 1 is a flow diagram of the image matching method for VSLAM indoor high-precision positioning of the present invention.
Detailed Description
The image matching method for VSLAM indoor high-precision positioning of the present invention is further described below with reference to the accompanying drawing and specific examples.
As shown in FIG. 1, an image matching method for VSLAM indoor high-precision positioning includes the following steps:
Step one, extract the feature points of each frame of image (including coordinate information, descriptors, and direction information in 8 dimensions).
Step two, establish a matching similarity criterion that combines the Euclidean metric and the cosine similarity metric and adds an illumination suppression factor. The matching similarity criterion function E is:
In formula (1), α and β are value coefficients with range (0, 1). sqrt denotes the positive square root. γ(t) is the illumination suppression factor with range (0, 1). w_i is a weighting coefficient with range (0, 1). x_i1 denotes the i-th dimensional coordinate of the first feature point and x_i2 that of the second feature point, i = 1, 2, …, n.
The illumination suppression factor γ(t) is expressed as:
In formula (2), η is the suppression coefficient with range (0, 1). ΔI(t) is the change in the light source's luminous intensity over time, and r is the distance from the camera to the light source.
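Formula (2) is likewise an image in the original, so only its variables are known: the suppression coefficient η, the intensity change ΔI(t), and the camera-to-source distance r. The sketch below is hypothetical, assuming an inverse-square falloff with r and clamping into the stated (0, 1) range; it is not the patent's actual formula:

```python
def illumination_suppression(eta, delta_i, r):
    """Hypothetical sketch of factor (2): scale the intensity change
    DeltaI(t) by the suppression coefficient eta and an inverse-square
    falloff with the camera-to-source distance r, then clamp the result
    into the (0, 1) range stated in the text. The actual formula (2) is
    an image not reproduced here."""
    raw = eta * abs(delta_i) / (r * r)
    # keep the factor strictly inside (0, 1), as the text requires
    return min(max(raw, 1e-6), 1.0 - 1e-6)
```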
In the matching similarity criterion, the Euclidean metric d is used to judge the absolute distance difference between feature points, and the cosine similarity cos is used to account for the direction-dimension information of the feature points.
d = sqrt(∑_i (x_i1 − x_i2)²) (3)
The Euclidean metric d measures the absolute distance between feature points.
The closer the cosine value is to 1, the closer the included angle is to 0 degrees, i.e., the more similar the two feature points are.
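The two metrics behave differently, which is why the criterion combines them: cosine similarity ignores magnitude, while the Euclidean distance of formula (3) does not. A short Python illustration (function names are ours, for illustration only):

```python
import math

def euclidean(u, v):
    # formula (3): absolute distance between two descriptors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Same direction, different magnitude: cosine calls the descriptors
# identical, while the Euclidean metric does not.
u, v = [1.0, 2.0], [2.0, 4.0]
print(round(cosine(u, v), 6))  # 1.0
print(euclidean(u, v))         # sqrt(5), about 2.236
```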
Step three, compute the matching similarity value E between adjacent frame images using the criterion function, sort the computed similarity values, and select the frame with the smallest E value as the matching frame.
When the image quality does not meet requirements and positioning is poor (error exceeds 1 m), it is judged that the illumination conditions have changed drastically, and the value step of the illumination suppression factor γ(t) is dynamically adjusted within (0, 1). Otherwise, the step sizes of α and β are adjusted within (0, 1) to enumerate the possible values of the α and β parameters, and all combinations are traversed to return the optimal parameter values.
β reflects the model's distance-penalty coefficient in the matching criterion, while α reflects the distribution of the data after mapping into the high-dimensional feature space. The larger β is, the more easily the model overfits; the smaller β is, the more easily it underfits. The larger α is, the more feature-point descriptor directions are supported; the smaller α is, the fewer are supported. A smaller α gives better generalization, but if α is too small the model effectively degenerates into a linear one; a larger α can, in theory, fit arbitrary nonlinear data.
Before the simulation starts, α and β are set between 0.1 and 1; the recommended initial values are β = 0.9 with α = 0.1, or β = 0.1 with α = 0.9. Then, according to model accuracy, β or α is increased or decreased (never both at once), each step multiplying by 0.1 or 10; once the approximate range is determined, the search interval is refined.
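The coarse stage of the search just described can be sketched as follows. The `evaluate` callback (returning a model-accuracy-based cost to minimize) and the function name are our assumptions; the patent only describes the multiply-by-0.1-or-10 stepping:

```python
def coarse_search(evaluate, alpha0=0.1, beta0=0.9):
    """Coarse stage of the parameter search: starting from one of the
    suggested initial pairs, scale alpha OR beta (never both at once)
    by a factor of 0.1 or 10, keep candidates inside (0, 1), and return
    the pair with the lowest evaluation cost. 'evaluate' is a
    hypothetical callback measuring model accuracy as a cost."""
    best = (evaluate(alpha0, beta0), alpha0, beta0)
    for factor in (0.1, 10.0):
        for da, db in ((factor, 1.0), (1.0, factor)):  # one at a time
            a, b = alpha0 * da, beta0 * db
            if 0.0 < a < 1.0 and 0.0 < b < 1.0:
                best = min(best, (evaluate(a, b), a, b))
    return best[1], best[2]
```

After this coarse pass fixes the approximate range, a finer grid over that range would refine the matching interval, per the text above.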
Step four, quantitatively estimate the inter-frame camera motion from the motion-vector changes between adjacent matching frames to obtain the camera's position and attitude.
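Step four is stated without a concrete estimator. A minimal 2-D sketch, assuming a rigid rotation-plus-translation motion model recovered by least-squares (Procrustes-style) fitting over the matched points; the patent does not specify this method:

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares 2-D rotation + translation between matched point
    sets (a Procrustes/Kabsch-style sketch; the patent's estimator is
    unspecified). Returns (theta, tx, ty) mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x1, y1), (x2, y2) in zip(src, dst):
        ax, ay = x1 - csx, y1 - csy   # centered source point
        bx, by = x2 - cdx, y2 - cdy   # centered destination point
        sxx += ax * bx + ay * by      # cosine accumulator
        sxy += ax * by - ay * bx      # sine accumulator
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

Feeding in matched key-point coordinates from adjacent matching frames yields the inter-frame rotation and translation, i.e. the camera's relative position and attitude in the plane.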
The foregoing merely illustrates the present invention, which is not limited thereto; any alternative or modification easily conceivable by those skilled in the art within the scope of the present invention should be included in its scope.

Claims (2)

1. An image matching method for VSLAM indoor high-precision positioning, characterized by comprising the following steps:
Step one, extract feature points from each frame of image;
Step two, establish a matching similarity criterion that combines the Euclidean metric and the cosine similarity metric and adds an illumination suppression factor; the matching similarity criterion function E is:
In formula (1), α and β are value coefficients with range (0, 1); sqrt denotes the positive square root; γ(t) is the illumination suppression factor with range (0, 1); w_i is a weighting coefficient with range (0, 1); x_i1 denotes the i-th dimensional coordinate of the first feature point and x_i2 that of the second feature point, i = 1, 2, …, n;
The illumination suppression factor γ(t) is expressed as:
In formula (2), η is the suppression coefficient with range (0, 1); ΔI(t) is the change in the light source's luminous intensity over time, and r is the distance from the camera to the light source;
Step three, compute the matching similarity value E between adjacent frame images using the criterion function, sort the computed similarity values, and select the frame with the smallest E value as the matching frame;
When the image positioning error exceeds 1 m, it is judged that the illumination conditions have changed drastically, and the value step of the illumination suppression factor γ(t) is dynamically adjusted within (0, 1); otherwise, the step sizes of α and β are adjusted within (0, 1) to enumerate the possible values of the α and β parameters, and all combinations are traversed to return the optimal parameter values;
Step four, quantitatively estimate the inter-frame camera motion from the motion-vector changes between adjacent matching frames to obtain the camera's position and attitude.
2. The image matching method for VSLAM indoor high-precision positioning according to claim 1, characterized in that the initial value of β is set to 0.9 and that of α to 0.1; or the initial value of β is set to 0.1 and that of α to 0.9.
CN202011380479.8A 2020-12-01 2020-12-01 Image matching method for VSLAM indoor high-precision positioning Active CN112435296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011380479.8A CN112435296B (en) 2020-12-01 2020-12-01 Image matching method for VSLAM indoor high-precision positioning


Publications (2)

Publication Number Publication Date
CN112435296A CN112435296A (en) 2021-03-02
CN112435296B true CN112435296B (en) 2024-04-19

Family

ID=74699003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011380479.8A Active CN112435296B (en) 2020-12-01 2020-12-01 Image matching method for VSLAM indoor high-precision positioning

Country Status (1)

Country Link
CN (1) CN112435296B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117730348A (en) * 2021-09-23 2024-03-19 英特尔公司 Reliable key point for in-situ learning by introspection self-supervision

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method
CN111797938A (en) * 2020-07-15 2020-10-20 燕山大学 Semantic information and VSLAM fusion method for sweeping robot

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10593060B2 (en) * 2017-04-14 2020-03-17 TwoAntz, Inc. Visual positioning and navigation device and method thereof


Non-Patent Citations (2)

Title
Ming Fan, Seung-Won Jung, Sung-Jea Ko. Highly Accurate Scale Estimation from Multiple Keyframes Using RANSAC Plane Fitting With a Novel Scoring Method. IEEE Transactions on Vehicular Technology, vol. 69, no. 12, Dec. 2020, full text. *
Li Bo. Research on Visual Loop-Closure Detection for Mobile Robots Based on Scene Appearance Modeling. China Doctoral Dissertations Full-text Database, 2011-12-15, full text. *

Also Published As

Publication number Publication date
CN112435296A (en) 2021-03-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant