CN110335308B - Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection - Google Patents


Info

Publication number
CN110335308B
CN110335308B (application CN201910578572.0A)
Authority
CN
China
Prior art keywords
feature
image
point
time
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910578572.0A
Other languages
Chinese (zh)
Other versions
CN110335308A (en
Inventor
Tang Shuming (汤淑明)
Huang Xin (黄馨)
Zhang Lifu (张力夫)
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201910578572.0A
Publication of CN110335308A
Application granted
Publication of CN110335308B
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C22/00Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of positioning, and specifically relates to a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection. It aims to solve the low positioning accuracy and poor real-time performance caused by the low accuracy of feature association in existing binocular vision odometers. The method comprises the following steps: acquire images through a binocular camera mounted on a mobile carrier; detect newly added feature points in the set image with the Shi-Tomasi corner detection method and build a new feature point set; track the feature points with the KLT optical flow method and establish feature associations by combining the parallax constraint with an adaptive bidirectional annular inspection; based on the feature association result, obtain an initial pose estimate with a PnP method and triangulate the feature points; and obtain the final, optimal pose estimate by minimizing the reprojection error with bundle adjustment. The invention improves the quality and efficiency of feature tracking, and improves the positioning accuracy and real-time performance of the binocular vision odometer.

Description

Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection
Technical Field
The invention belongs to the technical field of positioning, and particularly relates to a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection.
Background
As one of the most active areas of intelligent transportation systems, smart vehicles have attracted wide attention from research institutions and universities around the world. Positioning is one of the key technologies for the safe movement of intelligent vehicles and plays an important role in the field of automatic driving.
The binocular vision odometer is a practical visual positioning technology that can effectively estimate the pose of a vehicle. With the development of image processing technology and computing power, the binocular vision odometer is gradually being applied in embedded systems and has become an important component of automatic driving. Its advantages include low cost, low energy consumption, convenient installation, high portability, and strong immunity to electromagnetic interference.
The binocular vision odometer mainly comprises two steps: feature association and pose estimation. Feature association is established by feature extraction and tracking, and the pose estimate is obtained by minimizing the reprojection error over the associated features. It is therefore important to establish efficient feature associations. The current common approach checks the associations obtained by optical flow tracking with forward-backward and left-right checks, but the result still contains many outliers and low-quality feature tracks, and its accuracy and real-time performance need further improvement.
Disclosure of Invention
In order to solve the above problems in the prior art, namely the low positioning accuracy and poor real-time performance caused by the low accuracy of feature association in existing binocular vision odometers, a first aspect of the present invention provides a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, the method comprising:
step S100, acquiring left and right images at a set acquisition frequency through a binocular camera mounted on a mobile carrier, obtaining the left and right images corresponding to times t-1 and t;
step S200, performing Shi-Tomasi corner detection on the set image at time t-1 obtained in step S100, extracting a set of newly added feature points, and constructing a new feature point set of the set image by combining it with the feature points of the set image obtained at time t-1;
step S300, based on the new feature point set of the set image, using the KLT optical flow method together with the parallax constraint and the adaptive bidirectional annular inspection to obtain the feature point sets of the other images at times t-1 and t, and performing feature association;
step S400, based on the feature association result of step S300, obtaining an initial pose estimate of the mobile carrier with a PnP pose estimation method, and triangulating each feature point in the left and right images at time t by the binocular vision method to obtain its three-dimensional space coordinates;
step S500, based on the feature association result of step S300, the initial pose estimate, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left and right images at time t, obtaining the maximum likelihood estimate of the pose of the mobile carrier by minimizing the reprojection error with bundle adjustment, yielding the final pose.
In some preferred embodiments, the left and right images acquired in step S100 are rectified using OpenCV library functions.
In some preferred embodiments, the feature points of the set image obtained at time t-1 are obtained as follows:
at time t-1, the feature points of the binocular images acquired at times t-2 and t-1 are tracked using the methods of steps S200 and S300.
In some preferred embodiments, the newly added feature point set in step S200 is extracted as follows:
the feature points of the set image at time t-1 are extracted with the Shi-Tomasi corner detection method, and the feature points falling within the set-range neighborhood of the original feature points of the set image are deleted, giving the newly added feature points; the original features of the set image are the feature points of the set image obtained at time t-1.
In some preferred embodiments, the set image is the left image acquired at time t.
In some preferred embodiments, step S300 obtains the feature point sets of the other images at times t-1 and t and performs feature association as follows:
The feature points f^l_{t-1} of the left image I^l_{t-1} at time t-1 are tracked with the KLT optical flow method to obtain the feature points f^l_t of the left image I^l_t at time t; the feature points of the left image at time t-1 that pass the forward-backward check are retained, and the feature association between f^l_{t-1} and f^l_t is established.
The feature points f^l_t of the left image I^l_t at time t are tracked with the KLT optical flow method to obtain the feature points f^r_t of the right image I^r_t at time t; the feature points that pass the left-right check and the parallax constraint are retained, and the feature association between f^l_t and f^r_t is established.
The feature points f^r_t of the right image I^r_t at time t are tracked with the KLT optical flow method to obtain the feature points f^r_{t-1} of the right image I^r_{t-1} at time t-1; the feature points of the right image I^r_t at time t that pass the adaptive back-front check are retained.
In some preferred embodiments, when obtaining the feature point sets of the other images at times t-1 and t in step S300, if t-1 is the initial time, the feature points f^l_t of the left image I^l_t are tracked with the KLT optical flow method to obtain the feature points f^r_t of the right image I^r_t, and the feature points that pass the left-right check and the parallax constraint are retained by screening.
In some preferred embodiments, the feature points that pass the left-right check and the parallax constraint are screened as follows:
the feature points for which the distance between f^l_t and f^r_t is less than the set threshold ρ1 are obtained, and the feature points f^r_t are tracked backward to obtain the feature points f'^l_t in I^l_t; if the distance between f^l_t and f'^l_t is less than the set threshold δ1, the left-right check is passed.
In some preferred embodiments, step S301 screens the feature points of the left image I^l_{t-1} at time t-1 that pass the forward-backward check as follows:
the feature points f^l_t are tracked backward to obtain the feature points f'^l_{t-1} in I^l_{t-1}; if the distance between f^l_{t-1} and f'^l_{t-1} is less than the set threshold δ2, the feature points pass the forward-backward check.
In some preferred embodiments, step S302 screens the feature points of the left image I^l_t at time t that pass the left-right check and the parallax constraint as follows:
if the distance between the feature points f^l_t and f^r_t is less than the set threshold ρ1, the feature point is retained and the following scheme is executed; otherwise the feature point is deleted:
the feature points f^r_t are tracked backward to obtain the feature points f'^l_t in I^l_t; if the distance between f^l_t and f'^l_t is less than the set threshold δ3, the feature points pass the left-right check.
In some preferred embodiments, step S303 screens the feature points of the right image I^r_t at time t that pass the adaptive back-front check as follows:
the feature points f^r_t are tracked backward to obtain the feature points f'^r_{t-1} in the right image I^r_{t-1} at time t-1; if the distance between f'^r_{t-1} and the feature points f^r_{t-1} obtained at time t-1 is less than the adaptive threshold δ4, the feature points pass the adaptive back-front check;
wherein the adaptive threshold δ4 is computed (the defining formula is rendered only as an image in the source) from the parameters ρ and ε, the value loss, which is the number of feature points lost when the feature points of the right image are tracked against the feature points of the left image at time t-1, and maxtrack, the maximum tracking count of the current feature points.
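The adaptive back-front check amounts to a distance filter with a loss-dependent threshold. The sketch below is illustrative only: the patent's exact δ4 formula survives only as an image in the source, so the form used here (a threshold that loosens with the fraction of lost tracks, plus a floor ε) and all names are assumptions.

```python
import numpy as np

def adaptive_back_front_check(pts_prev, pts_backtracked, rho, eps, loss, maxtrack):
    """Keep feature points whose back-tracked position lies within the
    adaptive threshold delta_4 of the position known at time t-1.

    delta_4 uses an ASSUMED form: the patent defines it via rho, eps, loss
    (tracks lost when tracking right-image features against the left image
    at time t-1) and maxtrack (maximum tracking count), but gives the
    actual formula only as an image.
    """
    delta4 = rho * (loss / maxtrack if maxtrack > 0 else 1.0) + eps
    dist = np.linalg.norm(pts_prev - pts_backtracked, axis=1)
    return dist < delta4
```

With loss = 0 the threshold reduces to ε, so only near-exact loop closures survive; as more tracks are lost the check relaxes, trading outlier rejection for feature count, which matches the stated goal of adapting the number and quality of points to the motion.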
In some preferred embodiments, obtaining the maximum likelihood estimate of the pose by minimizing the reprojection error in step S500 comprises:
constructing a spatial reprojection error and a temporal reprojection error for each feature point on each image frame according to the feature association result;
obtaining the maximum likelihood estimate of the poses in the sliding window with the bundle adjustment method.
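The residual that bundle adjustment drives toward zero can be written down directly. A minimal NumPy sketch for a pinhole camera follows; the intrinsics K, the pose (R, t), and the function names are illustrative, not the patent's notation.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3-D world points X (N,3) into pixels with pose (R, t)."""
    Xc = X @ R.T + t              # world frame -> camera frame
    x = Xc @ K.T                  # apply intrinsics (homogeneous pixels)
    return x[:, :2] / x[:, 2:3]   # perspective division

def reprojection_error(K, R, t, X, observed):
    """Per-feature residuals; bundle adjustment adjusts the poses (and
    optionally X) to minimize the sum of squares of these residuals."""
    return project(K, R, t, X) - observed
```

In the scheme above, the spatial residuals use the fixed left-right stereo extrinsics while the temporal residuals use the inter-frame pose being estimated; both reduce to this same projection residual.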
In some preferred embodiments, step S600 is further included after step S500:
based on the final poses of the moments before time t, a prior term r_p for time t is obtained by marginalization and Schur complement decomposition, and r_p is used as the prior error of the maximum likelihood estimation in step S500.
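Marginalization via the Schur complement has a compact generic form: eliminating the old states from the normal equations H δx = g leaves a smaller prior system on the remaining states. A NumPy sketch (block layout and names are illustrative, not the patent's):

```python
import numpy as np

def marginalize(H, g, n_keep):
    """Schur complement of the normal equations H dx = g: eliminate the
    trailing states, returning the prior system (H_prior, g_prior) on the
    first n_keep states. The kept states' solution is unchanged."""
    Haa, Hab = H[:n_keep, :n_keep], H[:n_keep, n_keep:]
    Hba, Hbb = H[n_keep:, :n_keep], H[n_keep:, n_keep:]
    ga, gb = g[:n_keep], g[n_keep:]
    Hbb_inv = np.linalg.inv(Hbb)
    H_prior = Haa - Hab @ Hbb_inv @ Hba   # Schur complement of Hbb in H
    g_prior = ga - Hab @ Hbb_inv @ gb
    return H_prior, g_prior
```

The resulting (H_prior, g_prior) pair is what enters the next sliding-window optimization as the prior term r_p.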
The invention also provides a binocular vision odometer computing system based on parallax constraint and bidirectional annular inspection, comprising a binocular image acquisition unit, an initial feature point extraction unit, a feature point extraction and feature association unit, an initial pose estimation unit, and a maximum likelihood estimation unit.
The binocular image acquisition unit is configured to acquire left and right images at a set acquisition frequency through a binocular camera mounted on the mobile carrier, obtaining the left and right images corresponding to times t-1 and t.
The initial feature point extraction unit is configured to perform Shi-Tomasi corner detection on the set image at time t-1 obtained by the binocular image acquisition unit, extract a set of newly added feature points, and construct a new feature point set of the set image by combining it with the feature points of the set image obtained at time t-1.
The feature point extraction and feature association unit is configured to, based on the new feature point set of the set image, use the KLT optical flow method together with the parallax constraint and the adaptive bidirectional annular inspection to obtain the feature point sets of the other images at times t-1 and t, and to perform feature association.
The initial pose estimation unit is configured to obtain an initial pose estimate of the mobile carrier with a PnP pose estimation method based on the feature association result, and to triangulate each feature point in the left and right images at time t by the binocular vision method to obtain its three-dimensional space coordinates.
The maximum likelihood estimation unit is configured to obtain the maximum likelihood estimate of the pose of the mobile carrier by minimizing the reprojection error with bundle adjustment, based on the feature association, the initial pose estimate, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left and right images at time t, yielding the final pose.
In a third aspect of the present invention, a storage device stores a plurality of programs adapted to be loaded and executed by a processor to implement the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection described above.
In a fourth aspect of the present invention, a processing device comprises a processor adapted to execute programs and a storage device adapted to store a plurality of programs, wherein the programs are adapted to be loaded and executed by the processor to implement the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection described above.
The beneficial effects of the invention are:
By incorporating the parallax constraint, the method effectively purifies the feature points, avoids low-quality feature tracks, and improves the quality and efficiency of feature tracking. The proposed adaptive bidirectional annular inspection further removes outliers and improves the accuracy of feature association, and it can adaptively adjust the number and quality of feature points as they change under different motions, thereby improving the positioning accuracy and real-time performance of the binocular vision odometer.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to an embodiment of the invention;
FIG. 2 is a schematic view of the parallax constraint of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the adaptive bidirectional annular inspection of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention discloses a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, which specifically comprises the following steps:
step S100, acquiring left and right images at a set acquisition frequency through a binocular camera mounted on a mobile carrier, obtaining the left and right images corresponding to times t-1 and t;
step S200, performing Shi-Tomasi corner detection on the set image at time t-1 obtained in step S100, extracting a set of newly added feature points, and constructing a new feature point set of the set image by combining it with the feature points of the set image obtained at time t-1;
step S300, based on the new feature point set of the set image, using the KLT optical flow method together with the parallax constraint and the adaptive bidirectional annular inspection to obtain the feature point sets of the other images at times t-1 and t, and performing feature association;
step S400, based on the feature association result of step S300, obtaining an initial pose estimate of the mobile carrier with a PnP pose estimation method, and triangulating each feature point in the left and right images at time t by the binocular vision method to obtain its three-dimensional space coordinates;
step S500, based on the feature association result of step S300, the initial pose estimate, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left and right images at time t, obtaining the maximum likelihood estimate of the pose of the mobile carrier by minimizing the reprojection error with bundle adjustment, yielding the final pose. In some preferred embodiments, step S600 is further included after step S500:
based on the final poses of the moments before time t, a prior term r_p for time t is obtained by marginalization and Schur complement decomposition, and r_p is used as the prior error of the maximum likelihood estimation in step S500.
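The triangulation in step S400 can be sketched with the standard linear (DLT) method for one stereo correspondence. This is a generic illustration under assumed projection matrices, not necessarily the exact procedure of the patent:

```python
import numpy as np

def triangulate(P_l, P_r, x_l, x_r):
    """Linear (DLT) triangulation of one left/right correspondence.
    P_l, P_r: 3x4 projection matrices of the stereo pair;
    x_l, x_r: matched pixel coordinates (u, v) in the two images."""
    # Each pixel gives two linear constraints on the homogeneous 3-D point.
    A = np.vstack([
        x_l[0] * P_l[2] - P_l[0],
        x_l[1] * P_l[2] - P_l[1],
        x_r[0] * P_r[2] - P_r[0],
        x_r[1] * P_r[2] - P_r[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]
```

Triangulating every associated feature pair of the left and right images at time t yields the three-dimensional coordinates fed to the PnP step and the bundle adjustment.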
In order to describe the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection more clearly, the steps of an embodiment of the method are described in detail below with reference to the accompanying drawings.
The binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection of one embodiment of the invention comprises the following steps S100 to S600.
Step S100: acquire left and right images at a set acquisition frequency through a binocular camera mounted on a mobile carrier, obtaining the left and right images corresponding to times t-1 and t.
in this embodiment, before image acquisition, the assembled binocular camera needs to be calibrated, and the calibrated parameters include intra-camera parameters and inter-camera parameters of the binocular camera;
and correcting the acquired left image and the right image through an OpenCV library function.
Step S200: perform Shi-Tomasi corner detection on the set image at time t-1 obtained in step S100, extract a set of newly added feature points, and construct a new feature point set of the set image by combining it with the feature points of the set image obtained at time t-1.
in this embodiment, the number of preset total feature points may be set to 200, but may also be set to other values, such as 100, 150, and the like, in some other embodiments.
The method for acquiring the characteristic points of the set image at the time t-1 in the step comprises the following steps:
and tracking the characteristic points of the acquired binocular images at the t-1 moment by adopting the methods of the step S200 and the step S300 at the t-1 moment, the t-2 moment and the t-1 moment.
In this step, a newly added feature point set is extracted, and the method includes:
and extracting the feature points of the set image at the t-1 moment by using a Shi-Tomasi corner detection method, and deleting the feature points in the neighborhood of the original feature point set range of the set image to obtain new feature points. And setting the original features of the image as the feature points of the set image obtained at the time t-1. The original feature point setting range neighborhood is as follows: and respectively acquiring circular areas corresponding to the original feature points according to the set radius based on the original feature points, wherein the collection of all the circular areas is the set range neighborhood of the original feature points.
In this embodiment, the image for detecting the Shi-Tomasi corner is the left image acquired at time t-1, but of course, in other embodiments, the image may also be the right image acquired at time t.
Shi-Tomasi corner detection determines feature points by comparing minimum feature values of the gradient matrix, and when the feature points are matched, affine transformation is introduced, so that the feature points are matched more accurately between frames, and bad feature points are eliminated. The feature extraction method of the present step is well documented in the art, and will not be described in detail herein.
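The Shi-Tomasi criterion itself is easy to state: a pixel is a corner when the minimum eigenvalue of the windowed gradient (structure) matrix is large. A pure-NumPy sketch of the score map follows (illustrative only; a real system would use a library detector):

```python
import numpy as np

def box_sum(a, r):
    """Sum of a over (2r+1)x(2r+1) windows, zero-padded, via an integral image."""
    p = np.pad(a, r)
    S = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    S[1:, 1:] = p.cumsum(0).cumsum(1)
    h, w = a.shape
    k = 2 * r + 1
    return S[k:k+h, k:k+w] - S[:h, k:k+w] - S[k:k+h, :w] + S[:h, :w]

def shi_tomasi_response(img, r=3):
    """Minimum eigenvalue of the local structure matrix [[mxx,mxy],[mxy,myy]]."""
    Iy, Ix = np.gradient(img.astype(float))
    mxx, myy, mxy = box_sum(Ix * Ix, r), box_sum(Iy * Iy, r), box_sum(Ix * Iy, r)
    mean = (mxx + myy) / 2.0
    disc = np.sqrt(((mxx - myy) / 2.0) ** 2 + mxy ** 2)
    return mean - disc   # smaller eigenvalue of the 2x2 structure matrix
```

Feature points are the local maxima of this score above a threshold; the neighborhood-suppression step of S200 then removes new detections that fall too close to already-tracked points.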
Step S300: based on the new feature point set of the set image, use the KLT optical flow method together with the parallax constraint and the adaptive bidirectional annular inspection to obtain the feature point sets of the other images at times t-1 and t, and perform feature association.
This embodiment further includes a step of counting the number of times each feature point is tracked: in the process of acquiring the pose at time t, if a feature point in the new feature point set of the set image passes the parallax constraint and the adaptive bidirectional annular inspection, the tracking is considered successful and its tracking count is incremented by 1. Newly added feature points start with a tracking count of 0.
Principle of the optical flow method for object detection: each pixel in the image is assigned a velocity vector, forming a motion vector field. At a given moment, points on the image correspond one-to-one with points on the three-dimensional object, and this correspondence can be computed through projection.
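The KLT tracker used throughout builds on this principle: under brightness constancy, the displacement d solves 2x2 normal equations built from image gradients. A minimal single-step sketch for one global translation (illustrative; the real tracker works per feature window, with pyramids and iterations):

```python
import numpy as np

def lk_translation(img0, img1):
    """One Lucas-Kanade step: solve G d = b for the translation d = (dx, dy)
    that best explains img1 as a shifted copy of img0 (brightness constancy)."""
    Iy, Ix = np.gradient(img0.astype(float))
    It = img1.astype(float) - img0.astype(float)      # temporal difference
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(G, b)
```

Applied per feature window, iterated, and run both forward and backward, this is exactly the primitive that the bidirectional annular inspection chains together.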
1. At the initial moment of image acquisition by the binocular camera, only one pair of images is available, i.e., t = 1 and time t-1 does not exist. In this case the feature points of the left image I^l_t at this moment are screened with the left-right check and the parallax constraint, and the feature points of the left and right images are matched.
The specific scheme in this embodiment is: if a feature point of I^l_t passes the left-right check and the parallax constraint, it is retained and the feature association is established; otherwise it is deleted. The specific method is:
Step 3001: track the feature points f^l_t of I^l_t with the KLT optical flow method to obtain the feature points f^r_t in I^r_t;
Step 3002: if the distance between the feature points f^l_t and f^r_t is less than the set threshold ρ1, retain the feature point; otherwise delete it and skip the following steps;
Step 3003: track the feature points f^r_t backward to obtain the feature points f'^l_t in I^l_t;
Step 3004: if the distance between f^l_t and f'^l_t is less than the set threshold δ1, the left-right check is passed.
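Steps 3001-3004 amount to a vectorized filter over the tracked point arrays. A sketch, assuming the KLT forward and backward tracks have already been computed (the names and the Euclidean distance metric are illustrative):

```python
import numpy as np

def left_right_check(pts_left, pts_right, pts_back, rho1, delta1):
    """Retain features that satisfy the parallax constraint
    (||f_l - f_r|| < rho1) and the left-right check
    (||f_l - f'_l|| < delta1, where f'_l is the back-tracked point)."""
    parallax_ok = np.linalg.norm(pts_right - pts_left, axis=1) < rho1
    back_ok = np.linalg.norm(pts_back - pts_left, axis=1) < delta1
    return parallax_ok & back_ok
```

The forward-backward check of steps 3011-3013 and the adaptive back-front check of step S303 have the same shape, differing only in which image pair is tracked and which threshold (δ2, δ3, or the adaptive δ4) is applied.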
2. From the second acquisition moment onward, the binocular camera has binocular images at both the previous and the current moment; after obtaining the binocular images at time t, feature association is performed through the following steps S301 to S303.
Step S301: track the feature points f^l_{t-1} of the left image I^l_{t-1} at time t-1 with the KLT optical flow method to obtain the feature points f^l_t of the left image I^l_t at time t; screen the feature points of the left image at time t-1 that pass the forward-backward check, and establish the feature association between f^l_{t-1} and f^l_t.
The specific scheme of the step in the embodiment is as follows: if it is not
Figure BDA00021125984000001110
If the characteristic point can pass the front and back inspection, the characteristic point is reserved and the characteristic association is established, otherwise, the characteristic point is deleted, and the specific method comprises the following steps:
step 3011: the feature points f^l_{t-1} of I^l_{t-1} are tracked by the KLT optical flow method to obtain the feature points f^l_t in I^l_t;
step 3012: the feature points f^l_t are tracked backward to obtain the feature points f'^l_{t-1} in I^l_{t-1};
step 3013: if the distance between f^l_{t-1} and f'^l_{t-1} is less than a set threshold δ2, the check is passed.
Step S302, the feature points f^l_t of the left image I^l_t at time t are tracked by the KLT optical flow method to obtain the feature points f^r_t in the right image I^r_t at time t; the feature points of I^l_t that pass the left-right check and the parallax constraint are screened, and the feature association between f^l_t and f^r_t is established.
The specific scheme of this step in this embodiment is: if a feature point of I^l_t can pass the left-right check and the parallax constraint, the feature point is retained and the feature association is established; otherwise, the feature point is deleted. The specific method comprises the following steps:
step 3021: the feature points f^l_t of I^l_t are tracked by the KLT optical flow method to obtain the feature points f^r_t in I^r_t;
step 3022: if the difference between the y coordinates of the feature points f^l_t and f^r_t is less than a set threshold ρ1, continue; if not, the feature point is deleted and the following steps are skipped;
step 3023: the feature points f^r_t are tracked backward to obtain the feature points f'^l_t in I^l_t;
step 3024: if the distance between f^l_t and f'^l_t is less than a set threshold δ3, the left-right check is passed.
FIG. 2 is a schematic diagram of the parallax constraint of the binocular visual odometry calculation method based on the parallax constraint and bidirectional annular inspection. It shows the feature points f^l_t of I^l_t being tracked by the KLT optical flow method to obtain the feature points f^r_t in I^r_t. The dotted lines in the figure indicate the allowable range [-ρ1, +ρ1] of the y-coordinate deviation; the solid line to the right of a feature point f^l_t in I^l_t marks the positions whose y coordinate is equal to that of f^l_t.
Step S303, the feature points f^r_t of the right image I^r_t at time t are tracked by the KLT optical flow method to obtain the feature points f^r_{t-1} in the right image I^r_{t-1} at time t-1; the feature points of I^r_t that pass the adaptive backward-forward check are screened.
The specific scheme of this step in this embodiment is: if a feature point of I^r_t can pass the adaptive backward-forward check, the feature point is retained; otherwise, the feature point is deleted. The specific method comprises the following steps:
step 3031: the feature points f^r_t of I^r_t are tracked by the KLT optical flow method to obtain the feature points f^r_{t-1} in I^r_{t-1};
step 3032: the feature points f^r_{t-1} are tracked backward to obtain the feature points f'^r_t in I^r_t;
step 3033: if the distance between f^r_t and f'^r_t is less than an adaptive threshold δ4, the adaptive backward-forward check is passed.
The adaptive threshold δ4 is shown in equation (1):

δ4 = δ4(ρ, ε, loss, maxtrack)   (1)
where ρ and ε are preset parameters (ρ = 0.23 and ε = 5 in this embodiment); loss is the number of feature points lost when the feature points of the left image at time t-1 are tracked to the left image at time t (i.e., the preset total number of feature points, 200 in this embodiment, minus the number of feature points successfully tracked from time t-1); and maxtrack is the maximum tracking count of the current feature points. In this embodiment, the bidirectional ring check starts from the left image at the previous time whenever the pose is calculated. In other embodiments, if the bidirectional ring check starts from the right image at time t-1, then loss is the number of feature points lost when the feature points of the right image at time t track the feature points of the right image at time t-1; in that case, the bidirectional ring check always starts from the right image at the previous time.
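As a hedged sketch of the adaptive backward-forward check of steps 3031-3033: equation (1) for δ4 appears in the source only as an image, so the sketch below leaves the threshold as a caller-supplied function of the stated quantities ρ, ε, loss and maxtrack. The function and argument names are assumptions of this sketch.

```python
import numpy as np

def adaptive_ring_check(pts_r_t, pts_back, loss, maxtrack,
                        rho=0.23, eps=5.0, threshold_fn=None):
    """Adaptive backward-forward check (illustrative sketch).

    pts_r_t: feature points of the right image at time t; pts_back: the
    points recovered by tracking to time t-1 and back. threshold_fn stands
    in for equation (1), which is not reproduced here; it receives the
    preset parameters and the tracking statistics and returns delta4.
    """
    if threshold_fn is None:
        raise ValueError("equation (1) is not reproduced here; pass threshold_fn")
    delta4 = threshold_fn(rho, eps, loss, maxtrack)
    # Step 3033: keep points whose ring-back distance is below delta4.
    ring = np.linalg.norm(np.asarray(pts_r_t, float) - np.asarray(pts_back, float),
                          axis=1)
    return ring < delta4
```

The intent of the adaptation is that δ4 loosens or tightens with the current tracking quality (loss, maxtrack) rather than staying fixed like δ1-δ3.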
FIG. 3 is a schematic diagram of the adaptive bidirectional ring check of the binocular visual odometry calculation method based on the parallax constraint and bidirectional annular inspection. It shows the bidirectional ring check over I^l_{t-1}, I^l_t, I^r_t and I^r_{t-1} performed by the KLT optical flow method: each solid arrow between two images represents tracking by the KLT optical flow method to obtain the feature points in the image the arrow points to, and each dotted arrow represents backward tracking to obtain the feature points of the image the arrow points to. The solid arrow and the dotted arrow between I^l_{t-1} and I^r_{t-1} represent, respectively, the tracking and backward tracking by which the corresponding feature points at time t-1 are obtained.
Step S400, based on the characteristic association, obtaining initial pose estimation of the mobile carrier by adopting a PNP pose estimation method, and triangularizing each characteristic point by using a binocular vision method to obtain a three-dimensional space coordinate corresponding to each characteristic point;
An initial pose estimate is obtained by the PnP method from the two-dimensional feature points of the current frame and their corresponding three-dimensional space coordinates. The feature points newly extracted in the current frame are triangulated by the binocular vision method to obtain their three-dimensional space coordinates for subsequent pose calculation.
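For the triangulation half of step S400, a minimal sketch under the assumption of a rectified stereo pair follows; the initial pose could then be obtained with a PnP solver such as OpenCV's cv2.solvePnP (one possible implementation, not necessarily the patent's). The function name and parameters are illustrative.

```python
import numpy as np

def triangulate_rectified(pt_l, pt_r, fx, fy, cx, cy, baseline):
    """Binocular triangulation for a rectified stereo pair (sketch for
    step S400). Given one feature's pixel coordinates in the left and
    right images, recover its 3-D point in the left-camera frame from
    the disparity: Z = fx * baseline / (u_l - u_r).
    """
    d = pt_l[0] - pt_r[0]          # disparity; positive on a rectified pair
    Z = fx * baseline / d          # depth along the optical axis
    X = (pt_l[0] - cx) * Z / fx    # back-project the left observation
    Y = (pt_l[1] - cy) * Z / fy
    return np.array([X, Y, Z])
```

Note this closed form relies on the parallax constraint already enforced in step S302: the y coordinates of the two observations are assumed equal.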
And S500, based on the feature association, the initial pose estimation, the three-dimensional space coordinates and the two-dimensional image coordinates of each feature point in the left image and the right image at the time t, obtaining the maximum likelihood estimation of the pose of the mobile carrier by minimizing the reprojection error by adopting a beam adjustment method, and obtaining the final pose.
The pose directly acquired by the embodiment of the invention is the pose of the left camera of the binocular camera, and the pose of the moving carrier can be obtained through the pose mapping relation between the left camera and the moving carrier. In other embodiments, if the bidirectional ring inspection starts from the right image at the time t-1, the pose directly acquired by the method is the pose of the right camera of the binocular camera, and similarly, the pose of the moving carrier can be obtained through the pose mapping relationship between the right camera and the moving carrier.
A reprojection error is constructed for each feature point on each image frame according to the feature association result. The reprojection error is obtained by projecting the feature point from the position where it is first observed into a subsequent image frame. Assuming that the first image frame in which feature point l is observed is i, its reprojection error on image frame t is shown in equation (2):

r_{l,t} = z_{l,t} - h_{l,t}(χ)   (2)

where z_{l,t} represents the observation of feature point l on image frame t; h_{l,t}(χ) is the observation model of feature point l in image frame t, and χ represents the pose; π_c is the pinhole model, which projects features from the camera coordinate system to the image coordinate system; T is the homogeneous matrix of the camera pose, with T_i the pose corresponding to image frame i and T_t the pose corresponding to image frame t; λ_l represents the depth of feature point l.
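A minimal numeric sketch of equation (2) follows. It assumes camera-to-world homogeneous poses and the pinhole model described above; the function names and the pose convention are assumptions of this sketch, not definitions taken from the patent.

```python
import numpy as np

def pinhole_project(p_cam, fx, fy, cx, cy):
    """pi_c: pinhole projection from camera coordinates to image coordinates."""
    return np.array([fx * p_cam[0] / p_cam[2] + cx,
                     fy * p_cam[1] / p_cam[2] + cy])

def reprojection_error(z_i, z_t, lam, T_i, T_t, fx, fy, cx, cy):
    """Reprojection error of feature l between frames i and t (sketch of
    equation (2)). The first observation z_i with depth lam is
    back-projected in frame i, carried to frame t via the homogeneous
    poses T_i and T_t (assumed camera-to-world here), re-projected, and
    compared against the observation z_t.
    """
    p_i = lam * np.array([(z_i[0] - cx) / fx, (z_i[1] - cy) / fy, 1.0])
    p_w = T_i @ np.append(p_i, 1.0)       # frame i -> world
    p_t = np.linalg.inv(T_t) @ p_w        # world -> frame t
    return z_t - pinhole_project(p_t[:3], fx, fy, cx, cy)
```

With identical poses and a consistent observation the residual is zero, which is the fixed point the bundle adjustment of step S500 drives toward.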
For the binocular vision odometer, the method is adopted to simultaneously construct the space reprojection error and the time reprojection error.
Taking the initial pose estimate as the initial value, the maximum likelihood estimate of the poses in the sliding window is obtained by the bundle adjustment method, as shown in equation (3); the equation can be solved by the Gauss-Newton method.

χ* = arg min_χ { ||r_p||^2 + Σ_{(l,n)∈S} ||z_{l,n} - h_{l,n}(χ)||^2_{P_{l,n}} }   (3)

where S is the set of image measurements (i.e., the set of two-dimensional image coordinates of the feature points), P_{l,n} is the covariance matrix, χ* is the optimal pose (i.e., the final pose obtained by the calculation), χ is the pose to be optimized, n is the time index, z_{l,n} is the two-dimensional image coordinates of the feature point, h_{l,n} is the observation model based on the pinhole model π_c, and r_p is the prior term obtained in step S600.

The prior term r_p is added to the maximum likelihood estimation in the present embodiment; of course, in some other embodiments, the prior term r_p may be removed from equation (3).
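A least-squares problem of the form of equation (3) can be attacked with a plain Gauss-Newton iteration, sketched generically below. Robust weighting, the prior term and the manifold structure of the poses are omitted, so this is only a skeleton under those stated simplifications, not the patented solver.

```python
import numpy as np

def gauss_newton(residual_fn, jac_fn, x0, iters=10):
    """Plain Gauss-Newton iteration (sketch for solving equation (3)).

    residual_fn stacks all (whitened) residuals at state x, jac_fn their
    Jacobian; each update solves the normal equations J^T J dx = -J^T r.
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, J = residual_fn(x), jac_fn(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x
```

In practice the covariances P_{l,n} enter by whitening the residuals and Jacobians before stacking them.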
In order to improve the real-time performance of the binocular visual odometry calculation method based on the parallax constraint and bidirectional annular inspection, step S600 is added in some embodiments.
Step S600, based on the final poses at the times before time t, the prior term r_p at time t is obtained by the marginalization method and the Schur decomposition method, and the prior term r_p is used as the prior error of the maximum likelihood estimation in step S500.
As the system states (final poses) continuously accumulate, the marginalization method is used to reduce their number, which lowers the computational complexity and allows the binocular visual odometer to run in real time.
The marginalization method converts part of the earlier system states into the prior term r_p through the Schur decomposition and removes them from the sliding window, thereby providing prior information for the states in the sliding window.
The poses that remain after marginalization and the pose of the new image frame form a sliding window, and the bundle adjustment method is applied cyclically to obtain the maximum likelihood estimate of the poses.
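The Schur-complement step behind the marginalization can be sketched on the normal equations H x = b. The partitioning convention (marginalized states first) and the function name are assumptions of this sketch.

```python
import numpy as np

def marginalize(H, b, m):
    """Schur-complement marginalization (sketch for step S600).

    The first m states of the normal equations H x = b are removed; the
    returned (H_prior, b_prior) condense their information onto the
    remaining states and play the role of the prior term r_p in the
    sliding window.
    """
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(Hmm)   # in practice a pseudo-inverse is safer
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr
    b_prior = br - Hrm @ Hmm_inv @ bm
    return H_prior, b_prior
```

The condensed (H_prior, b_prior) then enter the next window's optimization in place of the removed states, rather than discarding their information outright.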
Reference may be made in this step to "Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization [J]. The International Journal of Robotics Research, 2015, 34(3): 314-334".
In the above description of the technical solution, step S600 is placed after step S500 only for clarity of description, not to limit the order of the steps; in some embodiments, step S600 may be performed before step S500.
The binocular vision odometer computing system based on parallax constraint and bidirectional annular inspection comprises a binocular image acquisition unit, an initial feature point extraction unit, a feature point extraction and feature association unit, an initial pose estimation unit and a maximum likelihood estimation unit;
the binocular image acquisition unit is configured to acquire the left image and the right image according to a set acquisition frequency through a binocular camera loaded on the mobile carrier to obtain the left image and the right image corresponding to the t moment and the t +1 moment respectively;
the initial feature point extraction unit is configured to perform Shi-Tomasi corner point detection on the set image at the t-1 moment obtained by the binocular image acquisition unit, extract a newly added feature point set, and construct a new feature point set of the set image by combining the feature points of the set image obtained at the t-1 moment;
the feature point extraction and feature association unit is configured to acquire feature point sets corresponding to other images at the time t and the time t +1 respectively by adopting a KLT optical flow method and through parallax constraint and adaptive bidirectional annular inspection based on the new feature point set of the set image, and perform feature association;
the initial pose estimation unit is configured to obtain initial pose estimation of the moving carrier by adopting a PNP pose estimation method based on the feature association result, and triangularize each feature point in the left image and the right image at the time t by a binocular vision method to obtain a three-dimensional space coordinate corresponding to each feature point;
and the maximum likelihood estimation unit is configured to obtain maximum likelihood estimation of the pose of the mobile carrier by minimizing a reprojection error by adopting a beam adjustment method based on the feature association, the initial pose estimation, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left image and the right image at the time t, so as to obtain a final pose.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding processes in the foregoing method embodiment, and are not repeated here.
It should be noted that, the binocular vision odometer computing system based on the parallax constraint and the two-way circular inspection provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiments of the present invention are further decomposed or combined, for example, the modules in the above embodiments may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
A storage device according to a third embodiment of the present invention stores therein a plurality of programs adapted to be loaded and executed by a processor to implement the binocular visual odometer calculating method based on parallax constraint and two-way circular inspection described above.
A processing apparatus according to a fourth embodiment of the present invention includes a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is adapted to be loaded and executed by a processor to implement the binocular vision odometry calculation method based on parallax constraint and two-way circular inspection described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules and method steps may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection is characterized in that the pose calculation method comprises the following steps:
step S100, acquiring a left image and a right image according to a set acquisition frequency through a binocular camera loaded on a mobile carrier to obtain the left image and the right image respectively corresponding to t-1 and t moments;
step S200, carrying out Shi-Tomasi corner point detection on the set image at the t-1 moment obtained in the step S100, extracting a newly added feature point set, and constructing a new feature point set of the set image by combining the feature points of the set image obtained at the t-1 moment;
step S300, based on the new feature point set of the set image, adopting a KLT optical flow method, respectively obtaining feature point sets corresponding to other images at t-1 and t moments through parallax constraint and self-adaptive bidirectional annular inspection, and performing feature association;
the method specifically comprises: in the process of acquiring the pose at time t, if a feature point in the new feature point set of the set image passes the parallax constraint and the adaptive bidirectional ring check, the tracking succeeds and the maximum tracking count of the feature point is increased by 1; the initial tracking count of a newly added feature point is set to 0;
if a feature point of the left image I^l_{t-1} can pass the left-right check and the parallax constraint, the feature point is retained and the feature association is established; otherwise, the feature point is deleted; the specific method comprises the following steps:
step 3001: the feature points f^l_{t-1} of I^l_{t-1} are tracked by the KLT optical flow method to obtain the feature points f^r_{t-1} in I^r_{t-1};
step 3002: if the difference between the y coordinates of the feature points f^l_{t-1} and f^r_{t-1} is less than a set threshold ρ1, continue; if not, the feature point is deleted and the following steps are skipped;
step 3003: the feature points f^r_{t-1} are tracked backward to obtain the feature points f'^l_{t-1} in I^l_{t-1};
step 3004: if the distance between f^l_{t-1} and f'^l_{t-1} is less than a set threshold δ1, the left-right check is passed;
step S301, the feature points f^l_{t-1} of the left image I^l_{t-1} at time t-1 are tracked by the KLT optical flow method to obtain the feature points f^l_t in the left image I^l_t at time t; the feature points of I^l_{t-1} that pass the forward-backward check are screened, and the feature association between f^l_{t-1} and f^l_t is established;
if a feature point of I^l_{t-1} can pass the forward-backward check, the feature point is retained and the feature association is established; otherwise, the feature point is deleted; the specific method comprises the following steps:
step 3011: the feature points f^l_{t-1} of I^l_{t-1} are tracked by the KLT optical flow method to obtain the feature points f^l_t in I^l_t;
step 3012: the feature points f^l_t are tracked backward to obtain the feature points f'^l_{t-1} in I^l_{t-1};
step 3013: if the distance between f^l_{t-1} and f'^l_{t-1} is less than a set threshold δ2, the check is passed;
step S302, the feature points f^l_t of the left image I^l_t at time t are tracked by the KLT optical flow method to obtain the feature points f^r_t in the right image I^r_t at time t; the feature points of I^l_t that pass the left-right check and the parallax constraint are screened, and the feature association between f^l_t and f^r_t is established;
if a feature point of I^l_t can pass the left-right check and the parallax constraint, the feature point is retained and the feature association is established; otherwise, the feature point is deleted; the specific method comprises the following steps:
step 3021: the feature points f^l_t of I^l_t are tracked by the KLT optical flow method to obtain the feature points f^r_t in I^r_t;
step 3022: if the difference between the y coordinates of the feature points f^l_t and f^r_t is less than a set threshold ρ1, continue; if not, the feature point is deleted and the following steps are skipped;
step 3023: the feature points f^r_t are tracked backward to obtain the feature points f'^l_t in I^l_t;
step 3024: if the distance between f^l_t and f'^l_t is less than a set threshold δ3, the left-right check is passed;
step S303, the feature points f^r_t of the right image I^r_t at time t are tracked by the KLT optical flow method to obtain the feature points f^r_{t-1} in the right image I^r_{t-1} at time t-1; the feature points of I^r_t that pass the adaptive backward-forward check are screened;
if a feature point of I^r_t can pass the adaptive backward-forward check, the feature point is retained; otherwise, the feature point is deleted; the specific method comprises the following steps:
step 3031: the feature points f^r_t of I^r_t are tracked by the KLT optical flow method to obtain the feature points f^r_{t-1} in I^r_{t-1};
step 3032: the feature points f^r_{t-1} are tracked backward to obtain the feature points f'^r_t in I^r_t;
step 3033: if the distance between f^r_t and f'^r_t is less than an adaptive threshold δ4, the adaptive backward-forward check is passed;
the equation of the adaptive threshold δ4 is:
δ4 = δ4(ρ, ε, loss, maxtrack)
wherein ρ and ε are preset parameters, loss is the number of feature points lost when the feature points of the left image at time t-1 are tracked to the left image at time t, and maxtrack is the maximum tracking count of the current feature points; when the pose is calculated at each time, the bidirectional ring check starts from the left image at the previous time; if the bidirectional ring check starts from the right image at time t-1, loss is the number of feature points lost when the feature points of the right image at time t track the feature points of the right image at time t-1, in which case the bidirectional ring check always starts from the right image at the previous time;
step S400, based on the result of the feature association in the step S300, obtaining initial pose estimation of the moving carrier by adopting a PNP pose estimation method, and triangularizing each feature point in the left image and the right image at the time t by a binocular vision method to obtain a three-dimensional space coordinate corresponding to each feature point;
and S500, based on the result of the characteristic association in the step S300, the initial pose estimation, the three-dimensional space coordinates and the two-dimensional image coordinates of each characteristic point in the left image and the right image at the time t, obtaining the maximum likelihood estimation of the pose of the mobile carrier by minimizing the reprojection error by adopting a beam adjustment method, and obtaining the final pose.
2. The binocular vision odometer computing method based on parallax constraint and bidirectional loop inspection according to claim 1, wherein the left and right images acquired in step S100 are corrected by an OpenCV library function.
3. The binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to claim 1, wherein the step S200 of "feature points of the set image obtained at time t-1" is obtained by:
at time t-1, the feature points of the binocular images acquired at times t-2 and t-1 are tracked by adopting the methods of step S200 and step S300.
4. The binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to claim 2, wherein in step S200, "extracting a newly added feature point set" comprises:
extracting feature points from the set image at time t-1 by the Shi-Tomasi corner detection method, and deleting those falling within the neighborhood of the original feature point set of the set image to obtain the newly added feature points; the original feature points of the set image are the feature points of the set image obtained at time t-1.
5. The binocular vision odometry calculation method based on the parallax constraint and the two-way annular inspection according to any one of claims 1 to 4, wherein the set image is a left image acquired at time t.
6. The binocular vision odometer calculating method based on the parallax constraint and the two-way annular inspection according to any one of claims 1 to 4, wherein in the step S500, the maximum likelihood estimation of the pose is obtained by minimizing the reprojection error, and the method comprises the following steps:
step S501, according to the feature association result, constructing a space reprojection error and a time reprojection error for each feature point on each image frame;
and step S502, obtaining the maximum likelihood estimation of the pose in the sliding window by using a beam adjustment method.
7. The binocular vision odometry calculation method based on parallax constraint and two-way annular inspection according to any one of claims 1 to 4, further comprising step S600 after step S500:
based on the final poses at the times before time t, acquiring the prior term r_p at time t by the marginalization method and the Schur decomposition method, and using the prior term r_p as the prior error of the maximum likelihood estimation in step S500.
8. A binocular vision odometer computing system based on parallax constraint and bidirectional annular inspection is characterized by comprising a binocular image acquisition unit, an initial feature point extraction unit, a feature point extraction and feature association unit, an initial pose estimation unit and a maximum likelihood estimation unit;
the binocular image acquisition unit is configured to acquire the left image and the right image according to a set acquisition frequency through a binocular camera loaded on the mobile carrier to obtain the left image and the right image corresponding to the t moment and the t +1 moment respectively;
the initial feature point extraction unit is configured to perform Shi-Tomasi corner point detection on an image at the t-1 moment obtained by the binocular image acquisition unit, extract a newly added feature point set, and construct a new feature point set of the set image by combining the feature points of the set image obtained at the t-1 moment;
the feature point extraction and feature association unit is configured to acquire feature point sets corresponding to other images at the time t and the time t +1 respectively by adopting a KLT optical flow method and through parallax constraint and adaptive bidirectional annular inspection based on the new feature point set of the set image, and perform feature association;
the method specifically comprises: in the process of acquiring the pose at time t, if a feature point in the new feature point set of the set image passes the parallax constraint and the adaptive bidirectional ring check, the tracking succeeds and the maximum tracking count of the feature point is increased by 1; the initial tracking count of a newly added feature point is set to 0;
if a feature point of the set image can pass the left-right check and the parallax constraint, the feature point is retained and a feature association is established; otherwise, the feature point is deleted; specifically: the feature point of the set image is tracked by the KLT optical flow method to obtain a corresponding feature point in the other image of the pair; if the deviation between the feature point and its tracked correspondence is less than a set threshold ρ1, the next step is performed; otherwise, the feature point is deleted and the following steps are skipped; the tracked feature point is then tracked backward to obtain a feature point in the set image; if the distance between this back-tracked feature point and the original feature point is less than a set threshold δ1, the left-right check is passed;
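The parallax constraint and left-right check described above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the patent's exact parallax measure appears only as an image in the published claim, so a rectified-pair vertical-coordinate test is assumed here, and the KLT tracking itself (e.g. OpenCV's pyramidal tracker) is taken as given, with the tracked point arrays passed in already computed.

```python
import numpy as np

def stereo_ring_check(pts_a, pts_b, pts_a_back, rho1=2.0, delta1=1.0):
    """Keep points that satisfy a parallax constraint and a left-right
    (forward-backward) consistency check.

    pts_a      : (N, 2) feature points in the source image
    pts_b      : (N, 2) their forward-tracked positions in the other image
    pts_a_back : (N, 2) positions after tracking pts_b back to the source
    rho1       : parallax-constraint threshold (here: max |dy| for a
                 rectified pair -- an assumed form; the claim states only
                 "less than a set threshold rho1")
    delta1     : max forward-backward re-tracking distance in pixels
    Returns a boolean mask of points passing both tests.
    """
    parallax_ok = np.abs(pts_a[:, 1] - pts_b[:, 1]) < rho1
    back_dist = np.linalg.norm(pts_a - pts_a_back, axis=1)
    return parallax_ok & (back_dist < delta1)
```

Points failing either test would be deleted before feature association, as the claim specifies.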
the feature points of the left image at time t-1 are tracked by the KLT optical flow method to obtain feature points in the left image at time t; the feature points of the left image at time t-1 that pass the front-and-back check are screened, and feature associations between the feature points of the left image at time t-1 and the corresponding feature points of the left image at time t are established;
if a feature point of the left image at time t-1 can pass the front-and-back check, the feature point is retained and a feature association is established; otherwise, the feature point is deleted; specifically: the feature point of the left image at time t-1 is tracked by the KLT optical flow method to obtain a feature point in the left image at time t; that feature point is then tracked backward to obtain a feature point in the left image at time t-1; if the distance between the back-tracked feature point and the original feature point is less than a set threshold δ2, the check is passed;
the feature points of the left image at time t are tracked by the KLT optical flow method to obtain feature points in the right image at time t; the feature points of the left image at time t that pass the left-right check and the parallax constraint are screened, and feature associations between the feature points of the left image at time t and the corresponding feature points of the right image at time t are established;
if a feature point of the left image at time t can pass the left-right check and the parallax constraint, the feature point is retained and a feature association is established; otherwise, the feature point is deleted; the specific method is as follows: the feature point of the left image at time t is tracked by the KLT optical flow method to obtain a feature point in the right image at time t; if the deviation between the feature point and its tracked correspondence is less than the set threshold ρ1, the next step is performed; otherwise, the feature point is deleted and the following steps are skipped; the feature point of the right image at time t is then tracked backward to obtain a feature point in the left image at time t; if the distance between the back-tracked feature point and the original feature point is less than a set threshold δ3, the left-right check is passed;
the feature points of the right image at time t are tracked by the KLT optical flow method to obtain feature points in the right image at time t-1; the feature points of the right image at time t that pass the adaptive back-and-front check are screened;
if a feature point of the right image at time t can pass the adaptive back-and-front check, the feature point is retained; otherwise, the feature point is deleted; the specific method is as follows: the feature point of the right image at time t is tracked by the KLT optical flow method to obtain a feature point in the right image at time t-1; that feature point is then tracked backward to obtain a feature point in the right image at time t; if the distance between the back-tracked feature point and the original feature point is less than the adaptive threshold δ4, the adaptive back-and-front check is passed;
the adaptive threshold δ4 is computed from the preset parameters ρ and ε, the lost-point count loss, and the maximum tracking count maxtrack, wherein loss is the number of feature points lost when tracking between the left images at time t-1 and time t, and maxtrack is the maximum tracking count of the current feature point; when the pose is calculated at each moment, the bidirectional annular inspection starts from the left image at the previous moment; if instead the bidirectional annular inspection starts from the right image at the previous moment, loss is the number of feature points lost when tracking between the right images at time t-1 and time t;
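The δ4 equation itself is published only as an image in the claim, so the following is a purely hypothetical form. It merely shows how an adaptive threshold could combine the four stated quantities (ρ, ε, loss, maxtrack) so that the threshold relaxes as more points are lost:

```python
def adaptive_threshold(loss, maxtrack, rho=0.5, epsilon=1.0):
    # HYPOTHETICAL form: the granted claim gives the delta4 equation only
    # as an image, so this simply combines the stated inputs -- preset
    # parameters rho and epsilon, the lost-point count `loss`, and the
    # maximum tracking count `maxtrack` -- such that the threshold loosens
    # when more points are lost and tightens for long-tracked points.
    return epsilon + rho * loss / max(maxtrack, 1)
```

Any monotone combination of these inputs would serve the same role in the back-and-front check; the patent's actual functional form may differ.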
the initial pose estimation unit is configured to obtain an initial pose estimate of the mobile carrier by a PnP pose estimation method based on the feature association result, and to triangulate each feature point in the left and right images at time t by the binocular vision method to obtain the three-dimensional space coordinates corresponding to each feature point;
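The binocular triangulation step can be sketched for a rectified stereo pair under an assumed pinhole model with intrinsics (fx, fy, cx, cy) and baseline b, using the standard relation Z = fx·b/d with disparity d = u_l − u_r; the PnP step itself (e.g. OpenCV's solvePnP) is omitted here:

```python
import numpy as np

def triangulate_rectified(pts_l, pts_r, fx, fy, cx, cy, baseline):
    """Binocular triangulation for a rectified stereo pair: compute the
    horizontal disparity, recover depth Z = fx * baseline / d, then
    back-project through the (assumed) pinhole intrinsics."""
    d = pts_l[:, 0] - pts_r[:, 0]          # horizontal disparity in pixels
    Z = fx * baseline / d
    X = (pts_l[:, 0] - cx) * Z / fx
    Y = (pts_l[:, 1] - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)     # (N, 3) camera-frame coordinates
```

The resulting 3-D coordinates, paired with the 2-D observations, are what the subsequent maximum likelihood estimation consumes.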
and the maximum likelihood estimation unit is configured to obtain a maximum likelihood estimate of the pose of the mobile carrier by minimizing the reprojection error with a bundle adjustment method, based on the feature associations, the initial pose estimate, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left and right images at time t.
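The quantity minimized by the bundle adjustment step is the reprojection error. A sketch of the error term under an assumed pinhole model follows; minimizing it over the pose (R, t), e.g. with a nonlinear least-squares solver, yields the maximum likelihood pose estimate:

```python
import numpy as np

def reprojection_error(R, t, pts3d, pts2d, fx, fy, cx, cy):
    """Sum of squared reprojection errors for pose (R, t): transform each
    3-D point into the camera frame, project it with assumed pinhole
    intrinsics, and compare with the observed 2-D image coordinates."""
    Pc = pts3d @ R.T + t                   # world -> camera frame
    u = fx * Pc[:, 0] / Pc[:, 2] + cx
    v = fy * Pc[:, 1] / Pc[:, 2] + cy
    res = np.stack([u, v], axis=1) - pts2d
    return float(np.sum(res ** 2))
```

In a full implementation both the left and right observations at time t contribute residual terms, as the claim's input list indicates.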
9. A storage device having a plurality of programs stored therein, wherein the programs are adapted to be loaded and executed by a processor to implement the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to any one of claims 1 to 7.
10. A processing device, comprising a processor and a storage device, the processor being adapted to execute various programs and the storage device being adapted to store a plurality of programs, characterized in that the programs are adapted to be loaded and executed by the processor to implement the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to any one of claims 1 to 7.
CN201910578572.0A 2019-06-28 2019-06-28 Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection Active CN110335308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910578572.0A CN110335308B (en) 2019-06-28 2019-06-28 Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection


Publications (2)

Publication Number Publication Date
CN110335308A CN110335308A (en) 2019-10-15
CN110335308B true CN110335308B (en) 2021-07-30

Family

ID=68144566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578572.0A Active CN110335308B (en) 2019-06-28 2019-06-28 Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection

Country Status (1)

Country Link
CN (1) CN110335308B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862150B (en) * 2020-06-19 2024-06-14 杭州易现先进科技有限公司 Image tracking method, device, AR equipment and computer equipment
CN112330589A (en) * 2020-09-18 2021-02-05 北京沃东天骏信息技术有限公司 Method and device for estimating pose and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN108519102A (en) * 2018-03-26 2018-09-11 东南大学 A kind of binocular vision speedometer calculation method based on reprojection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101926563B1 (en) * 2012-01-18 2018-12-07 삼성전자주식회사 Method and apparatus for camera tracking


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Howard A. et al., "Real-time stereo visual odometry for autonomous ground vehicles", 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2008, pp. 3946-3952 *
Zhao Tong et al., "Three-stage local binocular bundle adjustment visual odometry" (三阶段局部双目光束法平差视觉里程计), Opto-Electronic Engineering (光电工程), Dec. 2018, pp. 75-85 *

Also Published As

Publication number Publication date
CN110335308A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110108258B (en) Monocular vision odometer positioning method
CN107452015B (en) Target tracking system with re-detection mechanism
WO2016035324A1 (en) Method for estimating motion, mobile agent and non-transitory computer-readable medium encoded with a computer program code for causing a processor to execute a method for estimating motion
EP3504682A1 (en) Simultaneous localization and mapping with an event camera
CN110097586B (en) Face detection tracking method and device
CN111724439A (en) Visual positioning method and device in dynamic scene
EP3654234B1 (en) Moving object detection system and method
CN101860729A (en) Target tracking method for omnidirectional vision
CN105869120A (en) Image stitching real-time performance optimization method
CN113689503B (en) Target object posture detection method, device, equipment and storage medium
WO2018152214A1 (en) Event-based feature tracking
EP3293700B1 (en) 3d reconstruction for vehicle
CN112950696A (en) Navigation map generation method and generation device and electronic equipment
CN110335308B (en) Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection
EP3216006B1 (en) An image processing apparatus and method
CN115131420A (en) Visual SLAM method and device based on key frame optimization
CN104200492A (en) Automatic detecting and tracking method for aerial video target based on trajectory constraint
CN110706253B (en) Target tracking method, system and device based on apparent feature and depth feature
El Bouazzaoui et al. Enhancing RGB-D SLAM performances considering sensor specifications for indoor localization
Chumerin et al. Ground plane estimation based on dense stereo disparity
CN115830064B (en) Weak and small target tracking method and device based on infrared pulse signals
CN113592947B (en) Method for realizing visual odometer by semi-direct method
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
CN113873144B (en) Image capturing method, image capturing apparatus, and computer-readable storage medium
CN116092035A (en) Lane line detection method, lane line detection device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant