CN110335308B - Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection - Google Patents
- Publication number
- CN110335308B (application CN201910578572.0A / CN201910578572A)
- Authority
- CN
- China
- Prior art keywords
- feature
- image
- point
- time
- feature points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Abstract
The invention belongs to the technical field of positioning, and particularly relates to a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, aiming to solve the problems of low positioning accuracy and poor real-time performance caused by the low accuracy of feature association in existing binocular vision odometers. The method comprises the following steps: acquiring images through a binocular camera mounted on a mobile carrier; extracting newly added feature points from the set image by the Shi-Tomasi corner detection method and obtaining a new feature point set; tracking the feature points by the KLT optical flow method, and establishing feature associations by combining the parallax constraint method with the adaptive bidirectional annular inspection method; based on the feature association result, obtaining an initial pose estimate by the PnP method and triangulating the feature points; and obtaining the final pose, the optimal estimate of the pose, by minimizing the reprojection error with the bundle adjustment method. The invention improves the quality and efficiency of feature tracking, and improves the positioning accuracy and real-time performance of the binocular vision odometer.
Description
Technical Field
The invention belongs to the technical field of positioning, and particularly relates to a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection.
Background
As one of the most active areas of development in intelligent transportation systems, smart vehicles have attracted great attention from research institutions and universities worldwide. Positioning is one of the key technologies for the safe movement of intelligent vehicles and plays an important role in the field of automatic driving.
The binocular vision odometer is a practical visual positioning technology that can effectively estimate the pose of a vehicle. With the development of image processing technology and computing power, the binocular vision odometer is gradually being applied to embedded systems and is becoming an important component of automatic driving. Its advantages are low cost, low energy consumption, convenient installation, high portability and strong resistance to electromagnetic interference.
The binocular vision odometer mainly comprises the steps of feature association and pose estimation. Feature association is established on the basis of feature extraction and tracking, and the pose estimate is obtained by minimizing the reprojection error over the feature association result. It is therefore important to establish efficient feature associations. The current general approach is to verify the feature associations obtained by optical flow tracking with a forward-backward check and a left-right check, but this approach still retains many outliers and low-quality feature tracks, and its accuracy and real-time performance need to be further improved.
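The forward-backward check mentioned above has a compact form: a track is kept only if tracking the point forward and then backward returns close to its starting position. The following NumPy fragment is an illustrative sketch only (not part of the patent); the threshold name `delta` and the toy coordinates are assumptions.

```python
import numpy as np

def forward_backward_check(pts_prev, pts_back, delta=1.0):
    """Keep a track only if the backward-tracked point returns close to the start.

    pts_prev : (N, 2) feature points in the previous frame
    pts_back : (N, 2) positions obtained by tracking the forward-tracked points
               back to the previous frame
    Returns a boolean mask of tracks that pass the check.
    """
    err = np.linalg.norm(pts_prev - pts_back, axis=1)
    return err < delta

# toy data: 3 tracks; the last one drifts on the return trip and is rejected
prev = np.array([[10.0, 10.0], [50.0, 20.0], [80.0, 40.0]])
back = prev.copy()
back[2] += np.array([5.0, 5.0])   # track 2 fails to return to its origin
mask = forward_backward_check(prev, back, delta=1.0)
```

The left-right check of the patent has the same structure, with the backward tracking performed between the left and right images instead of between consecutive frames.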
Disclosure of Invention
In order to solve the above problems in the prior art, that is, the problems of low positioning accuracy and poor real-time performance caused by the low accuracy of the feature association of existing binocular vision odometers, a first aspect of the present invention provides a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, the method comprising:
step S100, acquiring a left image and a right image according to a set acquisition frequency through a binocular camera mounted on a mobile carrier, obtaining the left and right images corresponding to times t-1 and t respectively;
step S200, performing Shi-Tomasi corner detection on the set image at time t-1 obtained in step S100, extracting a newly added feature point set, and constructing a new feature point set of the set image by combining it with the feature points of the set image obtained at time t-1;
step S300, based on the new feature point set of the set image, respectively acquiring the feature point sets corresponding to the other images at times t-1 and t by the KLT optical flow method with parallax constraint and adaptive bidirectional annular inspection, and performing feature association;
step S400, based on the result of the feature association in step S300, obtaining an initial pose estimate of the mobile carrier by the PnP pose estimation method, and triangulating each feature point in the left and right images at time t by the binocular vision method to obtain the three-dimensional space coordinates corresponding to each feature point;
and step S500, based on the result of the feature association in step S300, the initial pose estimate, the three-dimensional space coordinates and the two-dimensional image coordinates of each feature point in the left and right images at time t, obtaining the maximum likelihood estimate of the pose of the mobile carrier by minimizing the reprojection error with the bundle adjustment method, and obtaining the final pose.
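The triangulation of step S400 can be sketched with the standard linear (DLT) method for a rectified stereo pair; the fragment below is illustrative only, and the intrinsic matrix, baseline and test point are made-up values, not parameters from the patent.

```python
import numpy as np

def triangulate(P_l, P_r, x_l, x_r):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([
        x_l[0] * P_l[2] - P_l[0],
        x_l[1] * P_l[2] - P_l[1],
        x_r[0] * P_r[2] - P_r[0],
        x_r[1] * P_r[2] - P_r[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# rectified stereo pair: identity-pose left camera, right camera shifted by baseline b
f, b = 500.0, 0.5
K = np.array([[f, 0, 320.0], [0, f, 240.0], [0, 0, 1.0]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x_l = P_l @ np.append(X_true, 1.0); x_l = x_l[:2] / x_l[2]
x_r = P_r @ np.append(X_true, 1.0); x_r = x_r[:2] / x_r[2]
X_est = triangulate(P_l, P_r, x_l, x_r)
```

With noise-free correspondences the DLT recovers the 3-D point exactly; with tracked points it gives the initial coordinates refined later by bundle adjustment.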
In some preferred embodiments, the left and right images acquired in step S100 are rectified by OpenCV library functions.
In some preferred embodiments, the "feature points of the set image obtained at time t-1" are obtained as follows:
at time t-1, the methods of step S200 and step S300 are applied between times t-2 and t-1 to track the feature points of the acquired binocular images.
In some preferred embodiments, the method for "extracting a new feature point set" in step S200 is as follows:
the feature points of the set image at time t-1 are extracted by the Shi-Tomasi corner detection method, and the newly detected points falling within the set-range neighborhood of the original feature points of the set image are deleted to obtain the newly added feature points; the original features of the set image are the feature points of the set image obtained at time t-1.
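The neighborhood deletion above is a simple distance-based suppression: newly detected corners too close to an already-tracked point are discarded. The following sketch is illustrative only; the radius value is an assumption, not the patent's parameter.

```python
import numpy as np

def suppress_near_existing(new_pts, old_pts, radius=10.0):
    """Drop newly detected corners lying within `radius` pixels of a tracked point."""
    if len(old_pts) == 0:
        return new_pts
    # pairwise distances between each new point and each old point
    d = np.linalg.norm(new_pts[:, None, :] - old_pts[None, :, :], axis=2)
    keep = d.min(axis=1) >= radius
    return new_pts[keep]

old = np.array([[100.0, 100.0], [200.0, 50.0]])
new = np.array([[103.0, 104.0],   # within 10 px of (100,100) -> dropped
                [150.0, 150.0],   # far from both tracked points -> kept
                [205.0, 52.0]])   # within 10 px of (200,50) -> dropped
kept = suppress_near_existing(new, old, radius=10.0)
```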
In some preferred embodiments, the set image is the left image acquired at time t-1.
In some preferred embodiments, step S300 "respectively acquires the feature point sets corresponding to the other images at times t-1 and t, and performs feature association" as follows:
the feature points p_l^{t-1} of the left image I_l^{t-1} at time t-1 are tracked by the KLT optical flow method to obtain the feature points p_l^t of the left image I_l^t at time t; the feature points of the left image at time t-1 that pass the forward-backward inspection are screened out, and the feature association between p_l^{t-1} and p_l^t is established;
the feature points p_l^t of the left image I_l^t at time t are tracked by the KLT optical flow method to obtain the feature points p_r^t of the right image I_r^t at time t; the feature points that pass the left-right inspection and the parallax constraint are screened out, and the feature association between p_l^t and p_r^t is established;
the feature points p_r^t of the right image I_r^t at time t are tracked by the KLT optical flow method to obtain the feature points p_r^{t-1} of the right image I_r^{t-1} at time t-1; the feature points of the right image at time t that pass the adaptive backward-forward inspection are screened out.
In some preferred embodiments, in step S300, when the feature point sets corresponding to the other images at times t-1 and t are respectively acquired, if t-1 is the initial time, the feature points p_l^{t-1} of the left image I_l^{t-1} are tracked by the KLT optical flow method to obtain the feature points p_r^{t-1} of the right image I_r^{t-1}, and the feature points that pass the left-right inspection and the parallax constraint are obtained by screening.
In some preferred embodiments, the "screening to obtain the feature points that pass the left-right inspection and the parallax constraint" is performed as follows:
the feature points for which the difference between the y coordinates of p_l^{t-1} and p_r^{t-1} is less than a fixed threshold ρ1 are obtained, and the feature points p_r^{t-1} are tracked backward to obtain the feature points p'_l^{t-1} in the left image; if the distance between p_l^{t-1} and p'_l^{t-1} is less than a set threshold δ1, the left-right inspection is passed.
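The combined test above can be sketched as one filter: the parallax constraint compares y coordinates (on rectified images a true stereo match lies on the same row), and the left-right check compares the backward-tracked position with the original point. This is an illustrative sketch; the threshold values and test coordinates are assumptions.

```python
import numpy as np

def left_right_check(p_left, p_right, p_back, rho1=3.0, delta1=1.0):
    """Return a mask of stereo matches passing both tests.

    p_left  : (N, 2) points in the left image
    p_right : (N, 2) their KLT matches in the right image
    p_back  : (N, 2) positions obtained by tracking p_right back to the left image
    """
    parallax_ok = np.abs(p_left[:, 1] - p_right[:, 1]) < rho1   # near-equal rows
    ring_ok = np.linalg.norm(p_left - p_back, axis=1) < delta1  # returns to origin
    return parallax_ok & ring_ok

pl = np.array([[120.0, 80.0], [300.0, 200.0]])
pr = np.array([[100.0, 80.5], [280.0, 230.0]])   # 2nd match violates the y constraint
pb = np.array([[120.2, 80.1], [300.0, 200.0]])
mask = left_right_check(pl, pr, pb)
```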
In some preferred embodiments, step S301 "screens the feature points of the left image I_l^{t-1} at time t-1 that pass the forward-backward inspection" as follows:
the feature points p_l^t are tracked backward to obtain the feature points p'_l^{t-1} in the left image at time t-1; if the distance between p_l^{t-1} and p'_l^{t-1} is less than a set threshold δ2, the feature points pass the forward-backward inspection.
In some preferred embodiments, step S302 "screens the feature points of the left image I_l^t at time t that pass the left-right inspection and the parallax constraint" as follows:
if the difference between the y coordinates of the feature points p_l^t and p_r^t is less than a set threshold ρ1, the feature point is retained and the following scheme is executed; otherwise the feature point is deleted;
the feature points p_r^t are tracked backward to obtain the feature points p'_l^t in the left image at time t; if the distance between p_l^t and p'_l^t is less than a set threshold δ3, the feature points pass the left-right inspection.
In some preferred embodiments, step S303 "screens the feature points of the right image I_r^t at time t that pass the adaptive backward-forward inspection" as follows:
the feature points p_r^{t-1} are tracked backward to obtain the feature points p'_r^t in the right image at time t; if the distance between p_r^t and p'_r^t is less than an adaptive threshold δ4, the feature points pass the adaptive backward-forward inspection;
wherein the adaptive threshold δ4 is calculated according to equation (1) of the detailed description, in which ρ and ε are parameters, loss is the number of feature points lost when the feature points at time t are tracked back to the feature points at time t-1, and maxtrack is the maximum tracking count of the current feature points.
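Equation (1) itself is rendered as an image in the source and is not recoverable from the text; the sketch below therefore uses an assumed functional form, encoding only what the text states: δ4 depends on ρ, ε, the lost-point count and the maximum tracking count, and a larger loss plausibly relaxes the threshold. Both the formula and the default values here are hypothetical, not the patented equation.

```python
def adaptive_threshold(loss, maxtrack, rho=0.23, eps=5.0):
    """Hypothetical form of the adaptive threshold delta4.

    ASSUMPTION: the patent's equation (1) is not reproduced in the text; this
    sketch only illustrates the stated dependence on rho, eps, loss and maxtrack
    (more lost points -> a more tolerant backward-forward check).
    """
    maxtrack = max(maxtrack, 1)          # guard against division by zero
    return eps + rho * loss / maxtrack   # assumed form, not the patented formula

tight = adaptive_threshold(0, 10)    # nothing lost -> baseline tolerance eps
loose = adaptive_threshold(10, 10)   # many points lost -> relaxed tolerance
```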
In some preferred embodiments, in step S500, "obtaining a maximum likelihood estimate of the pose by minimizing the reprojection error" includes:
constructing a space reprojection error and a time reprojection error for each feature point on each image frame according to the feature association result;
a maximum likelihood estimate of the poses in the sliding window is obtained using the bundle adjustment method.
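The quantity minimized by bundle adjustment is the reprojection residual of each associated feature: the difference between the observed pixel and the projection of its triangulated 3-D point under the current pose. The fragment below computes one such residual; the intrinsic matrix, pose and point are illustrative values, not parameters of the patent.

```python
import numpy as np

def reprojection_error(K, R, t, X, x_obs):
    """Residual between observed pixel x_obs and the projection of 3-D point X
    under pose (R, t). Bundle adjustment minimizes the sum of squares of these
    residuals (spatial and temporal) over the poses and points in the window."""
    x = K @ (R @ X + t)          # project into the camera
    x = x[:2] / x[2]             # perspective division to pixel coordinates
    return x - x_obs

K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([0.0, 0.0, 2.0])            # point on the optical axis
err = reprojection_error(K, R, t, X, np.array([320.0, 240.0]))
```

A point on the optical axis projects to the principal point, so this residual is zero; perturbing the pose or the point makes it nonzero, and the optimizer drives the stacked residuals toward zero.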
In some preferred embodiments, step S600 is further included after step S500:
based on the final poses of the moments before time t, the prior term r_p at time t is obtained by the marginalization method and the Schur decomposition method, and the prior term r_p is used as the prior error of the maximum likelihood estimation in step S500.
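Marginalizing old states via the Schur complement turns them into a prior on the remaining states without changing the solution for those states. The sketch below applies the Schur complement to a tiny normal-equations system; it is illustrative only and does not reproduce the patent's prior term r_p.

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m variables of the normal equations H x = b via the
    Schur complement, producing a prior (H_p, b_p) on the remaining variables."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    Hmm_inv = np.linalg.inv(Hmm)
    H_p = Hrr - Hrm @ Hmm_inv @ Hmr
    b_p = b[m:] - Hrm @ Hmm_inv @ b[:m]
    return H_p, b_p

# small SPD system: marginalizing x0 must not change the solution for x1
H = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_full = np.linalg.solve(H, b)
H_p, b_p = marginalize(H, b, 1)
x1 = float(b_p[0] / H_p[0, 0])
```

In a sliding-window odometer the marginalized block corresponds to the oldest pose and its landmarks, and (H_p, b_p) is carried forward as the prior factor of the next optimization.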
The invention also provides a binocular vision odometer computing system based on parallax constraint and bidirectional annular inspection, which comprises a binocular image acquisition unit, an initial feature point extraction unit, a feature point extraction and feature association unit, an initial pose estimation unit and a maximum likelihood estimation unit;
the binocular image acquisition unit is configured to acquire a left image and a right image according to a set acquisition frequency through a binocular camera loaded on the mobile carrier to obtain the left image and the right image respectively corresponding to t-1 and t moments;
the initial feature point extraction unit is configured to perform Shi-Tomasi corner point detection on the set image at the t-1 moment obtained by the binocular image acquisition unit, extract a newly added feature point set, and construct a new feature point set of the set image by combining the feature points of the set image obtained at the t-1 moment;
the feature point extraction and feature association unit is configured to, based on the new feature point set of the set image, respectively acquire the feature point sets corresponding to the other images at times t-1 and t by the KLT optical flow method with parallax constraint and adaptive bidirectional annular inspection, and perform feature association;
the initial pose estimation unit is configured to obtain the initial pose estimate of the mobile carrier by the PnP pose estimation method based on the feature association result, and to triangulate each feature point in the left and right images at time t by the binocular vision method to obtain the three-dimensional space coordinates corresponding to each feature point;
and the maximum likelihood estimation unit is configured to obtain the maximum likelihood estimate of the pose of the mobile carrier by minimizing the reprojection error with the bundle adjustment method, based on the feature association, the initial pose estimate, the three-dimensional space coordinates and the two-dimensional image coordinates of each feature point in the left and right images at time t, and to obtain the final pose.
In a third aspect of the present invention, a storage device is provided in which a plurality of programs are stored, wherein the programs are adapted to be loaded and executed by a processor to implement the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection described above.
In a fourth aspect of the invention, a processing arrangement comprises a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to implement the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection described above.
The invention has the following beneficial effects:
The invention combines the parallax constraint, which can effectively purify the feature points, avoids low-quality feature point tracking, and improves the quality and efficiency of feature tracking. The invention further proposes the adaptive bidirectional annular inspection, which removes additional outliers, improves the accuracy of feature association, and can adaptively adjust the quantity and quality of the feature points according to the variation of the feature points under different motions, thereby improving the positioning accuracy and real-time performance of the binocular vision odometer.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to an embodiment of the invention;
FIG. 2 is a schematic view of the parallax constraint of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the adaptive bidirectional annular inspection of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention discloses a binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, which specifically comprises the following steps of:
step S100, acquiring a left image and a right image according to a set acquisition frequency through a binocular camera mounted on a mobile carrier, obtaining the left and right images corresponding to times t-1 and t respectively;
step S200, performing Shi-Tomasi corner detection on the set image at time t-1 obtained in step S100, extracting a newly added feature point set, and constructing a new feature point set of the set image by combining it with the feature points of the set image obtained at time t-1;
step S300, based on the new feature point set of the set image, respectively acquiring the feature point sets corresponding to the other images at times t-1 and t by the KLT optical flow method with parallax constraint and adaptive bidirectional annular inspection, and performing feature association;
step S400, based on the result of the feature association in step S300, obtaining an initial pose estimate of the mobile carrier by the PnP pose estimation method, and triangulating each feature point in the left and right images at time t by the binocular vision method to obtain the three-dimensional space coordinates corresponding to each feature point;
and step S500, based on the result of the feature association in step S300, the initial pose estimate, the three-dimensional space coordinates and the two-dimensional image coordinates of each feature point in the left and right images at time t, obtaining the maximum likelihood estimate of the pose of the mobile carrier by minimizing the reprojection error with the bundle adjustment method, and obtaining the final pose. In some preferred embodiments, step S600 is further included after step S500:
based on the final poses of the moments before time t, the prior term r_p at time t is obtained by the marginalization method and the Schur decomposition method, and the prior term r_p is used as the prior error of the maximum likelihood estimation in step S500.
In order to more clearly describe the binocular vision odometer calculation method based on parallax constraint and bidirectional circular inspection, the following describes in detail the steps of the binocular vision odometer calculation method based on parallax constraint and bidirectional circular inspection according to an embodiment of the method in accordance with the present invention, with reference to the accompanying drawings.
The binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection of one embodiment of the invention comprises the following steps S100-S600.
Step S100, acquiring a left image and a right image according to a set acquisition frequency through a binocular camera loaded on a mobile carrier to obtain the left image and the right image respectively corresponding to t-1 and t moments;
in this embodiment, before image acquisition, the assembled binocular camera needs to be calibrated; the calibrated parameters include the intrinsic parameters of each camera and the extrinsic parameters between the two cameras of the binocular camera;
and the acquired left and right images are rectified by OpenCV library functions.
Step S200, performing Shi-Tomasi corner detection on the set image at time t-1 obtained in step S100, extracting a newly added feature point set, and constructing a new feature point set of the set image by combining it with the feature points of the set image obtained at time t-1;
in this embodiment, the number of preset total feature points may be set to 200, but may also be set to other values, such as 100, 150, and the like, in some other embodiments.
The feature points of the set image at time t-1 in this step are acquired as follows:
at time t-1, the methods of step S200 and step S300 are applied between times t-2 and t-1 to track the feature points of the acquired binocular images.
In this step, a newly added feature point set is extracted, and the method includes:
The feature points of the set image at time t-1 are extracted by the Shi-Tomasi corner detection method, and the newly detected points falling within the set-range neighborhood of the original feature points of the set image are deleted to obtain the newly added feature points. The original features of the set image are the feature points of the set image obtained at time t-1. The set-range neighborhood of the original feature points is defined as follows: based on each original feature point, a circular area with a set radius is obtained; the union of all these circular areas is the set-range neighborhood of the original feature points.
In this embodiment, the image for detecting the Shi-Tomasi corner is the left image acquired at time t-1, but of course, in other embodiments, the image may also be the right image acquired at time t.
Shi-Tomasi corner detection determines feature points by comparing the smaller eigenvalue of the local gradient matrix; when feature points are matched, an affine transformation is introduced so that the feature points are matched more accurately between frames and bad feature points are eliminated. The feature extraction method of this step is well documented in the art and is not described in detail here.
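The Shi-Tomasi criterion above can be sketched directly: build the local structure matrix from image gradients and take its smaller eigenvalue as the corner response. This NumPy fragment is an illustrative sketch, not the patent's implementation (OpenCV's `goodFeaturesToTrack` performs the same test with non-maximum suppression); the synthetic image and window size are assumptions.

```python
import numpy as np

def shi_tomasi_response(I, x, y, win=3):
    """Shi-Tomasi corner response: the smaller eigenvalue of the local gradient
    (structure) matrix; a point is a good feature when this value is large."""
    Iy, Ix = np.gradient(I.astype(float))       # gradients along rows and columns
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    M = np.array([[np.sum(Ix[sl] ** 2),        np.sum(Ix[sl] * Iy[sl])],
                  [np.sum(Ix[sl] * Iy[sl]),    np.sum(Iy[sl] ** 2)]])
    return np.linalg.eigvalsh(M)[0]             # smaller eigenvalue

# a corner (two edges meeting) scores high; a flat region scores zero
I = np.zeros((20, 20))
I[10:, 10:] = 10.0                              # bright quadrant -> corner at (10, 10)
corner_score = shi_tomasi_response(I, 10, 10)
flat_score = shi_tomasi_response(I, 4, 4)
```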
Step S300: based on the new feature point set of the set image, the feature point sets corresponding to the other images at times t-1 and t are respectively acquired by the KLT optical flow method with parallax constraint and adaptive bidirectional annular inspection, and feature association is performed.
This embodiment further comprises a step of counting the tracking times of the feature points: in the process of acquiring the pose at time t, if a feature point in the new feature point set of the set image passes the parallax constraint and the adaptive bidirectional annular inspection, the tracking is successful, and the maximum tracking count of the feature point is increased by 1, thereby realizing the statistics of the tracking count. For newly added feature points, the initial tracking count is 0.
The principle of the optical flow method is as follows: each pixel in the image is assigned a velocity vector, forming a motion vector field. At a given moment, the points on the image correspond one-to-one to points on the three-dimensional object, and this correspondence can be calculated through projection.
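The Lucas-Kanade step underlying KLT tracking solves a small 2x2 system built from spatial and temporal gradients inside a window. The fragment below is a minimal single-iteration sketch on a synthetic image (no pyramid, no iteration, unlike a real KLT tracker); the window size and test image are assumptions.

```python
import numpy as np

def lk_step(I0, I1, p, win=7):
    """One Lucas-Kanade step: solve the 2x2 normal equations for the flow at p=(x, y)."""
    h = win // 2
    y, x = int(p[1]), int(p[0])
    Iy, Ix = np.gradient(I0.astype(float))      # spatial gradients of the first frame
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    Ixw, Iyw = Ix[sl], Iy[sl]
    It = I1[sl].astype(float) - I0[sl].astype(float)   # temporal gradient
    A = np.array([[np.sum(Ixw * Ixw), np.sum(Ixw * Iyw)],
                  [np.sum(Ixw * Iyw), np.sum(Iyw * Iyw)]])
    rhs = -np.array([np.sum(Ixw * It), np.sum(Iyw * It)])
    return np.linalg.solve(A, rhs)              # estimated (dx, dy)

# synthetic quadratic-intensity image whose content moves 1 px to the right
xx, yy = np.meshgrid(np.arange(40), np.arange(40))
I0 = (xx ** 2 + yy ** 2).astype(float)
I1 = ((xx - 1) ** 2 + yy ** 2).astype(float)    # I1(x, y) = I0(x - 1, y)
flow = lk_step(I0, I1, (20, 20))
```

The estimate is close to (1, 0) but not exact, since Lucas-Kanade linearizes the brightness-constancy equation; real trackers iterate and use pyramids for larger motions.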
1. At the initial moment of image acquisition by the binocular camera, only one pair of images is acquired, that is, t is 1 and time t-1 does not exist. At this moment, the feature points of the left image I_l^1 are screened by the left-right inspection and the parallax constraint, and the feature points of the left and right images are matched.
The specific scheme in this embodiment is as follows: if a feature point of I_l^1 can pass the left-right inspection and the parallax constraint, the feature point is retained and the feature association is established; otherwise the feature point is deleted. The specific method is as follows:
Step 3001: the feature points p_l^1 of I_l^1 are tracked by the KLT optical flow method to obtain the feature points p_r^1 of the right image I_r^1.
Step 3002: if the difference between the y coordinates of the feature points p_l^1 and p_r^1 is less than a set threshold ρ1, the feature point is retained; otherwise the feature point is deleted and the following steps are skipped.
2. From the second acquisition moment onward, the binocular camera obtains binocular images at both the previous and the current moments; after the binocular images at time t are obtained, feature association is performed through the following steps S301 to S303.
Step S301: the feature points p_l^{t-1} of the left image I_l^{t-1} at time t-1 are tracked by the KLT optical flow method to obtain the feature points p_l^t of the left image I_l^t at time t; the feature points of the left image I_l^{t-1} at time t-1 that pass the forward-backward inspection are screened out, and the feature association between p_l^{t-1} and p_l^t is established.
The specific scheme of this step in this embodiment is as follows: if a feature point of I_l^{t-1} can pass the forward-backward inspection, the feature point is retained and the feature association is established; otherwise the feature point is deleted. The specific method is as follows:
Step 3011: the feature points p_l^{t-1} of I_l^{t-1} are tracked by the KLT optical flow method to obtain the feature points p_l^t of I_l^t.
Step 3012: the feature points p_l^t are tracked backward to obtain the feature points p'_l^{t-1} in I_l^{t-1}; if the distance between p_l^{t-1} and p'_l^{t-1} is less than the set threshold δ2, the feature point passes the forward-backward inspection.
Step S302: the feature points p_l^t of the left image I_l^t at time t are tracked by the KLT optical flow method to obtain the feature points p_r^t of the right image I_r^t at time t; the feature points of the left image I_l^t at time t that pass the left-right inspection and the parallax constraint are screened out, and the feature association between p_l^t and p_r^t is established.
The specific scheme of this step in this embodiment is as follows: if a feature point of I_l^t can pass the left-right inspection and the parallax constraint, the feature point is retained and the feature association is established; otherwise the feature point is deleted. The specific method is as follows:
Step 3021: the feature points p_l^t of I_l^t are tracked by the KLT optical flow method to obtain the feature points p_r^t of I_r^t.
Step 3022: if the difference between the y coordinates of the feature points p_l^t and p_r^t is less than the set threshold ρ1, the feature point is retained; otherwise the feature point is deleted and the following steps are skipped.
FIG. 2 is a schematic view of the parallax constraint of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, showing the feature points p_l^t of I_l^t tracked by the KLT optical flow method to obtain the feature points p_r^t of I_r^t. The dotted lines in the figure show the allowable range [y-ρ1, y+ρ1] of the y coordinate, and the solid line to the right of the feature point indicates that the y coordinates of the feature points p_l^t and p_r^t are equal.
Step S303, the feature points of the right image at time t are tracked by the KLT optical flow method to obtain the corresponding feature points in the right image at time t-1; the feature points of the right image at time t are screened by the adaptive backward-forward inspection.
The specific scheme of this step in this embodiment is: if a feature point can pass the adaptive backward-forward inspection, the feature point is retained; otherwise, the feature point is deleted. The specific method is as follows:
step 3031, the feature points of the right image at time t are tracked by the KLT optical flow method to obtain the corresponding feature points in the right image at time t-1;
Step 3032, the feature points obtained in the right image at time t-1 are tracked backward to obtain feature points in the left image at time t-1;
Step 3033, if the distance between a feature point of the left image at time t-1 and the corresponding back-tracked feature point is less than the adaptive threshold δ4, the adaptive backward-forward inspection is passed.
The adaptive threshold δ4 is determined by equation (1), where ρ and ε are preset parameters (ρ = 0.23 and ε = 5 in this embodiment), loss is the number of feature points lost in tracking the feature points of the left image at time t-1 to the left image at time t (i.e., the preset total number of feature points, 200 in this embodiment, minus the number of feature points successfully tracked), and maxtrack is the maximum tracking count of the current feature points. In this embodiment, when the pose is calculated at each time, the bidirectional annular inspection starts from the left image at the previous time. In other embodiments, if the bidirectional annular inspection starts from the right image at time t-1, then loss is the number of feature points lost in tracking the feature points of the right image at time t-1 to the right image at time t, in which case every bidirectional annular inspection starts from the right image at the previous time.
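The acceptance test at the end of the ring can be sketched as below. This is an illustrative NumPy sketch, not the patent's code: a feature survives the bidirectional annular inspection L(t-1) → L(t) → R(t) → R(t-1) → L(t-1) if the point obtained by tracking all the way around lands within δ4 pixels of where it started. The exact formula of equation (1) for δ4 is not reproduced on this page, so δ4 is passed in as a parameter here, and the numeric values are made up.

```python
import numpy as np

def ring_check(p_start, p_back, delta4):
    """Adaptive backward-forward inspection at the close of the ring.

    p_start: the original feature point in the left image at time t-1.
    p_back:  the point obtained after tracking around the ring and back.
    delta4:  adaptive threshold (computed per the patent's equation (1),
             which is not reproduced here).
    Returns True when the loop-closure error is below delta4.
    """
    p_start = np.asarray(p_start, dtype=float)
    p_back = np.asarray(p_back, dtype=float)
    return float(np.linalg.norm(p_start - p_back)) < delta4

print(ring_check((120.0, 64.0), (120.6, 63.7), delta4=1.5))  # True
print(ring_check((120.0, 64.0), (125.0, 60.0), delta4=1.5))  # False
```

A larger δ4 tolerates more drift around the ring; the patent's adaptive rule ties it to the current tracking loss and maximum tracking count.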
FIG. 3 is a schematic diagram of the adaptive bidirectional annular inspection of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, showing the bidirectional annular inspection performed with the KLT optical flow method. A solid arrow between two images represents the process of tracking feature points into the image the arrow points to with the KLT optical flow method, and a dotted arrow represents the process of backward tracking to obtain feature points in the image the arrow points to. The solid arrow and the dotted arrow adjoining the left image at time t-1 respectively indicate how its corresponding feature points are obtained through tracking and backward tracking.
Step S400, based on the feature association, an initial pose estimate of the moving carrier is obtained with a PNP pose estimation method, and each feature point is triangulated by the binocular vision method to obtain the three-dimensional space coordinates corresponding to each feature point;
the initial pose estimate is obtained with the PNP method from the two-dimensional feature points of the current frame and their corresponding three-dimensional space coordinates. The feature points newly extracted in the current frame are triangulated by the binocular vision method to obtain the corresponding three-dimensional space coordinates, which are used for subsequent pose calculation.
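For a rectified stereo pair, the triangulation mentioned above reduces to the standard relations Z = f·B/d with disparity d = u_left − u_right. The sketch below is illustrative, assuming a rectified pair with shared intrinsics; the camera parameters are made up for the example, not taken from the patent.

```python
import numpy as np

def triangulate_rectified(pt_left, pt_right, fx, cx, cy, baseline):
    """Triangulate one feature from a rectified stereo pair.

    Uses Z = fx * B / d, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fx,
    with disparity d = u_left - u_right. Returns the 3-D point in the
    left-camera frame, or None when the disparity is non-positive.
    """
    uL, vL = pt_left
    uR, _ = pt_right
    d = uL - uR                      # disparity (positive for valid depth)
    if d <= 0:
        return None                  # point at infinity or a bad match
    Z = fx * baseline / d
    X = (uL - cx) * Z / fx
    Y = (vL - cy) * Z / fx
    return np.array([X, Y, Z])

# Example: fx = 500 px, baseline = 0.12 m, disparity = 10 px -> depth 6 m.
P = triangulate_rectified((330.0, 240.0), (320.0, 240.0),
                          fx=500.0, cx=320.0, cy=240.0, baseline=0.12)
print(P)  # [0.12 0.   6.  ]
```

The resulting 3-D coordinates, paired with the 2-D observations of the current frame, are exactly the correspondences a PnP solver consumes for the initial pose estimate.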
Step S500, based on the feature association, the initial pose estimate, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left and right images at time t, the maximum likelihood estimate of the pose of the moving carrier is obtained by minimizing the reprojection error with the beam adjustment method, giving the final pose.
The pose directly acquired in this embodiment of the invention is the pose of the left camera of the binocular camera; the pose of the moving carrier is obtained through the pose mapping relationship between the left camera and the moving carrier. In other embodiments, if the bidirectional annular inspection starts from the right image at time t-1, the directly acquired pose is that of the right camera of the binocular camera, and the pose of the moving carrier is likewise obtained through the pose mapping relationship between the right camera and the moving carrier.
According to the feature association result, a reprojection error is constructed for each feature point on each image frame. The reprojection error is obtained by projecting the feature point from the frame where it was first observed into the subsequent image frames. Assuming that the first image frame in which feature point l is observed is frame i, its reprojection error on image frame t is as shown in equation (2),
where z denotes the observation of feature point l on image frame t; h(·) is the observation model of feature point l in image frame t, and χ denotes the pose; πc is the pinhole model, which projects features from the camera coordinate system onto the image coordinate system; T is the homogeneous matrix of the camera pose, with Ti the pose corresponding to image frame i and Tt the pose corresponding to image frame t; λl denotes the depth of feature point l.
For the binocular vision odometer, the above method is used to construct the spatial reprojection error and the temporal reprojection error simultaneously.
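As an illustration of the reprojection-error construction described above, the sketch below projects a landmark through a frame pose with a pinhole model πc and subtracts the projection from the observation. It is a simplification of the patent's observation model (which parameterises the landmark by its depth λl in the first observing frame i and chains Ti and Tt); the pose and intrinsics used are made up for the example.

```python
import numpy as np

def pinhole_project(P_cam, fx, fy, cx, cy):
    """pi_c: project a 3-D point in camera coordinates onto the image plane."""
    X, Y, Z = P_cam
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def reprojection_error(z_t, P_world, T_t, fx, fy, cx, cy):
    """Residual between the observation z_t of a landmark on frame t and
    the projection of its 3-D position through the frame-t pose.
    T_t is a 4x4 camera-from-world transform (illustrative simplification)."""
    P_h = np.append(P_world, 1.0)        # homogeneous coordinates
    P_cam = (T_t @ P_h)[:3]              # transform into the camera frame
    return z_t - pinhole_project(P_cam, fx, fy, cx, cy)

T = np.eye(4)                            # identity pose, for illustration only
err = reprojection_error(np.array([420.0, 240.0]),
                         np.array([1.0, 0.0, 5.0]), T,
                         fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(err)  # [0. 0.]
```

Spatial terms compare left and right observations at the same time, temporal terms compare observations of the same landmark across frames; both take this residual form.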
With the initial pose estimate as the starting value, the maximum likelihood estimate of the poses within the sliding window is obtained with the beam adjustment method, as shown in equation (3); the equation can be solved by the Gauss-Newton method.
In equation (3), S is the set of image measurements (i.e., the set of two-dimensional image coordinates of the feature points), the weighting matrix is the covariance matrix of the measurements, χ* is the optimal pose (i.e., the final pose obtained by the solution), χ is the pose to be optimized, n is the time sequence number, πc is the pinhole model, and rp is the prior term obtained in step S600.
The prior term rp is added to the maximum likelihood estimation in this embodiment; of course, in some other embodiments, the prior term rp may be removed from equation (3).
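Equation (3) is a weighted nonlinear least-squares problem; each Gauss-Newton iteration solves the normal equations built from the residuals and their Jacobian. The sketch below shows a single such step on a toy *linear* problem (made up for the example), where one step already reaches the optimum; in the real beam adjustment, J is the Jacobian of the reprojection residuals with respect to the poses and the step is iterated.

```python
import numpy as np

def gauss_newton_step(J, r, W):
    """One Gauss-Newton update for min r^T W r:
    solve (J^T W J) dx = J^T W r and return dx."""
    H = J.T @ W @ J                  # Gauss-Newton approximation of the Hessian
    b = J.T @ W @ r
    return np.linalg.solve(H, b)

# Toy problem: residual r(x) = z - J x is linear in x, so a single
# Gauss-Newton step from x = 0 recovers the optimum exactly.
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = J @ np.array([0.5, -0.25])       # synthetic, noise-free measurements
x = np.zeros(2)
x = x + gauss_newton_step(J, z - J @ x, np.eye(3))
print(x)  # [ 0.5  -0.25]
```

The identity weight W plays the role of the inverse measurement covariance in equation (3); the prior term rp would contribute one more weighted residual block to the same normal equations.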
In order to improve the timeliness of the binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection, step S600 is added in some embodiments.
Step S600, based on the final poses at the times before time t, the prior term rp at time t is acquired through the marginalization method and the Schur complement, and the prior term rp is used as the prior error of the maximum likelihood estimation in step S500.
As the system states accumulate over time, the marginalization method is used to reduce the number of system states (final poses), thereby reducing the computational complexity so that the binocular vision odometer can run in real time.
The marginalization method converts part of the earlier system states into the prior term rp through the Schur complement and removes them from the sliding window, providing prior information for the states remaining in the sliding window.
The poses that remain after marginalization and the pose of the new image frame form a sliding window, and the beam adjustment method is applied cyclically to obtain the maximum likelihood estimate of the poses.
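The marginalization step above can be sketched on the linearised system. This is an illustrative NumPy sketch (the matrices are random, made up for the example): eliminating the first m states of the Gauss-Newton system H x = b by the Schur complement yields the prior that the remaining states inherit, and the marginal solution is preserved.

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalise the first m states of H x = b via the Schur complement:
        H_p = H_rr - H_rm H_mm^{-1} H_mr
        b_p = b_r  - H_rm H_mm^{-1} b_m
    Returns the prior system (H_p, b_p) on the remaining states."""
    H_mm, H_mr = H[:m, :m], H[:m, m:]
    H_rm, H_rr = H[m:, :m], H[m:, m:]
    H_mm_inv = np.linalg.inv(H_mm)
    return H_rr - H_rm @ H_mm_inv @ H_mr, b[m:] - H_rm @ H_mm_inv @ b[:m]

# Sanity check: solving the reduced system gives the same values for the
# remaining states as solving the full system.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5 * np.eye(5)          # symmetric positive definite, like J^T W J
b = rng.standard_normal(5)
H_p, b_p = marginalize(H, b, 2)
print(np.allclose(np.linalg.solve(H, b)[2:], np.linalg.solve(H_p, b_p)))  # True
```

In the odometer, (H_p, b_p) is what enters equation (3) as the prior term rp once the oldest poses leave the sliding window.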
In this step, reference may be made to Leutenegger S, Lynen S, Bosse M, et al., "Keyframe-based visual-inertial odometry using nonlinear optimization," The International Journal of Robotics Research, 2015, 34(3): 314-334.
In the above description of the technical solution, step S600 is placed after step S500 only for clarity of description, not to limit the order of the steps; in some embodiments, step S600 may be performed before step S500.
The binocular vision odometer computing system based on parallax constraint and bidirectional annular inspection comprises a binocular image acquisition unit, an initial feature point extraction unit, a feature point extraction and feature association unit, an initial pose estimation unit and a maximum likelihood estimation unit;
the binocular image acquisition unit is configured to acquire the left image and the right image at a set acquisition frequency through a binocular camera mounted on the moving carrier, obtaining the left image and the right image corresponding to time t-1 and time t respectively;
the initial feature point extraction unit is configured to perform Shi-Tomasi corner point detection on the set image at the t-1 moment obtained by the binocular image acquisition unit, extract a newly added feature point set, and construct a new feature point set of the set image by combining the feature points of the set image obtained at the t-1 moment;
the feature point extraction and feature association unit is configured to acquire, based on the new feature point set of the set image, the feature point sets corresponding to the other images at time t-1 and time t respectively, using the KLT optical flow method together with the parallax constraint and the adaptive bidirectional annular inspection, and to perform feature association;
the initial pose estimation unit is configured to obtain initial pose estimation of the moving carrier by adopting a PNP pose estimation method based on the feature association result, and triangularize each feature point in the left image and the right image at the time t by a binocular vision method to obtain a three-dimensional space coordinate corresponding to each feature point;
and the maximum likelihood estimation unit is configured to obtain maximum likelihood estimation of the pose of the mobile carrier by minimizing a reprojection error by adopting a beam adjustment method based on the feature association, the initial pose estimation, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left image and the right image at the time t, so as to obtain a final pose.
It should be noted that, the binocular vision odometer computing system based on the parallax constraint and the two-way circular inspection provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiments of the present invention are further decomposed or combined, for example, the modules in the above embodiments may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
A storage device according to a third embodiment of the present invention stores therein a plurality of programs adapted to be loaded and executed by a processor to implement the binocular visual odometer calculating method based on parallax constraint and two-way circular inspection described above.
A processing apparatus according to a fourth embodiment of the present invention includes a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is adapted to be loaded and executed by a processor to implement the binocular vision odometry calculation method based on parallax constraint and two-way circular inspection described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art would appreciate that the various illustrative modules, method steps, and modules described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules, method steps may be located in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
Claims (10)
1. A binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection is characterized in that the pose calculation method comprises the following steps:
step S100, acquiring a left image and a right image according to a set acquisition frequency through a binocular camera loaded on a mobile carrier to obtain the left image and the right image respectively corresponding to t-1 and t moments;
step S200, carrying out Shi-Tomasi corner point detection on the set image at the t-1 moment obtained in the step S100, extracting a newly added feature point set, and constructing a new feature point set of the set image by combining the feature points of the set image obtained at the t-1 moment;
step S300, based on the new feature point set of the set image, adopting a KLT optical flow method, respectively obtaining feature point sets corresponding to other images at t-1 and t moments through parallax constraint and self-adaptive bidirectional annular inspection, and performing feature association;
the method specifically comprises the following steps: in the process of acquiring the pose at the time t, setting the feature points in a new feature point set of the image to pass parallax constraint and self-adaptive bidirectional annular inspection, so that the tracking is successful, adding 1 to the maximum tracking frequency of the feature points, and setting the initial tracking frequency of the newly added feature points to be 0;
if a feature point can pass the left-right inspection and the parallax constraint, the feature point is retained and the feature association is established; otherwise, the feature point is deleted; the specific method comprises the following steps:
step 3001: tracking the feature points of the left image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t;
Step 3002, if the difference between the y-coordinate of a feature point and that of its tracked counterpart is less than a set threshold ρ1, proceeding; otherwise, deleting the feature point and skipping the following steps;
Step 3004, if the distance between a feature point and the feature point obtained by backward tracking is less than a set threshold δ1, the left-right inspection is passed;
step S301, tracking the feature points of the left image at time t-1 by the KLT optical flow method to obtain the corresponding feature points in the left image at time t; screening the feature points of the left image at time t-1 that pass the forward-backward inspection, and establishing the feature association between the feature points of the left image at time t-1 and those of the left image at time t;
if a feature point can pass the forward-backward inspection, the feature point is retained and the feature association is established; otherwise, the feature point is deleted; the specific method comprises the following steps:
step 3011, tracking the feature points of the left image at time t-1 by the KLT optical flow method to obtain the corresponding feature points in the left image at time t;
step S302, tracking the feature points of the left image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t; screening the feature points of the left image at time t that pass the left-right inspection and the parallax constraint, and establishing the feature association between the feature points of the left image at time t and those of the right image at time t;
if a feature point can pass the left-right inspection and the parallax constraint, the feature point is retained and the feature association is established; otherwise, the feature point is deleted; the specific method comprises the following steps:
step 3021: tracking the feature points of the left image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t;
Step 3022: if the difference between the y-coordinate of a feature point and that of its tracked counterpart is less than a set threshold ρ1, proceeding; otherwise, deleting the feature point and skipping the following steps;
Step 3024, if the distance between a feature point and the feature point obtained by backward tracking is less than a set threshold δ3, the left-right inspection is passed;
step S303, tracking the feature points of the right image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t-1; screening the feature points of the right image at time t that pass the adaptive backward-forward inspection;
if a feature point can pass the adaptive backward-forward inspection, the feature point is retained; otherwise, the feature point is deleted; the specific method is as follows:
step 3031, tracking the feature points of the right image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t-1;
Step 3032, tracking the obtained feature points of the right image at time t-1 backward to obtain feature points in the left image at time t-1;
Step 3033, if the distance between a feature point of the left image at time t-1 and the corresponding back-tracked feature point is less than the adaptive threshold δ4, the adaptive backward-forward inspection is passed;
the adaptive threshold δ4 is given by equation (1),
wherein ρ and ε are preset parameters, loss is the number of feature points lost in tracking the feature points of the left image at time t-1 to the left image at time t, and maxtrack is the maximum tracking count of the current feature points; when the pose is calculated at each time, the bidirectional annular inspection starts from the left image at the previous time; if the bidirectional annular inspection starts from the right image at time t-1, loss is the number of feature points lost in tracking the feature points of the right image at time t-1 to the right image at time t, in which case every bidirectional annular inspection starts from the right image at the previous time;
step S400, based on the result of the feature association in step S300, obtaining the initial pose estimate of the moving carrier with a PNP pose estimation method, and triangulating each feature point in the left and right images at time t by the binocular vision method to obtain the three-dimensional space coordinates corresponding to each feature point;
step S500, based on the result of the feature association in step S300, the initial pose estimate, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left and right images at time t, obtaining the maximum likelihood estimate of the pose of the moving carrier by minimizing the reprojection error with the beam adjustment method, and obtaining the final pose.
2. The binocular vision odometer computing method based on parallax constraint and bidirectional loop inspection according to claim 1, wherein the left and right images acquired in step S100 are corrected by an OpenCV library function.
3. The binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to claim 1, wherein the step S200 of "feature points of the set image obtained at time t-1" is obtained by:
at time t-1, applying the methods of step S200 and step S300 to the binocular images acquired at time t-2 and time t-1, and tracking the feature points of the acquired binocular images accordingly.
4. The binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection according to claim 2, wherein in step S200, "extracting a newly added feature point set" comprises:
extracting the feature points of the set image at time t-1 with the Shi-Tomasi corner detection method, and deleting those falling in the neighborhood of the original feature point set of the set image to obtain the newly added feature points; the original features of the set image are the feature points of the set image obtained at time t-1.
5. The binocular vision odometry calculation method based on the parallax constraint and the two-way annular inspection according to any one of claims 1 to 4, wherein the set image is the left image acquired at time t-1.
6. The binocular vision odometer calculating method based on the parallax constraint and the two-way annular inspection according to any one of claims 1 to 4, wherein in the step S500, the maximum likelihood estimation of the pose is obtained by minimizing the reprojection error, and the method comprises the following steps:
step S501, according to the feature association result, constructing a space reprojection error and a time reprojection error for each feature point on each image frame;
and step S502, obtaining the maximum likelihood estimation of the pose in the sliding window by using a beam adjustment method.
7. The binocular vision odometry calculation method based on parallax constraint and two-way annular inspection according to any one of claims 1 to 4, further comprising step S600 after step S500:
based on the final poses at the times before time t, acquiring the prior term rp at time t through the marginalization method and the Schur complement, and using the prior term rp as the prior error of the maximum likelihood estimation in step S500.
8. A binocular vision odometer computing system based on parallax constraint and bidirectional annular inspection is characterized by comprising a binocular image acquisition unit, an initial feature point extraction unit, a feature point extraction and feature association unit, an initial pose estimation unit and a maximum likelihood estimation unit;
the binocular image acquisition unit is configured to acquire the left image and the right image at a set acquisition frequency through a binocular camera mounted on the moving carrier, obtaining the left image and the right image corresponding to time t-1 and time t respectively;
the initial feature point extraction unit is configured to perform Shi-Tomasi corner point detection on an image at the t-1 moment obtained by the binocular image acquisition unit, extract a newly added feature point set, and construct a new feature point set of the set image by combining the feature points of the set image obtained at the t-1 moment;
the feature point extraction and feature association unit is configured to acquire, based on the new feature point set of the set image, the feature point sets corresponding to the other images at time t-1 and time t respectively, using the KLT optical flow method together with the parallax constraint and the adaptive bidirectional annular inspection, and to perform feature association;
specifically: in the process of acquiring the pose at time t, if a feature point in the new feature point set of the set image passes the parallax constraint and the adaptive bidirectional annular inspection, the tracking is regarded as successful and the maximum tracking count of the feature point is increased by 1; the initial tracking count of a newly added feature point is set to 0;
if a feature point can pass the left-right inspection and the parallax constraint, the feature point is retained and the feature association is established; otherwise, the feature point is deleted, specifically:
tracking the feature points of the left image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t;
if the difference between the y-coordinate of a feature point and that of its tracked counterpart is less than a set threshold ρ1, proceeding; otherwise, deleting the feature point and skipping the following steps;
tracking the feature points of the left image at time t-1 by the KLT optical flow method to obtain the corresponding feature points in the left image at time t; screening the feature points of the left image at time t-1 that pass the forward-backward inspection, and establishing the feature association between the feature points of the left image at time t-1 and those of the left image at time t;
if a feature point can pass the forward-backward inspection, the feature point is retained and the feature association is established; otherwise, the feature point is deleted, specifically:
tracking the feature points of the left image at time t-1 by the KLT optical flow method to obtain the corresponding feature points in the left image at time t;
tracking the feature points of the left image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t; screening the feature points of the left image at time t that pass the left-right inspection and the parallax constraint, and establishing the feature association between the feature points of the left image at time t and those of the right image at time t;
if a feature point can pass the left-right inspection and the parallax constraint, the feature point is retained and the feature association is established; otherwise, the feature point is deleted; the specific method comprises the following steps:
tracking the feature points of the left image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t;
if the difference between the y-coordinate of a feature point and that of its tracked counterpart is less than a set threshold ρ1, proceeding; otherwise, deleting the feature point and skipping the following steps;
tracking the feature points of the right image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t-1; screening the feature points of the right image at time t that pass the adaptive backward-forward inspection;
if a feature point can pass the adaptive backward-forward inspection, the feature point is retained; otherwise, the feature point is deleted; the specific method is as follows:
tracking the feature points of the right image at time t by the KLT optical flow method to obtain the corresponding feature points in the right image at time t-1;
if the distance between a feature point of the left image at time t-1 and the corresponding back-tracked feature point is less than the adaptive threshold δ4, the adaptive backward-forward inspection is passed;
the adaptive threshold δ4 is given by equation (1),
wherein ρ and ε are preset parameters, loss is the number of feature points lost in tracking the feature points of the left image at time t-1 to the left image at time t, and maxtrack is the maximum tracking count of the current feature points; when the pose is calculated at each time, the bidirectional annular inspection starts from the left image at the previous time; if the bidirectional annular inspection starts from the right image at time t-1, loss is the number of feature points lost in tracking the feature points of the right image at time t-1 to the right image at time t, in which case every bidirectional annular inspection starts from the right image at the previous time;
the initial pose estimation unit is configured to obtain initial pose estimation of the moving carrier by adopting a PNP pose estimation method based on the feature association result, and triangularize each feature point in the left image and the right image at the time t by a binocular vision method to obtain a three-dimensional space coordinate corresponding to each feature point;
and the maximum likelihood estimation unit is configured to obtain maximum likelihood estimation of the pose of the mobile carrier by minimizing a reprojection error by adopting a beam adjustment method based on the feature association, the initial pose estimation, and the three-dimensional space coordinates and two-dimensional image coordinates of each feature point in the left image and the right image at the time t.
9. A storage device having stored therein a plurality of programs, wherein the programs are adapted to be loaded and executed by a processor to implement the binocular vision odometry calculation method based on parallax constraint and two-way circle inspection according to any one of claims 1 to 7.
10. A processing arrangement comprising a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; characterized in that said program is adapted to be loaded and executed by a processor to implement the binocular vision odometry calculation method based on parallax constraint and two-way circular inspection according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910578572.0A CN110335308B (en) | 2019-06-28 | 2019-06-28 | Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110335308A CN110335308A (en) | 2019-10-15 |
CN110335308B true CN110335308B (en) | 2021-07-30 |
Family
ID=68144566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910578572.0A Active CN110335308B (en) | 2019-06-28 | 2019-06-28 | Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110335308B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111862150B (en) * | 2020-06-19 | 2024-06-14 | 杭州易现先进科技有限公司 | Image tracking method, device, AR equipment and computer equipment |
CN112330589A (en) * | 2020-09-18 | 2021-02-05 | 北京沃东天骏信息技术有限公司 | Method and device for estimating pose and computer readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107025668A (en) * | 2017-03-30 | 2017-08-08 | South China University of Technology | A design method for visual odometry based on a depth camera |
CN108519102A (en) * | 2018-03-26 | 2018-09-11 | Southeast University | A binocular visual odometry calculation method based on reprojection |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101926563B1 (en) * | 2012-01-18 | 2018-12-07 | 삼성전자주식회사 | Method and apparatus for camera tracking |
Non-Patent Citations (2)
Title |
---|
Howard A. et al., "Real-time stereo visual odometry for autonomous ground vehicles," 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2008, pp. 3946-3952 * |
Zhao Tong et al., "Three-stage local binocular bundle adjustment visual odometry," Opto-Electronic Engineering, Dec. 2018, pp. 75-85 * |
Also Published As
Publication number | Publication date |
---|---|
CN110335308A (en) | 2019-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110108258B (en) | Monocular vision odometer positioning method | |
CN107452015B (en) | Target tracking system with re-detection mechanism | |
WO2016035324A1 (en) | Method for estimating motion, mobile agent and non-transitory computer-readable medium encoded with a computer program code for causing a processor to execute a method for estimating motion | |
EP3504682A1 (en) | Simultaneous localization and mapping with an event camera | |
CN110097586B (en) | Face detection tracking method and device | |
CN111724439A (en) | Visual positioning method and device in dynamic scene | |
EP3654234B1 (en) | Moving object detection system and method | |
CN101860729A (en) | Target tracking method for omnidirectional vision | |
CN105869120A (en) | Image stitching real-time performance optimization method | |
CN113689503B (en) | Target object posture detection method, device, equipment and storage medium | |
WO2018152214A1 (en) | Event-based feature tracking | |
EP3293700B1 (en) | 3d reconstruction for vehicle | |
CN112950696A (en) | Navigation map generation method and generation device and electronic equipment | |
CN110335308B (en) | Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection | |
EP3216006B1 (en) | An image processing apparatus and method | |
CN115131420A (en) | Visual SLAM method and device based on key frame optimization | |
CN104200492A (en) | Automatic detecting and tracking method for aerial video target based on trajectory constraint | |
CN110706253B (en) | Target tracking method, system and device based on apparent feature and depth feature | |
El Bouazzaoui et al. | Enhancing RGB-D SLAM performances considering sensor specifications for indoor localization | |
Chumerin et al. | Ground plane estimation based on dense stereo disparity | |
CN115830064B (en) | Weak and small target tracking method and device based on infrared pulse signals | |
CN113592947B (en) | Method for realizing visual odometer by semi-direct method | |
CN113723432B (en) | Intelligent identification and positioning tracking method and system based on deep learning | |
CN113873144B (en) | Image capturing method, image capturing apparatus, and computer-readable storage medium | |
CN116092035A (en) | Lane line detection method, lane line detection device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||