CN114529576A - RGBD and IMU hybrid tracking registration method based on sliding window optimization - Google Patents
- Publication number
- CN114529576A (application CN202210002435.4A)
- Authority
- CN
- China
- Prior art keywords
- imu
- information
- sliding window
- rgbd
- optimization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses an RGBD and IMU hybrid tracking registration method based on sliding window optimization, which comprises the following steps. Visual measurement front-end step: RGB information and depth information are first acquired by an RGBD camera, and feature extraction and tracking are performed. Multi-sensor joint initialization step: the extrinsic parameters, biases and absolute scale of the three sensors are calibrated; the poses of the frames in the sliding window and the inverse depths of the observation points are estimated by PnP (Perspective-n-Point) and ICP (Iterative Closest Point) algorithms and aligned with the IMU (Inertial Measurement Unit) pre-integration result to solve the related parameters. Back-end optimization step: the initial pose information passed from the visual measurement front end is further optimized by a sliding-window-based method; a target error function is constructed from prior-information constraints, visual constraints and IMU constraints, and the poses and biases of all frames are continuously optimized by a sliding-window-based graph optimization algorithm. Loop detection and optimization step: loop detection is performed with the DBoW visual bag-of-words algorithm, and if a loop is detected, closed-loop optimization of the whole trajectory is performed by relocalization.
Description
Technical Field
The invention belongs to the field of augmented reality and is suitable for indoor industrial mechanical product assembly scenes. In particular, it relates to an RGBD and IMU hybrid tracking registration method based on sliding window optimization.
Background
In today's manufacturing industry, the assembly of complex mechanical products has been a critical issue in the manufacturing process. According to statistics, the assembly workload in modern manufacturing accounts for 20% -70% of the whole product development workload, the average is 45%, and the assembly time accounts for 40% -60% of the whole manufacturing time. At present, the assembly process of complex mechanical products is still mainly manual assembly, and the assembly mode has the problems of long assembly period, low assembly efficiency and the like, and directly influences the development cost of the products.
In recent years, with the continuous improvement of the scientific and technical level, the problems existing in the manual assembly process are solved to a certain extent by an augmented reality auxiliary assembly technology. The method generates virtual assembly auxiliary information related to assembly by means of a computer, and superimposes the virtual assembly auxiliary information on a real assembly operation environment, so that a virtual-real integrated augmented reality environment is provided for assembly personnel, the assembly process of a product is simpler and more visual, and the development cost of the product is effectively reduced.
The tracking registration technology superimposes virtual object information and other computer-generated information on a real scene, and is the key to achieving virtual-real fusion in augmented reality. Many valuable methods have been proposed in the field of tracking registration, including methods based on artificial markers, on natural features, and on models, all of which rely purely on vision. However, in the field of industrial mechanical product assembly, industrial parts are characterized by sparse texture and smooth surfaces, and the movement of head-mounted equipment easily causes motion-induced image blurring and jitter. Therefore, on the basis of visual-sensor measurement, other sensors that are small and low-cost can be introduced, and the complementary advantages of different sensors can be exploited through multi-sensor fusion, thereby overcoming the poor robustness of purely visual tracking registration in industrial mechanical product assembly environments and improving the robustness of the augmented reality tracking registration process.
Compared with a common RGB camera, a depth sensor can obtain dense and stable depth features on the surface of a mechanical product even when that surface is smooth and sparsely textured. The introduction of depth information also overcomes the scale ambiguity of a monocular camera and the inaccurate scale estimation of a binocular camera, and can assist the RGB feature information in more stable tracking registration through an ICP (Iterative Closest Point) algorithm. However, this approach has the following problems: first, when the camera moves rapidly, 3D features are lost, so the ICP algorithm easily falls into a local optimum during iteration and fails; second, beyond the working range of the depth sensor (0.15-4 m), frames are easily dropped, causing tracking registration to fail.
Inertial trackers, also known as inertial measurement units (IMUs), comprise inertial gyroscopes and micro-accelerometers and calculate orientation and position by the strapdown inertial navigation principle. They are small, immune to occlusion, unlimited in working range and high in update frequency, and can effectively help an RGBD-based tracking registration method overcome the loss of robustness caused by rapid movement and the limited depth measurement range. Although an inertial measurement unit is easily affected by noise and its own bias, so that its error accumulates as the system running time increases, the gyroscope bias of the IMU can be calibrated thanks to the presence of the RGBD camera.
Aiming at the problems, the depth sensor and the Inertial Measurement Unit (IMU) which are small in size and low in cost are introduced on the basis of the traditional vision measurement sensor by combining the assembling environment characteristics of mechanical products, the robustness of the augmented reality tracking registration process is improved through a multi-sensor fusion algorithm, and the assembling efficiency and the stability of complex mechanical products are improved.
Application publication No. CN111462231A, a positioning method based on RGBD sensor and IMU sensor, the process is as follows: firstly, a mobile robot carries an RGBD sensor and an IMU sensor to acquire data; secondly, initializing RGBD sensor data by adopting a visual SLAM algorithm to obtain pose information of the RGBD sensor data, pre-integrating IMU sensor data, and initializing the mobile robot to obtain initial state information of the mobile robot; optimally tracking pose information, speed and IMU sensor bias of the mobile robot based on the reprojection error; meanwhile, a sliding window-based nonlinear optimization algorithm is adopted to optimize pose information and a map in real time; and loop detection is added to avoid the zero offset condition. The positioning method comprehensively utilizes the RGBD sensor and the IMU sensor, and overcomes the defects that the pure vision SLAM algorithm is easy to fail in tracking when the pure rotation and the background are single and the IMU sensor generates a zero offset phenomenon in long-time movement.
Although the above patent also proposes a tracking and positioning method based on the fusion of RGBD information and IMU information, the above patent and the present patent differ in the following processing details:
1. Mechanical parts in an industrial assembly setting have sparse texture and smooth surfaces, which directly prevents an RGB camera from extracting enough stable feature points for tracking registration. Therefore, in the visual measurement front end described in claim 1, it is proposed to obtain dense point clouds from the depth information acquired by a depth camera on top of the RGB camera, and to achieve tracking registration by point cloud registration. Although 2D ORB features are also extracted from the RGB images, ORB features are sparse in weak-texture scenes, and the pose information computed from them serves only two purposes in the present invention: first, as initial pose information for the point cloud registration process, to accelerate the iteration; second, where the depth camera cannot measure depth, it is the only basis for the visual tracking registration process, whose measurement error is then compensated by the subsequent IMU. The above patent uses only the ORB features collected by the RGB camera for visual feature tracking and matching, without the depth information collected by the depth sensor, so the robustness of tracking registration in the scene described in this patent cannot be guaranteed.
2. The present method introduces marginalized prior information into the back-end optimization error function. In the sliding window, when a variable is removed from the window, the variable and its associated constraint information are not simply discarded; instead, new constraint relations are generated among the variables it was constrained with, so that the information carried by the discarded variable is transferred to the remaining variables and provides prior information for the subsequent optimization, yielding more accurate pose estimates. The above patent only establishes constraints with adjacent frames in its back-end optimization, so the constraint information of marginalized variables does not participate in the optimization, and its tracking registration accuracy is not superior to that of the present patent; moreover, since the above patent does not optimize its marginalization strategy, its tracking registration real-time performance may be poor.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. An RGBD and IMU hybrid tracking registration method based on sliding window optimization is provided. The technical scheme of the invention is as follows:
an RGBD and IMU hybrid tracking registration method based on sliding window optimization comprises the following steps:
visual measurement front-end step: RGB information and depth information are first acquired by an RGBD camera, and feature extraction and tracking are performed; the pose of the frame at the current moment and the 3D information of the observation points are calculated by an RGBD-based tracking registration method, and the tracked and matched feature information is added to a keyframe queue and passed to the back end for optimization; meanwhile, the IMU pre-integrates the collected measurements to obtain the pose change between adjacent frames, which serves as the initial pose for the RGBD-based tracking registration computation and is also passed to the back end for optimization;
multi-sensor joint initialization: calibrating external parameters, bias and absolute scales of the three sensors, estimating the pose of two frames in the sliding window and the inverse depth of an observation point through an SLAM algorithm, and aligning with an IMU pre-integration result to solve related parameters;
a back end optimization step: further optimizing initial pose information transmitted from a vision measurement front end by adopting a method based on a sliding window, constructing a target error function through prior information constraint, vision constraint and IMU constraint, and continuously optimizing the pose and bias information of all frames by a graph optimization algorithm based on the sliding window;
loop detection and optimization step: loop detection is performed with the DBoW visual bag-of-words algorithm, and if a loop is detected, closed-loop optimization of the whole trajectory is performed by relocalization.
Further, the feature extraction and tracking extracts and matches feature points with the ORB algorithm, which specifically comprises the following steps: first, corner features are found by comparing the difference between the current pixel and its surrounding pixels; second, N point pairs are selected by a fixed rule near each selected corner, and the combined comparison results of the N point pairs form the descriptor of the feature point; finally, in the feature matching stage, features are matched by the similarity of the corner descriptors;
mismatched points are then rejected by the RANSAC algorithm, which specifically comprises: first, the matched corner feature point pairs are mapped into three-dimensional space using the acquired depth information, a group of inlier points, i.e., correct matches, is randomly hypothesized as initial values, and a model is fitted to them; then the hypothesized model is used to try to fit the other feature points, extending the model where points fit it and treating the points that do not as outliers, i.e., non-matching points; finally, when the maximum number of iterations is reached, if the number of inliers in the hypothesized model is below a certain threshold, the hypothesis is abandoned and a new hypothesis is drawn; otherwise the model is considered to have high confidence, is retained, and its outliers, i.e., the non-matching points, are removed.
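As a rough illustration of the matching-and-rejection pipeline above (a hypothetical numpy sketch, not the patent's implementation: the descriptor size, the inlier threshold, and the translation-only motion model are placeholder choices), binary descriptors can be matched by Hamming distance and mismatches rejected with a minimal RANSAC loop:

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force match binary descriptors (rows of 0/1 bits) by Hamming distance."""
    # element-wise XOR-and-count between every pair of descriptor rows
    d = (desc_a[:, None, :] != desc_b[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)          # index in desc_b of the best match for each row of desc_a

def ransac_translation(pts_a, pts_b, iters=100, thresh=1.0, seed=0):
    """Estimate a 2D translation pts_b ~ pts_a + t while rejecting mismatches (RANSAC)."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(pts_a), bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))                 # hypothesize from one correspondence
        t = pts_b[i] - pts_a[i]
        inliers = np.linalg.norm(pts_b - (pts_a + t), axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # refit on the inlier set; the outliers (non-matching points) are discarded
    best_t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

A real front end would use 256-bit ORB descriptors and a rigid 3D model in place of the 2D translation, but the hypothesize-score-refit structure is the same.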
Further, the RGBD-based tracking registration method calculates the pose of the frame at the current moment and the 3D information of the observation points, and specifically includes:
on the basis of ORB feature matching, it is judged whether the depth information of each corresponding feature point pair can be measured; for measurable pairs, the initial pose of the current frame is computed by the ICP algorithm, dense point cloud information is obtained from the depth image, and with this initial pose as the initial value of the ICP iteration, further ICP iterative optimization yields more accurate pose information; for feature point pairs whose depth cannot be obtained, pose estimation is still performed with the PnP method.
Further, calculating the initial pose information of the current frame through the ICP algorithm specifically includes: first, the de-centroid coordinates $p_i$ and $p'_i$ of each point of the two matched point clouds P1 and P2 are calculated; then the point cloud P1 is transformed into P2 by a transformation matrix T, and an error function is constructed; finally, the transformation matrix T is continuously optimized by an iterative method until the error function falls below a certain threshold or the maximum number of iterations is reached;
For feature point pairs whose depth cannot be obtained, pose estimation is still performed with the PnP method, which specifically includes: first, according to the feature matches, an observation point q co-visible between the current frame and the previous frame is found and transformed into the pixel coordinates of the current frame by the transformation matrix T; then a reprojection error function is constructed, and the transformation matrix T is continuously optimized by iterative least squares to solve the pose of the current frame. Furthermore, the IMU pre-integrates the acquired measurements: combining the IMU measurements with the pvq (position, velocity, rotation) state at time i yields the pvq state at time j, with the calculation formulas:

$$p^w_{b_j} = p^w_{b_i} + v^w_{b_i}\,\Delta t - \tfrac{1}{2}\,g^w \Delta t^2 + R^w_{b_i}\,\alpha_{b_i b_j}$$
$$v^w_{b_j} = v^w_{b_i} - g^w \Delta t + R^w_{b_i}\,\beta_{b_i b_j}$$
$$q^w_{b_j} = q^w_{b_i} \otimes \gamma_{b_i b_j}$$

where $p^w_{b_j}, p^w_{b_i}$ are the positions of the IMU in the world frame at times j and i; $v^w_{b_j}, v^w_{b_i}$ are the velocities of the IMU in the world frame at times j and i; $q^w_{b_j}, q^w_{b_i}$ are the rotations of the IMU in the world frame at times j and i; $\hat a_t$ is the IMU acceleration at time t; and $\hat\omega_t$ is the IMU angular velocity at time t. The integral parts of the three equations above, representing the integrated increments from time i to time j, are denoted $\alpha_{b_i b_j}$, $\beta_{b_i b_j}$ and $\gamma_{b_i b_j}$ respectively; these three quantities serve, alongside the visual measurements, as inter-frame constraint information in the back-end nonlinear optimization.
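A minimal discrete version of the pre-integration above can be sketched as follows (Euler integration in the body frame of frame i; bias and noise handling are omitted, and the first-order rotation update is an illustrative simplification of the quaternion increment):

```python
import numpy as np

def preintegrate(acc, gyro, dt):
    """Accumulate the increments alpha (position), beta (velocity) and the rotation
    increment from IMU samples between frames i and j, in the body frame of frame i."""
    alpha, beta = np.zeros(3), np.zeros(3)
    R = np.eye(3)                                   # rotation increment as a matrix
    for a, w in zip(acc, gyro):
        alpha = alpha + beta * dt + 0.5 * (R @ a) * dt**2
        beta = beta + (R @ a) * dt
        # first-order rotation update: exp([w dt]_x) ~ I + [w dt]_x
        wx, wy, wz = w * dt
        Om = np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])
        R = R @ (np.eye(3) + Om)
    return alpha, beta, R
```

Because the increments are computed in the frame-i body frame, they do not need to be recomputed when the frame-i world pose is re-estimated, which is the point of pre-integration.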
Further, the specific steps of the multi-sensor joint initialization are as follows:
step 1: estimating an external parameter q between an IMU and an RGBD camera using rotation constraintsbc(ii) a Wherein the rotation is constrained as follows, whereinIs the rotational quaternion of the IMU itself from time k to time k +1,the method comprises the steps that a rotation quaternion of an RGBD camera from a moment k to a moment k +1 is formed;
step 2: and constructing an error function through the visual observation value and the gyroscope measurement value by utilizing the rotation constraint to carry out least square iteration to solve the gyroscope bias bgAnd recalculate the IMU pre-integration at the current bias.
Step 3: using the absolute-scale translation integrated by the IMU between times k and k+1, the up-to-scale relative translation of the RGBD camera, and the rotational extrinsic $q_{bc}$ and translational extrinsic $p_{bc}$ between the IMU and the RGBD camera, align the IMU trajectory with the RGBD camera trajectory to obtain the absolute scale s;
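In its simplest form, the scale alignment of Step 3 reduces to a one-dimensional linear least-squares problem: find the s that best maps the up-to-scale camera translations onto the metric IMU translations (an illustrative sketch; the patent's full alignment also solves velocity and gravity jointly):

```python
import numpy as np

def solve_scale(cam_disp, imu_disp):
    """Least-squares scale s minimizing sum ||s * cam_disp_k - imu_disp_k||^2."""
    c = np.asarray(cam_disp).ravel()
    i = np.asarray(imu_disp).ravel()
    return float(c @ i / (c @ c))    # normal-equation solution of the 1-D problem
```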
step 4: the method comprises the steps of constructing a minimized IMU pre-integral error through translation constraint, and solving the initialization speed of a systemAnd initializing gravitySpecifically, the translation constraint calculation formula is as followsThe following:
wherein the content of the first and second substances,for the relative scale displacement between the RGBD cameras from the IMU system to the 0 th time at the k-th time,is the relative scale displacement between the RGBD cameras of two frames from the k time point to the 0 time point,is the rotation matrix of IMU relative to RGBD camera system at the k-th time.
The specific IMU pre-integration errors between time k and time k+1 are:

$$\delta\alpha^{b_k}_{b_{k+1}} = \hat\alpha^{b_k}_{b_{k+1}} - \alpha^{b_k}_{b_{k+1}}, \qquad \delta\beta^{b_k}_{b_{k+1}} = \hat\beta^{b_k}_{b_{k+1}} - \beta^{b_k}_{b_{k+1}}$$

where $\delta\alpha^{b_k}_{b_{k+1}}$ is the displacement-variation error of the IMU between times k and k+1; $\hat\alpha^{b_k}_{b_{k+1}}$ and $\alpha^{b_k}_{b_{k+1}}$ are the measured and predicted displacement variations of the IMU between times k and k+1; and $\hat\beta^{b_k}_{b_{k+1}}$ and $\beta^{b_k}_{b_{k+1}}$ are the measured and predicted velocity variations of the IMU between times k and k+1. The remaining symbols are as annotated above.
Step 5: under the constraint that the gravity magnitude $\|g\|$ is known, the gravity in Step 4 is re-parameterized and substituted into the IMU pre-integration error formulas for calculation, so as to refine the initialized gravity vector. The parameterized expression is:

$$g = \|g\|\,\hat g + w_1 b_1 + w_2 b_2$$

where $\|g\|$ is the known gravity magnitude, $\hat g$ is the unit vector of the gravity direction, $b_1$ and $b_2$ are two orthogonal basis vectors spanning the tangent plane, and $w_1$, $w_2$ are the variables to be optimized, representing displacements along the two orthogonal basis vectors; together these terms constitute $g$.
Further, the method based on the sliding window is adopted to further optimize the initial pose information transmitted from the vision measurement front end, a target error function is constructed through prior information constraint, vision constraint and IMU constraint, and the pose and offset information of all frames are continuously optimized through a graph optimization algorithm based on the sliding window, and specifically comprises the following steps:
step 1: determining an optimized state vector, wherein the state vector needing to be optimized comprises the following components: the states of all cameras in the sliding window, the inverse depths of m +1 observation points;
wherein the content of the first and second substances,representing all the state quantities in the sliding window, n is the number of frames in the sliding window, m represents the total number of feature points in the sliding window, lambdakInverse depth information representing each observation point; x is the number ofkRepresenting pose information at each frame, including displacement of IMU in world coordinate systemSpeed of rotationAngular velocityAccelerometer biasingGyroscope biasing
Step 2: continuously optimize the state vector in the sliding window by constructing a least-squares target error function from the visual errors and the inertial measurement errors:

$$\min_{\mathcal{X}} \left\{ \left\| e_{prior} \right\|^2 + \sum_{k \in B} \left\| r_b\!\left(\hat z^{b_k}_{b_{k+1}}, \mathcal{X}\right) \right\|^2_{P^{b_k}_{b_{k+1}}} + \sum_{(l,j) \in C} \left\| r_c\!\left(\hat z^{c_j}_{l}, \mathcal{X}\right) \right\|^2_{P^{c_j}_{l}} \right\}$$

where $e_{prior}$ is the marginalization prior information of the sliding window; $r_b$ is the IMU pre-integration error function between adjacent frames; $\hat z^{b_k}_{b_{k+1}}$ is the IMU pre-integration observation between time k and time k+1; $P^{b_k}_{b_{k+1}}$ is the covariance of the IMU pre-integration noise term; $r_c$ is the visual reprojection error between adjacent frames; $\hat z^{c_j}_{l}$ is the observation of observation point l by the RGBD camera at time j; and $P^{c_j}_{l}$ is the covariance of the RGBD observations;
step 3: expanding and differentiating the least square objective function in Step2, and calculating each error term about the state vectorThe Jacobian matrix J;
step 4: the Jacobian matrix J obtained by the solution is used for the state vector through the Gauss-Newton method with the damping factorAnd carrying out iterative optimization to accelerate the iterative process of optimization.
Step 5: and updating the state vector, stopping when the target error function is lower than a certain threshold value or reaches the maximum iteration number N, and otherwise, jumping to Step 3.
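Steps 3 to 5 describe a damped Gauss-Newton iteration; the scheme can be illustrated on a toy least-squares problem (a synthetic exponential fit, not the patent's visual-inertial error terms; the constant damping factor and thresholds are placeholders):

```python
import numpy as np

def gauss_newton_damped(residual, jacobian, x0, lam=1e-3, max_iter=50, tol=1e-10):
    """Minimize 0.5*||r(x)||^2 with damped Gauss-Newton steps (J^T J + lam*I) dx = -J^T r."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        x = x + dx                              # update the state vector
        if r @ r < tol:                         # stop below the error threshold
            break
    return x

# toy problem: fit y = a * exp(b * t) to noiseless samples, true (a, b) = (2.0, 0.5)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
p_hat = gauss_newton_damped(res, jac, [1.0, 0.0])
```

The damping term keeps the normal-equation matrix well conditioned when J is nearly rank-deficient, at the cost of slightly shorter steps far from the optimum.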
Further, in Step 2, the marginalized prior information is obtained by removing old variables through the sliding window and re-establishing the constraint information among the remaining variables. The variable removal policy of the sliding window is:
(1) after a new frame is added to the sliding window, a landmark point $l_1$ observable by all frames in the current sliding window is found and optimized by triangulation over these frames; after the optimization is complete, the observation point is removed from the sliding window, and constraints are established among all the frames;
(2) for an observation point $l_2$ not visible in the new frame, the observation point is removed and constraints are established among the old frames that observe it;
(3) after the observation points are removed, if a frame in the sliding window has no observable points left, that frame is directly removed.
Further, after the variables are removed, the influence of the removed variables on the other state vectors in the sliding window, i.e., the marginalized prior information, is computed by means of the Schur complement;
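The Schur-complement marginalization can be sketched on a small linear system: eliminating the block of removed variables folds their constraints into a prior acting on the remaining ones (illustrative numpy, with the removed variables ordered first):

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m variables of the system H dx = b via the Schur complement.
    Returns the prior (H', b') acting on the remaining variables."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    Hmm_inv = np.linalg.inv(Hmm)
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr       # Schur complement of the Hmm block
    b_prior = b[m:] - Hrm @ Hmm_inv @ b[:m]   # information carried over to the rest
    return H_prior, b_prior
```

The defining property is that solving the reduced system gives exactly the same values for the remaining variables as solving the full system, so no information from the discarded variables is lost.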
further, the looping detection is performed by using a DboW visual bag-of-words algorithm, which specifically includes: firstly, ORB operator description is carried out on the new and old angular points of a current frame, then, similarity scores of the current frame and a word bag are calculated, the similarity scores are compared with all frames in a key frame database, closed loop consistency detection is carried out, and finally, if closed loops are found, state vectors are detected through an SLAM algorithmAnd performing loop optimization.
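Loop-candidate scoring with a bag-of-words model ultimately compares normalized visual-word histograms; below is a toy version of the score (DBoW itself uses a vocabulary tree with TF-IDF weighting, which is omitted here, and the threshold is a placeholder):

```python
import numpy as np

def bow_score(hist_a, hist_b):
    """L1-style similarity between two normalized bag-of-words histograms (1 = identical)."""
    a = hist_a / hist_a.sum()
    b = hist_b / hist_b.sum()
    return 1.0 - 0.5 * np.abs(a - b).sum()

def detect_loop(query, database, threshold=0.8):
    """Return indices of keyframes whose similarity to the query exceeds the threshold."""
    return [i for i, h in enumerate(database) if bow_score(query, h) >= threshold]
```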
The invention has the following advantages and beneficial effects:
(1) Because mechanical parts have sparse texture and smooth surfaces, 2D feature point extraction is very difficult and the robustness of tracking registration cannot be guaranteed, whereas a depth camera can extract dense point cloud information in weak-texture environments to supplement the sparse 2D features. Claim 3 therefore proposes an RGBD-based tracking registration method that introduces depth information on top of the 2D features: an initial pose is computed from the ORB features and used as the initial value of the subsequent dense point cloud ICP registration, which both accelerates the ICP registration and yields more accurate pose estimation.
(2) During mechanical product assembly, head movement of the operator blurs and shakes the RGB and depth images, so that the tracking registration method based on RGBD features cannot produce accurate pose estimates, whereas the pose of the IMU sensor can be estimated in real time during movement. Claims 5 to 8 therefore introduce and fuse IMU measurement information into the RGBD-based tracking registration method: when the RGBD-based method works normally, the pose information can be further optimized through the inter-frame constraint information of the IMU; when the RGBD-based method cannot work normally due to movement, the IMU can still provide its own pose estimate in real time. This well compensates for the poor robustness of the RGBD camera in tracking registration in moving scenes, while further improving the pose estimation accuracy of the RGBD-based tracking registration method.
(3) While improving the robustness of tracking registration in the augmented reality system, claims 7 to 8 propose an RGBD and IMU hybrid tracking registration method based on sliding window optimization: as the sliding window moves, the constraint information of removed variables is transmitted to the remaining variables in the form of prior information and participates in the back-end nonlinear optimization, further improving tracking registration accuracy; on this basis, a sliding-window variable elimination strategy is also provided, which removes redundant state variables from the sliding window, shortens the response time of the back-end nonlinear optimization, and improves the real-time performance of the tracking registration process.
Drawings
FIG. 1 is a flow chart of an augmented reality hybrid tracking registration method based on RGBD and IMU fusion in accordance with a preferred embodiment of the present invention;
FIG. 2 is a visual inertial constraint topology and a sliding window marginalization graph;
FIG. 3 is a flow chart of a pose estimation method based on RGBD;
fig. 4 is a schematic diagram of a sliding window based multi-sensor fusion strategy.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
aiming at the problems of the augmented reality technology in the industrial mechanical product assembly scene, a hybrid tracking registration method based on fusion of RGBD and an inertia measurement unit is provided. The technical scheme of the invention is as follows:
a hybrid tracking registration method based on fusion of RGBD and an inertial measurement unit is disclosed, and a flow framework of the method is shown in FIG. 1, and the method comprises the following steps:
visual measurement front-end step: RGB information and depth information are first acquired by an RGBD camera, and feature extraction and tracking are performed; the pose of the frame at the current moment and the 3D information of the observation points are calculated by an RGBD-based tracking registration method, and the tracked and matched feature information is added to a keyframe queue and passed to the back end for optimization; meanwhile, the IMU pre-integrates the collected measurements to obtain the pose change between adjacent frames, which serves as the initial pose for the RGBD-based tracking registration computation and is also passed to the back end for optimization;
multi-sensor joint initialization: calibrating external parameters, bias and absolute scales of the three sensors, estimating the pose of two frames in the sliding window and the inverse depth of an observation point through an SLAM algorithm, and aligning with an IMU pre-integration result to solve related parameters;
a back end optimization step: further optimizing initial pose information transmitted from a vision measurement front end by adopting a method based on a sliding window, constructing a target error function through prior information constraint, vision constraint and IMU constraint, and continuously optimizing the pose and bias information of all frames by using a graph optimization algorithm based on the sliding window;
loop detection and optimization: loop detection is performed using the DBoW visual bag-of-words algorithm, and if a loop is detected, closed-loop optimization is performed on the whole trajectory by relocalization.
Further, the feature extraction and tracking extracts and matches feature points by the ORB algorithm, which specifically includes the following steps: firstly, local corner features are found by comparing the difference between the current pixel and its surrounding pixels; secondly, N point pairs are selected by a fixed rule near each selected corner, and the combined comparison results of the N point pairs serve as the descriptor of the feature point; finally, in the feature matching stage, features are matched by the similarity of the corner descriptors;
mismatched points are then rejected by the RANSAC algorithm, which specifically includes: firstly, the matched corner feature point pairs are mapped into three-dimensional space using the acquired depth information, a group of points is randomly assumed to be inliers, namely correct matches, and a model is fitted to them as the initial hypothesis; then, the hypothesized model is used to test the remaining feature points: points consistent with the model are added to it, and points regarded as outliers, namely mismatches, are excluded; finally, after the maximum number of iterations is reached, if the number of inliers of the hypothesized model is less than a certain threshold, the hypothesis is abandoned and a new iteration is started; otherwise, the model is considered to have high confidence and is retained, and its outliers, namely the mismatched points, are rejected.
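For illustration only, the RANSAC rejection described above can be sketched in a few lines of numpy. The minimal model here is a rigid transform fitted to three sampled 3D pairs by the closed-form SVD (Kabsch) solution; function names, the iteration count, and the inlier threshold are illustrative assumptions, not part of the invention:

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def ransac_reject(P, Q, iters=200, thresh=0.05, rng=None):
    """Keep the largest consensus set among matched 3D pairs (P[i] <-> Q[i])."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample
        R, t = fit_rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)   # residual per pair
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The boolean mask returned by `ransac_reject` marks the matches retained as inliers; the rest are the rejected mismatches.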
Further, the RGBD-based tracking registration method calculates the pose of the frame at the current moment and the 3D information of the observation points, which specifically includes:
on the basis of ORB feature matching, whether the depth information of each corresponding feature point pair can be measured is judged; for measurable pairs, the depth information is read, the initial pose of the current frame is calculated by the ICP algorithm, dense point cloud information is obtained from the depth image, and the initial pose is used as the initial value of ICP iteration, which is further optimized to obtain a more accurate pose; for feature point pairs whose depth cannot be obtained, pose estimation is still performed by the PnP method.
Further, the steps of the pose estimation method are as follows:
the calculating of the initial pose information of the current frame through the ICP algorithm specifically includes: first, the de-centroid coordinates p_i and p'_i of each point of the two sets of matched point clouds P1 and P2 are calculated; then the point cloud P1 is transformed into P2 by the transformation matrix T, and an error function is constructed; finally, the transformation matrix T is continuously optimized by an iterative method until the error function falls below a certain threshold or the maximum number of iterations is reached;
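A minimal numpy sketch of this ICP loop: the closed-form fit on de-centroid coordinates, a brute-force nearest-neighbour association, and iteration until the error change falls below a tolerance. All names, tolerances, and the brute-force association are illustrative assumptions:

```python
import numpy as np

def best_fit_transform(P, Q):
    """Closed-form rigid transform from de-centroid coordinates (SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P1, P2, max_iter=50, tol=1e-8):
    """Iteratively align point cloud P1 onto P2, refining T = (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        moved = P1 @ R.T + t
        # nearest-neighbour association (brute force, for clarity only)
        nn = np.argmin(((moved[:, None] - P2[None]) ** 2).sum(-1), axis=1)
        R, t = best_fit_transform(P1, P2[nn])
        err = np.linalg.norm(P1 @ R.T + t - P2[nn], axis=1).mean()
        if abs(prev_err - err) < tol:          # error function stalled
            break
        prev_err = err
    return R, t
```

In the method above the initial T would come from the feature-matched ICP result rather than the identity used in this sketch.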
for the feature point pairs whose depth cannot be obtained, pose estimation is still performed by the PnP method, which specifically includes: first, an observation point q with a co-visibility relation between the current frame and the previous frame is found from the feature matching points, and the point is converted into the pixel coordinates of the current frame through the transformation matrix T; then a reprojection error function is constructed, and the transformation matrix T is continuously optimized by iterative least squares to solve the pose of the current frame. Further, the IMU performs a pre-integration operation on the collected measurements: the p-v-q (position, velocity, quaternion) state at time j is obtained from the IMU measurements combined with the p-v-q state at time i, and the calculation formulas are as follows:
p_{b_j}^{w} = p_{b_i}^{w} + v_{b_i}^{w}\Delta t - \frac{1}{2} g^{w}\Delta t^{2} + R_{b_i}^{w}\alpha_{b_i b_j}
v_{b_j}^{w} = v_{b_i}^{w} - g^{w}\Delta t + R_{b_i}^{w}\beta_{b_i b_j}
q_{b_j}^{w} = q_{b_i}^{w} \otimes \gamma_{b_i b_j}
wherein p_{b_j}^{w}, p_{b_i}^{w} are the positions of the IMU in the world coordinate system at times j and i; v_{b_j}^{w}, v_{b_i}^{w} are the velocities of the IMU in the world coordinate system at times j and i; q_{b_j}^{w}, q_{b_i}^{w} are the rotations of the IMU in the world coordinate system at times j and i; a_t is the IMU acceleration at time t; \omega_t is the IMU angular velocity at time t. The integral parts of the above three equations, representing the integration increments from time i to time j, are denoted \alpha_{b_i b_j}, \beta_{b_i b_j}, \gamma_{b_i b_j}, respectively; these three quantities are nonlinearly optimized together with the visual measurements as inter-frame constraint information at the back end.
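For illustration, the increments α, β, γ can be accumulated from raw IMU samples by simple Euler integration as below. This is a sketch assuming bias-corrected measurements; practical implementations typically use midpoint integration and also propagate the covariance and bias Jacobians:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product q ⊗ r, quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def preintegrate(accs, gyros, dt):
    """Accumulate alpha (position), beta (velocity) and gamma (rotation)
    increments, expressed in the body frame at time i, from raw samples."""
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = np.array([1.0, 0.0, 0.0, 0.0])       # identity quaternion
    for a, w in zip(accs, gyros):
        R = quat_to_rot(gamma)
        alpha += beta * dt + 0.5 * R @ a * dt**2  # position increment
        beta += R @ a * dt                        # velocity increment
        dq = np.concatenate(([1.0], 0.5 * w * dt))  # small-angle update
        gamma = quat_mul(gamma, dq)
        gamma /= np.linalg.norm(gamma)
    return alpha, beta, gamma
```

Because the increments depend only on the raw samples and the state at time i, they need not be re-integrated when the state estimate of frame i changes during optimization, which is the point of pre-integration.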
Further, the specific steps of the multi-sensor joint initialization are as follows:
step 1: estimating the extrinsic rotation q_{bc} between the IMU and the RGBD camera using the rotation constraint
q_{b_{k+1}}^{b_k} \otimes q_{bc} = q_{bc} \otimes q_{c_{k+1}}^{c_k}
wherein q_{b_{k+1}}^{b_k} is the rotation quaternion of the IMU itself from time k to time k+1, and q_{c_{k+1}}^{c_k} is the rotation quaternion of the RGBD camera from time k to time k+1;
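A common way to solve this hand-eye rotation constraint, sketched here under the Hamilton convention (w, x, y, z), is to rewrite each interval as the linear equation (L(q_b) − R(q_c)) q_bc = 0, where L(q)p = q ⊗ p and R(q)p = p ⊗ q, stack the equations over many intervals, and take the right singular vector of the smallest singular value. The helper names are illustrative:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product q ⊗ r, quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def left_mat(q):
    """4x4 matrix L(q) with L(q) @ p == q ⊗ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def right_mat(q):
    """4x4 matrix R(q) with R(q) @ p == p ⊗ q."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def solve_extrinsic_rotation(imu_dqs, cam_dqs):
    """Stack (L(q_b) - R(q_c)) q_bc = 0 over all intervals; the smallest
    right singular vector of the stacked matrix is the extrinsic rotation."""
    A = np.vstack([left_mat(qb) - right_mat(qc)
                   for qb, qc in zip(imu_dqs, cam_dqs)])
    _, _, Vt = np.linalg.svd(A)
    q = Vt[-1]
    return q / np.linalg.norm(q)
```

The solution is unique up to sign, since q and −q represent the same rotation.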
step 2: utilizing the rotation constraint, an error function is constructed from the visual observations and the gyroscope measurements, least-squares iteration is performed to solve the gyroscope bias b_g, and the IMU pre-integration is recalculated under the current bias;
Step 3: the IMU trajectory is aligned with the RGBD camera trajectory through the absolute-scale translation integrated by the IMU between time k and time k+1, the relative-scale translation of the RGBD camera, and the rotation extrinsic q_{bc} and translation extrinsic p_{bc} between the IMU and the RGBD camera, to obtain the absolute scale s;
step 4: a minimized IMU pre-integration error is constructed through the translation constraint, and the initial velocity v_k^{b_k} of the system and the initial gravity g^{c_0} are solved. Specifically, the translation constraint is calculated as:
\bar{p}_{b_k}^{c_0} = \bar{p}_{c_k}^{c_0} - R_{b_k}^{c_0} p_{bc}
wherein \bar{p}_{b_k}^{c_0} is the relative-scale displacement of the IMU body frame at time k with respect to the camera frame at time 0, \bar{p}_{c_k}^{c_0} is the relative-scale displacement between the RGBD camera frames at time k and time 0, and R_{b_k}^{c_0} is the rotation matrix of the IMU body frame at time k relative to the RGBD camera reference frame.
The specific IMU pre-integration error between time k and time k+1 is calculated as:
\delta\alpha_{b_{k+1}}^{b_k} = \hat{\alpha}_{b_{k+1}}^{b_k} - \alpha_{b_{k+1}}^{b_k}, \qquad \delta\beta_{b_{k+1}}^{b_k} = \hat{\beta}_{b_{k+1}}^{b_k} - \beta_{b_{k+1}}^{b_k}
wherein \delta\alpha_{b_{k+1}}^{b_k} is the displacement-variation error of the IMU between time k and time k+1; \hat{\alpha}_{b_{k+1}}^{b_k} and \alpha_{b_{k+1}}^{b_k} are the measured and predicted values of the IMU displacement variation between time k and time k+1; \hat{\beta}_{b_{k+1}}^{b_k} and \beta_{b_{k+1}}^{b_k} are the measured and predicted values of the IMU velocity variation between time k and time k+1. The remaining symbols have the meanings given above.
Step 5: the gravity obtained in step 4 is re-parameterized using the known gravity magnitude \|g\| as constraint information and substituted into the IMU pre-integration error formula for calculation, so as to refine the initialized gravity vector. The parameterized expression is:
g = \|g\| \hat{g} + w_1 b_1 + w_2 b_2
wherein \|g\| is the known gravity magnitude, \hat{g} is the unit vector of the gravity direction, b_1 and b_2 are two orthogonal basis vectors spanning its tangent plane, and w_1, w_2 are the variables to be optimized, representing displacements along the two orthogonal basis vectors; together these terms constitute the gravity vector g.
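The tangent-plane parameterization can be sketched as follows; the 9.81 default and the function names are illustrative assumptions:

```python
import numpy as np

def tangent_basis(g):
    """Two orthonormal vectors b1, b2 spanning the plane orthogonal to g."""
    n = g / np.linalg.norm(g)
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(n @ tmp) > 0.9:              # avoid a near-parallel helper axis
        tmp = np.array([0.0, 0.0, 1.0])
    b1 = np.cross(n, tmp)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(n, b1)
    return b1, b2

def reparameterize_gravity(g_est, w, g_norm=9.81):
    """g = ||g|| * ghat + w1*b1 + w2*b2, re-normalised to the known ||g||.
    Constrains the 3-DoF gravity estimate to the 2-DoF tangent plane."""
    ghat = g_est / np.linalg.norm(g_est)
    b1, b2 = tangent_basis(g_est)
    g = g_norm * ghat + w[0] * b1 + w[1] * b2
    return g_norm * g / np.linalg.norm(g)
```

In the initialization above, w_1 and w_2 would be solved inside the least-squares problem; here they are simply applied to show the update direction.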
Further, the sliding-window-based method for further optimizing the initial pose information transmitted from the visual measurement front end, constructing a target error function from the prior information constraint, the visual constraint, and the IMU constraint, and continuously optimizing the pose and bias information of all frames with a sliding-window-based graph optimization algorithm specifically includes:
step 1: determining the state vector to be optimized, which comprises the states of all cameras in the sliding window and the inverse depths of m+1 observation points:
\chi = [x_0, x_1, \ldots, x_n, \lambda_0, \lambda_1, \ldots, \lambda_m]
wherein \chi represents all the state quantities in the sliding window, n is the number of frames in the sliding window, m represents the total number of feature points in the sliding window, and \lambda_k is the inverse depth information of each observation point; x_k represents the pose information of each frame, including the displacement p_{b_k}^{w}, velocity v_{b_k}^{w}, and rotation q_{b_k}^{w} of the IMU in the world coordinate system, the accelerometer bias b_a, and the gyroscope bias b_g;
Step 2: a least-squares-based target error function is constructed from the visual errors and the inertial measurement errors to continuously optimize the state vector \chi in the sliding window:
\min_{\chi} \left\{ \|e_{prior}\|^2 + \sum_{k} \| r_b(\hat{z}_{b_{k+1}}^{b_k}, \chi) \|_{P_{b_{k+1}}^{b_k}}^2 + \sum_{(l,j)} \| r_c(\hat{z}_l^{c_j}, \chi) \|_{P_l^{c_j}}^2 \right\}
wherein e_{prior} is the prior information from sliding-window marginalization, r_b is the IMU pre-integration error function between adjacent frames, \hat{z}_{b_{k+1}}^{b_k} is the IMU pre-integration observation between time k and time k+1, P_{b_{k+1}}^{b_k} is the covariance of the IMU pre-integration noise term, r_c is the visual reprojection error between adjacent frames, \hat{z}_l^{c_j} is the observation of observation point l by the RGBD camera at time j, and P_l^{c_j} is the covariance of the RGBD observations;
step 3: the least-squares objective function in Step 2 is expanded and differentiated, and the Jacobian matrix J of each error term with respect to the state vector \chi is calculated;
step 4: using the solved Jacobian matrix J, the state vector \chi is iteratively optimized by the Gauss-Newton method with a damping factor to accelerate the iterative optimization process.
Step 5: the state vector is updated; the iteration stops when the target error function falls below a certain threshold or the maximum number of iterations N is reached, and otherwise jumps to Step 3.
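Steps 2 to 5 amount to a damped Gauss-Newton (Levenberg-style) loop. A generic sketch, with the residual and Jacobian supplied by the caller and all names and defaults illustrative, might look like:

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, lam=1e-3, max_iter=50, tol=1e-10):
    """Minimise 0.5*||r(x)||^2 by Gauss-Newton with a damping factor on
    the normal equations, mirroring Steps 3-5 above."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J + lam * np.eye(len(x))   # damped normal matrix
        dx = np.linalg.solve(H, -J.T @ r)
        r_new = residual(x + dx)
        if r_new @ r_new < r @ r:            # step decreased the cost
            x = x + dx
            lam *= 0.5                       # accept: relax damping
        else:
            lam *= 10.0                      # reject: strengthen damping
        if np.linalg.norm(dx) < tol:         # converged
            break
    return x
```

In the actual back end, `residual` and `jacobian` would stack the prior, IMU pre-integration, and reprojection terms over the whole window; the adaptive damping is what accelerates and stabilises the iteration.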
Further, in Step 2, the marginalized prior information is obtained by removing old variables through the sliding window and re-establishing the constraint information between the remaining variables. The variable removal policy of the sliding window is:
(1) after a new frame is added to the sliding window, a landmark point l_1 that can be observed by all frames in the current sliding window is searched for; it is triangulated and optimized through these frames, and after the optimization is completed, the observation point is removed from the sliding window while constraints are established among all the frames;
(2) for an observation point l_2 that is not visible in the new frame, the observation point is removed, and constraints are established between the old frames that can observe it;
(3) after the observation points are removed, if some frames in the sliding window have no observable points, those frames are directly removed.
Further, after the variables are removed, the influence of the removed variables on the other state vectors in the sliding window, namely the marginalized prior information, is calculated by means of the Schur complement;
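The Schur-complement marginalization can be sketched on the normal equations H·dx = b: removing the first m variables yields the prior (H', b') that those variables induce on the remaining states. This is a sketch assuming the marginalized block is invertible; real systems factorize instead of forming an explicit inverse:

```python
import numpy as np

def marginalize(H, b, m):
    """Remove the first m state variables from H dx = b via the Schur
    complement, returning the prior (H', b') on the remaining states."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(Hmm)              # assumes invertible block
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr       # Schur complement of Hmm
    b_prior = br - Hrm @ Hmm_inv @ bm
    return H_prior, b_prior
```

Solving the reduced system with (H', b') gives exactly the same estimate for the remaining states as solving the full system, which is why the removed frames still constrain the window.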
Further, the loop detection performed by the DBoW visual bag-of-words algorithm specifically includes: first, ORB descriptors are computed for the new and old corner points of the current frame; then the similarity score between the current frame and the bag of words is calculated and compared against all frames in the key frame database for closed-loop consistency detection; finally, if a closed loop is found, loop optimization is performed on the state vector \chi through the SLAM algorithm.
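For illustration, a DBoW-style L1 similarity score between bag-of-words vectors, s(v1, v2) = 1 − ½·||v1 − v2||_1 on L1-normalised word histograms, can be sketched as below; the vocabulary size, threshold, and function names are illustrative assumptions:

```python
import numpy as np

def bow_vector(descriptor_words, vocab_size):
    """L1-normalised histogram of visual-word ids for one frame."""
    v = np.bincount(descriptor_words, minlength=vocab_size).astype(float)
    s = v.sum()
    return v / s if s > 0 else v

def bow_score(v1, v2):
    """L1 similarity score in [0, 1]; 1 means identical word distributions."""
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

def detect_loop(query, database, threshold=0.7):
    """Return the index of the best-scoring past frame, or -1 if no frame
    passes the similarity threshold (no loop candidate)."""
    scores = [bow_score(query, v) for v in database]
    best = int(np.argmax(scores)) if scores else -1
    return best if best >= 0 and scores[best] >= threshold else -1
```

A full DBoW pipeline would additionally weight words by TF-IDF, group candidates by co-visibility, and verify geometric consistency before accepting the loop.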
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
Claims (9)
1. An RGBD and IMU hybrid tracking registration method based on sliding window optimization is characterized by comprising the following steps:
visual measurement front-end step: firstly, obtaining RGB information and depth information through an RGBD camera and performing feature extraction and tracking; calculating the pose of the frame at the current moment and the 3D information of observation points by the RGBD-based tracking registration method, adding the tracked and matched feature information to a key frame queue, and passing it to the back end for optimization; meanwhile, performing, by the IMU, a pre-integration operation on the collected measurements to obtain pose information between adjacent frames, which serves as the initial pose information for the RGBD-based tracking registration calculation and is also passed to the back end for optimization;
multi-sensor joint initialization: calibrating external parameters, bias and absolute scales of the three sensors, estimating the pose of two frames in the sliding window and the inverse depth of an observation point through an SLAM algorithm, and aligning with an IMU pre-integration result to solve related parameters;
a back end optimization step: further optimizing initial pose information transmitted from a vision measurement front end by adopting a method based on a sliding window, constructing a target error function through prior information constraint, vision constraint and IMU constraint, and continuously optimizing the pose and bias information of all frames by using a graph optimization algorithm based on the sliding window;
loop detection and optimization: performing loop detection by using the DBoW visual bag-of-words algorithm, and if a loop is detected, performing closed-loop optimization on the whole trajectory by relocalization.
2. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 1, wherein the feature extraction and tracking extracts and matches feature points by the ORB algorithm, which specifically includes: firstly, finding local corner features by comparing the difference between the current pixel and its surrounding pixels; secondly, selecting N point pairs by a fixed rule near each selected corner and combining the comparison results of the N point pairs as the descriptor of the feature point; and finally, in the feature matching stage, matching features by the similarity of the corner descriptors;
and rejecting mismatched points by the RANSAC algorithm, which specifically includes: firstly, mapping the matched corner feature point pairs into three-dimensional space using the acquired depth information, randomly assuming a group of inliers, namely correct matches, as an initial set, and fitting a model to them; then, testing the remaining feature points with the hypothesized model, adding points consistent with the model to it, and excluding points regarded as outliers, namely mismatches; and finally, after the maximum number of iterations is reached, if the number of inliers of the hypothesized model is less than a certain threshold, abandoning the hypothesis and iterating again; otherwise, considering the model to have high confidence, retaining it, and rejecting its outliers, namely the mismatched points.
3. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 2, wherein the RGBD-based tracking registration method calculates the pose of the frame at the current moment and the 3D information of the observation points, which specifically includes:
on the basis of ORB feature matching, judging whether the depth information of each corresponding feature point pair can be measured; for measurable pairs, reading the depth information, calculating the initial pose of the current frame by the ICP algorithm, obtaining dense point cloud information from the depth image, and using the initial pose as the initial value of ICP iteration, which is further optimized to obtain a more accurate pose; and for feature point pairs whose depth cannot be obtained, still performing pose estimation by the PnP method.
4. The RGBD and IMU hybrid tracking registration method based on sliding window optimization as claimed in claim 3, wherein the steps of the pose estimation method are:
the calculating of the initial pose information of the current frame through the ICP algorithm specifically includes: first, calculating the de-centroid coordinates p_i and p'_i of each point of the two sets of matched point clouds P1 and P2; then transforming the point cloud P1 into P2 by the transformation matrix T and constructing an error function; and finally, continuously optimizing the transformation matrix T by an iterative method until the error function falls below a certain threshold or the maximum number of iterations is reached;
for the feature point pair information of which the depth cannot be obtained, the pose estimation is still performed by using a PnP method, which specifically includes: firstly, finding out an observation point q with a common-view relation between a current frame and a previous frame according to a feature matching point, and converting the point into a pixel coordinate of the current frame through a transformation matrix T; and then, constructing a reprojection error function, and continuously optimizing the transformation matrix T by an iterative least square method to further solve the pose of the current frame.
5. The RGBD and IMU hybrid tracking and registering method based on sliding window optimization according to claim 1, wherein the IMU performs a pre-integration operation on the collected measurements, and the p-v-q (position, velocity, quaternion) state at time j is obtained from the IMU measurements combined with the p-v-q state at time i, the calculation formulas being as follows:
p_{b_j}^{w} = p_{b_i}^{w} + v_{b_i}^{w}\Delta t - \frac{1}{2} g^{w}\Delta t^{2} + R_{b_i}^{w}\alpha_{b_i b_j}
v_{b_j}^{w} = v_{b_i}^{w} - g^{w}\Delta t + R_{b_i}^{w}\beta_{b_i b_j}
q_{b_j}^{w} = q_{b_i}^{w} \otimes \gamma_{b_i b_j}
wherein p_{b_j}^{w}, p_{b_i}^{w} are the positions of the IMU in the world coordinate system at times j and i; v_{b_j}^{w}, v_{b_i}^{w} are the velocities of the IMU in the world coordinate system at times j and i; q_{b_j}^{w}, q_{b_i}^{w} are the rotations of the IMU in the world coordinate system at times j and i; a_t is the IMU acceleration at time t; \omega_t is the IMU angular velocity at time t. The integral parts of the above three equations, representing the integration increments from time i to time j, are denoted \alpha_{b_i b_j}, \beta_{b_i b_j}, \gamma_{b_i b_j}, respectively; these three quantities are nonlinearly optimized together with the visual measurements as inter-frame constraint information at the back end.
6. The RGBD and IMU hybrid tracking and registration method based on sliding window optimization according to claim 5, wherein the specific steps of the multi-sensor joint initialization are as follows:
step 1: estimating the extrinsic rotation q_{bc} between the IMU and the RGBD camera using the rotation constraint
q_{b_{k+1}}^{b_k} \otimes q_{bc} = q_{bc} \otimes q_{c_{k+1}}^{c_k}
wherein q_{b_{k+1}}^{b_k} is the rotation quaternion of the IMU itself from time k to time k+1, and q_{c_{k+1}}^{c_k} is the rotation quaternion of the RGBD camera from time k to time k+1;
step 2: utilizing the rotation constraint, constructing an error function from the visual observations and the gyroscope measurements, performing least-squares iteration to solve the gyroscope bias b_g, and recalculating the IMU pre-integration under the current bias;
Step 3: aligning the IMU trajectory with the RGBD camera trajectory through the absolute-scale translation integrated by the IMU between time k and time k+1, the relative-scale translation of the RGBD camera, and the rotation extrinsic q_{bc} and translation extrinsic p_{bc} between the IMU and the RGBD camera, to obtain the absolute scale s;
step 4: constructing a minimized IMU pre-integration error through the translation constraint, and solving the initial velocity v_k^{b_k} of the system and the initial gravity g^{c_0}. Specifically, the translation constraint is calculated as:
\bar{p}_{b_k}^{c_0} = \bar{p}_{c_k}^{c_0} - R_{b_k}^{c_0} p_{bc}
wherein \bar{p}_{b_k}^{c_0} is the relative-scale displacement of the IMU body frame at time k with respect to the camera frame at time 0, \bar{p}_{c_k}^{c_0} is the relative-scale displacement between the RGBD camera frames at time k and time 0, and R_{b_k}^{c_0} is the rotation matrix of the IMU body frame at time k relative to the RGBD camera reference frame.
The specific IMU pre-integration error between time k and time k+1 is calculated as:
\delta\alpha_{b_{k+1}}^{b_k} = \hat{\alpha}_{b_{k+1}}^{b_k} - \alpha_{b_{k+1}}^{b_k}, \qquad \delta\beta_{b_{k+1}}^{b_k} = \hat{\beta}_{b_{k+1}}^{b_k} - \beta_{b_{k+1}}^{b_k}
wherein \delta\alpha_{b_{k+1}}^{b_k} is the displacement-variation error of the IMU between time k and time k+1; \hat{\alpha}_{b_{k+1}}^{b_k} and \alpha_{b_{k+1}}^{b_k} are the measured and predicted values of the IMU displacement variation between time k and time k+1; \hat{\beta}_{b_{k+1}}^{b_k} and \beta_{b_{k+1}}^{b_k} are the measured and predicted values of the IMU velocity variation between time k and time k+1. The remaining symbols have the meanings given above.
Step 5: re-parameterizing the gravity obtained in step 4 using the known gravity magnitude \|g\| as constraint information, and substituting it into the IMU pre-integration error formula for calculation, so as to refine the initialized gravity vector; the parameterized expression is:
g = \|g\| \hat{g} + w_1 b_1 + w_2 b_2
wherein \|g\| is the known gravity magnitude, \hat{g} is the unit vector of the gravity direction, b_1 and b_2 are two orthogonal basis vectors spanning its tangent plane, and w_1, w_2 are the variables to be optimized.
7. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 6, wherein the sliding-window-based method is adopted to further optimize the initial pose information transmitted from the visual measurement front end, a target error function is constructed through the prior information constraint, the visual constraint, and the IMU constraint, and the pose and bias information of all frames is continuously optimized through a sliding-window-based graph optimization algorithm, specifically comprising:
step 1: determining an optimized state vector, wherein the state vector needing to be optimized comprises the following components: the states of all cameras in the sliding window, the inverse depths of m +1 observation points;
χ=[x0,x1,…xn,λ0,λ1,…λm]
wherein χ represents all the state quantities in the sliding window, n is the number of frames in the sliding window, m represents the total number of feature points in the sliding window, and λ_k represents the inverse depth information of each observation point; x_k represents the pose information of each frame, including the displacement p_{b_k}^{w}, velocity v_{b_k}^{w}, and rotation q_{b_k}^{w} of the IMU in the world coordinate system, the accelerometer bias b_a, and the gyroscope bias b_g.
Step 2: constructing a target error function based on least squares according to the visual error and the inertial measurement error to continuously optimize the state vector χ in the sliding window:
\min_{\chi} \left\{ \|e_{prior}\|^2 + \sum_{k} \| r_b(\hat{z}_{b_{k+1}}^{b_k}, \chi) \|_{P_{b_{k+1}}^{b_k}}^2 + \sum_{(l,j)} \| r_c(\hat{z}_l^{c_j}, \chi) \|_{P_l^{c_j}}^2 \right\}
wherein e_{prior} is the prior information from sliding-window marginalization, r_b is the IMU pre-integration error function between adjacent frames, \hat{z}_{b_{k+1}}^{b_k} is the IMU pre-integration observation between time k and time k+1, P_{b_{k+1}}^{b_k} is the covariance of the IMU pre-integration noise term, r_c is the visual reprojection error between adjacent frames, \hat{z}_l^{c_j} is the observation of observation point l by the RGBD camera at time j, and P_l^{c_j} is the covariance of the RGBD observations;
step 3: expanding and deriving a least square target function in Step2, and calculating a Jacobian matrix J of each error term relative to the state vector χ;
step 4: and (3) performing iterative optimization on the state vector χ through the solved Jacobian matrix J by a Gauss-Newton method with a damping factor to accelerate the iterative process of the optimization.
Step 5: updating the state vector, stopping when the target error function falls below a certain threshold or the maximum number of iterations N is reached, and otherwise jumping to Step 3.
8. The RGBD and IMU hybrid tracking registration method based on sliding window optimization of claim 7, wherein in Step 2, the marginalized prior information is obtained by removing old variables through the sliding window and re-establishing the constraint information between the remaining variables; the variable removal policy of the sliding window is:
(1) after a new frame is added to the sliding window, a landmark point l_1 that can be observed by all frames in the current sliding window is searched for; it is triangulated and optimized through these frames, and after the optimization is completed, the observation point is removed from the sliding window while constraints are established among all the frames;
(2) for an observation point l_2 that is not visible in the new frame, the observation point is removed, and constraints are established between the old frames that can observe it;
(3) after the observation points are removed, if some frames in the sliding window have no observable points, those frames are directly removed.
Further, after removing the variables, the influence of the removed variables on the other state vectors in the sliding window, namely the marginalized prior information, is calculated by means of the Schur complement.
9. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 8, wherein performing loop detection by the DBoW visual bag-of-words algorithm specifically comprises: firstly, computing ORB descriptors for the new and old corner points of the current frame; then, calculating the similarity score between the current frame and the bag of words and comparing it with all frames in the key frame database for closed-loop consistency detection; and finally, if a closed loop is found, performing loop optimization on the state vector χ through the SLAM algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210002435.4A CN114529576A (en) | 2022-01-04 | 2022-01-04 | RGBD and IMU hybrid tracking registration method based on sliding window optimization |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114529576A true CN114529576A (en) | 2022-05-24 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115371665A (en) * | 2022-09-13 | 2022-11-22 | 哈尔滨工业大学 | Mobile robot positioning method based on depth camera and inertia fusion |
CN115451958A (en) * | 2022-11-10 | 2022-12-09 | 中国人民解放军国防科技大学 | Camera absolute attitude optimization method based on relative rotation angle |
CN115829833A (en) * | 2022-08-02 | 2023-03-21 | 爱芯元智半导体(上海)有限公司 | Image generation method and mobile device |
CN117421384A (en) * | 2023-10-24 | 2024-01-19 | 哈尔滨理工大学 | Visual inertia SLAM system sliding window optimization method based on common view projection matching |
CN117765084A (en) * | 2024-02-21 | 2024-03-26 | 电子科技大学 | Visual positioning method for iterative solution based on dynamic branch prediction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |