CN113091738A - Mobile robot map construction method based on visual inertial navigation fusion and related equipment - Google Patents

Mobile robot map construction method based on visual inertial navigation fusion and related equipment

Info

Publication number
CN113091738A
Authority
CN
China
Prior art keywords
state
imu
robot
map
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110382654.5A
Other languages
Chinese (zh)
Other versions
CN113091738B (en)
Inventor
陈孟元
郭俊阳
陈何宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Polytechnic University
Original Assignee
Anhui Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Polytechnic University filed Critical Anhui Polytechnic University
Priority to CN202110382654.5A priority Critical patent/CN113091738B/en
Publication of CN113091738A publication Critical patent/CN113091738A/en
Application granted granted Critical
Publication of CN113091738B publication Critical patent/CN113091738B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/15 - Correlation function computation including computation of convolution operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a mobile robot map construction method based on visual inertial navigation fusion and related equipment, wherein the method comprises the following steps: S1, determining the fusion achievement degree according to the fusion state of the visual attitude and the inertial attitude, and completing visual initialization, visual-inertial calibration alignment and inertial initialization; S2, constructing a situation evaluation function to analyze the attitude trend in real time; S3, determining the local expected value of the fusion state and outputting the state; S4, dividing the key frames into strong co-view key frames and weak co-view key frames according to the sparsity of the commonly observed map points; and S5, sending the strong co-view key frames into a sliding window according to the grading result, optimizing them by exploiting the dense continuity of inter-frame poses, and drawing and correcting the self-perception map by means of the co-observed map point relations. The invention ensures accurate fusion between camera and IMU data, so that the system reads the pose information of current map points efficiently and with small error.

Description

Mobile robot map construction method based on visual inertial navigation fusion and related equipment
Technical Field
The invention belongs to the technical field of Simultaneous Localization and Mapping (SLAM), and relates to a mobile robot map construction method based on visual inertial navigation fusion and related equipment.
Background
The core problem of Simultaneous Localization and Mapping (SLAM) is that a robot placed in an unknown environment must explore that environment in order to understand it (construct a map), while simultaneously using the map to track its own position within the environment (localization). Traditional solutions to the SLAM problem are mainly based on probabilistic methods, among which Kalman filters, particle filters, expectation-maximization algorithms and the like are the basic approaches to the robot SLAM problem. Traditional SLAM systems still rely on laser ranging and sonar ranging to acquire information, but the data obtained from such sensors are often inaccurate and their use has certain limitations, so in more and more cases laser ranging sensors are being replaced by vision sensors.
However, a traditional visual SLAM algorithm relying on a single sensor cannot meet research requirements: in complex, fast-motion environments it cannot capture the motion information of the robot in time and feed it back to the master controller for motion compensation. Researchers have therefore begun to fuse two different sensors so that their information compensates for each other's shortcomings, expecting that during rapid motion an Inertial Measurement Unit (IMU) can take over from the camera in capturing the physical information of the camera's motion and thereby improve the positioning accuracy of visual SLAM, while during slow motion the camera can compensate for the accumulated error drift caused by prolonged use of the IMU.
However, when a map is constructed by existing fusion methods, the motion attitude trend is not analyzed in real time and the local real-time optimal state of visual-inertial fusion cannot be determined, so camera and IMU data cannot be fused accurately, and the pose information of current map points is read inefficiently and with large error. In addition, these methods suffer from a large back-end computational load, long running time and poor real-time performance in error processing.
Disclosure of Invention
The invention aims to provide a mobile robot map construction method based on visual inertial navigation fusion, so as to solve the technical problems in the prior art that camera and IMU data cannot be fused accurately, causing low efficiency and large error when reading the pose information of current map points, and that back-end error processing has poor real-time performance.
The mobile robot map construction method based on the visual inertial navigation fusion comprises the following steps:
step S1, determining the fusion achievement degree according to the fusion state of the visual attitude and the inertial attitude, and completing the visual initialization, the visual inertial calibration alignment and the inertial initialization;
step S2, constructing a situation evaluation function to analyze the attitude trend in real time;
step S3, determining the local expected value of the fusion state and outputting the state: judging whether the robot motion state observation data have high certainty by comparing the evaluation function value with a threshold, and additionally introducing a time threshold so that the state is evaluated under a dual constraint, which determines whether to terminate the visual-inertial joint initialization and output the current state;
step S4, dividing the key frames into strong co-view key frames and weak co-view key frames according to the sparsity of the commonly observed map points;
and step S5, sending the strong co-view key frames into a sliding window according to the grading result, optimizing them by exploiting the dense continuity of inter-frame poses, and drawing and correcting the self-perception map by means of the co-observed map point relations.
Preferably, in step S1, the IMU measurement value is composed of three components, namely a true value, a random-walk bias and white noise, and in addition the accelerometer is still affected by the gravitational acceleration; the IMU motion state variables are discretized by the median method to facilitate state fusion between the camera and the IMU, and the discretized motion state variables are calculated by the formula given in the original (formula image not reproduced here), in which:
p_pre, v_pre and q_pre respectively denote the initial values of the IMU position, velocity and attitude quaternion; two terms denote the accelerations of the IMU in the world coordinate system at times i and i+1; two further terms denote the angular velocities of the IMU in the world coordinate system at times i and i+1; one operator denotes the inner product; two terms denote the variations of the IMU acceleration bias and angular-velocity bias over the period from time i to time i+1; one term is the rotation matrix from the IMU coordinate system to the world coordinate system; and Δt is the interval between adjacent moments.
From this motion state variable calculation formula, the calculation result at time i+1 is obtained as z_{i+1}; z_1 is an initial value determined directly from the information collected by the camera during the first calculation, and each subsequent calculation uses the IMU information of the previous moment to compute the observation data of the current moment.
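For reference, a minimal sketch of a standard midpoint (median-method) discretization consistent with the symbols defined above is shown below. Since the patent's own formula image is not reproduced, this formulation, including the world-frame gravity term g^w and the small-angle quaternion update, is an assumption rather than the patent's exact equation.

```latex
% Generic midpoint (median-method) propagation over one interval \Delta t;
% hatted quantities are the world-frame IMU measurements at times i and i+1.
\begin{aligned}
\bar{a} &= \tfrac{1}{2}\bigl(\hat{a}^{w}_{i} + \hat{a}^{w}_{i+1}\bigr) - g^{w}, \qquad
\bar{\omega} = \tfrac{1}{2}\bigl(\hat{\omega}^{w}_{i} + \hat{\omega}^{w}_{i+1}\bigr),\\
p_{i+1} &= p_{i} + v_{i}\,\Delta t + \tfrac{1}{2}\,\bar{a}\,\Delta t^{2}, \qquad
v_{i+1} = v_{i} + \bar{a}\,\Delta t,\\
q_{i+1} &= q_{i} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\,\bar{\omega}\,\Delta t \end{bmatrix},
\qquad p_{1} = p_{pre},\; v_{1} = v_{pre},\; q_{1} = q_{pre}.
\end{aligned}
```

Here ⊗ denotes the quaternion product; taking it to correspond to the operator described above as the inner product is likewise an assumption.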
Preferably, in step S1, Z_F denotes the robot motion state observation data, with Z_F = {z_1, z_2, ..., z_n}, where z_i is the specific observation datum at the i-th moment in Z_F; if the robot motion state observation data Z_F = {z_1, z_2, ..., z_n} obey the probability distribution f(Z_F; Y), where Y is the robot state variable objective function to be solved, the resulting state estimation function is as given by the formula in the original (formula image not reproduced here).
A robot state variable objective function Y is constructed to measure the robot observation state data and to estimate the confidence of the trend of the observed pose as it changes over time; the confidence of the robot motion state is expressed by the formula in the original (formula image not reproduced here), where Ω(Z_F; Y) is the confidence of the robot motion state observation data Z_F, E(·) computes the expectation of the robot motion state observation data Z_F, S(·) computes the standard deviation of Z_F, and Var(·) computes the variance of Z_F; the fusion achievement degree is determined from the confidence of the robot motion state.
Preferably, in step S2, the visual-inertial observation sample situation evaluation function is defined by the formula given in the original (formula image not reproduced here), where f_λ(Z_F) is the situation evaluation function of the robot motion state observation data, Ω(Z_F; Y) is the confidence derived from the robot motion state observation data Z_F concerning position, velocity, rotation quaternion (scale, gravity vector), bias and the like, the remaining symbol denotes the observation data Z_F of a certain key frame F in the robot motion state, F is a certain image frame during operation, and Δτ is the image information matrix associated with that image frame.
Preferably, in step S2, the performance metric of the situation evaluation function of the robot motion state observation data Z_F is constructed as given by the formula in the original (formula image not reproduced here), where f_det(Z_F) is the performance metric of the observation data of a certain image frame F, captured while the mobile robot is in motion, containing both the inertial measurement unit state information and the visual state information, Ω(Z_F; Y) is the confidence of the observation data of the state of a certain image frame F in the robot motion state, det(·) denotes the determinant of a matrix, the remaining symbol denotes the covariance matrix corresponding to the observation data Z_F of a certain image frame F in the robot motion state, Δτ is the image information matrix associated with the image frame, and log(·) is the logarithmic function.
Preferably, in step S3, when the situation evaluation function value is smaller than the threshold S_th, the measured initialization state is deemed to have high confidence, the fusion state at that moment is considered to have high certainty, the initialization is terminated, the corresponding pre-integration and initialization data are fused, and the next step of real-time visual-inertial optimization is entered; meanwhile, a time threshold is additionally set in the initialization process, and if the situation evaluation function does not reach high confidence within the time threshold, the initialization is terminated directly when the time threshold is reached and the current initialization result is sent to the next stage for real-time optimization.
Preferably, in step S4, a key-frame local optimization probabilistic graph model based on co-view constraints is constructed, the key frames are graded according to the strength of co-view, and the inter-frame observation information and information such as the map points on the key frames are used as constraint terms to optimize the strong co-view key frames; the co-view constraint probabilistic graph model can be regarded as an undirected weighted graph, and its joint probability distribution can be decomposed into a product of potential energy functions over cliques Q, as given by the formula in the original (formula image not reproduced here), where F_k = {f_k1, f_k2, ..., f_kn} is the set of key frames, P(F_k) is the joint probability, ψ is a regularization factor through which the validity of the joint probability is satisfied, and the remaining symbol denotes the sparse weight function of the key frames containing the commonly observed map points.
Preferably, in step S5, during sliding-window optimization only the strong co-view key frames are optimized, while the pose information of the weak co-view key frames is retained as a constraint term; in the method, an objective function is constructed from the visual pose estimation error term and the IMU pre-integration error term, and the reprojection error and the IMU error are minimized; the objective function is given by the formula in the original (formula image not reproduced here), where E_reproj is the reprojection error between two adjacent key frames, E_imu is the error of the IMU over the time period Δt and consists of the bias measurements under the combined influence of vision and the IMU, e_observe is the camera observation error, e_p, e_v and e_q are the errors of the IMU position, velocity and attitude quaternion during operation, ρ(·) is the robust kernel function, e_b is the set of random biases of the IMU at a certain point in time, and the three information-matrix symbols in the formula are, respectively, the information matrix of the camera, of the IMU, and of the combined camera-IMU effect with respect to the operating state Z_F = {z_1, z_2, ..., z_n}.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the mobile robot map construction method based on visual inertial navigation fusion as described above.
The invention also provides computer equipment which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the mobile robot map construction method based on the visual inertial navigation fusion.
The invention has the following technical effects: 1. The method uses a dual-threshold visual-inertial initialization method dominated by the Fisher information matrix: it constructs an observation sample situation evaluation function for the state observations, evaluates the state error information matrix, and uses the judged local real-time optimal state of visual-inertial fusion as the termination condition, thereby effectively ensuring accurate fusion between camera and IMU data, so that the system reads the pose information of current map points efficiently and with small error. To prevent the evaluation from taking too long, a time threshold is added so that the change trend of the state observations is evaluated under a dual constraint, which shortens the initialization time and improves system robustness.
2. Key frames are graded by the sparsity of the commonly observed map points, the strong co-view key frames are optimized, and the map point information of the weak co-view key frames enters the sliding window as a constraint term, constraining the pose observation state on the strong co-view key frames together with the reprojection error and the IMU error. Therefore, on the premise of ensuring reliable error processing, the amount of computation is reduced, the time consumed is greatly shortened, and the problem of poor real-time performance in back-end error processing is solved.
Drawings
Fig. 1 is a schematic flow chart of a method for constructing a fusion map of a visual inertial navigation mobile robot based on dual threshold influence.
FIG. 2 is a schematic flow chart illustrating IMU initialization in the initialization phase of constructing the visual inertial navigation system.
FIG. 3 is a flow chart illustrating IMU initialization status evaluation.
FIG. 4 shows the variation of the deviation in the visual-inertial joint initialization phase according to the present invention.
FIG. 5 is a diagram illustrating the effect of the co-view classification method applied in the sliding optimization stage according to the present invention.
Fig. 6 is a diagram of the final positioning effect of the present invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be given in order to provide those skilled in the art with a more complete, accurate and thorough understanding of the inventive concept and technical solutions of the present invention.
The camera sensor and the inertial measurement unit read the environmental information of the outside, free environment, and the mutual exchange of the two kinds of information is realized. The scheme of the invention is obtained by using this principle to improve the existing SLAM technology.
The first embodiment is as follows:
the invention provides a mobile robot map construction method based on visual inertial navigation fusion.
The simulated vision system comprises two visual perception calibrators, a signal conditioning circuit and an analog-to-digital (AD) converter. The two vision calibrators are symmetrically arranged, and the main control chip is preferably an STM32 main control chip.
The simulated inertial system comprises a gyroscope, an accelerometer, a signal conditioning circuit and an analog-to-digital (AD) converter. The inertial measurement unit uses a highly integrated MPU6050 chip, which contains a three-axis gyroscope, a three-axis accelerometer and a three-axis magnetometer.
As shown in figs. 1-3, the method specifically includes:
Step S1, determining the fusion achievement degree according to the fusion state of the visual attitude and the inertial attitude, and completing the visual initialization, the visual-inertial calibration alignment and the inertial initialization.
In step S1, an IMU measurement value (from the gyroscope and the accelerometer) is first defined to be composed of three components, namely a true value, a random-walk bias and white noise; in addition, the accelerometer is still affected by the gravitational acceleration. In order to facilitate state fusion between the camera and the IMU, the IMU motion state variables are discretized by the median method; the state variables are expressed by the formula given in the original (formula image not reproduced here), in which:
p_pre, v_pre and q_pre respectively denote the initial values of the IMU position, velocity and attitude quaternion; two terms denote the accelerations of the IMU in the world coordinate system at times i and i+1; two further terms denote the angular velocities of the IMU in the world coordinate system at times i and i+1; one operator denotes the inner product; two terms denote the variations of the IMU acceleration bias and angular-velocity bias over the period from time i to time i+1; one term is the rotation matrix from the IMU coordinate system to the world coordinate system; and Δt is the interval between adjacent moments.
From this motion state variable calculation formula, the calculation result at time i+1 is obtained as z_{i+1}; z_1 is an initial value determined directly from the information collected by the camera during the first calculation, and each subsequent calculation uses the IMU information of the previous moment to compute the observation data of the current moment.
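To make the discretization concrete, a small numpy sketch of midpoint propagation of the IMU state is given below. It follows the generic formulation sketched earlier; the gravity handling and the [w, x, y, z] quaternion convention are assumptions, since the patent's own formula image is not reproduced.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def midpoint_propagate(p, v, q, a_w_i, a_w_j, w_w_i, w_w_j, dt,
                       gravity=np.array([0.0, 0.0, 9.81])):
    """Propagate IMU position, velocity and attitude over one interval dt
    using midpoint averages of the world-frame measurements at times i and
    i+1. A generic sketch, not the patent's exact formula."""
    a_mid = 0.5 * (a_w_i + a_w_j) - gravity        # gravity-compensated mean acceleration
    w_mid = 0.5 * (w_w_i + w_w_j)                  # mean angular velocity
    p_next = p + v * dt + 0.5 * a_mid * dt**2      # position update
    v_next = v + a_mid * dt                        # velocity update
    dq = np.concatenate(([1.0], 0.5 * w_mid * dt)) # small-angle incremental rotation
    q_next = quat_mul(q, dq)
    return p_next, v_next, q_next / np.linalg.norm(q_next)
```

Starting from p_pre, v_pre and q_pre, repeated calls of such a routine yield the observation of each new moment from the IMU data of the previous one.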
Z_F denotes the robot motion state observation data, with Z_F = {z_1, z_2, ..., z_n}, where z_i is the observation datum at the i-th moment in Z_F. Here z_{i+1} is the calculation result at time i+1 obtained from the state variable calculation formula in step S1, and z_1 is the initial value determined directly from the information collected by the camera during the first calculation, each subsequent calculation using the IMU information of the previous moment to obtain the observation data of the current moment. If the robot motion state observation data Z_F = {z_1, z_2, ..., z_n} obey the probability distribution f(Z_F; Y), where Y is the robot state variable objective function to be solved obtained in step S1, the resulting state estimation function is as given by the formula in the original (formula image not reproduced here).
A robot state variable objective function Y is constructed to measure the robot observation state data and to estimate the confidence of the trend of the observed pose (position, velocity, rotation quaternion and bias) as it changes over time; the confidence of the robot motion state is expressed by the formula in the original (formula image not reproduced here), where Ω(Z_F; Y) is the confidence of the robot motion state observation data Z_F, E(·) computes the expectation of the robot motion state observation data Z_F, S(·) computes the standard deviation of Z_F, and Var(·) computes the variance of Z_F; the fusion achievement degree is determined from the confidence of the robot motion state.
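The exact form of the confidence Ω(Z_F; Y) is given only as a formula image. As an illustration, the sketch below computes a confidence from the expectation and dispersion of a window of observations; the specific ratio used is an assumption, not the patent's formula.

```python
import numpy as np

def motion_state_confidence(z_window):
    """Confidence of a window of motion-state observations Z_F. The patent
    defines Omega(Z_F; Y) through the expectation E(.), standard deviation
    S(.) and variance Var(.) of Z_F; the signal-to-dispersion ratio below is
    only an illustrative combination of those quantities."""
    z = np.asarray(z_window, dtype=float)            # rows: moments, columns: state components
    expectation = np.abs(z.mean(axis=0))             # E(Z_F), elementwise magnitude
    dispersion = z.std(axis=0) + 1e-9                # S(Z_F), guarded against zero
    return float(np.mean(expectation / dispersion))  # higher value = steadier observed trend
```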
Step S1 completes the visual initialization, the visual-inertial calibration alignment and the inertial initialization; it specifically includes:
S1.1 Visual initialization: the camera reads the visual information and calculates the reprojection error;
S1.2 Visual-inertial alignment and registration: synchronizing the acquisition frequencies of the camera and the IMU (inertial measurement unit), ensuring that the physical pose information of an image frame can be acquired at the same time as the image frame itself;
S1.3 Solving the gyroscope bias: solving the gyroscope bias by minimizing the difference between the pre-integrated value and the visual rotation;
S1.4 Initializing the scale, the velocity and the gravitational acceleration: constructing and solving a linear equation in the scale, the velocity and the gravitational acceleration;
S1.5 Refining the gravitational acceleration: continuously iterating and fine-tuning the gravity vector on the gravity-vector tangent plane;
S1.6 Setting the world coordinate system: setting the world coordinate system according to the gravity vector obtained by initialization.
Step S2, analyzing the attitude trend in real time according to the fusion situation.
The visual-inertial observation sample situation evaluation function is defined by the formula given in the original (formula image not reproduced here), where f_λ(Z_F) is the situation evaluation function of the robot motion state observation data, Ω(Z_F; Y) is the confidence derived from the robot motion state observation data Z_F concerning position, velocity, rotation quaternion (scale, gravity vector), bias and the like, the remaining symbol denotes the observation data Z_F of a certain key frame F in the robot motion state, F is a certain image frame during operation, and Δτ is the image information matrix associated with that image frame.
In order to minimize the local radius of the confidence of the robot motion state observation data information matrix, the determinant of the covariance of the information matrix needs to be minimized; since minimizing this local confidence radius is equivalent to maximizing the log determinant of the information matrix, the performance metric of the robot motion state observation data situation evaluation function is obtained as the formula given in the original (formula image not reproduced here), where f_det(Z_F) is the observation data situation performance metric of a certain key frame F, captured while the mobile robot is in motion, containing both the inertial measurement unit state information and the visual state information, det(·) denotes the determinant of the motion state observation data information matrix, the remaining symbol denotes the covariance matrix corresponding to the observation data Z_F of a certain key frame F in the robot motion state, Δτ is the image information matrix associated with the image frame, and log(·) is the logarithmic function.
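As the passage notes, minimizing the local confidence radius is equivalent to maximizing the log determinant of the information matrix. The exact expression for f_det is given only as a formula image; the sketch below evaluates a plain log-determinant criterion over the empirical covariance of an observation window, and the weighting by Ω(Z_F; Y) and the image information matrix Δτ is deliberately left out because it is not reproduced in the text.

```python
import numpy as np

def log_det_information(z_window, eps=1e-9):
    """Log-determinant criterion over a window of state observations Z_F.
    A larger value indicates a more informative (better conditioned) window.
    The patent's f_det additionally involves Omega(Z_F; Y) and the image
    information matrix; that weighting is not reproduced here."""
    z = np.asarray(z_window, dtype=float)
    cov = np.cov(z, rowvar=False) + eps * np.eye(z.shape[1])  # empirical covariance of Z_F
    info = np.linalg.inv(cov)                                 # information matrix
    sign, logdet = np.linalg.slogdet(info)                    # numerically stable log det
    return logdet if sign > 0 else float("-inf")
```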
Step S3, determining the local expected value of the fusion state and outputting the state.
When the situation evaluation function value is smaller than the threshold S_th, the measured initialization state is deemed to have high confidence; the state at that moment is considered to have high certainty, the initialization is terminated, the corresponding pre-integration and initialization data are fused, and the next step of real-time visual-inertial optimization is entered. To avoid the accumulation of IMU errors caused by an overly long computation of the situation evaluation function, a 15 s time threshold is additionally set in the initialization process; if high confidence is not reached within 15 s, the initialization is terminated directly at 15 s and the current initialization result is sent to the next stage for real-time optimization.
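The loop below summarizes this dual-constraint termination; the sub-steps S3.1-S3.5 enumerated next spell out the same flow. The two callables stand in for the modules described in the text and are placeholders, not APIs defined by the patent; following the description, an evaluation value below S_th is treated as high confidence, and the 15 s time threshold bounds the whole evaluation.

```python
import time

def joint_initialization_dual_constraint(step_joint_init, evaluate_situation,
                                         s_th, time_limit_s=15.0):
    """Dual-constraint termination of visual-inertial joint initialization.
    step_joint_init() advances the joint initialization and returns the
    current state estimate; evaluate_situation(state) returns the situation
    evaluation value for that state."""
    start = time.monotonic()
    state = None
    while True:
        state = step_joint_init()                     # S3.1: continue joint initialization
        if time.monotonic() - start >= time_limit_s:  # S3.2: time threshold reached
            break
        value = evaluate_situation(state)             # S3.3: evaluate the current state
        if value < s_th:                              # S3.4: high-confidence condition met
            break
    return state                                      # S3.5: hand over to real-time optimization
```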
Step S3 specifically includes:
S3.1, starting the visual-inertial joint initialization;
S3.2, judging whether the initialization time has reached the set time threshold: if not, executing step S3.3; if yes, executing step S3.5;
S3.3, constructing the worst-case evaluation function to calculate the state expected value of the state observation in real time;
S3.4, judging whether the state expected value Ω(z) reaches the expected state threshold S_th: if not, executing step S3.1; if yes, executing step S3.5;
S3.5, finishing the visual-inertial joint initialization.
Step S4, dividing the key frames into strong co-view key frames and weak co-view key frames according to the sparsity of the commonly observed map points.
A key-frame local optimization probabilistic graph model based on co-view constraints is constructed, the key frames are graded according to the strength of co-view, the strong co-view key frames are retained and the weak co-view key frames are ignored, and the inter-frame observation information and information such as the map points on the key frames are used as constraint terms to optimize the strong co-view key frames. The co-view constraint probabilistic graph model can then be regarded as an undirected weighted graph, and its joint probability distribution can be decomposed into a product of potential energy functions (factors) over cliques Q, as given by the formula in the original (formula image not reproduced here), where F_k = {f_k1, f_k2, ..., f_kn} is the set of key frames, P(F_k) is the joint probability, ψ is a regularization factor through which the validity of the joint probability is satisfied, and the remaining symbol denotes the sparse weight function of the key frames containing the commonly observed map points.
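In practice, the grading amounts to counting the map points each key frame shares with the current window and thresholding on that count. The sketch below shows this; the threshold value and the data layout are illustrative assumptions, not values specified in the patent.

```python
def grade_keyframes(keyframe_observations, window_map_points, strong_threshold=20):
    """Split key frames into strong and weak co-view sets by the number of
    map points shared with the sliding window. keyframe_observations maps a
    key-frame id to the set of map-point ids it observes; strong_threshold
    is an illustrative value."""
    strong, weak = [], []
    for kf_id, observed in keyframe_observations.items():
        shared = len(observed & window_map_points)      # commonly observed map points
        (strong if shared >= strong_threshold else weak).append(kf_id)
    return strong, weak
```

Strong co-view key frames then enter the sliding window for optimization, while weak co-view key frames contribute only their pose information as a constraint.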
Step S5, sending the strong co-view key frames into a sliding window according to the grading result and optimizing them by exploiting the dense continuity of inter-frame poses.
During sliding-window optimization, only the strong co-view key frames are optimized, while the pose information of the weak co-view key frames is retained as a constraint term. In the method, an objective function is constructed from the visual pose estimation error term and the IMU pre-integration error term, and the reprojection error and the IMU error are minimized; the objective function is given by the formula in the original (formula image not reproduced here), where E_reproj is the reprojection error between two adjacent key frames, E_imu is the error of the IMU over the time period Δt and consists of the bias measurements under the combined influence of vision and the IMU, e_observe is the camera observation error, e_p, e_v and e_q are the errors of the IMU position, velocity and attitude quaternion during operation, ρ(·) is the robust kernel function, e_b is the set of random biases of the IMU at a certain point in time, and the three information-matrix symbols in the formula are, respectively, the information matrix of the camera, of the IMU, and of the combined camera-IMU effect with respect to the operating state Z_F = {z_1, z_2, ..., z_n}.
The IMU measurement error comprises the conventional pose errors e_p, e_v and e_q and the bias error e_b, with e_b = Δ(b_g, b_a)_(i,j); the formulas for the individual error components are given in the original (formula images not reproduced here). In those formulas, one rotation matrix transfers the IMU state from its own coordinate system (Body) to the world coordinate system (World) at time i; the position, velocity and rotation-quaternion Jacobian matrices are obtained from the gyroscope residual in the IMU pre-integration stage; a further velocity Jacobian matrix is obtained from the acceleration residual in the IMU pre-integration stage; Δt_ij is the time period from time i to time j; one term represents the angular-velocity bias at time j; further terms are the position change value and the velocity change value of the IMU from the world coordinate system at time i to its own coordinate system at time j; and another rotation matrix transfers the IMU state from the world coordinate system (World) to its own coordinate system (Body) at time j. ΔR_(i,j) is the difference in rotation quaternion between time j and time i, Δv_(i,j) is the velocity change between time j and time i, Δp_(i,j) is the position change value between time j and time i, and Δ(b_g, b_a)_(i,j) is the variation of the bias between time j and time i.
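Taken together, the cost evaluated over the sliding window is a sum of robust-kernel-weighted reprojection residuals and information-weighted IMU residuals built from e_p, e_v, e_q and e_b. The sketch below evaluates such a cost; the Huber kernel and the residual layout are common choices and are assumptions here, not the patent's exact formulation.

```python
import numpy as np

def huber(cost, delta=1.0):
    """Huber robust kernel rho(.) applied to a squared cost."""
    return cost if cost <= delta**2 else 2.0 * delta * np.sqrt(cost) - delta**2

def sliding_window_cost(reproj_residuals, imu_residuals, info_cam, info_imu):
    """Evaluate E = E_reproj + E_imu over the strong co-view key frames.
    Each residual r contributes r^T * Info * r; reprojection terms pass
    through the robust kernel."""
    total = 0.0
    for r in reproj_residuals:                 # e_observe terms between adjacent key frames
        total += huber(float(r @ info_cam @ r))
    for r in imu_residuals:                    # stacked [e_p, e_v, e_q, e_b] residuals
        total += float(r @ info_imu @ r)
    return total
```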
Finally, information fusion is performed using the optimized strong co-view key frame storage layer, the pose information storage layer and the visual perception storage layer, and the self-perception map is drawn, or the previous self-perception map is corrected or pruned.
The above process of constructing the dual-threshold visual-inertial odometry is described below with reference to specific experiments.
Fig. 4 shows the variation of the deviation in the visual inertia joint initialization phase of the present invention. The initialization algorithm can realize the rapid convergence of the deviation, and constructs a situation evaluation function to evaluate the current deviation state in real time, so that the precision and the convergence are higher than those of other methods. The acceleration convergence process is generally similar to the gyroscope bias convergence process, and finally tends to be constant, and only individual estimated values have errors. The condition evaluation function is set to meet the requirement of convergence of the parameters to be estimated, so that the IMU can achieve the purpose of rapid convergence, the increase of IMU deviation is effectively inhibited to a certain extent, the initialization time is effectively saved, the error of the IMU measured value is reduced, and the optimization precision of the subsequent stage is improved.
Fig. 5 shows the effect of the co-view grading method adopted in the sliding-window optimization stage of the present invention on the public EuRoC dataset. For the MH_01_easy and MH_03_medium sequences, the maximum position drift after sliding-window optimization is about ±100 mm, and for a complex sequence such as MH_05_difficult the position error is within ±200 mm.
TABLE 1 (running-time comparison on the EuRoC datasets; table image not reproduced here)
The running-time comparison on the EuRoC test is shown in Table 1. In the test, the running-time following efficiency of the invention reaches 90% under different environments, and the invention can process the camera image information in time and transmit it to the main control computer for real-time motion control, thereby obtaining better following precision and efficiency.
TABLE 2 (trajectory accuracy comparison on the EuRoC datasets; table image not reproduced here)
The comparison of trajectory accuracy on the EuRoC datasets is shown in Table 2. From the data in the table, the trajectory RMSE differs across environments, and the total error is not higher than 0.14 m; on the V1_01_easy sequence the trajectory accuracy error reaches 0.027 m.
The final positioning effect diagram of the present invention is shown in fig. 6.
In complex unknown environments, the mobile robot visual-inertial fusion map construction method based on dual-threshold influence has an obvious real-time advantage: it improves positioning accuracy, reduces running time by exploiting the co-view grading relation, and enhances following efficiency.
Example two:
In accordance with a second embodiment of the present invention, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the following steps according to the first embodiment of the present invention:
and step S1, determining the fusion achievement degree according to the fusion state of the visual attitude and the inertial attitude, and finishing the visual initialization, the visual inertial calibration alignment and the inertial initialization.
And step S2, constructing a situation evaluation function to analyze the attitude trend in real time.
Step S3, determining the local expected value of the fusion state and outputting the state.
And step S4, dividing the key frames into strong co-view key frames and weak co-view key frames according to the sparsity of the commonly observed map points.
And step S5, sending the strong co-view key frames into a sliding window according to the grading result, optimizing them by exploiting the dense continuity of inter-frame poses, and drawing and correcting the self-perception map by means of the co-observed map point relations.
The storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an optical disk, and various other media capable of storing program code.
The above specific limitations regarding the implementation steps after the program in the computer-readable storage medium is executed can be referred to in the first embodiment, and will not be described in detail here.
Example three:
correspondingly to the embodiment of the present invention, a third embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and operable on the processor, where the processor implements the following steps when executing the program:
and step S1, determining the fusion achievement degree according to the fusion state of the visual attitude and the inertial attitude, and finishing the visual initialization, the visual inertial calibration alignment and the inertial initialization.
And step S2, constructing a situation evaluation function to analyze the attitude trend in real time.
Step S3, determining the local expected value of the fusion state and outputting the state.
And step S4, dividing the key frames into strong co-view key frames and weak co-view key frames according to the sparsity of the commonly observed map points.
And step S5, sending the strong co-view key frames into a sliding window according to the grading result, optimizing them by exploiting the dense continuity of inter-frame poses, and drawing and correcting the self-perception map by means of the co-observed map point relations.
The above specific limitations regarding the implementation steps of the computer device can be referred to as embodiment one, and will not be described in detail here.
It will be understood that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, in the description of the invention, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The invention is described above with reference to the accompanying drawings. It is obvious that the specific implementation of the invention is not limited to the manner described above; adopting various insubstantial modifications of the inventive concept and technical solution of the invention, or applying the inventive concept and solution directly to other applications without modification, falls within the protection scope of the invention.

Claims (10)

1. A mobile robot map construction method based on visual inertial navigation fusion is characterized by comprising the following steps:
step S1, determining the fusion achievement degree according to the fusion state of the visual attitude and the inertial attitude, and completing the visual initialization, the visual inertial calibration alignment and the inertial initialization;
step S2, constructing a situation evaluation function to analyze the attitude trend in real time;
step S3, determining the local expected value of the fusion state and outputting the state: judging whether the robot motion state observation data have high certainty by comparing the evaluation function value with a threshold, and additionally introducing a time threshold so that the state is evaluated under a dual constraint, which determines whether to terminate the visual-inertial joint initialization and output the current state;
step S4, dividing the key frames into strong co-view key frames and weak co-view key frames according to the sparsity of the commonly observed map points;
and step S5, sending the strong co-view key frames into a sliding window according to the grading result, optimizing them by exploiting the dense continuity of inter-frame poses, and drawing and correcting the self-perception map by means of the co-observed map point relations.
2. The method for constructing a map of a mobile robot based on visual inertial navigation fusion of claim 1, wherein in step S1, the IMU measurement value is composed of three components, namely a true value, a random-walk bias and white noise, and in addition the accelerometer is still affected by the gravitational acceleration; the IMU motion state variables are discretized by the median method to facilitate state fusion between the camera and the IMU, and the discretized motion state variables are calculated by the formula given in the original (formula image not reproduced here), in which p_pre, v_pre and q_pre respectively denote the initial values of the IMU position, velocity and attitude quaternion; two terms denote the accelerations of the IMU in the world coordinate system at times i and i+1; two further terms denote the angular velocities of the IMU in the world coordinate system at times i and i+1; one operator denotes the inner product; two terms denote the variations of the IMU acceleration bias and angular-velocity bias over the period from time i to time i+1; one term is the rotation matrix from the IMU coordinate system to the world coordinate system; and Δt is the interval between adjacent moments;
from this motion state variable calculation formula, the calculation result at time i+1 is obtained as z_{i+1}; z_1 is an initial value determined directly from the information collected by the camera during the first calculation, and each subsequent calculation uses the IMU information of the previous moment to compute the observation data of the current moment.
3. The method for constructing a map of a mobile robot based on visual inertial navigation fusion according to claim 2, wherein in step S1, Z_F denotes the robot motion state observation data, with Z_F = {z_1, z_2, ..., z_n}, where z_i is the specific observation datum at the i-th moment in Z_F; if the robot motion state observation data Z_F = {z_1, z_2, ..., z_n} obey the probability distribution f(Z_F; Y), where Y is the robot state variable objective function to be solved, the resulting state estimation function is as given by the formula in the original (formula image not reproduced here);
a robot state variable objective function Y is constructed to measure the robot observation state data and to estimate the confidence of the trend of the observed pose as it changes over time, the confidence of the robot motion state being expressed by the formula in the original (formula image not reproduced here), where Ω(Z_F; Y) is the confidence of the robot motion state observation data Z_F, E(·) computes the expectation of the robot motion state observation data Z_F, S(·) computes the standard deviation of Z_F, and Var(·) computes the variance of Z_F; the fusion achievement degree is determined from the confidence of the robot motion state.
4. The method for constructing a map of a mobile robot based on visual inertial navigation fusion according to claim 1, wherein in step S2, the visual-inertial observation sample situation evaluation function is defined by the formula given in the original (formula image not reproduced here), where f_λ(Z_F) is the situation evaluation function of the robot motion state observation data, Ω(Z_F; Y) is the confidence derived from the robot motion state observation data Z_F concerning position, velocity, rotation quaternion (scale, gravity vector), bias and the like, the remaining symbol denotes the observation data Z_F of a certain key frame F in the robot motion state, F is a certain image frame during operation, and Δτ is the image information matrix associated with that image frame.
5. The mobile robot map construction method based on visual inertial navigation fusion according to claim 4, wherein in step S2, the performance metric of the situation evaluation function of the robot motion state observation data Z_F is constructed as given by the formula in the original (formula image not reproduced here), where f_det(Z_F) is the performance metric of the observation data of a certain image frame F, captured while the mobile robot is in motion, containing both the inertial measurement unit state information and the visual state information, Ω(Z_F; Y) is the confidence of the observation data of the state of a certain image frame F in the robot motion state, det(·) denotes the determinant of a matrix, the remaining symbol denotes the covariance matrix corresponding to the observation data Z_F of a certain image frame F in the robot motion state, Δτ is the image information matrix associated with the image frame, and log(·) is the logarithmic function.
6. The method for constructing a map of a mobile robot based on visual inertial navigation fusion according to claim 1, wherein in step S3, when the situation evaluation function value is smaller than the threshold S_th, the measured initialization state is deemed to have high confidence, the fusion state at that moment is considered to have high certainty, the initialization is terminated, the corresponding pre-integration and initialization data are fused, and the next step of real-time visual-inertial optimization is entered; meanwhile, a time threshold is additionally set in the initialization process, and if the situation evaluation function does not reach high confidence within the time threshold, the initialization is terminated directly when the time threshold is reached and the current initialization result is sent to the next stage for real-time optimization.
7. The method for constructing a map of a mobile robot based on visual inertial navigation fusion according to claim 1, wherein in step S4, a key-frame local optimization probabilistic graph model based on co-view constraints is constructed, the key frames are graded according to the strength of co-view, and the inter-frame observation information and information such as the map points on the key frames are used as constraint terms to optimize the strong co-view key frames; the co-view constraint probabilistic graph model can be regarded as an undirected weighted graph, and its joint probability distribution can be decomposed into a product of potential energy functions over cliques Q, as given by the formula in the original (formula image not reproduced here), where F_k = {f_k1, f_k2, ..., f_kn} is the set of key frames, P(F_k) is the joint probability, ψ is a regularization factor through which the validity of the joint probability is satisfied, and the remaining symbol denotes the sparse weight function of the key frames containing the commonly observed map points.
8. The method for constructing a map of a mobile robot based on visual inertial navigation fusion according to claim 1, wherein in step S5, during sliding-window optimization only the strong co-view key frames are optimized, while the pose information of the weak co-view key frames is retained as a constraint term; an objective function is constructed from the visual pose estimation error term and the IMU pre-integration error term, and the reprojection error and the IMU error are minimized, the objective function being given by the formula in the original (formula image not reproduced here), where E_reproj is the reprojection error between two adjacent key frames, E_imu is the error of the IMU over the time period Δt and consists of the bias measurements under the combined influence of vision and the IMU, e_observe is the camera observation error, e_p, e_v and e_q are the errors of the IMU position, velocity and attitude quaternion during operation, ρ(·) is the robust kernel function, e_b is the set of random biases of the IMU at a certain point in time, and the three information-matrix symbols in the formula are, respectively, the information matrix of the camera, of the IMU, and of the combined camera-IMU effect with respect to the operating state Z_F = {z_1, z_2, ..., z_n}.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the steps of the method for mobile robot mapping based on visual inertial navigation fusion according to any one of claims 1-8.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein: the processor, when executing the computer program, implements the steps of the method for mobile robot mapping based on visual inertial navigation fusion according to any one of claims 1-8.
CN202110382654.5A 2021-04-09 2021-04-09 Mobile robot map construction method based on visual inertial navigation fusion and related equipment Active CN113091738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110382654.5A CN113091738B (en) 2021-04-09 2021-04-09 Mobile robot map construction method based on visual inertial navigation fusion and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110382654.5A CN113091738B (en) 2021-04-09 2021-04-09 Mobile robot map construction method based on visual inertial navigation fusion and related equipment

Publications (2)

Publication Number Publication Date
CN113091738A true CN113091738A (en) 2021-07-09
CN113091738B CN113091738B (en) 2022-02-08

Family

ID=76675895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110382654.5A Active CN113091738B (en) 2021-04-09 2021-04-09 Mobile robot map construction method based on visual inertial navigation fusion and related equipment

Country Status (1)

Country Link
CN (1) CN113091738B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114252073A (en) * 2021-11-25 2022-03-29 江苏集萃智能制造技术研究所有限公司 Robot attitude data fusion method
CN114543786A (en) * 2022-03-31 2022-05-27 华中科技大学 Wall-climbing robot positioning method based on visual inertial odometer
CN115235454A (en) * 2022-09-15 2022-10-25 中国人民解放军国防科技大学 Pedestrian motion constraint visual inertial fusion positioning and mapping method and device
CN115574816A (en) * 2022-11-24 2023-01-06 东南大学 Bionic vision multi-source information intelligent perception unmanned platform
CN116202516A (en) * 2022-12-23 2023-06-02 中国铁路设计集团有限公司 Track three-dimensional reconstruction method for track BIM multidimensional parameter auxiliary IMU
CN116399326A (en) * 2023-04-06 2023-07-07 安徽工程大学 Robot map construction method based on self-adaptive key frame selection, storage medium and equipment
WO2023165093A1 (en) * 2022-03-01 2023-09-07 上海商汤智能科技有限公司 Training method for visual inertial odometer model, posture estimation method and apparatuses, electronic device, computer-readable storage medium, and program product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103026395A (en) * 2010-11-15 2013-04-03 Image Sensing Systems, Inc. Hybrid traffic sensor system and associated method
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN109029448A (en) * 2018-06-28 2018-12-18 东南大学 The IMU of monocular vision inertial positioning assists trace model
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information
CN111862148A (en) * 2020-06-05 2020-10-30 中国人民解放军军事科学院国防科技创新研究院 Method, device, electronic equipment and medium for realizing visual tracking
CN111882607A (en) * 2020-07-14 2020-11-03 中国人民解放军军事科学院国防科技创新研究院 Visual inertial navigation fusion pose estimation method suitable for augmented reality application
CN112240768A (en) * 2020-09-10 2021-01-19 西安电子科技大学 Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103026395A (en) * 2010-11-15 2013-04-03 Image Sensing Systems, Inc. Hybrid traffic sensor system and associated method
US20130151135A1 (en) * 2010-11-15 2013-06-13 Image Sensing Systems, Inc. Hybrid traffic system and associated method
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN109029448A (en) * 2018-06-28 2018-12-18 东南大学 The IMU of monocular vision inertial positioning assists trace model
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information
CN111862148A (en) * 2020-06-05 2020-10-30 中国人民解放军军事科学院国防科技创新研究院 Method, device, electronic equipment and medium for realizing visual tracking
CN111882607A (en) * 2020-07-14 2020-11-03 中国人民解放军军事科学院国防科技创新研究院 Visual inertial navigation fusion pose estimation method suitable for augmented reality application
CN112240768A (en) * 2020-09-10 2021-01-19 西安电子科技大学 Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114252073A (en) * 2021-11-25 2022-03-29 江苏集萃智能制造技术研究所有限公司 Robot attitude data fusion method
CN114252073B (en) * 2021-11-25 2023-09-15 江苏集萃智能制造技术研究所有限公司 Robot attitude data fusion method
WO2023165093A1 (en) * 2022-03-01 2023-09-07 上海商汤智能科技有限公司 Training method for visual inertial odometer model, posture estimation method and apparatuses, electronic device, computer-readable storage medium, and program product
CN114543786A (en) * 2022-03-31 2022-05-27 华中科技大学 Wall-climbing robot positioning method based on visual inertial odometer
CN114543786B (en) * 2022-03-31 2024-02-02 华中科技大学 Wall climbing robot positioning method based on visual inertial odometer
CN115235454B (en) * 2022-09-15 2022-12-30 中国人民解放军国防科技大学 Pedestrian motion constraint visual inertial fusion positioning and mapping method and device
CN115235454A (en) * 2022-09-15 2022-10-25 中国人民解放军国防科技大学 Pedestrian motion constraint visual inertial fusion positioning and mapping method and device
CN115574816B (en) * 2022-11-24 2023-03-14 东南大学 Bionic vision multi-source information intelligent perception unmanned platform
CN115574816A (en) * 2022-11-24 2023-01-06 东南大学 Bionic vision multi-source information intelligent perception unmanned platform
CN116202516A (en) * 2022-12-23 2023-06-02 中国铁路设计集团有限公司 Track three-dimensional reconstruction method for track BIM multidimensional parameter auxiliary IMU
CN116202516B (en) * 2022-12-23 2024-02-20 中国铁路设计集团有限公司 Track three-dimensional reconstruction method for track BIM multidimensional parameter auxiliary IMU
CN116399326A (en) * 2023-04-06 2023-07-07 安徽工程大学 Robot map construction method based on self-adaptive key frame selection, storage medium and equipment
CN116399326B (en) * 2023-04-06 2023-10-13 安徽工程大学 Robot map construction method based on self-adaptive key frame selection, storage medium and equipment

Also Published As

Publication number Publication date
CN113091738B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN113091738B (en) Mobile robot map construction method based on visual inertial navigation fusion and related equipment
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
Yin et al. Dynam-SLAM: An accurate, robust stereo visual-inertial SLAM method in dynamic environments
Campos et al. Fast and robust initialization for visual-inertial SLAM
CN110553643B (en) Pedestrian self-adaptive zero-speed updating point selection method based on neural network
CN104730537B (en) Infrared/laser radar data fusion target tracking method based on multi-scale model
CN112815939B (en) Pose estimation method of mobile robot and computer readable storage medium
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN116205947B (en) Binocular-inertial fusion pose estimation method based on camera motion state, electronic equipment and storage medium
CN112004183B (en) Robot autonomous positioning method based on convolution neural network fusion IMU and WiFi information
Zhang et al. Vision-aided localization for ground robots
CN116878501A (en) High-precision positioning and mapping system and method based on multi-sensor fusion
CN114964212A (en) Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration
CN113076988B (en) Mobile robot vision SLAM key frame self-adaptive screening method based on neural network
Wang et al. Arbitrary spatial trajectory reconstruction based on a single inertial sensor
CN102830391B (en) Accuracy index calculating method of infrared search and track system
US20230304802A1 (en) Passive combined indoor positioning system and method based on intelligent terminal sensor
CN109682372A (en) A kind of modified PDR method of combination fabric structure information and RFID calibration
CN111735478B (en) LSTM-based pedestrian real-time navigation zero-speed detection method
CN114690230A (en) Automatic driving vehicle navigation method based on visual inertia SLAM
CN113566828A (en) Impact-resistant scanning matching method and system based on multi-sensor decision fusion
CN110849392A (en) Robot mileage counting data correction method and robot
Deng et al. Data-Driven Based Cascading Orientation and Translation Estimation for Inertial Navigation
CN117576218B (en) Self-adaptive visual inertial navigation odometer output method
Zhao The Application of Autoencoder In IMU Data Process

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Guo Junyang

Inventor after: Chen Mengyuan

Inventor after: Chen Hebao

Inventor before: Chen Mengyuan

Inventor before: Guo Junyang

Inventor before: Chen Hebao

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant