CN116929338B - Map construction method, device and storage medium - Google Patents

Map construction method, device and storage medium

Info

Publication number
CN116929338B
Authority
CN
China
Prior art keywords
state
optimal
error state
matrix
error
Legal status
Active
Application number
CN202311194224.6A
Other languages
Chinese (zh)
Other versions
CN116929338A
Inventor
熊伟成
张亮
Current Assignee
Shenzhen Smart Mapping Tech Co ltd
Original Assignee
Shenzhen Smart Mapping Tech Co ltd
Application filed by Shenzhen Smart Mapping Tech Co ltd
Priority to CN202311194224.6A
Publication of CN116929338A
Application granted
Publication of CN116929338B


Classifications

    • G01C21/3841: Electronic maps specially adapted for navigation; creation or updating of map data characterised by the source of data; data obtained from two or more sources, e.g. probe vehicles
    • G01C21/1652: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/183: Compensation of inertial measurements, e.g. for temperature effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a map construction method applied to a robot, where the robot includes an inertial measurement unit, a laser radar and a wheel speed meter. The method includes: acquiring a nominal state and an error state corresponding to the nominal state using the inertial measurement unit; determining an error state covariance based on the error state; acquiring original data of each laser point using the laser radar, and performing motion compensation on each laser point based on the nominal state to obtain correction data of each laser point; acquiring speed information using the wheel speed meter; constructing multiple residuals based on the speed information, the correction data of the laser points and ground prior information; updating the error state and the error state covariance using the multiple residuals to obtain an optimal state and an optimal error state covariance; and updating an initial map based on the optimal state to obtain a target map. The map construction method and device of the present application can improve the accuracy of map construction.

Description

Map construction method, device and storage medium
Technical Field
The present disclosure relates to the field of intelligent robots, and in particular, to a map building method, apparatus, and storage medium.
Background
In an intelligent robot system, simultaneous localization and mapping (SLAM) provides the system with a map of an unknown scene and real-time positioning information within that scene. On the one hand, the intelligent robot system plans a walking path according to the map information; on the other hand, the real-time positioning information provides the accurate position and orientation of the intelligent robot in the unknown scene. Simultaneous localization and mapping is therefore the basis of successful navigation for an intelligent robot system.
In the process of researching and practicing the related art, the inventors of the present application found that, in indoor scenes, current methods for simultaneous localization and mapping mainly rely on 2D laser radar, cameras and the like, and are divided into 2D laser SLAM and visual SLAM, both of which have drawbacks. For example, 2D laser SLAM is typically realized by a fusion scheme that combines multi-source sensors such as a 2D laser radar, an inertial measurement unit and a wheel speed meter. Such a scheme suffers from degradation in some special scenes: in a long-corridor scene, for instance, insufficient feature characterization can make the estimated motion of the intelligent robot appear to stagnate, so the accuracy and robustness are poor.
Disclosure of Invention
The application provides a map construction method, map construction equipment and a storage medium, which can improve the accuracy of map construction.
In a first aspect, the present application provides a map construction method applied to a robot, the robot including an inertial measurement unit, a lidar and a wheel speed meter, the method comprising:
acquiring a nominal state and an error state corresponding to the nominal state by using an inertial measurement unit;
determining an error state covariance based on the error state;
acquiring original data of each laser point by using a laser radar, and performing motion compensation on each laser point based on a nominal state to obtain correction data of each laser point;
acquiring speed information by using a wheel speed meter;
constructing various residual errors based on the speed information, the correction data of the laser points and the ground prior information;
updating the error state and the error state covariance by utilizing various residual errors to obtain an optimal state and an optimal error state covariance;
and updating the initial map based on the optimal state to obtain a target map.
In a further technical solution, the speed information includes a linear velocity and an angular velocity, and the multiple residuals include a probability map matching residual, a state residual and a ground constraint residual;
constructing the multiple residuals based on the speed information, the correction data of the laser points and the ground prior information includes:
constructing probability map matching residual errors based on correction data of laser points;
constructing a state residual error based on the linear velocity and the angular velocity;
constructing a ground constraint residual error based on the ground priori information of the robot;
updating the error state and the error state covariance by utilizing various residual errors to obtain an optimal state and an optimal error state covariance, wherein the method comprises the following steps:
based on the probability map matching residual error, the ground constraint residual error and the state residual error, updating the error state and the error state covariance by using iterative error Kalman filtering to obtain an optimal state and an optimal error state covariance.
In a further technical solution, constructing the probability map matching residual based on the correction data of the laser points includes the following steps:
based on correction data of each laser point in a preset period and noise of the laser points, obtaining a true value corresponding to each laser point;
calculating the probability value of each laser point in the local map by using a bicubic interpolation method based on the true value;
based on the probability value, determining an observation value corresponding to each laser point;
determining covariance corresponding to the observed value based on a preset noise jacobian matrix;
And determining a probability map matching residual corresponding to each laser point based on the observed value and the covariance.
In a further technical solution, the state residual includes a position residual and an attitude residual, and constructing the state residual based on the linear velocity and the angular velocity includes the following steps:
acquiring the optimal position and the optimal attitude of the previous frame;
obtaining a position estimation value of the current frame based on the optimal position, the linear velocity and the linear velocity noise;
obtaining an attitude estimation value of the current frame based on the optimal attitude, the angular velocity and the angular velocity noise;
obtaining a position observation value of the current frame based on the position estimation value, the linear velocity noise and the current position to be optimized;
obtaining an attitude observation value of the current frame based on the attitude estimation value, the angular velocity noise and the current attitude to be optimized;
determining the position residual based on the position observation value and the position estimation value;
and determining the attitude residual based on the attitude observation value and the attitude estimation value.
In a further technical solution, the original data of each laser point includes original position data and original attitude data, and acquiring the original data of each laser point with the laser radar and performing motion compensation on each laser point based on the nominal state to obtain the correction data of each laser point includes the following steps:
Acquiring a laser frame in a preset time stamp by using a laser radar;
under an inertial measurement unit coordinate system, acquiring a nominal state corresponding to each time stamp in a preset time stamp by using an inertial measurement unit;
determining a target timestamp corresponding to each target laser point based on a plurality of target laser points contained in the laser frame;
selecting nominal states corresponding to adjacent time stamps of the target time stamp from nominal states corresponding to each time stamp;
obtaining position data and attitude data corresponding to the target time stamp based on the nominal states corresponding to the adjacent time stamps;
obtaining an original position matrix corresponding to each target laser point based on the original position data of each target laser point;
obtaining a correction position matrix of each target laser point based on the position data and the attitude data, the original position matrix corresponding to each target laser point, the external parameter matrix, and the rotation matrix and position matrix of the preset time stamp;
and obtaining correction data of each target laser point based on the correction position matrix of each target laser point.
In a further technical solution, updating the initial map based on the optimal state to obtain the target map includes the following steps:
Extracting distance information and angle information from the optimal state;
if the distance information and the angle information meet the preset requirements, the current frame is indicated to be a key frame;
converting the point data of the key frame into a global coordinate system;
and updating the initial map based on the occupancy grid map principle to obtain the target map.
In a further technical solution, determining the error state covariance based on the error state includes the following steps:
acquiring the optimal error state covariance of the previous frame;
acquiring a state matrix and a noise matrix;
and obtaining the error state covariance of the current frame based on the state matrix, the noise matrix, the optimal error state covariance of the previous frame and a preset diagonal matrix.
In a further technical solution, updating the error state and the error state covariance using the multiple residuals to obtain the optimal state and the optimal error state covariance includes the following steps:
respectively utilizing a probability map to match an observation model, a wheel speed meter observation model and a plane constraint observation model to obtain a corresponding observation equation;
determining a residual error corresponding to each observation model based on each observation equation;
decomposing an observation jacobian matrix in each observation equation to obtain an orthogonal matrix and an upper triangular matrix corresponding to each observation equation; wherein the dimension of the upper triangular matrix is consistent with the dimension of the error state;
Based on the decomposed orthogonal matrix and the upper triangular matrix, determining a gain matrix corresponding to each observation equation;
and respectively carrying out iterative updating on the error state and the error state covariance by using a gain matrix and a residual error corresponding to each observation equation to obtain an optimal state and an optimal error state covariance.
In a second aspect, the present application provides a computer device including a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, performs the steps of any of the methods described herein.
In a third aspect, the present application also provides a computer readable storage medium storing a computer program for implementing the above-described map construction method when executed by a processor.
The beneficial effects of the present application are as follows. Different from the prior art, the present application performs motion compensation on the original position data and original attitude data acquired by the laser radar using the nominal state acquired by the inertial measurement unit to obtain corrected position data and corrected attitude data, and the fusion is carried out at the raw-data level rather than at the state-quantity level, so that the advantages of the different sensors can be fully exploited and the problem that the motion estimation of the laser radar easily degrades in long corridors and other complex scenes is mitigated. In addition, the state data acquired by the inertial measurement unit is continuously updated using the multiple residuals, which solves the problem that the estimated motion of the intelligent robot stagnates due to degradation of the laser radar and improves the accuracy of map construction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
fig. 1 is a schematic flow chart of a first embodiment of a map construction method provided in the present application;
FIG. 2 is a schematic diagram of motion compensation according to an embodiment provided herein;
FIG. 3 is a schematic flow chart of a second embodiment of a map construction method provided in the present application;
FIG. 4 is a schematic flow chart of a third embodiment of a map construction method provided in the present application;
FIG. 5 is a general flow diagram of a map construction method provided herein;
fig. 6 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not limiting. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In an intelligent robot system, simultaneous localization and mapping (SLAM) provides the system with a map of an unknown scene and real-time positioning information within that scene. On the one hand, the intelligent robot system plans a walking path according to the map information; on the other hand, the real-time positioning information provides the accurate position and orientation of the intelligent robot in the unknown scene. Simultaneous localization and mapping is therefore the basis of successful navigation for an intelligent robot system.
In the process of researching and practicing the related art, the inventors of the present application found that, in indoor scenes, current methods for simultaneous localization and mapping mainly rely on 2D laser radar, cameras and the like, and are divided into 2D laser SLAM and visual SLAM, both of which have drawbacks. For example, 2D laser SLAM is typically realized by a fusion scheme that combines multi-source sensors such as a 2D laser radar, an inertial measurement unit and a wheel speed meter. Such a scheme suffers from degradation in some special scenes: in a long-corridor scene, for instance, insufficient feature characterization can make the estimated motion of the intelligent robot appear to stagnate, so the accuracy and robustness are poor.
Therefore, in order to solve the technical problem in the prior art that the estimated motion of the intelligent robot stagnates due to degradation of the 2D laser radar, the present application provides a map construction method applied to a robot, where the robot includes an inertial measurement unit, a laser radar and a wheel speed meter. See the following embodiments for details.
The map construction method provided by the application is described in detail below. Referring to fig. 1 specifically, fig. 1 is a schematic flow chart of a first embodiment of a map construction method provided in the present application. The method is applied to the processor, and comprises the following steps:
step 101: and acquiring the nominal state and an error state corresponding to the nominal state by using the inertial measurement unit.
The inertial measurement unit (Inertial Measurement Unit, IMU) includes an accelerometer and a gyroscope, and the following processes may be specifically referred to for state prediction by using the inertial measurement unit:
1) State definition
The raw sensor measurements output by the accelerometer and the gyroscope of the inertial measurement unit are the acceleration $a_m$ and the angular velocity $\omega_m$, respectively. The random noise of the accelerometer is $n_a$, its bias is $b_a$ and its bias random noise is $n_{b_a}$; the random noise of the gyroscope is $n_g$, its bias is $b_g$ and its bias random noise is $n_{b_g}$; the gravitational acceleration is $g$. Let $B$ denote the body coordinate system of the inertial measurement unit, i.e. the IMU frame, and $G$ denote the global coordinate system (the global coordinate system coincides with the origin of the first-frame IMU frame and its Z axis is parallel to gravity). The inertial measurement unit and the laser radar form a rigid body, with extrinsic parameters $T_{ext}=(R_{ext},p_{ext})$ between them. The nominal state, which does not consider noise and disturbance, is $x=(p,R,v,b_a,b_g)$; its state quantities are the position, attitude, velocity, accelerometer bias and gyroscope bias of the inertial measurement unit in the global coordinate system. The corresponding true state and error state are $x_t$ and $\delta x$, so that:

$$x_t = x \boxplus \delta x, \qquad (1)$$

where the operator $\boxplus$ denotes the state addition operator.

It should be noted that in the nominal state $x$ the position, velocity, accelerometer bias and gyroscope bias are $3\times 1$ vectors and the attitude $R$ is a $3\times 3$ matrix; the position, velocity and bias components can therefore be added numerically, e.g. $p_t = p + \delta p$, whereas the attitude composition is carried out in the Lie algebra, i.e. $R_t = R\,\mathrm{Exp}(\delta\theta)$, where $\mathrm{Exp}(\cdot)$ is the mapping from a rotation vector to a rotation matrix.
2) State prediction model
Let $x_k$ be the nominal state of the $k$-th IMU frame, $\delta x_k$ the corresponding error state, and $u_k=(a_m,\omega_m)$ the raw IMU input. Given the nominal state $x_k=(p_k,R_k,v_k,b_{a,k},b_{g,k})$ and the error state $\delta x_k$ of the $k$-th IMU frame, and letting the time interval between two frames be $\Delta t$, the nominal state $x_{k+1}$ and the error state $\delta x_{k+1}$ of the $(k+1)$-th IMU frame are obtained as follows.

The nominal state $x_{k+1}$ is:

$$\begin{aligned}
p_{k+1} &= p_k + v_k\,\Delta t + \tfrac{1}{2}\big(R_k(a_m - b_{a,k}) + g\big)\Delta t^2\\
v_{k+1} &= v_k + \big(R_k(a_m - b_{a,k}) + g\big)\Delta t\\
R_{k+1} &= R_k\,\mathrm{Exp}\big((\omega_m - b_{g,k})\Delta t\big)\\
b_{a,k+1} &= b_{a,k},\qquad b_{g,k+1} = b_{g,k}
\end{aligned} \qquad (2)$$

The error state $\delta x_{k+1}$ is:

$$\begin{aligned}
\delta p_{k+1} &= \delta p_k + \delta v_k\,\Delta t\\
\delta v_{k+1} &= \delta v_k - R_k\big[a_m - b_{a,k}\big]_\times\,\delta\theta_k\,\Delta t - R_k\,\delta b_{a,k}\,\Delta t - R_k\,n_a\,\Delta t\\
\delta\theta_{k+1} &= \mathrm{Exp}\big((\omega_m - b_{g,k})\Delta t\big)^{T}\delta\theta_k - J_r\big((\omega_m - b_{g,k})\Delta t\big)\,\delta b_{g,k}\,\Delta t - n_g\,\Delta t\\
\delta b_{a,k+1} &= \delta b_{a,k} + n_{b_a},\qquad \delta b_{g,k+1} = \delta b_{g,k} + n_{b_g}
\end{aligned} \qquad (3)$$

where $J_r(\cdot)$ is the right Jacobian of a rotation vector and $[\,\cdot\,]_\times$ is the skew-symmetric matrix of a vector, i.e. for $\omega=(\omega_x,\omega_y,\omega_z)^T$:

$$[\omega]_\times = \begin{bmatrix} 0 & -\omega_z & \omega_y\\ \omega_z & 0 & -\omega_x\\ -\omega_y & \omega_x & 0 \end{bmatrix}. \qquad (4)$$
step 102: based on the error state, an error state covariance is determined.
In some embodiments, step 102 specifically includes the following procedures:
1) Acquiring the optimal error state covariance of the previous frame;
2) Acquiring a state matrix and a noise matrix;
3) And obtaining the error state covariance of the current frame based on the state matrix, the noise matrix, the optimal error state covariance of the previous frame and a preset diagonal matrix.
Specifically, with the error state $\delta x$ obtained in step 101, an error-state system equation can be constructed as:

$$\delta x_k = F_x\,\delta x_{k-1} + F_w\,w, \qquad (5)$$

where $w$ is the process noise vector, $\hat P_{k-1}$ is the error state covariance of the previous frame, and the updated optimal error state covariance is used to substitute for $\hat P_{k-1}$ after each update. The state matrix is specifically:

$$F_x = \begin{bmatrix} I & 0 & I\Delta t & 0 & 0 \\ 0 & \mathrm{Exp}\big((\omega_m - b_{g})\Delta t\big)^{T} & 0 & 0 & -J_r\Delta t \\ 0 & -R\,[a_m - b_{a}]_\times\Delta t & I & -R\Delta t & 0 \\ 0 & 0 & 0 & I & 0 \\ 0 & 0 & 0 & 0 & I \end{bmatrix} \qquad (6)$$

and the noise matrix is specifically:

$$F_w = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -I\Delta t & 0 & 0 \\ -R\Delta t & 0 & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \end{bmatrix} \qquad (7)$$

with the error state ordered as $\delta x=(\delta p,\delta\theta,\delta v,\delta b_a,\delta b_g)$ and the noise ordered as $w=(n_a,n_g,n_{b_a},n_{b_g})$.

Let the optimal error state covariance of the previous frame be $\hat P_{k-1}$, and let the variances corresponding to the Gaussian white noises $n_a$, $n_g$, $n_{b_a}$ and $n_{b_g}$ be $\sigma_{a}^{2}$, $\sigma_{g}^{2}$, $\sigma_{b_a}^{2}$ and $\sigma_{b_g}^{2}$, respectively. The error state covariance of the current frame is then:

$$P_k = F_x\,\hat P_{k-1}\,F_x^{T} + F_w\,Q\,F_w^{T}, \qquad (8)$$

where $Q=\mathrm{diag}\big(\sigma_{a}^{2} I,\ \sigma_{g}^{2} I,\ \sigma_{b_a}^{2} I,\ \sigma_{b_g}^{2} I\big)$ is the preset diagonal matrix and $I$ is the $3\times 3$ identity matrix.
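To make the covariance propagation of formula (8) concrete, the following sketch applies the state matrix and the noise matrix to the previous frame's optimal covariance; the scaling of the noise variances with the sampling interval is a common discrete-time assumption and is not a detail stated in the patent.

```python
import numpy as np

def propagate_covariance(P_prev, F_x, F_w, sigmas, dt):
    """Error-state covariance propagation: P_k = F_x P_{k-1} F_x^T + F_w Q F_w^T.

    P_prev : (15, 15) optimal error-state covariance of the previous frame
    F_x    : (15, 15) state matrix of the error-state system
    F_w    : (15, 12) noise matrix (Jacobian with respect to the noise vector)
    sigmas : standard deviations (accel noise, gyro noise, accel-bias walk, gyro-bias walk)
    dt     : time interval between the two IMU frames
    """
    sa, sg, sba, sbg = sigmas
    # Preset diagonal matrix Q built from the Gaussian white-noise variances.
    Q = np.diag(np.concatenate([
        np.full(3, (sa * dt) ** 2),   # accelerometer white noise (as a velocity impulse)
        np.full(3, (sg * dt) ** 2),   # gyroscope white noise (as an angle impulse)
        np.full(3, sba ** 2 * dt),    # accelerometer bias random walk
        np.full(3, sbg ** 2 * dt),    # gyroscope bias random walk
    ]))
    return F_x @ P_prev @ F_x.T + F_w @ Q @ F_w.T
```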
Step 103: and acquiring the original data of each laser point by using a laser radar, and performing motion compensation on each laser point based on the nominal state to obtain the correction data of each laser point.
The laser radar of this embodiment is a 2D laser radar, which obtains 2D range data of the scene through the time-of-flight principle. When the robot moves, the position and attitude of each point differ during the acquisition of one frame of data, so the 2D point cloud obtained in the current frame is distorted by the motion; therefore, the 2D data needs to be converted into the body coordinate system of the 2D laser radar through motion compensation.
In some embodiments, the original data of each laser point includes original position data and original pose data, and motion compensation is performed on each laser point, and in particular, reference may be made to fig. 2 and the following steps:
Step 1031: and acquiring a laser frame in a preset time stamp by using a laser radar.
As shown in fig. 2, the solid line in fig. 2 represents the 2D laser radar and the dashed line represents the IMU, where the preset time stamps run from $t_k$ to $t_{k+1}$.
Step 1032: and under the coordinate system of the inertial measurement unit, acquiring a nominal state corresponding to each time stamp in the preset time stamp by using the inertial measurement unit.
As shown in fig. 2, for example, between the preset time stamps $t_k$ and $t_{k+1}$, the IMU time stamps $\tau_1$, $\tau_2$ and $\tau_3$ are included, where, in time order, $t_k < \tau_1 < \tau_2 < \tau_3 < t_{k+1}$; it is then necessary to obtain the nominal states corresponding to $t_k$, $\tau_1$, $\tau_2$, $\tau_3$ and $t_{k+1}$.
Step 1033: and determining a target time stamp corresponding to each target laser point based on the plurality of target laser points contained in the laser frame.
For example, the laser frame acquired within the preset time stamps $t_k$ to $t_{k+1}$ is $L_k$, and the set of laser points contained in the frame is $\{P_1, P_2, \dots, P_n\}$, where $P_j$ is a target laser point and the corresponding target time stamp is $t_j$.
Step 1034: and selecting the nominal states corresponding to the adjacent time stamps of the target time stamp from the nominal states corresponding to each time stamp.
For example, the adjacent time stamps of the target time stamp $t_j$ are $\tau_1$ and $\tau_2$, and the nominal states corresponding to $\tau_1$ and $\tau_2$ are $x_{\tau_1}$ and $x_{\tau_2}$, respectively,
where $x_{\tau_1}$ and $x_{\tau_2}$ can be obtained according to step 101 described above.
Step 1035: and obtaining the position data and the gesture data corresponding to the target time stamp based on the nominal states corresponding to the adjacent time stamps respectively.
For example, $x_{\tau_1}$ and $x_{\tau_2}$ are integrated to obtain the position data $p_j$ and the attitude data $R_j$ corresponding to $t_j$; specifically, the following formula may be adopted:

$$p_j = p_{\tau_1} + v_{\tau_1}\,(t_j - \tau_1), \qquad R_j = R_{\tau_1}\,\mathrm{Exp}\big(\bar\omega\,(t_j - \tau_1)\big), \qquad (9)$$

where the angular velocity $\bar\omega$ in equation (9) may be taken as the mean of the angular velocities at $\tau_1$ and $\tau_2$; in this example the angular velocity at $\tau_1$ is used directly, since the difference is small. Because the integration starts from $\tau_1$, the time difference is measured from $\tau_1$.
Step 1036: and obtaining an original position matrix corresponding to each target laser point based on the original position data of each target laser point.
Step 1037: and obtaining a correction position matrix of each target laser point based on the position data and the posture data, an original position matrix corresponding to each target laser point, an external parameter matrix, and a rotation matrix and a position matrix of a preset time stamp.
Wherein, corresponding rotation matrix and position matrix can be obtained based on the position data and the gesture data.
Step 1038: and obtaining correction data of each target laser point based on the correction position matrix of each target laser point.
For steps 1036 to 1038, reference may be made to the following:
Let the target laser point be $P_j$ and the motion-compensated target laser point be $P'_j$; then:

$$P'_j = T_{ext}^{-1}\,T_{t_k}^{-1}\,T_{t_j}\,T_{ext}\,P_j, \qquad (10)$$
$$\mathcal{P} = \{P_1,\dots,P_n\} \;\rightarrow\; \mathcal{P}' = \{P'_1,\dots,P'_n\}, \qquad (11)$$

where $T_{t_k}=(R_{t_k},p_{t_k})$ is formed from the rotation matrix and position matrix corresponding to the nominal state at the preset time stamp $t_k$, $T_{t_j}=(R_j,p_j)$ is formed from the position data and attitude data corresponding to the target time stamp, $T_{ext}$ is the external parameter matrix, $P_j$ is the original position matrix corresponding to the target laser point, and $P'_j$ is the correction position matrix of each target laser point; the original laser point set $\mathcal{P}$ is thereby corrected to the laser point set $\mathcal{P}'$.

The meaning of formula (11) is that all target laser points within $t_k$ to $t_{k+1}$ are converted into the same coordinate system, namely the local coordinate system represented by time $t_k$.
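To make the compensation of formulas (10) and (11) concrete, the sketch below expresses every point of one scan in the lidar coordinate system at the scan-start time by interpolating an IMU pose for each point's time stamp. The linear pose interpolation, the use of the first point's stamp as the scan start, and all names are simplifying assumptions for illustration only.

```python
import numpy as np

def to_homogeneous(R, p):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def undistort_scan(points, stamps, imu_poses, imu_stamps, T_ext):
    """Express every laser point in the lidar frame at the scan-start time.

    points    : (N, 3) raw laser points in the lidar frame at their own time stamps
    stamps    : (N,) time stamp of each laser point
    imu_poses : list of (R, p) nominal IMU poses in the global frame
    imu_stamps: sorted time stamps of the nominal IMU poses, covering the scan
    T_ext     : 4x4 lidar-to-IMU extrinsic matrix
    """
    def pose_at(t):
        # Pick the neighbouring IMU stamps and interpolate the position linearly;
        # for brevity the attitude of the earlier neighbour is reused (a coarser
        # approximation than integrating the angular velocity as in formula (9)).
        i = int(np.clip(np.searchsorted(imu_stamps, t) - 1, 0, len(imu_stamps) - 2))
        (R0, p0), (R1, p1) = imu_poses[i], imu_poses[i + 1]
        s = (t - imu_stamps[i]) / (imu_stamps[i + 1] - imu_stamps[i])
        return to_homogeneous(R0, (1 - s) * p0 + s * p1)

    T_start_inv = np.linalg.inv(pose_at(stamps[0]) @ T_ext)   # lidar pose at the scan start, inverted
    out = []
    for pt, t in zip(points, stamps):
        T_j = pose_at(t) @ T_ext                              # lidar pose at this point's time stamp
        out.append((T_start_inv @ T_j @ np.append(pt, 1.0))[:3])
    return np.asarray(out)
```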
Step 104: the speed information is acquired using a wheel speed meter.
The speed information may include a linear velocity and an angular velocity.
Step 105: and constructing various residual errors based on the speed information, the correction data of the laser points and the ground prior information.
For example, a state residual may be constructed based on the linear velocity and the angular velocity acquired by the wheel speed meter, a 2D lidar frame and map matching residual may be constructed based on the corrected data of the laser point, and a ground constraint residual may be constructed based on ground prior information.
Step 106: and updating the error state and the error state covariance by utilizing various residual errors to obtain an optimal state and an optimal error state covariance.
The error state and the error state covariance may be updated by using each residual error and other parameters, or all the residual errors may be fused and calculated, and then the total fused residual error and other parameters update the error state and the error state covariance, where the optimal state and the optimal error state covariance may be the final result after iteration.
Step 107: and updating the initial map based on the optimal state to obtain a target map.
In this embodiment, the original position data and original attitude data acquired by the laser radar are motion-compensated using the nominal state acquired by the inertial measurement unit to obtain corrected position data and corrected attitude data, and the fusion is carried out at the raw-data level rather than at the state-quantity level, so that the advantages of the different sensors can be fully exploited and the problem that the motion estimation of the laser radar easily degrades in long corridors and other complex scenes is mitigated. In addition, the state data acquired by the inertial measurement unit is continuously updated using the multiple residuals, which solves the problem that the estimated motion of the intelligent robot stagnates due to degradation of the laser radar and improves the accuracy of map construction.
In some embodiments, the speed information includes a linear speed and an angular speed, and the various residuals include a probability map matching residual, a state residual and a ground constraint residual, and the following steps may be specifically referred to:
step 201: and acquiring the nominal state and an error state corresponding to the nominal state by using the inertial measurement unit.
Step 202: based on the error state, an error state covariance is determined.
Step 203: and acquiring the original data of each laser point by using a laser radar, and performing motion compensation on each laser point based on the nominal state to obtain the correction data of each laser point.
Step 204: the speed information is acquired using a wheel speed meter.
Step 205: and constructing a probability map matching residual error based on the correction data of the laser points.
Step 205 may include, among other things, the following:
step 2051: and obtaining a true value corresponding to each laser point based on the correction data of each laser point and the noise of the laser point in a preset period.
For example, let the local probability grid map be $M$ with grid resolution $r$, minimum plane coordinate $(x_{min},y_{min})$ and maximum plane coordinate $(x_{max},y_{max})$. For each corrected laser point $P'_j$ of the current laser frame $L_k$, the following formula holds:

$$P^{t}_j = P'_j + n_j, \qquad (12)$$

where $P^{t}_j$ is the true value of the laser point, $P'_j$ is the correction data of the laser point and $n_j$ is the noise term of the laser point; let

$$n_j \sim \mathcal{N}(0,\ \Sigma_j). \qquad (13)$$
Step 2052: based on the true values, a bicubic interpolation method is used to calculate the probability value of each laser point in the local map.
The laser point can be projected onto the local map, the neighborhood of the projection point of the local map is obtained, and then the probability value of the laser point in the local map is obtained by using a bicubic interpolation method.
Step 2053: and determining an observation value corresponding to each laser point based on the probability value.
Step 2054: and determining covariance corresponding to the observed value based on a preset noise jacobian matrix.
Step 2055: and determining a probability map matching residual corresponding to each laser point based on the observed value and the covariance.
For steps 2051 to 2054, reference may be made specifically to formulas (14) and (15):

$$M_j = \mathrm{Bicubic}\big(M, \pi(P^{t}_j)\big), \qquad (14)$$
$$Z_j = 1 - M_j, \qquad (15)$$

where $\mathrm{Bicubic}(M,\cdot)$ denotes bicubic interpolation of the local map $M$ at the projection $\pi(P^{t}_j)$ of the laser point, i.e. $M_j$ is the probability value of laser point $P^{t}_j$ in the local map, and $Z_j$ is the corresponding observed value.

Since an indoor robot typically moves on nearly horizontal ground, the position and attitude components of the noise term are constrained accordingly, as expressed in formulas (16) and (17).
since the probability value of the laser spot is obtained by the position and rotation (i.e., the pose) of the laser spot, which are the state quantities to be estimated, and the optimization by observation is required, equation (15) converts the maximum probability value into the observation Z j ,Z j The smaller then M smoth The larger is the observation equation of the highest probability composition. For example, assume that the probability value of a laser spot i is M i The observed value Z corresponding to the maximum probability of this laser spot i j =1-(M i ) min
The observation equation is:

$$Z_j = 1 - M_{smooth}\Big(\pi\big(R\,(P'_j + n_j) + p\big)\Big), \qquad (18)$$

where $H_p$, $H_\theta$ and $J_n$ are the Jacobians of the observation equation with respect to the position $p$, the attitude and the noise term $n_j$, respectively. The covariance corresponding to the observed value is:

$$V_j = J_n\,\Sigma_j\,J_n^{T}. \qquad (19)$$
step 206: based on the linear and angular velocities, a state residual is constructed.
Wherein the state residuals include position residuals and attitude residuals, step 206 may specifically include the following:
step 2061: and acquiring the optimal position and the optimal posture of the previous frame.
Step 2062: and obtaining a position estimation value of the current frame based on the optimal position, the linear speed and the linear speed noise.
Step 2063: and obtaining an estimated attitude value of the current frame based on the optimal attitude, the angular velocity and the angular velocity noise.
Step 2064: and obtaining a position observation value of the current frame based on the position estimation value, the linear velocity noise and the current position to be optimized.
Step 2065: and obtaining an attitude observation value of the current frame based on the attitude estimation value, the angular velocity noise and the current attitude to be optimized.
Step 2066: based on the position observations and the position estimates, a position residual is determined.
Step 2067: based on the pose observation value and the pose estimation value, a pose residual is determined.
For steps 2061 to 2065, reference may be made specifically to the following:
the robot model is assumed to be a two-wheel differential model, and the center points of the two wheels are coincident with the origin of the IMU coordinate system. Setting laser frameTo->Multiple data fed back by multiple wheel speed meters, laser frame->The optimal position is->And the optimal posture is +.>Laser frame->The position and the gesture which can be estimated by the wheel speed meter are respectively as follows:
wherein in formula (21)Optimal pose by optimization ∈>Substitution of->For the line speed +.>For angular velocity +.>Time interval for adjacent wheel speed, +.>And->Noise term obtained by integrating the linear velocity and the angular velocity of the speedometer, i.e. +.>Is linear velocity noise>Is angular velocity noise>For the current laser frame->Position estimate of>Current laser frame->Then the observation equation is:
wherein,and->Representing the current laser frame +.>Position and posture to be optimized, < >>For the current laser frame->Position observations of>For the current laser frame->In some embodiments, the position and posture of the first frame is generally set to 0, and the subsequent frames continue to be recursively calculated based on the optimal value of the preceding frame.
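The wheel-odometry prediction and residual described above can be sketched, for a planar two-wheel differential robot, as follows; the reduction to a planar (x, y, yaw) pose and the function names are assumptions made only for illustration.

```python
import numpy as np

def predict_pose_from_odometry(pose_prev, wheel_samples):
    """Propagate the previous optimal planar pose (x, y, yaw) through wheel-odometry samples.

    wheel_samples : iterable of (v, w, dt) with linear velocity v, angular velocity w
                    and the time interval dt between adjacent wheel-speed readings.
    """
    x, y, yaw = pose_prev
    for v, w, dt in wheel_samples:
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt                        # dead-reckoning integration of the angular velocity
    return np.array([x, y, yaw])

def odometry_residual(pose_current, pose_prev, wheel_samples):
    # Position residual and attitude (yaw) residual between the pose being optimized
    # and the wheel-odometry prediction; the yaw difference is wrapped to (-pi, pi].
    pred = predict_pose_from_odometry(pose_prev, wheel_samples)
    r = np.asarray(pose_current, dtype=float) - pred
    r[2] = (r[2] + np.pi) % (2 * np.pi) - np.pi
    return r[:2], r[2:]
```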
Step 207: and constructing a ground constraint residual error based on the ground priori information of the robot.
A plane constraint observation model can be adopted. Specifically, suppose the robot moves on a plane $S$; the rotation vector $\phi$ corresponding to the rotation matrix $R$ of the robot is in the same direction as the normal vector of the plane. In an indoor scene, a wheeled robot such as an indoor sweeping robot generally moves on a ground surface close to the horizontal plane, in which case the normal of the plane $S$ is parallel to the direction of gravity. The X and Y components of the rotation vector $\phi$ can therefore be regarded as disturbance noise, satisfying $\phi_x \approx 0$ and $\phi_y \approx 0$, and the Z component of the translation vector $p$ is disturbance noise, satisfying $p_z \approx 0$. Let $e_x$, $e_y$ and $e_z$ be the basis vectors in the X, Y and Z directions of the rotation axis; the plane constraint observation equation is:

$$z_{S} = \big(e_x^{T}\phi,\ e_y^{T}\phi,\ e_z^{T}p\big)^{T},$$

where $e_x$, $e_y$ and $e_z$ are the bases in the three XYZ directions of the rotation axis.
According to the plane constraint observation equation, a ground constraint residual error is constructed, and the calculation mode of the residual error is not limited herein.
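The following sketch shows one way to form the ground-constraint residual from the plane prior described above: for a robot on a near-horizontal floor, the X and Y components of the rotation vector and the Z component of the translation are driven toward zero. The log-map implementation and the names are assumptions, not the patent's exact formulation.

```python
import numpy as np

def log_so3(R):
    # Map a rotation matrix to its rotation vector (inverse of the Rodrigues formula);
    # assumes the rotation angle is away from pi for numerical stability.
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2.0 * np.sin(angle)) * w

def ground_constraint_residual(R, p):
    """Residual enforcing the planar-motion prior: the roll/pitch components of the
    rotation vector and the height component of the translation are treated as
    disturbance noise around zero."""
    phi = log_so3(R)
    return np.array([phi[0], phi[1], p[2]])
```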
Step 208: based on the probability map matching residual error, the ground constraint residual error and the state residual error, updating the error state and the error state covariance by using iterative error Kalman filtering to obtain an optimal state and an optimal error state covariance.
Reference is made to the following examples for details of step 208.
Step 209: and updating the initial map based on the optimal state to obtain a target map.
In this embodiment, the original position data and original attitude data acquired by the laser radar are motion-compensated using the nominal state acquired by the inertial measurement unit to obtain corrected position data and corrected attitude data, and the fusion is carried out at the raw-data level rather than at the state-quantity level, so that the advantages of the different sensors can be fully exploited and the problem that the motion estimation of the laser radar easily degrades in long corridors and other complex scenes is mitigated. In addition, the state data acquired by the inertial measurement unit is continuously updated using the multiple residuals, which solves the problem that the estimated motion of the intelligent robot stagnates due to degradation of the laser radar and improves the accuracy of map construction.
Referring to fig. 3, fig. 3 is a flowchart of a second embodiment of the map construction method provided in the present application. The method is applied to the processor, and comprises the following steps:
step 301: and acquiring the nominal state and an error state corresponding to the nominal state by using the inertial measurement unit.
Step 302: based on the error state, an error state covariance is determined.
Step 303: and acquiring the original data of each laser point by using a laser radar, and performing motion compensation on each laser point based on the nominal state to obtain the correction data of each laser point.
Step 304: the speed information is acquired using a wheel speed meter.
Step 305: and constructing various residual errors based on the speed information, the correction data of the laser points and the ground prior information.
Step 306: and updating the error state and the error state covariance by utilizing various residual errors to obtain an optimal state and an optimal error state covariance.
Step 307: distance information and angle information are extracted from the optimal state.
Step 308: and if the distance information and the angle information meet the preset requirements, the current frame is indicated to be a key frame.
Distance and angle thresholds can be set; for example, a frame may be regarded as a key frame for every 0.1 meter of movement or every 2 degrees of rotation.
Step 309: the point data of the key frame is converted into a global coordinate system.
Since the data of the 2D point of the current frame is acquired under the body coordinate system of the lidar, the data needs to be converted into the same coordinate system (i.e. global coordinate system) for unified calculation.
Step 3010: and updating the initial map by utilizing the principle of occupying the grid map to obtain the target map.
When the initial map is updated by utilizing the principle of occupying the grid map, a local subgraph formed by combining a series of frames is obtained, and then a target map is obtained based on the local subgraph.
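A minimal sketch of the key-frame test and occupancy-grid update described in this embodiment is given below; the log-odds increments, the simple ray traversal and the function names are illustrative assumptions.

```python
import numpy as np

def is_keyframe(dist, angle_deg, dist_thresh=0.1, angle_thresh=2.0):
    # The current frame becomes a key frame once it has moved or rotated enough.
    return dist >= dist_thresh or angle_deg >= angle_thresh

def update_occupancy_grid(log_odds, robot_cell, hit_cells, l_occ=0.85, l_free=-0.4):
    """Update a log-odds occupancy grid with one key frame already expressed in the
    global coordinate system (hit_cells are the grid cells of the laser points)."""
    for (r, c) in hit_cells:
        # Cells along the ray from the robot cell to the hit cell are observed free.
        n = max(abs(r - robot_cell[0]), abs(c - robot_cell[1]))
        for t in np.linspace(0.0, 1.0, n + 1)[:-1]:
            fr = int(round(robot_cell[0] + t * (r - robot_cell[0])))
            fc = int(round(robot_cell[1] + t * (c - robot_cell[1])))
            log_odds[fr, fc] += l_free
        log_odds[r, c] += l_occ          # the end point of the ray is observed occupied
    return log_odds
```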
Steps 301 to 306 have the same or similar technical solutions as the above embodiments, and reference may be made to the above related statements, which are not repeated here.
In this embodiment, the original position data and original attitude data acquired by the laser radar are motion-compensated using the nominal state acquired by the inertial measurement unit to obtain corrected position data and corrected attitude data, and the fusion is carried out at the raw-data level rather than at the state-quantity level, so that the advantages of the different sensors can be fully exploited and the problem that the motion estimation of the laser radar easily degrades in long corridors and other complex scenes is mitigated. In addition, the state data acquired by the inertial measurement unit is continuously updated using the multiple residuals, which solves the problem that the estimated motion of the intelligent robot stagnates due to degradation of the laser radar and improves the accuracy of map construction.
Referring to fig. 4, fig. 4 is a flowchart of a third embodiment of the map construction method provided in the present application. The method is applied to the processor, and comprises the following steps:
Step 401: and acquiring the nominal state and an error state corresponding to the nominal state by using the inertial measurement unit.
Step 402: based on the error state, an error state covariance is determined.
Step 403: and acquiring the original data of each laser point by using a laser radar, and performing motion compensation on each laser point based on the nominal state to obtain the correction data of each laser point.
Step 404: the speed information is acquired using a wheel speed meter.
Step 405: and constructing various residual errors based on the speed information, the correction data of the laser points and the ground prior information.
Step 406: and respectively utilizing the probability map to match the observation model, the wheel speed meter observation model and the plane constraint observation model to obtain a corresponding observation equation.
The probability map is used for matching an observation model, a wheel speed meter observation model and a plane constraint observation model to obtain a corresponding observation equation, and specific reference can be made to the related description of the embodiment.
Step 407: based on each observation equation, a residual error corresponding to each observation model is determined.
Step 408: and decomposing the observation jacobian matrix in each observation equation to obtain an orthogonal matrix and an upper triangular matrix corresponding to each observation equation.
Wherein the dimension of the upper triangular matrix is consistent with the dimension of the error state.
Step 409: and determining a gain matrix corresponding to each observation equation based on the decomposed orthogonal matrix and the upper triangular matrix.
Step 4010: and respectively carrying out iterative updating on the error state and the error state covariance by using a gain matrix and a residual error corresponding to each observation equation to obtain an optimal state and an optimal error state covariance.
Step 4011: and updating the initial map based on the optimal state to obtain a target map.
For steps 408 to 4010, reference may be made specifically to the following:
Assume that at time $t$ the observation available through the sensor is:

$$z = h(x_t) + n, \qquad (24)$$

where $z$ is the observation, $h(\cdot)$ is the observation equation, $x_t$ is the true state quantity at time $t$, and $n$ is the observation white noise with covariance $V$, i.e. $n\sim\mathcal{N}(0,V)$.

Using the state prediction model of the first embodiment, the predicted value of the true state quantity at time $t$ is $\hat x_t$, the predicted value of the nominal state is $\hat x$, the predicted value of the error state is $\delta\hat x$, and the error state covariance is $P$. Updating the error state and its covariance yields:

$$K = P H^{T}\big(H P H^{T} + V\big)^{-1} \qquad (25)$$
$$\delta x = K\big(z - h(\hat x_t)\big) \qquad (26)$$
$$x_t^{+} = \hat x \boxplus \delta x \qquad (27)$$
$$P^{+} = (I - K H)\,P \qquad (28)$$

where $x_t^{+}$ is the posterior true state quantity, $K$ is the gain matrix, and $H$ is the Jacobian of the observation equation $h$ with respect to the error state, also known as the observation Jacobian. Since in practice the number of observation equations is far greater than the dimension of the state quantity, the number of rows of the observation Jacobian $H$ is very large when the gain matrix $K$ is calculated, so the inversion of $H P H^{T} + V$ can be time consuming; therefore the original observations (i.e., the observations before compression) need to be compressed to reduce the amount of calculation.
Performing QR decomposition on the observation Jacobian matrix $H$ gives:

$$H = Q\begin{bmatrix} R_1 \\ 0 \end{bmatrix}, \qquad (29)$$

where $Q$ is an orthogonal matrix and $R_1$ is an upper triangular matrix whose dimension is consistent with the dimension of the error state; for example, if the error state is a $15\times 1$ vector, i.e. of dimension 15, then $R_1$ is a $15\times 15$ square matrix.

The original observation equation is then combined with the decomposition:

$$\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = Q^{T}\big(z - h(\hat x_t)\big), \qquad z_1 = R_1\,\delta x + n_1, \qquad (30)$$

where $z_1$ and $z_2$ are the decomposed observation values.

Substituting $z_1$, $R_1$ and the compressed noise covariance $V_1$ into the state update equation gives:

$$K = P R_1^{T}\big(R_1 P R_1^{T} + V_1\big)^{-1}, \qquad (31)$$
$$\delta x = K z_1, \qquad P^{+} = (I - K R_1)\,P. \qquad (32)$$

It should be noted that the observation $z$ is obtained by combining the three observation models, i.e. the observations of the several observation models are stacked into a whole (so $z$ is an $m\times 1$ matrix), and the three models concern the same state $x$. The above state update equations can also be applied to each observation model separately, i.e. the gain matrix and residual corresponding to each observation equation are used to iteratively update the error state and the error state covariance to obtain the optimal state $\hat x^{+}$ and the optimal error state covariance $P^{+}$; for example, the final iteration result can be taken as the optimal state and the optimal error state covariance.
In addition, the embodiment adopts the observed quantity compression method, so that the calculated quantity of the system can be greatly reduced on the premise of not affecting the precision, the efficiency of the system is greatly improved, and the system can also be efficiently and stably operated on some embedded equipment with limited resources.
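To illustrate the observation-compression and update of formulas (29)-(32), the sketch below QR-decomposes the stacked observation Jacobian, keeps the upper-triangular block whose dimension matches the error state, and performs a single (non-iterated) Kalman update with the compressed quantities; the names and the simplified update are assumptions for illustration.

```python
import numpy as np

def compressed_update(P, H, r, V):
    """Compress a tall observation (H: m x n with m >> n) via QR and update the filter.

    P : (n, n) predicted error-state covariance
    H : (m, n) stacked observation Jacobian of all observation models
    r : (m,)   stacked residuals
    V : (m, m) observation noise covariance
    """
    n = H.shape[1]
    Q, R_full = np.linalg.qr(H, mode='complete')   # H = Q [R1; 0]
    R1 = R_full[:n, :]                             # upper-triangular block, same dimension as the error state
    Q1 = Q[:, :n]
    r1 = Q1.T @ r                                  # compressed residual
    V1 = Q1.T @ V @ Q1                             # compressed noise covariance
    # Error-state Kalman update with the compressed quantities.
    S = R1 @ P @ R1.T + V1
    K = P @ R1.T @ np.linalg.inv(S)                # gain matrix
    dx = K @ r1                                    # error-state correction
    P_new = (np.eye(n) - K @ R1) @ P               # covariance update
    return dx, P_new
```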
Steps 401 to 407 have the same or similar technical solutions as the above embodiments, and are not described here again.
In this embodiment, the original position data and original attitude data acquired by the laser radar are motion-compensated using the nominal state acquired by the inertial measurement unit to obtain corrected position data and corrected attitude data, and the fusion is carried out at the raw-data level rather than at the state-quantity level, so that the advantages of the different sensors can be fully exploited and the problem that the motion estimation of the laser radar easily degrades in long corridors and other complex scenes is mitigated. In addition, the state data acquired by the inertial measurement unit is continuously updated using the multiple residuals, which solves the problem that the estimated motion of the intelligent robot stagnates due to degradation of the laser radar and improves the accuracy of map construction.
In combination with the above embodiments, it can be seen that the map construction method provided by the present invention is mainly divided into the following four steps, which may be read together with fig. 5.
The first step is to predict the state by using the original inertial data output by the inertial measurement unit under the error Kalman filtering frame to obtain the covariance matrix of the nominal state and the corresponding error state.
Wherein the frequency of the inertial measurement unit may be 200Hz.
And secondly, carrying out backward integration in a 2D laser radar time interval by using a state prediction model, namely the nominal state acquired in the first step, so as to accurately perform motion compensation on the 2D laser radar data. Wherein the frequency of the 2D lidar may be 10Hz.
Thirdly, constructing a 2D laser radar frame and map matching residual error, a speedometer state residual error and a ground constraint residual error, updating state quantity and covariance, and performing acceleration processing on an updating process by utilizing QR decomposition to obtain a current optimal state quantity.
A wheel speed meter with a frequency of 50 Hz may be selected to obtain the speed information (such as the angular velocity and linear velocity), and a wheel speed meter state residual is constructed based on the speed information; a ground constraint residual is constructed based on the ground prior information of the robot. For the specific process, reference may be made to the related description of the above embodiments. The map matching residual, the state residual and the ground constraint residual are combined, maximum a posteriori estimation is performed using iterative error-state Kalman filtering to obtain a real-time, high-precision map and positioning information, and the update process is accelerated using QR decomposition to obtain the current optimal state.
And step four, updating the initial map by utilizing the optimal state to obtain an optimal map.
When the initial map is updated by utilizing the optimal state, whether the current frame is a key frame or not can be judged, and the map is updated on the premise of being the key frame.
That is, the 2D laser radar occupancy probability information, plane priori information of the robot and wheel speed meter attitude integral recursion information are utilized, the information is tightly coupled with the inertial information of the inertial measurement unit from the original data layer by utilizing error Kalman filtering, the advantages of different sensors are fully utilized, and the accuracy of the system is effectively improved.
Secondly, the observation values are effectively compressed, the dimension conversion is carried out on the observation groups through QR decomposition according to the characteristic of independent distribution among the observation values, so that the dimension of the observation equation is consistent with the dimension of the state quantity, the calculated amount is greatly reduced, and the efficiency of the system is improved.
In addition, the state is predicted using the raw inertial data provided by the six-axis inertial measurement unit, and the original pose state quantity to be estimated has six degrees of freedom, namely three position quantities and three attitude quantities, whereas the observations of the 2D laser radar, the wheel speed meter and the like can only provide information about three degrees of freedom, namely two planar positions and one heading-angle attitude quantity, so the information of the remaining dimensions is difficult to estimate. Therefore, the invention converts the ground prior information into observations of the other three degrees of freedom, so that the estimation of the state quantity is more stable and the robustness of the system is improved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a computer readable storage medium provided in the present application, where the computer readable storage medium 90 is used to store a computer program 91, and the computer program 91 when executed by a processor is used to implement the following method steps:
acquiring a nominal state and an error state corresponding to the nominal state by using an inertial measurement unit; determining an error state covariance based on the error state; acquiring original data of each laser point by using a laser radar, and performing motion compensation on each laser point based on a nominal state to obtain correction data of each laser point; acquiring speed information by using a wheel speed meter; constructing various residual errors based on the speed information, the correction data of the laser points and the ground prior information; updating the error state and the error state covariance by utilizing various residual errors to obtain an optimal state and an optimal error state covariance; and updating the initial map based on the optimal state to obtain a target map.
It will be appreciated that the computer program 91, when executed by a processor, is also operative to implement aspects of any of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units of the other embodiments described above may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as stand alone products. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all or part of the technical solution contributing to the prior art or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely embodiments of the present application and is not intended to limit the patent scope of the present application; any equivalent structure or equivalent process made using the contents of the specification and the accompanying drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise falls within the patent protection scope of the present application.

Claims (9)

1. A map construction method, characterized by being applied to a robot including an inertial measurement unit, a lidar, and a wheel speed meter, the method comprising:
acquiring a nominal state and an error state corresponding to the nominal state by using the inertial measurement unit;
determining an error state covariance based on the error state;
acquiring original data of each laser point by using the laser radar, wherein the original data of each laser point comprises original position data and original attitude data;
acquiring a laser frame in a preset time stamp by using the laser radar;
under the coordinate system of the inertial measurement unit, acquiring a nominal state corresponding to each timestamp in the preset timestamp by using the inertial measurement unit;
determining a target time stamp corresponding to each target laser point based on a plurality of target laser points contained in the laser frame;
selecting nominal states corresponding to time stamps adjacent to the target time stamp from the nominal states corresponding to each time stamp;
obtaining position data and attitude data corresponding to the target time stamp based on the nominal states respectively corresponding to the adjacent time stamps;
obtaining an original position matrix corresponding to each target laser point based on the original position data of each target laser point;
obtaining a correction position matrix of each target laser point based on the position data and the attitude data, the original position matrix, the extrinsic matrix, and the rotation matrix and the position matrix of the preset time stamp corresponding to each target laser point;
based on the correction position matrix of each target laser point, obtaining correction data of each target laser point;
acquiring speed information by using the wheel speed meter;
constructing a plurality of residuals based on the speed information, the correction data of the laser points and the ground prior information;
updating the error state and the error state covariance by using the plurality of residuals to obtain an optimal state and an optimal error state covariance;
and updating the initial map based on the optimal state to obtain a target map.
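As an illustrative, non-claimed sketch of the motion-compensation steps recited in claim 1 above, the following simplified 2D Python snippet interpolates the pose at a target laser point's time stamp between the nominal states of the adjacent time stamps and re-projects the point into the pose of the frame time stamp; the IMU-lidar extrinsic matrix is omitted for brevity and all names are hypothetical.

```python
import numpy as np

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linear interpolation of a 2D pose between two nominal states (sketch)."""
    a = (t - t0) / (t1 - t0)
    pos = (1 - a) * pose0["p"] + a * pose1["p"]
    yaw = (1 - a) * pose0["yaw"] + a * pose1["yaw"]   # assumes a small angular change
    return pos, yaw

def compensate_point(point_xy, t, t0, pose0, t1, pose1, frame_pose):
    """Re-project one 2D laser point into the pose of the frame time stamp (sketch)."""
    pos, yaw = interpolate_pose(t, t0, pose0, t1, pose1)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    p_world = R @ np.asarray(point_xy) + pos           # point expressed at its own time stamp
    cf, sf = np.cos(frame_pose["yaw"]), np.sin(frame_pose["yaw"])
    Rf = np.array([[cf, -sf], [sf, cf]])
    return Rf.T @ (p_world - frame_pose["p"])          # back into the frame's reference pose
```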
2. The method of claim 1, wherein the speed information comprises a linear velocity and an angular velocity, and the plurality of residuals comprises a probability map matching residual, a state residual, and a ground constraint residual;
the constructing the plurality of residuals based on the speed information, the correction data of the laser points and the ground prior information comprises:
constructing the probability map matching residual based on the correction data of the laser points;
constructing the state residual based on the linear velocity and the angular velocity;
constructing the ground constraint residual based on the ground prior information of the robot;
the updating the error state and the error state covariance by using the plurality of residuals to obtain an optimal state and an optimal error state covariance comprises:
updating the error state and the error state covariance by using iterative error-state Kalman filtering based on the probability map matching residual, the ground constraint residual and the state residual, to obtain the optimal state and the optimal error state covariance.
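As an illustrative sketch (not part of the claims) of how the several residuals of claim 2 could be combined for a single filter update, the snippet below stacks per-model residuals, Jacobians and noise blocks into one joint observation; the block-diagonal noise assumes the observation models are mutually independent, and all names are hypothetical.

```python
import numpy as np

def stack_observations(blocks):
    """Stack per-model (residual, jacobian, noise_cov) triples into one observation (sketch)."""
    r = np.concatenate([b[0] for b in blocks])   # joint residual vector
    H = np.vstack([b[1] for b in blocks])        # joint observation Jacobian
    R = np.zeros((len(r), len(r)))
    i = 0
    for res, _, cov in blocks:
        m = len(res)
        R[i:i + m, i:i + m] = cov                # block-diagonal observation noise
        i += m
    return r, H, R
```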
3. The method of claim 2, wherein the constructing the probability map matching residual based on the correction data of the laser points comprises:
obtaining a true value corresponding to each laser point based on the correction data of each laser point and noise of the laser points in a preset period;
calculating the probability value of each laser point in the local map by using a bicubic interpolation method based on the true value;
based on the probability values, determining an observation value corresponding to each laser point;
determining covariance corresponding to the observed value based on a preset noise jacobian matrix;
and determining the probability map matching residual error corresponding to each laser point based on the observed value and the covariance.
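A hedged sketch of the probability-map lookup in claim 3 is given below; it uses SciPy's cubic spline interpolation (`map_coordinates` with `order=3`) as a stand-in for the bicubic interpolation recited in the claim, and the grid, origin and resolution parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def match_probability(prob_grid, point_world, origin, resolution):
    """Interpolated probability of one corrected laser point in the local map (sketch).

    prob_grid:   2D array of cell occupancy probabilities in [0, 1]
    point_world: (2,) point in map coordinates (metres)
    origin:      (2,) map origin in metres; resolution: cell size in metres per cell
    """
    gx = (point_world[0] - origin[0]) / resolution   # continuous column index
    gy = (point_world[1] - origin[1]) / resolution   # continuous row index
    # Cubic spline interpolation over the grid (stand-in for bicubic interpolation).
    return float(map_coordinates(prob_grid, [[gy], [gx]], order=3, mode="nearest")[0])

def map_matching_residual(prob):
    """Residual that shrinks as the point falls into a higher-probability cell."""
    return 1.0 - prob
```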
4. The method of claim 2, wherein the state residual comprises a position residual and an attitude residual, and the constructing the state residual based on the linear velocity and the angular velocity comprises:
acquiring the optimal position and the optimal attitude of the previous frame;
obtaining a position estimation value of the current frame based on the optimal position, the linear velocity and the linear velocity noise;
obtaining an attitude estimation value of the current frame based on the optimal attitude, the angular velocity and the angular velocity noise;
obtaining a position observation value of the current frame based on the position estimation value, the linear velocity noise and the current position to be optimized;
obtaining an attitude observation value of the current frame based on the attitude estimation value, the angular velocity noise and the current attitude to be optimized;
determining the position residual based on the position observation value and the position estimation value;
and determining the attitude residual based on the attitude observation value and the attitude estimation value.
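The state residual of claim 4 can be sketched for a planar robot as follows: the previous frame's optimal pose is propagated with the measured linear and angular velocity to give the estimated pose, and the residuals are its differences to the pose currently being optimised; the noise terms are omitted and all names are hypothetical.

```python
import numpy as np

def wheel_odometry_residuals(prev_pos, prev_yaw, v, w, dt, cur_pos, cur_yaw):
    """Position and attitude residuals from wheel-speed-meter dead reckoning (sketch)."""
    # Dead-reckoned prediction of the current pose from the previous optimum.
    pred_yaw = prev_yaw + w * dt
    pred_pos = prev_pos + v * dt * np.array([np.cos(prev_yaw), np.sin(prev_yaw)])
    # Residuals between the pose being optimised and the prediction.
    r_pos = np.asarray(cur_pos) - pred_pos
    r_yaw = np.arctan2(np.sin(cur_yaw - pred_yaw), np.cos(cur_yaw - pred_yaw))  # wrapped angle
    return r_pos, r_yaw
```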
5. The method of claim 1, wherein updating the initial map based on the optimal state to obtain the target map comprises:
extracting distance information and angle information from the optimal state;
determining that the current frame is a key frame if the distance information and the angle information meet preset requirements;
converting the point data of the key frame into a global coordinate system;
and updating the initial map by utilizing an occupancy grid map principle to obtain the target map.
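An illustrative sketch of the keyframe test and occupancy grid update in claim 5 follows; the thresholds and log-odds increments are assumed values, not taken from the disclosure.

```python
import numpy as np

def is_keyframe(dist_moved, angle_turned, dist_thresh=0.2, angle_thresh=np.deg2rad(10.0)):
    """Keyframe test on the distance and angle extracted from the optimal state (sketch)."""
    return dist_moved > dist_thresh or abs(angle_turned) > angle_thresh

def update_occupancy(log_odds, hit_cells, miss_cells, l_hit=0.85, l_miss=-0.4):
    """Occupancy grid update in log-odds form (illustrative constants)."""
    for r, c in hit_cells:
        log_odds[r, c] += l_hit    # cells containing a beam endpoint become more occupied
    for r, c in miss_cells:
        log_odds[r, c] += l_miss   # cells traversed by the beam become more free
    return log_odds
```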
6. The method of claim 1, wherein the determining an error state covariance based on the error state comprises:
acquiring the optimal error state covariance of the previous frame;
acquiring a state matrix and a noise matrix;
and obtaining the error state covariance of the current frame based on the state matrix, the noise matrix, the optimal error state covariance of the previous frame and a preset diagonal matrix.
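A minimal sketch of the covariance prediction in claim 6, assuming the usual error-state propagation P = F P Fᵀ + Q with the preset diagonal matrix acting as an additional additive term (its exact role in the claim is an assumption):

```python
import numpy as np

def propagate_covariance(P_prev_opt, F, Q, D=None):
    """Error-state covariance prediction for the current frame (sketch).

    P_prev_opt: optimal error-state covariance of the previous frame
    F: state (transition) matrix, Q: noise matrix
    D: optional preset diagonal matrix used here as an additive regulariser (assumption)
    """
    P = F @ P_prev_opt @ F.T + Q
    if D is not None:
        P = P + D
    return P
```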
7. The method of claim 1, wherein updating the error state and the error state covariance with the plurality of residuals to obtain an optimal state and an optimal error state covariance comprises:
obtaining corresponding observation equations by respectively using a probability map matching observation model, a wheel speed meter observation model and a plane constraint observation model;
determining a residual error corresponding to each observation model based on each observation equation;
decomposing an observation jacobian matrix in each observation equation to obtain an orthogonal matrix and an upper triangular matrix corresponding to each observation equation; wherein the dimension of the upper triangular matrix is consistent with the dimension of the error state;
based on the decomposed orthogonal matrix and the upper triangular matrix, determining a gain matrix corresponding to each observation equation;
and respectively carrying out iterative updating on the error state and the error state covariance by using the gain matrix and the residual error corresponding to each observation equation to obtain an optimal state and an optimal error state covariance.
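The QR-based gain computation of claim 7 can be sketched as follows, assuming an isotropic observation noise R = sigma²·I; under that assumption the update carried out in the reduced space spanned by the upper-triangular factor (whose dimension matches the error state) is algebraically equivalent to the standard Kalman update, which is the usual motivation for the decomposition. Names and shapes are assumptions.

```python
import numpy as np

def reduced_gain_update(P, H, residual, sigma):
    """IESKF-style update with a QR-decomposed observation Jacobian (sketch).

    P: error-state covariance (n x n); H: observation Jacobian (m x n), m >= n
    residual: observation residual (m,); sigma: isotropic noise std-dev (R = sigma^2 I)
    """
    Q_mat, U = np.linalg.qr(H)    # thin QR: U is n x n, matching the error-state dimension
    r_red = Q_mat.T @ residual    # residual projected into the reduced space
    A = U @ P @ U.T
    K_red = P @ U.T @ np.linalg.inv(A + sigma**2 * np.eye(U.shape[0]))
    dx = K_red @ r_red            # error-state correction for this iteration
    P_new = (np.eye(P.shape[0]) - K_red @ U) @ P
    return dx, P_new
```

In an iterated update, the corrected state would be re-linearised and the same computation repeated until the correction becomes small.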
8. A computer device, characterized in that it comprises a memory on which a computer program is stored and a processor which, when executing the computer program, implements the method according to any of claims 1-7.
9. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1-7.
CN202311194224.6A 2023-09-15 2023-09-15 Map construction method, device and storage medium Active CN116929338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311194224.6A CN116929338B (en) 2023-09-15 2023-09-15 Map construction method, device and storage medium

Publications (2)

Publication Number Publication Date
CN116929338A (en) 2023-10-24
CN116929338B (en) 2024-01-09

Family

ID=88375759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311194224.6A Active CN116929338B (en) 2023-09-15 2023-09-15 Map construction method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116929338B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117848331A (en) * 2024-03-06 2024-04-09 河北美泰电子科技有限公司 Positioning method and device based on visual tag map

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111189449A (en) * 2020-01-21 2020-05-22 杭州大数云智科技有限公司 Robot map construction method
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit
CN114136311A (en) * 2021-11-08 2022-03-04 上海应用技术大学 Laser SLAM positioning method based on IMU pre-integration
CN115265523A (en) * 2022-09-27 2022-11-01 泉州装备制造研究所 Robot simultaneous positioning and mapping method, device and readable medium
CN115436955A (en) * 2022-08-04 2022-12-06 苏州和仲智能科技有限公司 Indoor and outdoor environment positioning method
CN116465393A (en) * 2023-04-27 2023-07-21 北京石头创新科技有限公司 Synchronous positioning and mapping method and device based on area array laser sensor
CN116608847A (en) * 2023-04-27 2023-08-18 北京石头创新科技有限公司 Positioning and mapping method based on area array laser sensor and image sensor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113945206B (en) * 2020-07-16 2024-04-19 北京图森未来科技有限公司 Positioning method and device based on multi-sensor fusion


Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
Indelman et al. Factor graph based incremental smoothing in inertial navigation systems
Wu et al. Hand-eye calibration: 4-D procrustes analysis approach
CN107516326B (en) Robot positioning method and system fusing monocular vision and encoder information
CN111795686B (en) Mobile robot positioning and mapping method
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
CN114862932B (en) BIM global positioning-based pose correction method and motion distortion correction method
CN116929338B (en) Map construction method, device and storage medium
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
Zhang et al. Vision-aided localization for ground robots
CN114001733A (en) Map-based consistency efficient visual inertial positioning algorithm
Zhao et al. Review of slam techniques for autonomous underwater vehicles
CN113763549A (en) Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU
CN113701742B (en) Mobile robot SLAM method based on cloud and edge fusion calculation
CN117073720A (en) Method and equipment for quick visual inertia calibration and initialization under weak environment and weak action control
CN111474932A (en) Mobile robot mapping and navigation method integrating scene experience
CN116202509A (en) Passable map generation method for indoor multi-layer building
CN116380039A (en) Mobile robot navigation system based on solid-state laser radar and point cloud map
CN115930948A (en) Orchard robot fusion positioning method
CN114723920A (en) Point cloud map-based visual positioning method
Hu et al. Efficient Visual-Inertial navigation with point-plane map
Xu et al. Multi-Sensor Fusion Framework Based on GPS State Detection
CN112581616B (en) Nearest neighbor UKF-SLAM method based on sequential block filtering
Kelly et al. Simultaneous mapping and stereo extrinsic parameter calibration using GPS measurements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant