CN115164887A - Pedestrian navigation positioning method and device based on laser radar and inertia combination - Google Patents
- Publication number
- CN115164887A CN115164887A CN202211044235.1A CN202211044235A CN115164887A CN 115164887 A CN115164887 A CN 115164887A CN 202211044235 A CN202211044235 A CN 202211044235A CN 115164887 A CN115164887 A CN 115164887A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- key frame
- pedestrian
- current moment
- closed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
Abstract
The application relates to a pedestrian navigation positioning method and device based on the combination of a laser radar and inertia. The method obtains the pose at the current moment by fusing, based on Kalman filtering, the three-dimensional point cloud and inertial data acquired by a laser radar and its built-in inertial measurement unit. Gait information of the pedestrian is extracted by an inertial measurement unit arranged on the pedestrian's foot: a point cloud key frame is constructed at non-zero-velocity moments, and no key frame is constructed during a continuous zero-velocity state, so the computational load of the algorithm is greatly reduced and its operating efficiency is ensured. A closed-loop detection algorithm based on position and a point cloud descriptor then effectively judges whether a closed-loop constraint exists, which improves the accuracy of closed-loop detection, allows the pose estimated by the odometry to be optimized and updated, and improves the precision of the combined lidar-inertial positioning method.
Description
Technical Field
The application relates to the technical field of pedestrian navigation, in particular to a pedestrian navigation positioning method and device based on laser radar and inertia combination.
Background
Pedestrian navigation is a navigation technology for providing the current position of a pedestrian and planning a path. To get rid of the dependence on external signals, it is extremely important to realize autonomous pedestrian positioning. Currently, pedestrian autonomous navigation positioning technology is mainly based on an Inertial Measurement Unit (IMU). With the rapid development of micro-electromechanical technology, the cost of the IMU has been greatly reduced, and pedestrian navigation systems based on a Micro Inertial Measurement Unit (Micro-IMU, MIMU) are also maturing.
A pedestrian navigation system based on an IMU mainly adopts inertial navigation solving: the current pedestrian position and attitude data are calculated from the output of the built-in gyroscope and accelerometer. However, because of accumulated errors, the position obtained by the inertial navigation solution of the IMU becomes increasingly unreliable over time. Therefore, the IMU is often combined with other auxiliary information to correct the accumulated error and improve the positioning accuracy. Zero-velocity correction is a cheap and effective way to correct IMU errors without introducing other sensors. The pedestrian movement process can be decomposed into a combination of gait cycles: within one gait cycle, when the pedestrian's foot contacts the ground, the foot and the ground are relatively static, and the velocity of the sensor relative to the ground is zero. Therefore, this gait characteristic can be used to effectively estimate and correct the navigation error of the IMU.
With the rapid development of computer vision technology, low-cost and miniaturized laser radars (lidar) are also used for pedestrian navigation, in what is called laser SLAM (Simultaneous Localization And Mapping). A lidar can accurately measure the distance between the radar and targets in the surrounding environment, with a detection range of up to hundreds of meters. Laser SLAM can correct the pose error of the IMU inertial navigation solution and improve positioning accuracy; the recorded three-dimensional point cloud information can be used to reconstruct the surrounding environment, and the generated point cloud map provides a physical reference for pedestrian navigation. However, because errors accumulate over time, the pose estimated by laser SLAM also becomes inaccurate, so globally consistent tracks and maps cannot be obtained. It is necessary to add a closed-loop constraint to the laser SLAM system to suppress the long-term cumulative drift caused by relying only on local feature matching. Closed-loop detection is the ability to identify previously visited locations by measuring the similarity between any two locations. Once a closed loop is detected, the drifted trajectory can be corrected to the appropriate position based on the detected closed-loop point. Therefore, accurate closed-loop determination is particularly important for improving the adaptability and reliability of laser SLAM.
Existing closed-loop detection methods mainly focus on two aspects. The first is based on the geometric relationship of the odometry: if the pose estimation result is consistent with a historical pose result, a closed-loop relation is generated. However, the accumulated error means the pose estimation result is not necessarily accurate, so this method obviously cannot be used directly for closed-loop detection. The second determines the closed-loop relation according to the similarity of two frames of point clouds. Bosse et al. achieve place recognition by directly extracting key points from a 3D point cloud and describing them with a handcrafted 3D descriptor; Magnusson et al. describe the appearance of the 3D point cloud using the Normal Distribution Transform (NDT), and determine the similarity of two frames of point clouds based on the histogram of the NDT descriptor. In addition, learning-based methods have also been used for closed-loop detection in recent years. SegMatch, proposed by De et al., implements place recognition by matching semantic features such as buildings and vehicles; Angelina et al. achieve place recognition by extracting global descriptors with an end-to-end network, but it requires joint training of PointNet and NetVLAD. Although learning-based methods avoid the complexity of manually designing descriptors, their computation is huge, and their performance depends heavily on the data set used in the training process.
Considering the small field angle and special scanning mode of a solid-state lidar, point cloud descriptors designed for the 360-degree field of view of a mechanical radar are not suitable, so a proper point cloud descriptor must be selected to describe the environment features. Meanwhile, as time accumulates, the historical key frame data also imposes a huge computational burden on the algorithm, affecting the real-time performance of the combined lidar-inertial positioning algorithm. Most existing closed-loop detection is used when the open-loop error is large; if the open-loop error is small, directly matching historical poses is clearly feasible. On the other hand, if matching is carried out only according to point cloud key frames, judging the closed loop by similarity alone has inherent defects, and easily produces wrong closed loops in some highly structured scenes.
Disclosure of Invention
In view of the above, there is a need to provide a pedestrian navigation positioning method and device based on the combination of lidar and inertia, which can reduce the computational load of the algorithm and improve the positioning accuracy.
A pedestrian navigation positioning method based on laser radar and inertia combination, the method comprises the following steps:
acquiring three-dimensional point cloud and first inertia data which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof at the current moment;
acquiring second inertial data acquired at the current moment by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the pedestrian at the current moment;
judging whether the pedestrian is in a motion state currently according to the second inertial data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be close to the position of the historical moment, performing optimization updating on the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
In one embodiment, the determining whether the pedestrian is currently in a moving state according to the second inertia data includes:
the second inertia data comprises acceleration, and whether the pedestrian is in a motion state or not is judged according to the module value of the acceleration, the periodic change of the variance and a preset first threshold value.
In one embodiment, if the motion state is satisfied, constructing a point cloud key frame according to the three-dimensional point cloud includes:
and if the current moment is in a motion state, accumulating the original three-dimensional point clouds with preset frame numbers by taking the moment as an initial moment, splicing the accumulated original three-dimensional point clouds with the preset frame numbers, and constructing the point cloud key frame.
In one embodiment, the calculating the key frame descriptor corresponding to the point cloud key frame comprises: and segmenting the point cloud key frame, and describing the segmented point cloud key frame by using a 2D histogram to obtain the key frame descriptor.
In one embodiment, the performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current time and the point cloud key frame constructed at the historical time includes:
calculating a path distance and an Euclidean distance between two point cloud key frames according to the point cloud key frames constructed at the current moment and the poses corresponding to the point cloud key frames constructed at the historical moment;
and comparing the ratio of the Euclidean distance to the path distance with a preset second threshold, and if the ratio is smaller than the second threshold, judging that the position at the current moment is near a position visited at a historical moment.
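The position-based candidate check above can be sketched as follows; this is an illustrative assumption of the geometry only (the function names and the 0.3 ratio threshold are not values from the application):

```python
import math

def path_distance(poses, i, j):
    """Cumulative distance travelled along the trajectory between key frames i and j."""
    return sum(math.dist(poses[k], poses[k + 1]) for k in range(i, j))

def is_loop_candidate(poses, i, j, ratio_threshold=0.3):
    """Flag key frame j as revisiting the area of key frame i when the straight-line
    (Euclidean) distance is small compared with the distance actually walked."""
    d_euclid = math.dist(poses[i], poses[j])
    d_path = path_distance(poses, i, j)
    if d_path == 0.0:
        return False
    return d_euclid / d_path < ratio_threshold
```

For a roughly square walk that returns near the start, the ratio is small and the pair is flagged; for a straight walk, the ratio is close to 1 and no candidate is produced.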
In one embodiment, the performing closed-loop detection based on the point cloud descriptor includes: and calculating the similarity between two corresponding point cloud key frames according to the key frame descriptors respectively corresponding to the current moment and the historical moment, and judging that closed loops are detected if the similarity is greater than a preset third threshold value.
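A minimal sketch of the descriptor-similarity check, assuming the key frame descriptors are flattened non-negative 2D histograms and using cosine similarity as the similarity measure (the application does not fix the measure; the names and the 0.9 threshold are illustrative):

```python
import math

def cosine_similarity(desc_a, desc_b):
    """Similarity between two flattened histogram descriptors; for non-negative
    bins the result lies in [0, 1], with 1 meaning identical direction."""
    dot = sum(a * b for a, b in zip(desc_a, desc_b))
    na = math.sqrt(sum(a * a for a in desc_a))
    nb = math.sqrt(sum(b * b for b in desc_b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

def is_closed_loop(desc_a, desc_b, sim_threshold=0.9):
    """Declare a closed loop when descriptor similarity exceeds the third threshold."""
    return cosine_similarity(desc_a, desc_b) > sim_threshold
```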
A lidar and inertia combination-based pedestrian navigation positioning device, the device comprising:
the first data acquisition module is used for acquiring the three-dimensional point cloud and the first inertial data respectively acquired at the current moment by the laser radar and its built-in inertial measurement unit;
the second data acquisition module is used for acquiring second inertial data of the pedestrian at the current moment acquired by the inertial measurement unit arranged on the foot of the pedestrian;
the current time pose estimation module is used for fusing the three-dimensional point cloud and the first inertia data by utilizing iterative Kalman filtering to obtain a current time pose;
the key frame descriptor calculation module is used for judging whether the pedestrian is in a motion state currently according to the second inertial data, constructing a point cloud key frame according to the three-dimensional point cloud if the pedestrian is in the motion state, and calculating a key frame descriptor corresponding to the point cloud key frame;
and the double closed-loop detection module is used for performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position at the current moment is judged to be near a position visited at a historical moment, optimizing and updating the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
A computer device comprising a memory storing a computer program and a processor implementing the following steps when the computer program is executed:
acquiring three-dimensional point cloud and first inertia data which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof at the current moment;
acquiring second inertial data acquired by an inertial measurement unit arranged on the foot of the pedestrian at the current moment;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the pedestrian at the current moment;
judging whether the pedestrian is in a motion state currently according to the second inertial data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed loop detection based on the point cloud descriptor if the position of the current moment is judged to be near the position of the historical moment, optimizing and updating the pose in a closed loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring three-dimensional point cloud and first inertia data which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof at the current moment;
acquiring second inertial data acquired at the current moment by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pedestrian pose at the current moment;
judging whether the pedestrian is in a motion state currently according to the second inertial data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be close to the position of the historical moment, performing optimization updating on the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
According to the above pedestrian navigation positioning method and device based on the combination of lidar and inertia, the pose at the current moment is obtained by fusing, based on Kalman filtering, the three-dimensional point cloud and inertial data acquired by the lidar and its built-in inertial measurement unit; pedestrian gait information is extracted by the inertial measurement unit arranged on the pedestrian's foot, a point cloud key frame is constructed at non-zero-velocity moments, and no key frame is constructed during a continuous zero-velocity state, so the computational load of the algorithm is greatly reduced and its operating efficiency is ensured. The closed-loop detection algorithm based on position and the point cloud descriptor then effectively judges whether a closed-loop constraint exists, which improves the accuracy of closed-loop detection, allows the pose estimated by the odometry to be optimized and updated, and improves the precision of the combined lidar-inertial positioning method.
Drawings
FIG. 1 is a flowchart illustrating a method for navigating and positioning a pedestrian according to an embodiment;
FIG. 2 is a flowchart illustrating an implementation of a pedestrian navigation positioning method according to an embodiment;
FIG. 3 is a schematic flow chart of a method for constructing a point cloud key frame for pedestrian gait assistance in one embodiment;
FIG. 4 is a schematic flow chart diagram of a method for closed loop detection based on similarity of location and point cloud descriptors in one embodiment;
FIG. 5 is a diagram illustrating calculation of a path distance and Euclidean distance in one embodiment;
FIG. 6 is a block diagram of a pedestrian navigation positioning apparatus in one embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1, a pedestrian navigation positioning method based on a combination of laser radar and inertia is provided, which includes the following steps:
s100, acquiring three-dimensional point cloud and first inertial data respectively acquired by a laser radar and a built-in inertial measurement unit thereof at the current moment;
step S110, second inertia data collected at the current moment by an inertia measuring unit arranged on the foot of the pedestrian is obtained;
s120, fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the current moment;
step S130, judging whether the pedestrian is in a motion state at present according to the second inertia data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and step S140, performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be close to the position of the historical moment, performing optimization updating on the pose in a closed-loop interval if the closed loop is detected, outputting the pose at the current moment after optimization, and directly outputting the pose at the current moment if the closed loop is not detected.
In this embodiment, steps S100 and S110 are both data acquisition steps, in which the lidar, its built-in inertial measurement unit, and the inertial measurement unit arranged on the pedestrian's foot acquire data in real time; the data at each moment are processed by the methods of steps S120 to S140 to obtain the optimal pose of the pedestrian at each moment, where the pose includes the position coordinates and the direction of movement.
As shown in fig. 2, a flow chart of steps for implementing the method is also provided, and the data processing procedure will be described by taking the flow chart as an example.
In step S120, the data acquired by the lidar and its built-in inertial measurement unit are processed to obtain the pose of the pedestrian at the moment the data were acquired; as can be seen from fig. 2, this is calculated by the lidar-inertial odometry module.
Specifically, first, the first inertial data (i.e., IMU data) is preprocessed: performing state recurrence estimation on the IMU, following the formula:

$$\hat{x}_{i+1} = \hat{x}_i \boxplus \left( \Delta t \, f(\hat{x}_i, u_i, 0) \right) \tag{1}$$

In line with the description: $x$ represents the true state quantity, $\hat{x}$ represents the estimated state quantity, and $\tilde{x}$ represents the state error between the true state quantity and the estimated state quantity.

In formula (1), the subscript (e.g., $i$) is an index of IMU measurements; $x$ is the system state quantity, comprising the position, attitude and velocity of the carrier, the noise biases of the IMU, and the gravity vector; $\Delta t$ is the difference between two adjacent IMU sampling times within one radar scan (the IMU sampling interval); $u_i$ is the measurement data of the IMU; $w_i$ is the measurement noise of the IMU; the function $f$ is a function of $\hat{x}_i$, $u_i$ and $w_i$; $\boxplus$ is a custom operation that represents the state recurrence estimation on the state manifold.
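For intuition, the recurrence in formula (1) can be sketched for a flat Euclidean toy state containing only position and velocity; this is an illustrative simplification (attitude, biases, gravity estimation and the manifold operation are omitted), not the propagation actually claimed:

```python
def propagate(state, accel, dt):
    """One step of the state recurrence x_{i+1} = x_i + dt * f(x_i, u_i, 0),
    reduced to a flat (position, velocity) state with bias-corrected
    acceleration as the input u_i."""
    (px, py, pz), (vx, vy, vz) = state
    ax, ay, az = accel
    pos = (px + vx * dt, py + vy * dt, pz + vz * dt)  # position integrates velocity
    vel = (vx + ax * dt, vy + ay * dt, vz + az * dt)  # velocity integrates acceleration
    return pos, vel
```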
Therefore, the error state dynamic model is:

$$\tilde{x}_i = x_i \boxminus \hat{x}_i \tag{2}$$

In formula (2), $x_i$ represents the real state quantity of the system; $\hat{x}_i$ is the state estimate obtained by the system recursion; $\boxminus$ is a custom operation for obtaining the error amount between the two; $\tilde{x}_i$ is the error amount between $x_i$ and $\hat{x}_i$.
The covariance of the IMU measurement noise $w$ is denoted as $Q$, the covariance of the state recursion estimate is $\hat{P}$, and the iterative formula is:

$$\hat{P}_{i+1} = F_{\tilde{x}} \hat{P}_i F_{\tilde{x}}^{T} + F_w Q F_w^{T} \tag{3}$$

In formula (3), $\hat{P}_i$ represents the covariance of $\tilde{x}_i$; $F_{\tilde{x}}$ is the partial differential of $\tilde{x}_{i+1}$ with respect to $\tilde{x}_i$; $F_w$ is the partial differential of $\tilde{x}_{i+1}$ with respect to $w_i$.
Then, the three-dimensional point cloud data is preprocessed. The laser radar produces point cloud distortion while scanning on the move, and its sampling frequency is far lower than that of the IMU, so point cloud distortion removal can be carried out by running the state recurrence estimation in reverse:

$$\check{x}_{j-1} = \check{x}_j \boxplus \left( -\Delta t \, f(\check{x}_j, u_j, 0) \right) \tag{4}$$

The pose of each feature point in a frame of point cloud can be obtained through formula (4), but it is only the pose of the point relative to the radar at the moment it was scanned. To facilitate data processing, formula (5) can be used to convert the point cloud collected within one lidar scan period into the radar coordinate system at the scan end time (i.e., time $k$):

$${}^{L_k}p_j = \left( {}^{I}T_{L} \right)^{-1} \; {}^{I_k}\check{T}_{I_j} \; {}^{I}T_{L} \; {}^{L_j}p_j \tag{5}$$

In formula (5), $k$ is an index of the lidar scanning time; ${}^{L_j}p_j$ represents the point cloud coordinates, in the lidar coordinate system, of a point scanned by the lidar at time $j$; ${}^{A}T_{B}$ denotes the coordinate transformation matrix from the lower-right coordinate system $B$ to the upper-left coordinate system $A$, where $I$ is the IMU coordinate system and $L$ is the lidar coordinate system; ${}^{I_k}\check{T}_{I_j}$ represents the coordinate transformation matrix derived from the reverse recursive estimation (from the time-$j$ lidar coordinate system to the time-$k$ lidar coordinate system).
And finally, updating the Kalman filtering state by utilizing the preprocessed IMU data and the three-dimensional point cloud data.
In particular, to combine the computed residual with the predicted state $\hat{x}_k$ and covariance $\hat{P}$ propagated from the IMU values, the measurement model that relates the residual to the true state and the measurement noise needs to be linearized:

$$0 = h_j\!\left( x_k, n_j \right) = {}^{G}u_j^{T} \left( {}^{G}T_{I_k} \, {}^{I}T_{L} \, {}^{L_k}p_j - {}^{G}q_j \right) \tag{6}$$

In formula (6), $n_j$ is the measurement noise of measuring point $j$; ${}^{G}q_j$ is the true feature point coordinate in the global coordinate system corresponding to measuring point $j$; ${}^{G}u_j$ is the normal vector (or edge direction) of the plane (or edge) on which the feature point lies; ${}^{G}T_{I_k}$ is the coordinate transformation matrix from the IMU coordinate system to the global coordinate system; $h_j$ represents the mapping relationship between them.

Linearizing the measurement model yields:

$$z_j^{\kappa} = h_j\!\left( \hat{x}_k^{\kappa}, 0 \right) \simeq H_j^{\kappa} \tilde{x}_k^{\kappa} + v_j \tag{7}$$

In formula (7), the residual $z_j^{\kappa}$ is defined as the distance between the estimated coordinates of the point cloud feature in the global coordinate system and the nearest plane or edge on the map; $H_j^{\kappa}$ is the Jacobian matrix of $h_j$ with respect to $\tilde{x}_k^{\kappa}$; $v_j$ is derived from the measurement noise $n_j$.
The iterated Kalman filter formulas are then used:

$$K = \left( H^{T} R^{-1} H + \hat{P}^{-1} \right)^{-1} H^{T} R^{-1} \tag{8}$$

$$\hat{x}_k^{\kappa+1} = \hat{x}_k^{\kappa} \boxplus \left( -K z^{\kappa} - \left( I - K H \right) \left( J^{\kappa} \right)^{-1} \left( \hat{x}_k^{\kappa} \boxminus \hat{x}_k \right) \right) \tag{9}$$

In formula (8) and formula (9), $\kappa$ is the number of iterations of the Kalman filter; $J^{\kappa}$ is the partial differential of $\left( \hat{x}_k^{\kappa} \boxplus \tilde{x}_k^{\kappa} \right) \boxminus \hat{x}_k$ with respect to $\tilde{x}_k^{\kappa}$; $H$ is the Jacobian matrix; $\hat{P}$ is the covariance matrix; $R$ is the measurement noise covariance; $I$ is an identity matrix.
The estimate updated above is then used to iterate the state update. This process continues until the convergence condition is satisfied:

$$\left\| \hat{x}_k^{\kappa+1} \boxminus \hat{x}_k^{\kappa} \right\| < \epsilon \tag{10}$$
After convergence, the optimal state estimate and covariance can be obtained:

$$\bar{x}_k = \hat{x}_k^{\kappa+1}, \qquad \bar{P}_k = \left( I - K H \right) \hat{P} \tag{11}$$
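The iterated measurement update can be sketched for a scalar Euclidean state, replacing the manifold operations with ordinary addition and subtraction; the function names and the quadratic measurement model used in the example are illustrative assumptions, not the patent's lidar measurement model:

```python
def iterated_ekf_update(x_prior, P, z, h, H_of, R, tol=1e-8, max_iter=20):
    """One iterated-EKF measurement update for a scalar state.

    Re-linearises the measurement function h around the current iterate,
    in the spirit of the Kalman gain / update / convergence loop above.
    x_prior, P: predicted state and covariance; z: measurement;
    h: measurement function; H_of: its derivative; R: measurement noise.
    """
    x = x_prior
    for _ in range(max_iter):
        H = H_of(x)                              # re-linearise at current iterate
        K = P * H / (H * P * H + R)              # scalar Kalman gain
        x_new = x_prior + K * (z - h(x) - H * (x_prior - x))
        converged = abs(x_new - x) < tol         # convergence test
        x = x_new
        if converged:
            break
    P_post = (1.0 - K * H) * P                   # posterior covariance
    return x, P_post
```

With a nonlinear measurement such as h(x) = x², a prior of 2.0 and a measurement of 9.0, the loop converges toward the maximum-a-posteriori estimate near 3.0 in a handful of iterations.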
in step S130, when constructing the point cloud key frame, the construction is performed only when the pedestrian moves, which can greatly reduce the calculation amount while ensuring the precision of the closed loop, and reduce the false closed loop to a certain extent.
As shown in fig. 3, a method for constructing a point cloud key frame in combination with pedestrian gait features is provided.
First, based on the second inertial data acquired by the inertial measurement unit mounted on the pedestrian's foot, a zero-velocity correction module judges whether the pedestrian is stationary or moving at the current moment.
Specifically, the second inertial data includes acceleration, and whether the pedestrian is moving is judged from the modulus of the acceleration, the periodic variation of its variance, and a preset first threshold.
Then, the acceleration variance within a sliding window is calculated according to formula (12); removing the influence of gravity highlights the periodic motion pattern of the foot. The variance within the window can be calculated as follows:
In formula (13), the window mean is the average acceleration modulus over the current window, the index enumerates the acceleration samples, and the window size is a preset constant.
By comparison with the preset threshold, the current instant is classified according to the following formula as either a zero-velocity instant or a stepping instant (i.e., the foot is off the ground):
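The zero-velocity test described above can be sketched as follows; the function name, window size, and threshold value are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def detect_zero_velocity(acc, window=15, threshold=0.5, g=9.81):
    """Classify each sample as zero-velocity (stance) or stepping (swing).

    acc: (N, 3) array of foot-mounted accelerometer readings (m/s^2).
    window, threshold, and g are illustrative values, not from the patent.
    """
    mag = np.linalg.norm(acc, axis=1) - g      # remove gravity from the modulus
    flags = np.zeros(len(mag), dtype=bool)     # True -> zero-velocity instant
    for k in range(window - 1, len(mag)):
        win = mag[k - window + 1 : k + 1]
        var = np.mean((win - win.mean()) ** 2) # windowed variance of the modulus
        flags[k] = var < threshold             # below threshold -> foot on ground
    return flags
```

A stationary foot yields near-zero windowed variance, while the swing phase produces a large variance spike, which is what the threshold separates.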
in consideration of the unique non-repeated scanning mode of the small solid-state laser radar, scene information cannot be fully described by one frame of point cloud, so that it is not advisable to directly use a certain frame of original point cloud as a key frame. But a plurality of frame point clouds are spliced to be used as a key frame, richer scene information can be obtained by utilizing the characteristic of non-repetitive scanning of the solid laser radar, and environmental features are extracted more easily, so that the accuracy of closed-loop detection is improved.
A point cloud key frame is synthesized from a preset number of raw point-cloud frames: the raw point clouds are stitched together using the poses estimated by the lidar odometry module, so the key-frame point cloud can be expressed as:
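The splicing step can be sketched as follows, assuming homogeneous 4×4 odometry poses; the function name and array shapes are illustrative:

```python
import numpy as np

def build_keyframe(scans, poses):
    """Stitch several raw lidar scans into one keyframe point cloud.

    scans: list of (Ni, 3) point arrays, each in its scan's lidar frame.
    poses: list of (4, 4) homogeneous odometry poses (world <- lidar)
           for those scans, as estimated by the lidar-inertial odometry.
    The patent only specifies splicing a preset number of frames using
    the odometry poses; everything else here is an assumption.
    """
    parts = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        parts.append((T @ homo.T).T[:, :3])              # transform into world frame
    return np.vstack(parts)
```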
On the other hand, considering the impact of historical key-frame data on runtime efficiency, key frames (that is, point cloud key frames) must be selected judiciously. Through the zero-velocity correction module, the foot-mounted IMU provides the pedestrian's gait information, from which the motion state is judged. While the pedestrian remains stationary, the point cloud scanned by the lidar changes little, and repeatedly constructing key frames would greatly increase the amount of computation without improving loop-closure accuracy. Only when the pedestrian moves does the point cloud change significantly. Whether to construct a key frame can therefore be decided from the pedestrian's zero-velocity gait information.
When the pedestrian is in a zero-velocity gait, key-frame construction is not triggered. Once non-zero-velocity gait information is detected, i.e., the current moment is in a motion state, key-frame construction starts from that moment: a preset number of raw point-cloud frames are accumulated and spliced into a complete key frame, and the key-frame descriptor is then calculated.
In this embodiment, calculating the key frame descriptor corresponding to the point cloud key frame includes: and segmenting the point cloud key frame, and describing the segmented point cloud key frame by using a 2D histogram to obtain a key frame descriptor.
Specifically, once a non-zero-velocity moment is detected, every incoming frame of point cloud data is traversed point by point, and each point either seeds a new small cube or is added to an already created one.
The rule for creating a small cube is as follows: if a point does not belong to any existing cube, a new cube is created with that point as its initial point. The cube size is fixed, with equal side lengths along the X, Y, and Z directions, which may be set to 0.8 m. The coordinates of the cube's center point are determined by the position of the initial point in the world coordinate system:
Assuming the cube contains a certain number of points, its mean and covariance are calculated from the coordinates of all of them:
After every frame of point cloud constituting the key frame has been traversed, the key frame is finally divided into a number of cubes. Each cube is represented by its center point, mean, covariance, and the set of coordinates of all points it contains. All point clouds in the key frame are thus partitioned into small cubes; the key-frame descriptor is represented by the set of all cube features, and computing the descriptor amounts to computing the features of every cube.
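The cube segmentation and per-cube statistics can be sketched as follows; the dictionary-based voxel indexing is an illustrative implementation choice, with the 0.8 m edge length taken from the text:

```python
import numpy as np
from collections import defaultdict

def segment_into_cubes(points, edge=0.8):
    """Partition a keyframe point cloud into fixed-size cubes and compute
    per-cube statistics (center, mean, covariance, member points).

    edge = 0.8 m follows the value suggested in the text; the cell-index
    scheme and returned dict layout are assumptions for illustration.
    """
    cells = defaultdict(list)
    for p in points:
        cells[tuple(np.floor(p / edge).astype(int))].append(p)
    cubes = {}
    for idx, pts in cells.items():
        pts = np.asarray(pts)
        cubes[idx] = {
            "center": (np.asarray(idx) + 0.5) * edge,  # cube center point
            "mean": pts.mean(axis=0),                  # per-cube mean
            "cov": np.cov(pts.T) if len(pts) > 1 else np.zeros((3, 3)),
            "points": pts,                             # coordinates of members
        }
    return cubes
```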
Next, for each cube, its feature type and feature orientation are determined according to the following strategy based on the position information of all the points contained therein.
Firstly, the covariance matrix obtained in the process of creating the cube is usedAnd (3) decomposing the characteristic values, and arranging according to the size sequence of the characteristic values to obtain three characteristic values:
the feature type of each cube is determined according to the condition of the feature value:
a. if it is notFar greater thanThen the feature of the cube is considered to be a face feature, the feature direction of whichIs the third column of the eigenvector matrix, i.e., the normal vector of the face.
b. If not a facial feature, andfar greater thanThe cube is considered to be a line feature with its feature directionIs the first column of the eigenvector matrix, i.e., the direction of the line.
c. If a cube is neither a face feature nor a line feature, it is marked as no feature.
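The eigenvalue-based classification can be sketched as follows; the numeric ratio standing in for "far greater than" is an assumption:

```python
import numpy as np

def classify_cube(cov, ratio=9.0):
    """Classify a cube as a face, line, or featureless cell from the
    eigen-decomposition of its covariance.

    ratio encodes "far greater than" as a fixed multiplier; its value is
    an assumption, not specified in the patent text.
    Returns (feature_type, feature_direction).
    """
    w, v = np.linalg.eigh(cov)          # eigenvalues ascending: w[0] <= w[1] <= w[2]
    l3, l2, l1 = w[0], w[1], w[2]       # relabel so that l1 >= l2 >= l3
    if l2 > ratio * max(l3, 1e-12):     # plane: two large spreads, one small
        return "face", v[:, 0]          # eigenvector of smallest eigenvalue = normal
    if l1 > ratio * max(l2, 1e-12):     # line: one dominant spread direction
        return "line", v[:, 2]          # eigenvector of largest eigenvalue = direction
    return "none", None
```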
Meanwhile, the key-frame feature descriptor should be rotation invariant. All feature directions obtained above are therefore rotated by a rotation matrix R, chosen so that within a key frame most cube feature directions lie along the X axis and most of the remainder along the Y axis.
For a feature direction vector that has not undergone the rotation, the vector after the rotation transformation can be expressed as:
In formula (19), the unit vector points along the positive direction of the X axis.
The rotation transformation essentially projects the feature direction vector onto the positive X axis of the world coordinate system, and its Euler angles can be expressed as:
In formula (20), the two angles are the pitch angle and the yaw angle, each ranging from 0 to 180 degrees.
Each angle is divided into intervals of 3 degrees. Each interval corresponds to one element whose value is the number of features falling in that interval, and all elements together form the 2D histogram feature of the key frame, i.e., the final key-frame descriptor.
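The descriptor construction from feature directions can be sketched as follows; the exact angle conventions are one plausible reading of the description, not the patent's definitive formulation:

```python
import numpy as np

def direction_histogram(directions, bin_deg=3.0):
    """Build the 2D histogram descriptor from cube feature directions.

    Each direction is reduced to two angles in [0, 180) degrees and
    counted into 3-degree bins; the flattened bin counts form the
    descriptor. Angle conventions here are assumptions.
    """
    n = int(180 / bin_deg)                                    # 60 bins per angle
    hist = np.zeros((n, n))
    for d in directions:
        d = d / np.linalg.norm(d)
        pitch = np.degrees(np.arccos(np.clip(d[2], -1, 1)))   # angle from Z axis
        yaw = np.degrees(np.arctan2(d[1], d[0])) % 180.0      # in-plane angle
        hist[min(int(pitch // bin_deg), n - 1),
             min(int(yaw // bin_deg), n - 1)] += 1
    return hist.ravel()                                       # key-frame descriptor
```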
In step S140, to further reduce the amount of computation while preserving loop-closure accuracy, the closed-loop detection algorithm adds a position-based check before descriptor similarity is calculated: key frames are first compared coarsely, and the similarity is computed only when their positions satisfy a threshold condition. This reduces the amount of computation, improves closed-loop detection efficiency, lowers the performance requirements on the computing device, and enhances the real-time performance of the algorithm. Introducing closed-loop optimization effectively eliminates the accumulated error caused by sensor motion and further improves the precision of the lidar-inertial combined positioning method.
As shown in fig. 4, a closed-loop detection method is provided. First, the path distance and the Euclidean distance between two point cloud key frames are calculated from the poses corresponding to the key frame constructed at the current moment and a key frame constructed at a historical moment. The ratio of the Euclidean distance to the path distance is then compared with a preset second threshold; if the ratio is smaller than the second threshold, the current position is judged to be near the position occupied at the historical moment.
Since the odometry position error is large and accumulates over time, the navigation result diverges rapidly as the walking path grows, and a fixed threshold on the distance between two key frames can fail. This is mitigated by calculating both the path distance and the Euclidean distance between the two key frames from the position information of multiple key frames, and using the ratio of the Euclidean distance to the path distance as the decision condition.
Specifically, the Euclidean distance is calculated directly from the positions of the two key frames in the world coordinate system; as shown in fig. 5, it is the Euclidean distance between the key frame constructed at the current moment and the key frame constructed at the historical moment:
In formula (21), the position is that of the lidar at the corresponding moment in the global coordinate system.
Specifically, the path distance is obtained by accumulating the Euclidean distances between all consecutive key frames lying between the two key frames; as shown in fig. 5, the path distance between the key frame constructed at the current moment and the key frame constructed at the historical moment is:
When the ratio of the Euclidean distance to the path distance is smaller than the set threshold, the lidar is judged to be near, at the current moment, the position it occupied at the historical moment, and the next closed-loop detection step, based on the point cloud descriptor, can proceed.
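The position-based pre-check can be sketched as follows; the function name and threshold value are illustrative:

```python
import numpy as np

def position_loop_candidate(positions, i, j, ratio_thresh=0.5):
    """Position-based pre-check: keyframes i and j are a loop-closure
    candidate if the straight-line (Euclidean) distance between them is
    small relative to the path length walked between them.

    positions: (N, 3) keyframe positions in the world frame.
    ratio_thresh stands in for the patent's second threshold; its value
    is an assumption.
    """
    p = np.asarray(positions)
    d_euclid = np.linalg.norm(p[j] - p[i])                       # straight-line distance
    steps = np.linalg.norm(np.diff(p[i:j + 1], axis=0), axis=1)  # per-step distances
    d_path = steps.sum()                                         # accumulated path length
    return bool(d_path > 0 and d_euclid / d_path < ratio_thresh)
```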
After passing through closed-loop detection based on the pedestrian position, it is necessary to further determine whether the descriptors of the two key frames are similar.
In step S130, the key frames are already described by using the 2D histogram, so that it is only necessary to calculate the similarity of the histograms corresponding to the key frames to determine whether the two key frames form a closed loop.
In formula (23), the mean is that of the histogram, and the index enumerates the histogram elements.
If the similarity between the two key frames exceeds a preset threshold, a closed loop is considered detected. When closed-loop information is detected, a closed-loop constraint is added in the back-end optimization module, and the poses within the closed-loop interval are optimized and updated; otherwise, the pose estimated by the lidar-inertial odometry is output directly.
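One consistent reading of the mean-subtracted similarity in formula (23) is the normalized correlation coefficient between the two histogram descriptors, sketched here as an assumption:

```python
import numpy as np

def histogram_similarity(h1, h2):
    """Correlation-style similarity between two keyframe descriptors.

    The patent's eq. (23) subtracts each histogram's mean; the standard
    normalized correlation coefficient is one plausible reading of that
    formula, not necessarily the exact original.
    """
    a, b = h1 - h1.mean(), h2 - h2.mean()       # mean-subtracted histograms
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

The result lies in [-1, 1]; identical descriptors score 1, so the loop-closure decision reduces to comparing this value against the preset threshold.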
According to the pedestrian navigation positioning method based on the combination of lidar and inertia, the pose at the current moment is obtained by Kalman-filter fusion of the three-dimensional point cloud acquired by the lidar and the inertial data acquired by its built-in inertial measurement unit. The pedestrian's gait information is extracted by the inertial measurement unit mounted on the foot: point cloud key frames are constructed only at non-zero-velocity moments, and no key frames are constructed during a continuous zero-velocity state, which greatly reduces the computational load and guarantees the runtime efficiency of the algorithm. The closed-loop detection algorithm based on position and point cloud descriptors then effectively judges whether a closed-loop constraint exists, which improves the accuracy of closed-loop detection, allows the odometry-estimated pose to be optimized and updated, and improves the precision of the lidar-inertial combined positioning method.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of performance is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a pedestrian navigation and positioning device based on a combination of laser radar and inertia, comprising: a first data acquisition module 600, a second data acquisition module 610, a current-time pose estimation module 620, a keyframe descriptor computation module 630, and a double closed-loop detection module 640, wherein:
the first data acquisition module 600 is configured to acquire a three-dimensional point cloud and first inertial data of a current moment, which are respectively acquired by a laser radar and a built-in inertial measurement unit thereof;
a second data acquisition module 610, configured to acquire second inertial data at a current time, acquired by an inertial measurement unit disposed on a foot of a pedestrian;
a current-time pose estimation module 620, configured to fuse the three-dimensional point cloud and the first inertial data by using iterative kalman filtering to obtain a pose at the current time;
a key frame descriptor calculating module 630, configured to determine whether the pedestrian is currently in a motion state according to the second inertial data, construct a point cloud key frame according to the three-dimensional point cloud if the pedestrian is in the motion state, and calculate a key frame descriptor corresponding to the point cloud key frame;
the double closed-loop detection module 640 is configured to perform closed-loop detection on a pedestrian position according to the point cloud key frame constructed at the current time and the point cloud key frame constructed at the historical time, perform closed-loop detection based on the point cloud descriptor if it is determined that the position at the current time is near the position at the historical time, perform optimization update on the pose in the closed-loop interval if the closed loop is detected, output the pose at the current time after optimization, and directly output the pose at the current time if the closed loop is not detected.
For specific limitations of the pedestrian navigation positioning device based on the combination of the lidar and the inertia, reference may be made to the above limitations of the pedestrian navigation positioning method based on the combination of the lidar and the inertia, and details are not repeated here. The modules in the pedestrian navigation and positioning device based on the combination of the laser radar and the inertia can be wholly or partially realized through software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used to store key frame data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize a pedestrian navigation positioning method based on the combination of laser radar and inertia.
It will be appreciated by those skilled in the art that the configuration shown in fig. 7 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of:
acquiring a three-dimensional point cloud and first inertia data of a current moment, which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof;
acquiring second inertial data of the current moment acquired by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the current moment;
judging whether the pedestrian is in a motion state at present according to the second inertia data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed loop detection based on the point cloud descriptor if the position of the current moment is judged to be near the position of the historical moment, performing optimization updating on the pose in a closed loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring three-dimensional point cloud and first inertia data of a laser radar and a built-in inertia measurement unit of the laser radar at the current moment, wherein the three-dimensional point cloud and the first inertia data are respectively acquired by the laser radar and the built-in inertia measurement unit;
acquiring second inertial data of the current moment acquired by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the current moment;
judging whether the pedestrian is in a motion state at present according to the second inertia data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed loop detection based on the point cloud descriptor if the position of the current moment is judged to be near the position of the historical moment, performing optimization updating on the pose in a closed loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (7)
1. The pedestrian navigation positioning method based on the combination of the laser radar and the inertia is characterized by comprising the following steps:
acquiring three-dimensional point cloud and first inertia data which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof at the current moment;
acquiring second inertial data acquired by an inertial measurement unit arranged on the foot of the pedestrian at the current moment;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the pedestrian at the current moment;
judging whether the pedestrian is in a motion state currently according to the second inertial data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be close to the position of the historical moment, performing optimization updating on the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
2. The method for navigating and positioning the pedestrian according to claim 1, wherein the determining whether the pedestrian is currently in the moving state according to the second inertial data comprises:
the second inertial data comprises acceleration;
and judging whether the pedestrian is in a motion state or not according to the module value of the acceleration, the periodic change of the variance and a preset first threshold value.
3. The method of claim 1, wherein constructing a point cloud key frame from the three-dimensional point cloud if the pedestrian is in a motion state comprises:
and if the current moment is in a motion state, accumulating the original three-dimensional point clouds with preset frame numbers by taking the moment as the starting moment, splicing the accumulated original three-dimensional point clouds with the preset frame numbers, and constructing the point cloud key frame.
4. The method according to claim 1, wherein the calculating a key frame descriptor corresponding to the point cloud key frame comprises: and segmenting the point cloud key frame, and describing the segmented point cloud key frame by using a 2D histogram to obtain the key frame descriptor.
5. The pedestrian navigation and positioning method according to claim 1, wherein the performing closed-loop detection of the pedestrian position according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment comprises:
calculating a path distance and an Euclidean distance between two point cloud key frames according to the point cloud key frame constructed at the current moment and the pose corresponding to the point cloud key frame constructed at the historical moment;
and comparing the ratio of the Euclidean distance to the path distance with a preset second threshold, and if the ratio is smaller than the second threshold, judging that the position of the current moment is near the position of the historical moment.
6. The pedestrian navigation and positioning method according to claim 4, wherein the performing closed-loop detection based on the point cloud descriptor comprises:
and calculating the similarity between two corresponding point cloud key frames according to the key frame descriptors respectively corresponding to the current moment and the historical moment, and judging that closed loops are detected if the similarity is greater than a preset third threshold value.
7. Pedestrian navigation positioner based on laser radar and inertia combination, its characterized in that, the device includes:
the system comprises a first data acquisition module, a second data acquisition module and a third data acquisition module, wherein the first data acquisition module is used for acquiring three-dimensional point cloud and first inertial data which are respectively acquired by a laser radar and a built-in inertial measurement unit thereof at the current moment;
the second data acquisition module is used for acquiring second inertial data acquired by an inertial measurement unit arranged on the foot of the pedestrian at the current moment;
the current time pose estimation module is used for fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pedestrian pose at the current time;
the key frame descriptor calculation module is used for judging whether the pedestrian is in a motion state currently according to the second inertial data, constructing a point cloud key frame according to the three-dimensional point cloud if the pedestrian is in the motion state, and calculating a key frame descriptor corresponding to the point cloud key frame;
and the double closed-loop detection module is used for performing closed-loop detection on the position of a pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be near the position of the historical moment, optimizing and updating the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211044235.1A CN115164887A (en) | 2022-08-30 | 2022-08-30 | Pedestrian navigation positioning method and device based on laser radar and inertia combination |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115164887A true CN115164887A (en) | 2022-10-11 |
Family
ID=83481808
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211044235.1A Pending CN115164887A (en) | 2022-08-30 | 2022-08-30 | Pedestrian navigation positioning method and device based on laser radar and inertia combination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115164887A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115326068A (en) * | 2022-10-17 | 2022-11-11 | 北京理工大学 | Design method and system for laser radar-inertial measurement unit fusion odometer |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110887490A (en) * | 2019-11-29 | 2020-03-17 | 上海有个机器人有限公司 | Key frame selection method, medium, terminal and device for laser positioning navigation |
CN113466890A (en) * | 2021-05-28 | 2021-10-01 | 中国科学院计算技术研究所 | Lightweight laser radar inertial combination positioning method and system based on key feature extraction |
CN113674399A (en) * | 2021-08-16 | 2021-11-19 | 杭州图灵视频科技有限公司 | Mobile robot indoor three-dimensional point cloud map construction method and system |
CN114674311A (en) * | 2022-03-22 | 2022-06-28 | 中国矿业大学 | Indoor positioning and map building method and system |
CN114723917A (en) * | 2022-04-08 | 2022-07-08 | 广州高新兴机器人有限公司 | Pose optimization method, device, medium and equipment of laser odometer |
CN114782626A (en) * | 2022-04-14 | 2022-07-22 | 国网河南省电力公司电力科学研究院 | Transformer substation scene mapping and positioning optimization method based on laser and vision fusion |
CN114924287A (en) * | 2022-04-21 | 2022-08-19 | 深圳市正浩创新科技股份有限公司 | Map construction method, apparatus and medium |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115326068A (en) * | 2022-10-17 | 2022-11-11 | 北京理工大学 | Design method and system for laser radar-inertial measurement unit fusion odometer |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109974693B (en) | Unmanned aerial vehicle positioning method and device, computer equipment and storage medium | |
Dellenbach et al. | CT-ICP: Real-time elastic LiDAR odometry with loop closure | |
CN112985416B (en) | Robust positioning and mapping method and system based on laser and visual information fusion | |
CN109887053B (en) | SLAM map splicing method and system | |
JP6760114B2 (en) | Information processing equipment, data management equipment, data management systems, methods, and programs | |
CN111583369B (en) | Laser SLAM method based on facial line angular point feature extraction | |
CN107167826B (en) | Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving | |
CN112230242B (en) | Pose estimation system and method | |
US8437501B1 (en) | Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases | |
CN109059907B (en) | Trajectory data processing method and device, computer equipment and storage medium | |
CN113313763B (en) | Monocular camera pose optimization method and device based on neural network | |
CN113466890B (en) | Lightweight laser radar inertial combination positioning method and system based on key feature extraction | |
CN115690338A (en) | Map construction method, map construction device, map construction equipment and storage medium | |
KR101985344B1 (en) | Sliding windows based structure-less localization method using inertial and single optical sensor, recording medium and device for performing the method | |
CN115143954B (en) | Unmanned vehicle navigation method based on multi-source information fusion | |
CN115479598A (en) | Positioning and mapping method based on multi-sensor fusion and tight coupling system | |
CN113920198B (en) | Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment | |
CN114001733A (en) | Map-based consistency efficient visual inertial positioning algorithm | |
CN116429084A (en) | Dynamic environment 3D synchronous positioning and mapping method | |
CN115164887A (en) | Pedestrian navigation positioning method and device based on laser radar and inertia combination | |
Chiu et al. | Precise vision-aided aerial navigation | |
CN115127554A (en) | Unmanned aerial vehicle autonomous navigation method and system based on multi-source vision assistance | |
CN112612034B (en) | Pose matching method based on laser frame and probability map scanning | |
CN112067007B (en) | Map generation method, computer storage medium, and electronic device | |
Emter et al. | Simultaneous localization and mapping for exploration with stochastic cloning EKF |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20221011 |