CN113358112A - Map construction method and laser inertia odometer - Google Patents

Map construction method and laser inertia odometer

Info

Publication number
CN113358112A
CN113358112A (application CN202110618535.5A)
Authority
CN
China
Prior art keywords
information
map
laser
estimated
imu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110618535.5A
Other languages
Chinese (zh)
Other versions
CN113358112B (en)
Inventor
赖豪文
姜晓旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chaoxing Future Technology Co ltd
Original Assignee
Beijing Chaoxing Future Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chaoxing Future Technology Co ltd filed Critical Beijing Chaoxing Future Technology Co ltd
Priority to CN202110618535.5A priority Critical patent/CN113358112B/en
Publication of CN113358112A publication Critical patent/CN113358112A/en
Application granted granted Critical
Publication of CN113358112B publication Critical patent/CN113358112B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50Systems of measurement based on relative movement of target
    • G01S17/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the field of automatic driving, and particularly relates to a map construction method and a laser inertia odometer. The map construction method comprises a sensor information acquisition step, a dynamic object removal step, an odometry data acquisition step, an information comparison step, a loop detection step, a nonlinear optimization step and a map drawing step. The laser inertia odometer comprises a dot matrix scanning device and an inertial measurement device that exchange data with each other; the inertial measurement device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other. The map construction method and the laser inertia odometer have strong adaptability and robustness to the environment, and the constructed map has high precision.

Description

Map construction method and laser inertia odometer
Technical Field
The invention belongs to the field of automatic driving, and particularly relates to a map construction method and a laser inertia odometer.
Background
Automatic driving has been a hot topic in recent years, bringing disruptive changes to fields such as relieving traffic congestion, improving road safety and reducing air pollution. Since automatic driving often faces an uncontrollable environment, one of its key difficulties is the unpredictability of the environment; how to acquire and master environment information accurately, quickly and effectively, and how to position accurately within it, is a prerequisite and an important guarantee for the normal operation of the other modules.
In outdoor scenes, absolute positioning can be performed by means of a Global Navigation Satellite System (GNSS), but GNSS signals are often blocked by high-rise buildings, bridges, tunnels and the like, so the positioning accuracy is unstable and position information may even be lost entirely.
To ensure stable positioning, a feasible approach is to build a map of the environment in advance and then determine position and attitude within this existing map in real time from sensor data. This approach effectively avoids the satellite signal occlusion problem, works stably in a variety of environments, and is increasingly applied to automatic driving. The precision and quality of the pre-built environment map are then critical, since they directly affect the accuracy of subsequent positioning.
To construct a point cloud map suitable for localization, a Simultaneous Localization and Mapping (SLAM) method is often adopted. SLAM tightly couples positioning and mapping, and a classical SLAM framework can be divided into sensor information acquisition, a front end (odometry), a back end (nonlinear optimization), loop detection and map building.
For the mapping front end, a visual odometer is affected outdoors by illumination changes, weather, motion blur and the like, so its accuracy and robustness are limited. A laser odometer resists environmental interference to a certain extent; however, in the prior art, adjacent laser frames are often matched directly, or an IMU is introduced only to provide an initial value for matching, forming a loosely coupled laser inertial odometer of relatively low precision. In addition, an IMU tends to have a certain zero offset (bias), and using it directly without estimating and calibrating this offset also introduces errors.
For the mapping back end, filter-based optimization methods suffer from drawbacks such as rapidly growing memory usage and poor global optimization performance. Methods based on deterministic graph optimization, such as BA and pose graphs, re-optimize the whole graph every time a new node is added; when the numbers of nodes and edges are large, this consumes a large amount of time, with many repeated calculations and low efficiency.
Existing complete mapping schemes are often designed for one fixed sensor configuration, are difficult to extend, and do not fuse multiple sensors to improve mapping precision. In addition, these schemes do not consider dynamic obstacles in the environment, such as vehicles and pedestrians; applying them directly to map building not only introduces front-end odometry matching errors but also causes smear in the map, degrading map quality.
Disclosure of Invention
In view of the above, the invention provides a high-precision high-quality point cloud map construction method, which has strong adaptability and robustness to the environment and high precision of the constructed map.
In order to achieve the technical purpose, the invention adopts the following specific technical scheme:
a map construction method, comprising:
a sensor information acquisition step, in which a dot matrix scanning device measures the position information, speed information, attitude information and surrounding environment information, during motion, of the map construction device in which it is located;
a dynamic object removal step, in which dynamic objects in the surrounding environment information are removed using a convolutional-neural-network point cloud segmentation algorithm to obtain estimated environment information;
an odometry data acquisition step, in which the speed and attitude of the map construction device during motion are acquired based on the inertial measurement device;
an information comparison step, in which the information obtained in the sensor information acquisition step is compared with the information obtained in the odometry data acquisition step to generate estimated position information, estimated speed information and estimated attitude information;
a loop detection step, in which, when the map construction device reaches a loop detection location, the estimated environment information obtained at the current arrival is compared with that obtained at the previous arrival to produce deviation correction information;
a nonlinear optimization and map drawing step, in which the estimated position information, estimated speed information, estimated attitude information, estimated environment information and deviation correction information are incorporated into a world map based on a factor graph optimization framework, and the map is drawn after nonlinear optimization, according to the deviation correction information, of the deviation accumulated between the two arrivals at the loop detection location;
wherein there are at least three identical scanning points between adjacent scanning positions of the dot matrix scanning device.
Further, the dot matrix scanning device is a 3D laser radar.
Further, the inertial measurement device is an Inertial Measurement Unit (IMU).
Further, the point cloud segmentation algorithm is cnn_seg.
Further, the nonlinear optimization is based on the GTSAM optimization library.
Further, the specific method for dynamic object removal comprises the following steps:
S101: inputting a laser frame of the laser radar into the point cloud segmentation network, and outputting the label category information of all scanning points of the laser frame;
S102: indexing, within the label category information of all scanning points in the laser frame, the scanning points belonging to dynamic object categories, and removing them to obtain a dynamic-object-removed laser frame.
Further, the specific method for dynamic object removal further comprises:
S103: further indexing and removing, in the dynamic-object-removed laser frame, the scanning points belonging to dynamic object categories.
Further, the position, speed and attitude of the map construction device obtained by the IMU during motion are calculated by the following formulas:

Δv_ij = R_i^T (v_j − v_i − g·Δt_ij)

Δp_ij = R_i^T (p_j − p_i − v_i·Δt_ij − ½·g·Δt_ij²)

ΔR_ij = R_i^T · R_j

wherein: v is velocity, p is position, R is the attitude rotation matrix, g is the gravitational acceleration constant, and Δt_ij is the time interval between nodes i and j.
Further, the method also comprises the following steps:
IMU position and attitude calibration, wherein the speed and attitude of the map construction device during motion obtained by the IMU are calibrated by removing the noise of the IMU.
Further, the information comparison method specifically comprises:
S201: extracting point features and surface features from the laser frame, substituting the position information and attitude information, fusing them with the world map, and generating a local line feature map and/or a local surface feature map;
S202: according to the position, speed and attitude of the map construction device obtained by the IMU during motion, combining all local line feature maps and/or local surface feature maps to form a single-frame map;
S203: estimating the IMU zero offset using the calculateEstimate function in the GTSAM optimization library, the generated estimate being a pre-offset value, which is used as the spatial offset of the single-frame map for the next laser frame of the laser radar.
Meanwhile, the invention also provides a laser inertia odometer, which comprises a dot matrix scanning device and an inertial measurement device that exchange data with each other; the inertial measurement device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other.
By adopting the technical scheme, the invention can bring the following beneficial effects:
1. the invention realizes a high-precision, high-quality point cloud map construction method that has strong adaptability and robustness to the environment, and the constructed map has high precision;
2. the invention adopts a factor graph as the nonlinear optimization back end; the overall structure is designed so that multi-sensor fusion can be realized conveniently, and it has strong extensibility. Meanwhile, the advantages of the sensors complement each other, further improving mapping precision and environmental adaptability;
3. the invention has a dynamic object removal module, which effectively avoids the smear phenomenon caused by dynamic objects in the map, so the mapping quality is high. Meanwhile, this module prevents the front end from being interfered with by dynamic objects during matching, improving the accuracy of the front-end odometry.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic overall flow chart of a map construction method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of the nonlinear optimization in the embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the drawings only show the components related to the present invention rather than the number, shape and size of the components in practical implementation, and the type, quantity and proportion of the components in practical implementation can be changed freely, and the layout of the components can be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
In an embodiment of the present invention, a map construction method is provided, including:
a sensor information acquisition step, in which a dot matrix scanning device measures the position information, speed information, attitude information and surrounding environment information, during motion, of the map construction device in which it is located;
a dynamic object removal step, in which dynamic objects in the surrounding environment information are removed using a convolutional-neural-network point cloud segmentation algorithm to obtain estimated environment information;
an odometry data acquisition step, in which the odometry position information, odometry speed information and odometry attitude information of the map construction device during motion are acquired based on the inertial measurement device;
an information comparison step, in which the information obtained in the sensor information acquisition step is compared with the information obtained in the odometry data acquisition step to generate estimated position information, estimated speed information and estimated attitude information;
a loop detection step, in which, when the map construction device reaches a loop detection location, the estimated environment information obtained at the current arrival is compared with that obtained at the previous arrival to produce deviation correction information;
a nonlinear optimization and map drawing step, in which the estimated position information, estimated speed information, estimated attitude information, estimated environment information and deviation correction information are incorporated into a world map based on a factor graph optimization framework, and the map is drawn after nonlinear optimization, according to the deviation correction information, of the deviation accumulated between the two arrivals at the loop detection location;
wherein there are at least three identical scanning points between adjacent scanning positions of the dot matrix scanning device.
In this embodiment, the dot matrix scanning device is a laser radar (lidar, short for laser detection and ranging system), composed of a transmitter, an antenna, a receiver, a tracking frame and an information processing system; its principle is similar to that of a conventional radar. By setting a fixed scanning frequency, and recording and comparing the scanning results obtained repeatedly during motion, the position information, speed information, attitude information and surrounding environment information of the map construction device during motion can be obtained.
in this embodiment, the inertial measurement unit is an IMU (inertial sensor), and the IMU of this embodiment should further include a sensor capable of outputting acceleration and angular velocity information, such as an AHRS (Attitude and Heading Reference System) or a GNSS (Global Navigation Satellite System).
In this embodiment, the point cloud segmentation algorithm is a network that takes a laser frame point cloud as input and outputs a class label for each point, such as cnn_seg. The nonlinear optimization relies on the GTSAM optimization library.
In this embodiment, the specific method for dynamic object removal includes the following steps:
S101: inputting a laser frame of the laser radar into the point cloud segmentation network, and outputting the label category information of all scanning points of the laser frame;
S102: indexing, within the label category information of all scanning points in the laser frame, the scanning points belonging to dynamic object categories, and removing them to obtain a dynamic-object-removed laser frame.
In this embodiment, the specific method of dynamic object removal further comprises:
S103: further indexing and removing, in the dynamic-object-removed laser frame, the scanning points belonging to dynamic object categories.
In this embodiment, the position, velocity and attitude of the mapping device obtained by the IMU during motion are calculated by the following formulas:
Δv_ij = R_i^T (v_j − v_i − g·Δt_ij)

Δp_ij = R_i^T (p_j − p_i − v_i·Δt_ij − ½·g·Δt_ij²)

ΔR_ij = R_i^T · R_j

wherein: v is velocity, p is position, R is the attitude rotation matrix, g is the gravitational acceleration constant, and Δt_ij is the time interval between nodes i and j.
This embodiment further comprises:
IMU position and attitude calibration, namely calibrating the speed and attitude obtained from the IMU during motion by removing the noise of the IMU.
In this embodiment, the information comparison method specifically comprises:
S201: extracting point features and surface features from a laser frame, substituting the position information and attitude information, fusing them with the world map, and generating a local line feature map and/or a local surface feature map;
S202: according to the position, speed and attitude of the map construction device obtained by the IMU during motion, combining all local line feature maps and/or local surface feature maps to form a single-frame map;
S203: estimating the IMU zero offset using the calculateEstimate function, the generated estimate being a pre-offset value, which is used as the spatial offset of the single-frame map for the next laser frame of the laser radar.
This embodiment also provides a laser inertia odometer, comprising a dot matrix scanning device and an inertial measurement device that exchange data with each other; the inertial measurement device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other.
The specific method for constructing the high-precision and high-quality point cloud map by combining the laser inertia odometer comprises the following steps:
the high-precision high-quality point cloud map construction method comprises the following four parts: dynamic object removal, odometer front-end processing (sensor information acquisition), non-linear optimization back-end processing (non-linear optimization), loopback detection, and map building blocks. In the general flow chart of this embodiment, as shown in fig. 1, sensor information is first input to the dynamic object removal module, and the output laser frame data without the dynamic object and other sensor data are used as input of subsequent modules. The front-end module gives the pose relationship among the nodes in real time, and the rear-end module performs nonlinear optimization on the pose relationship among the nodes with errors and noises. Meanwhile, the loop detection module judges whether to return to a place which is arrived previously or not according to the sensor information and the existing node relation, and adds corresponding constraint to the rear end. And finally, outputting the optimized node pose information by the rear end, and integrating the node pose information into a map by combining laser frame data.
The following is a step-by-step description:
1) Dynamic object removal
A large number of dynamic objects, such as pedestrians and vehicles, often exist in the environment. If the data collected by the sensors are used for mapping directly, smearing appears in the map and degrades its quality, and the dynamic objects also affect the accuracy of front-end matching. They therefore need to be handled first.
The dynamic object removal module is mainly implemented by a point cloud segmentation algorithm based on a convolutional neural network, such as cnn_seg, which removes the points representing dynamic objects from a laser frame obtained by the laser radar. The input of the point cloud segmentation network is a laser frame obtained by the laser radar, and the output is the label category information to which every point in the frame belongs. After the label category information is obtained, the points belonging to dynamic object categories are indexed and directly removed from the laser frame, and the result is taken as the output of the dynamic object removal module.
Because point cloud segmentation algorithms often have a certain miss rate, the dynamic object removal module performs the detection and removal operation once more on the laser frames used for map building after back-end optimization, so as to guarantee the quality of the map.
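By way of illustration, the per-point removal described above can be sketched as follows (a minimal numpy sketch; the function name and the dynamic label ids are hypothetical, since the patent does not fix the category set of the segmentation network):

    import numpy as np

    # Hypothetical label ids for dynamic categories (e.g. pedestrian, vehicle);
    # the real ids depend on the segmentation network used, e.g. cnn_seg.
    DYNAMIC_LABELS = (1, 2)

    def remove_dynamic_points(points, labels, dynamic_labels=DYNAMIC_LABELS):
        # points: (N, 3) laser frame; labels: (N,) per-point class ids
        # returned by the point cloud segmentation network.
        keep = ~np.isin(np.asarray(labels), dynamic_labels)
        return np.asarray(points)[keep]

The same routine can simply be run a second time on the optimized frames before map building, matching the double-pass strategy described above.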
2) Odometer front-end processing
The odometer front-end module provides the pose information of the current sensor, and its positioning accuracy determines, to a great extent, the accuracy of the final map. This embodiment provides a laser inertial odometry method in which the IMU and the laser radar are tightly coupled, which can provide accurate pose information.
The IMU directly outputs its acceleration and angular velocity values, and the changes of its position, velocity and attitude over a period Δt can be obtained by integration. To avoid recalculating all the integral values whenever the estimated IMU zero offset is updated, this embodiment determines position, velocity and attitude by a pre-integration method, expressed by the formulas:

Δv_ij = R_i^T (v_j − v_i − g·Δt_ij)

Δp_ij = R_i^T (p_j − p_i − v_i·Δt_ij − ½·g·Δt_ij²)

ΔR_ij = R_i^T · R_j

where v is the velocity, p is the position, R is the attitude rotation matrix, g is the gravitational acceleration constant, and Δt_ij is the time interval between nodes i and j.
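By way of illustration, the recursion behind these pre-integration formulas can be sketched as follows (a minimal numpy sketch under simplifying assumptions: a constant sample interval, zero offsets already removed from the measurements, and no noise or covariance propagation; the function names are illustrative, not the patent's implementation):

    import numpy as np

    def so3_exp(phi):
        # Rodrigues' formula: rotation matrix from a rotation vector.
        theta = np.linalg.norm(phi)
        if theta < 1e-12:
            return np.eye(3)
        k = phi / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    def preintegrate(samples, dt):
        # Accumulate the increments ΔR, Δv, Δp between nodes i and j from
        # IMU samples, each an (accel, gyro) pair of 3-vectors.
        dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
        for a, w in samples:
            dp = dp + dv * dt + 0.5 * (dR @ np.asarray(a)) * dt ** 2
            dv = dv + (dR @ np.asarray(a)) * dt
            dR = dR @ so3_exp(np.asarray(w) * dt)
        return dR, dv, dp

    def predict(Ri, vi, pi, dR, dv, dp, t, g=np.array([0.0, 0.0, -9.81])):
        # Invert the formulas above: Δv_ij = Ri^T (vj - vi - g t), etc.
        Rj = Ri @ dR
        vj = vi + g * t + Ri @ dv
        pj = pi + vi * t + 0.5 * g * t ** 2 + Ri @ dp
        return Rj, vj, pj

Because the increments ΔR, Δv, Δp are expressed in the frame of node i, they need not be recomputed when the estimate of node i changes, which is exactly the motivation for pre-integration given above.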
Because IMU data often contain a certain amount of noise, which is amplified by high-frequency integration, the relative position and attitude values between nodes obtained by pre-integration need further calibration. In this embodiment, the pose T_i obtained by IMU pre-integration is used as the initial value for laser frame matching, and the final result of the laser frame matching, i.e., the pose relationship between adjacent nodes, is used as the output of the odometer front end.
Laser frame matching uses an approach based on fusing features with a local map. Feature extraction is based on the roughness of points. Let p_i be the point whose roughness is to be calculated, and let S be the set of points adjacent to p_i in the same laser channel (i.e., with the same ring value), half on each side of p_i. The roughness can be calculated by the formula:

c = ‖ Σ_{j∈S, j≠i} (p_i − p_j) ‖ / ( |S| · ‖p_i‖ )
By setting thresholds c_l and c_h, points can be divided into line features (c > c_h) and surface features (c < c_l). Line and surface features are extracted from the laser frames of the nearest n nodes respectively, and using the pose information {T_{i−n}, T_{i−n+1}, …, T_{i−1}} of those nodes, the feature points are transformed into a uniform world coordinate system W and fused, yielding a locally fused line feature map and a locally fused surface feature map.
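By way of illustration, roughness-based classification along one ring can be sketched as follows (a minimal numpy sketch; the window size and the values of c_l and c_h are illustrative assumptions, as the patent does not specify them):

    import numpy as np

    def roughness(ring, i, half_window=5):
        # c = || sum_{j in S, j != i} (p_i - p_j) || / (|S| * ||p_i||),
        # with S the neighbors of p_i on the same ring, half on each side.
        p_i = ring[i]
        S = np.vstack([ring[i - half_window:i], ring[i + 1:i + 1 + half_window]])
        return np.linalg.norm((p_i - S).sum(axis=0)) / (len(S) * np.linalg.norm(p_i))

    def classify(ring, c_l=0.1, c_h=1.0, half_window=5):
        # Returns indices of line features (c > c_h) and surface features (c < c_l).
        lines, surfaces = [], []
        for i in range(half_window, len(ring) - half_window):
            c = roughness(ring, i, half_window)
            if c > c_h:
                lines.append(i)
            elif c < c_l:
                surfaces.append(i)
        return lines, surfaces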
After the locally fused line feature map and surface feature map are obtained, the pose obtained from IMU pre-integration can be used as the initial value for laser frame matching. First, line and surface features are extracted from the current laser frame; then, using the pose T_i obtained by IMU pre-integration, these features are transformed into the world coordinate system W. For the k-th line feature, the two points closest to the feature point are searched in the locally fused line feature map and recorded as (p_k^a, p_k^b); for the k-th surface feature, the three points closest to the feature point are found in the locally fused surface feature map and recorded as (p_k^a, p_k^b, p_k^c). Next, the distance error values of the line and surface features are determined: the distance error d_k^l of the k-th line feature is the distance from the feature point to the straight line determined by (p_k^a, p_k^b); the distance error d_k^s of the k-th surface feature is the distance from the feature point to the plane determined by (p_k^a, p_k^b, p_k^c). Finally, optimizing the function

min_{T_i} ( Σ_k d_k^l + Σ_k d_k^s )

yields the node pose information T_i output by the odometer front end.
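By way of illustration, the distance errors and the matching cost can be sketched as follows (a minimal numpy sketch; the nearest-neighbor association is assumed to have been done already, and the actual nonlinear minimization over T_i is omitted):

    import numpy as np

    def point_to_line(q, pa, pb):
        # Distance from feature point q to the line through pa and pb.
        d = pb - pa
        return np.linalg.norm(np.cross(q - pa, d)) / np.linalg.norm(d)

    def point_to_plane(q, pa, pb, pc):
        # Distance from feature point q to the plane through pa, pb, pc.
        n = np.cross(pb - pa, pc - pa)
        return abs(np.dot(q - pa, n / np.linalg.norm(n)))

    def matching_cost(T, line_pairs, plane_triples):
        # T: 4x4 candidate pose; line_pairs: [(q, pa, pb)];
        # plane_triples: [(q, pa, pb, pc)], with q in the sensor frame and
        # the map points already in the world coordinate system W.
        world = lambda q: T[:3, :3] @ q + T[:3, 3]
        cost = sum(point_to_line(world(q), pa, pb) for q, pa, pb in line_pairs)
        cost += sum(point_to_plane(world(q), pa, pb, pc)
                    for q, pa, pb, pc in plane_triples)
        return cost

Minimizing matching_cost over T (e.g. with Gauss-Newton or Levenberg-Marquardt, starting from the IMU pre-integration pose) yields the front-end output T_i.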
After the node pose information T_i has been obtained by laser frame matching, the IMU zero offset can be estimated using the calculateEstimate function of the ISAM2 class in the GTSAM optimization library; the estimated result is used as the offset value in the next IMU pre-integration.
3) Nonlinear optimization back-end processing
In this embodiment, a factor graph is used as the nonlinear optimization back end, implemented with the GTSAM open-source optimization library; a schematic diagram is shown in fig. 2. Here l is a landmark, x is a pose node and z is observation data; pose node x_1 generates the next pose node x_2 under the action of the motion equation f. The constraints of the motion equation f on the pose nodes x include the pose constraints generated by IMU pre-integration, the pose constraints generated by laser frame matching, and the loop detection constraints described below. The landmark points l and observations z may be positioning results given by GPS or RTK. Note that the motion equation f, the landmark points l and the observations z can take many forms and are not limited to the illustrated examples; for example, the motion equation f may also be a constraint given by a wheel speed odometer (wheel odometer), and the landmark points l and observations z may also be feature points extracted by observing the environment with a camera, together with their correspondence constraints. This embodiment can therefore realize multi-sensor fusion conveniently and effectively.
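By way of illustration, such an incremental factor graph can be sketched with the GTSAM Python bindings roughly as follows (a minimal sketch; the noise sigmas and the identity relative pose are placeholders, and the IMU bias and landmark variables are omitted for brevity):

    import numpy as np
    import gtsam
    from gtsam.symbol_shorthand import X

    isam = gtsam.ISAM2()
    graph = gtsam.NonlinearFactorGraph()
    initial = gtsam.Values()

    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))
    odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))

    # Anchor the first pose node x_0.
    graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
    initial.insert(X(0), gtsam.Pose3())

    # Relative pose constraint from the front end (identity as a stand-in).
    delta = gtsam.Pose3()
    graph.add(gtsam.BetweenFactorPose3(X(0), X(1), delta, odom_noise))
    initial.insert(X(1), delta)

    isam.update(graph, initial)          # incremental optimization
    estimate = isam.calculateEstimate()  # optimized node poses

Loop closure constraints T_{l,i+1} from the module described in (4) below enter the same graph as additional BetweenFactorPose3 edges; when IMU bias variables are included, calculateEstimate also yields the zero offset estimate used for the next pre-integration.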
4) Loop detection
The odometer front end of this embodiment performs relative localization, i.e., the pose transformation constraint T_{i+1} of the next node x_{i+1} is expressed relative to the current node x_i. Accumulated errors are inevitably introduced over time, so this embodiment introduces a loop detection module, whose purpose is to judge, from the sensor information and the existing node relationships, whether a previously visited place has been reached again, and to add the corresponding constraint to the back end to eliminate the accumulated error.
The loop detection module of this embodiment is based on distance detection and laser frame matching. For a new state node x_{i+1}, the node with the minimum Euclidean distance ‖·‖₂ is first found in the existing state node set {x_1, x_2, …, x_i} and denoted x_l. Then the laser frames of the m nodes before and after the found node, {x_{l−m}, …, x_l, …, x_{l+m}}, are fused to obtain a locally fused line feature map and a locally fused surface feature map. The fusion method is the same as the feature-and-local-map fusion approach used for laser frame matching at the odometer front end in (2).
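By way of illustration, the nearest-node search and the fusion window can be sketched as follows (a minimal numpy sketch; the distance gate is an illustrative assumption, since the patent states no threshold):

    import numpy as np

    def find_loop_candidate(new_pos, node_positions, m, gate=10.0):
        # new_pos: position of node x_{i+1}; node_positions: (i, 3) array of
        # the existing nodes {x_1, ..., x_i}. Returns the nearest node index l
        # and the fusion window [l-m, l+m], or None if nothing is close enough.
        d = np.linalg.norm(node_positions - new_pos, axis=1)  # Euclidean ||.||_2
        l = int(np.argmin(d))
        if d[l] > gate:
            return None
        lo, hi = max(0, l - m), min(len(node_positions) - 1, l + m)
        return l, list(range(lo, hi + 1))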
After the locally fused line feature map and surface feature map are obtained, the relative transformation T_{l,i+1} between the new state node x_{i+1} and the loopback node x_l can be calculated using the laser frame matching method of the odometer front end (2). This relative transformation T_{l,i+1} is added to the factor graph of the nonlinear optimization back end (3) as a constraint of the motion equation f and optimized jointly, so as to reduce the overall accumulated error.
5) Map construction
After all sensor data have been processed and the nonlinear optimization back end (3) has optimized all nodes, the pose transformation information {T_1, T_2, …, T_n} of every node {x_1, x_2, …, x_n} is obtained. At this time, the laser frame belonging to each node {x_1, x_2, …, x_n} is passed through the dynamic object removal module (1) once more for dynamic object detection and removal. Then, according to {T_1, T_2, …, T_n}, the laser frames of all nodes are transformed into the unified world coordinate system W and directly superimposed, yielding the finally constructed high-precision, high-quality point cloud map of the environment.
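By way of illustration, the final superposition can be sketched as follows (a minimal numpy sketch, assuming each pose T_k is a 4x4 homogeneous transform into the world coordinate system W):

    import numpy as np

    def assemble_map(frames, poses):
        # frames: list of (N_k, 3) dynamic-object-removed laser frames, one
        # per node x_k; poses: list of 4x4 transforms {T_1, ..., T_n}.
        clouds = [pts @ T[:3, :3].T + T[:3, 3] for pts, T in zip(frames, poses)]
        return np.vstack(clouds)  # the superimposed point cloud map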
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A map construction method, comprising:
a sensor information acquisition step, in which a dot matrix scanning device measures the position information, speed information, attitude information and surrounding environment information, during motion, of the map construction device in which it is located;
a dynamic object removal step, in which dynamic objects in the surrounding environment information are removed using a convolutional-neural-network point cloud segmentation algorithm to obtain estimated environment information;
an odometry data acquisition step, in which the speed and attitude of the map construction device during motion are acquired based on the inertial measurement device;
an information comparison step, in which the information obtained in the sensor information acquisition step is compared with the information obtained in the odometry data acquisition step to generate estimated position information, estimated speed information and estimated attitude information;
a loop detection step, in which, when the map construction device reaches a loop detection location, the estimated environment information obtained at the current arrival is compared with that obtained at the previous arrival to produce deviation correction information;
a nonlinear optimization and map drawing step, in which the estimated position information, estimated speed information, estimated attitude information, estimated environment information and deviation correction information are incorporated into a world map based on a factor graph optimization framework, and the map is drawn after nonlinear optimization, according to the deviation correction information, of the deviation accumulated between the two arrivals at the loop detection location;
wherein there are at least three identical scanning points between adjacent scanning positions of the dot matrix scanning device.
2. The map construction method according to claim 1, characterized in that: the dot matrix scanning device is a 3D laser radar.
3. The map construction method according to claim 1, characterized in that: the inertial measurement device is an Inertial Measurement Unit (IMU).
4. The map construction method according to claim 1, characterized in that: the point cloud segmentation algorithm is cnn_seg; the nonlinear optimization relies on the GTSAM optimization library.
5. The map building method according to claim 2, wherein the specific method for dynamic object removal comprises the following steps:
S101: inputting a laser frame of the laser radar into the point cloud segmentation network, and outputting the label category information of all scanning points of the laser frame;
S102: indexing, within the label category information of all scanning points in the laser frame, the scanning points belonging to dynamic object categories, and removing them to obtain a dynamic-object-removed laser frame.
6. The mapping method according to claim 5, wherein the specific method of dynamic object removal further comprises:
S103: further indexing and removing, in the dynamic-object-removed laser frame, the scanning points belonging to dynamic object categories.
7. The mapping method of claim 3, wherein the position, velocity and attitude of the IMU-derived mapping device during motion are calculated by the following equations:
Δv_ij = R_i^T (v_j − v_i − g·Δt_ij)

Δp_ij = R_i^T (p_j − p_i − v_i·Δt_ij − ½·g·Δt_ij²)

ΔR_ij = R_i^T · R_j

wherein: v is velocity, p is position, R is the attitude rotation matrix, g is the gravitational acceleration constant, and Δt_ij is the time interval between nodes i and j.
8. The mapping method according to claim 3, further comprising:
IMU position and attitude calibration, wherein the speed and attitude of the map construction device during motion obtained by the IMU are calibrated by removing the noise of the IMU.
9. The map construction method according to claim 3 or 5, wherein the information comparison method specifically comprises:
S201: extracting point features and surface features from the laser frame, substituting the position information and attitude information, fusing them with the world map, and generating a local line feature map and/or a local surface feature map;
S202: according to the position, speed and attitude of the map construction device obtained by the IMU during motion, combining all local line feature maps and/or local surface feature maps to form a single-frame map;
S203: estimating the IMU zero offset using the calculateEstimate function in the GTSAM optimization library, the generated estimate being a pre-offset value, which is used as the spatial offset of the single-frame map for the next laser frame of the laser radar.
10. A laser inertia odometer based on the map construction method according to any one of claims 1 to 9, characterized in that: it comprises a dot matrix scanning device and an inertial measurement device that exchange data with each other; the inertial measurement device and the scanning probe of the dot matrix scanning device are spatially fixed relative to each other.
CN202110618535.5A 2021-06-03 2021-06-03 Map construction method and laser inertia odometer Active CN113358112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110618535.5A CN113358112B (en) 2021-06-03 2021-06-03 Map construction method and laser inertia odometer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110618535.5A CN113358112B (en) 2021-06-03 2021-06-03 Map construction method and laser inertia odometer

Publications (2)

Publication Number Publication Date
CN113358112A true CN113358112A (en) 2021-09-07
CN113358112B CN113358112B (en) 2023-01-17

Family

ID=77531894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110618535.5A Active CN113358112B (en) 2021-06-03 2021-06-03 Map construction method and laser inertia odometer

Country Status (1)

Country Link
CN (1) CN113358112B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570454A (en) * 2016-10-10 2017-04-19 同济大学 Pedestrian traffic parameter extraction method based on mobile laser scanning
US20190384318A1 (en) * 2017-01-31 2019-12-19 Arbe Robotics Ltd. Radar-based system and method for real-time simultaneous localization and mapping
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method
CN111105495A (en) * 2019-11-26 2020-05-05 四川阿泰因机器人智能装备有限公司 Laser radar mapping method and system fusing visual semantic information
CN111583136A (en) * 2020-04-25 2020-08-25 华南理工大学 Method for simultaneously positioning and establishing image of autonomous mobile platform in rescue scene
CN111929699A (en) * 2020-07-21 2020-11-13 北京建筑大学 Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850864A (en) * 2021-09-14 2021-12-28 中南大学 GNSS/laser radar loop detection method for mobile robot
CN113850864B (en) * 2021-09-14 2024-04-12 中南大学 GNSS/LIDAR loop detection method for outdoor mobile robot
CN114018284A (en) * 2021-10-13 2022-02-08 上海师范大学 Wheel speed odometer correction method based on vision
CN114018284B (en) * 2021-10-13 2024-01-23 上海师范大学 Wheel speed odometer correction method based on vision
CN114170320A (en) * 2021-10-29 2022-03-11 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion
CN114170320B (en) * 2021-10-29 2022-10-28 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion
CN115493579A (en) * 2022-09-02 2022-12-20 松灵机器人(深圳)有限公司 Positioning correction method, positioning correction device, mowing robot and storage medium
CN115200588A (en) * 2022-09-14 2022-10-18 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot
CN115326068A (en) * 2022-10-17 2022-11-11 北京理工大学 Design method and system for laser radar-inertial measurement unit fusion odometer
CN115326068B (en) * 2022-10-17 2023-01-24 北京理工大学 Design method and system for laser radar-inertial measurement unit fusion odometer

Also Published As

Publication number Publication date
CN113358112B (en) 2023-01-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Map Construction Method and a Laser Inertial Odometer

Effective date of registration: 20230727

Granted publication date: 20230117

Pledgee: Haidian Beijing science and technology enterprise financing Company limited by guarantee

Pledgor: Beijing Chaoxing Future Technology Co.,Ltd.

Registration number: Y2023110000311

PC01 Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20230117

Pledgee: Haidian Beijing science and technology enterprise financing Company limited by guarantee

Pledgor: Beijing Chaoxing Future Technology Co.,Ltd.

Registration number: Y2023110000311
