CN116958303A - Intelligent mapping method and system based on outdoor laser radar - Google Patents

Intelligent mapping method and system based on outdoor laser radar

Info

Publication number
CN116958303A
CN116958303A (application CN202311203176.2A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
data
radar
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311203176.2A
Other languages
Chinese (zh)
Inventor
孙振行
庞先昂
董利亚
乔文静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Boang Information Technology Co ltd
Original Assignee
Shandong Boang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Boang Information Technology Co ltd filed Critical Shandong Boang Information Technology Co ltd
Priority to CN202311203176.2A priority Critical patent/CN116958303A/en
Publication of CN116958303A publication Critical patent/CN116958303A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to the technical field of data processing, in particular to an intelligent mapping method and system based on an outdoor laser radar. The method comprises the following steps: acquiring radar point cloud data; preprocessing the radar point cloud data; performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud; determining a fuzzy region according to the obtained angle differential data of the point cloud; classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground; and removing the radar point cloud data corresponding to the ground from the radar point cloud data. Through this technical scheme, the invention helps in understanding the environment structure and topological relations. By separating ground points from non-ground points, clearer and more accurate scene information can be obtained.

Description

Intelligent mapping method and system based on outdoor laser radar
Technical Field
The invention relates to the technical field of data processing, in particular to an intelligent mapping method and system based on an outdoor laser radar.
Background
During automatic cleaning and navigation, an unmanned intelligent cleaning vehicle must load the laser radar data and the original map in real time in order to operate normally. Because the laser radar is installed relatively close to the cleaning brush, the water jet and other components, the data it generates can contain a large amount of spurious data, which affects the autonomous navigation of the vehicle. The problem to be solved is therefore how to automatically remove the spurious data during navigation so as to achieve a normal navigation effect.
In the prior art, a Gaussian filter is generally used to reduce noise and outliers in the laser radar data, and an inertial measurement unit (IMU) or other sensors are used to acquire motion information so that the laser radar data can be motion-compensated, thereby reducing the amount of spurious data.
These technical means have the following defects: removing the spurious data may cause partial information loss or blurring of the real target, and excessive filtering or fine segmentation may weaken the edge information or details of the target. An intelligent mapping method based on an outdoor laser radar is therefore needed.
Disclosure of Invention
In order to solve the above-mentioned problems, the present invention provides an intelligent mapping method and system based on an outdoor laser radar.
In a first aspect, the invention provides an intelligent mapping method based on an outdoor laser radar, which adopts the following technical scheme:
an intelligent mapping method based on an outdoor laser radar comprises the following steps:
acquiring radar point cloud data;
preprocessing the radar point cloud data;
performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud;
determining a fuzzy region according to the obtained angle differential data of the point cloud;
classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground;
and removing the radar point cloud data corresponding to the ground from the radar point cloud data.
Further, the preprocessing of the radar point cloud data comprises denoising, simplification, registration and hole filling.
Further, the preprocessing of the radar point cloud data further comprises cropping the preprocessed radar point cloud data, including cropping over-height point clouds and sparse point clouds far from the radar, so as to reduce the data volume.
Further, performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud comprises projecting the radar point cloud data from three-dimensional space down to a two-dimensional plane, calculating the angle between each point and the positive X direction of the vehicle, and obtaining the angle differential data of the point cloud based on the radar point cloud data corresponding to each angle.
Further, determining the fuzzy region according to the obtained angle differential data of the point cloud comprises converting the angle differential data of the point cloud into one-dimensional data and defining the region corresponding to the point cloud data whose one-dimensional value lies in (-0.5, 0.5) as the fuzzy region.
Further, classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground comprises performing Sobel characterization on the fuzzy region, converting it into 4096-dimensional feature vectors, and feeding the feature vectors into a classification feature classifier, where a value of 1 denotes the ground.
Further, removing the radar point cloud data corresponding to the ground from the radar point cloud data comprises constructing a plane model by plane fitting and extracting ground points by calculating the point-to-plane distance with the plane model.
In a second aspect, an intelligent mapping system based on an outdoor lidar includes:
the data acquisition module is configured to acquire radar point cloud data and preprocess the radar point cloud data;
the differentiation module is configured to perform angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud;
the region determination module is configured to determine a fuzzy region according to the obtained angle differential data of the point cloud, classify the fuzzy region and then find the radar point cloud data corresponding to the ground;
and the removal module is configured to remove the radar point cloud data corresponding to the ground from the radar point cloud data.
In a third aspect, the present invention provides a computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the above intelligent mapping method based on an outdoor laser radar.
In a fourth aspect, the present invention provides a terminal device comprising a processor and a computer readable storage medium, wherein the processor is configured to implement the instructions, and the computer readable storage medium is used for storing a plurality of instructions adapted to be loaded by the processor and to perform the above intelligent mapping method based on an outdoor laser radar.
In summary, the invention has the following beneficial technical effects:
Through this technical scheme, the invention helps in understanding the environment structure and topological relations. By separating ground points from non-ground points, clearer and more accurate scene information can be obtained to support tasks such as object identification, obstacle detection and path planning, and a direction similar to the Earth's gravity vector can be obtained, which allows the attitude of the sensor to be calibrated and estimated.
Drawings
Fig. 1 is a schematic diagram of the intelligent mapping method based on an outdoor laser radar according to embodiment 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
Referring to fig. 1, an intelligent mapping method based on an outdoor laser radar in this embodiment includes:
acquiring radar point cloud data;
preprocessing the radar point cloud data;
performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud;
determining a fuzzy region according to the obtained angle differential data of the point cloud;
classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground;
and removing the radar point cloud data corresponding to the ground from the radar point cloud data.
Further, the preprocessing of the radar point cloud data comprises denoising, simplification, registration and hole filling.
Further, the preprocessing of the radar point cloud data further comprises cropping the preprocessed radar point cloud data, including cropping over-height point clouds and sparse point clouds far from the radar, so as to reduce the data volume.
Further, performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud comprises projecting the radar point cloud data from three-dimensional space down to a two-dimensional plane, calculating the angle between each point and the positive X direction of the vehicle, and obtaining the angle differential data of the point cloud based on the radar point cloud data corresponding to each angle.
Further, determining the fuzzy region according to the obtained angle differential data of the point cloud comprises converting the angle differential data of the point cloud into one-dimensional data and defining the region corresponding to the point cloud data whose one-dimensional value lies in (-0.5, 0.5) as the fuzzy region.
Further, classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground comprises performing Sobel characterization on the fuzzy region, converting it into 4096-dimensional feature vectors, and feeding the feature vectors into a classification feature classifier, where a value of 1 denotes the ground.
Further, removing the radar point cloud data corresponding to the ground from the radar point cloud data comprises constructing a plane model by plane fitting and extracting ground points by calculating the point-to-plane distance with the plane model.
Specifically, the method comprises the following steps:
S1, acquiring radar point cloud data;
In this embodiment, point cloud data within 30 cm of the ground is acquired by the radar.
S2, preprocessing the radar point cloud data;
This comprises:
(1) Denoising
Principle: denoising aims to reduce or eliminate noise in the laser radar data so as to improve data quality and accuracy. Common approaches include filtering, statistical analysis and machine learning; typical filters are the Gaussian filter, the median filter and the neighborhood-averaging filter.
(2) Simplification: the simplification process converts complex laser radar point cloud data into a more compact representation so as to reduce the data volume while preserving target characteristics. Options include simplification based on the sampling rate, on an error threshold, on gridding, and so on.
(3) Registration: the laser radar data sets are aligned in space to form a globally consistent point cloud model, for example by registration based on feature descriptors, on the ground plane, or on an optimization algorithm.
(4) Hole filling: the hole filling process fills holes that appear in the laser radar data because of occlusion or other reasons, so that the point cloud model becomes more complete and continuous, for example hole filling based on neighborhood interpolation, on surface fitting, or on machine learning.
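As an illustrative sketch of this preprocessing stage only (the file name input.ply, the neighbor count and the voxel size are assumptions, not values given by the embodiment), denoising and simplification can be combined with Open3D as follows:
import open3d as o3d
pcd = o3d.io.read_point_cloud("input.ply")  # assumed input file
# denoising: statistical outlier removal over a 20-point neighborhood (assumed parameters)
pcd_denoised, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
# simplification: voxel down-sampling with an assumed 5 cm voxel size
pcd_simplified = pcd_denoised.voxel_down_sample(voxel_size=0.05)
o3d.io.write_point_cloud("preprocessed.ply", pcd_simplified)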
As a further embodiment of the method of the present invention,
the preprocessed radar point cloud data is cropped, including cropping over-height point clouds and sparse point clouds far from the radar, so as to reduce the data volume.
Cropping over-height point clouds
Cropping over-height point clouds removes point cloud data that exceeds a set height threshold.
Example: if point cloud data higher than 2 meters needs to be cropped, all points with a height greater than 2 meters can simply be deleted.
Cropping sparse point clouds far from the radar
Cropping distant sparse point clouds removes sparse point cloud data that lies beyond a set distance threshold.
Example: if sparse point cloud data more than 50 meters away needs to be cropped, all points farther than 50 meters can be deleted.
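As an illustrative sketch only (the 2 m and 50 m thresholds are the embodiment's example values; the array layout and the placeholder data are assumptions), both cropping operations can be expressed with boolean masks in NumPy:
import numpy as np
# points: N x 3 array of (x, y, z) coordinates in the vehicle frame (assumed layout)
points = np.random.rand(1000, 3) * 60  # placeholder data for this sketch
height_threshold = 2.0     # crop points higher than 2 meters
distance_threshold = 50.0  # crop points farther than 50 meters from the radar
heights = points[:, 2]
distances = np.linalg.norm(points[:, :2], axis=1)  # horizontal range from the radar origin
keep = (heights <= height_threshold) & (distances <= distance_threshold)
cropped_points = points[keep]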
S3, performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud;
The radar point cloud data is projected from three-dimensional space down to a two-dimensional plane, the angle between each point and the positive X direction of the vehicle is calculated, and the differential data of the radar point cloud data is obtained based on the radar point cloud data corresponding to each angle.
The radar point cloud data is first projected onto a ground plane: the method assumes that the ground is a plane and projects the point cloud onto it, so that the data is represented in two-dimensional plane space. A ground plane model is extracted with a ground segmentation algorithm, such as RANSAC-based ground estimation, and the point cloud is then projected onto that plane.
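A minimal sketch of this step, assuming Open3D's segment_plane for the RANSAC ground estimate, an input file named input.ply and a 5 cm inlier threshold (all assumptions, not specified by the embodiment):
import numpy as np
import open3d as o3d
pcd = o3d.io.read_point_cloud("input.ply")  # assumed input file
# RANSAC ground plane estimate: ax + by + cz + d = 0
plane_model, inliers = pcd.segment_plane(distance_threshold=0.05, ransac_n=3, num_iterations=1000)
a, b, c, d = plane_model
points = np.asarray(pcd.points)
normal = np.array([a, b, c])
normal /= np.linalg.norm(normal)
# project every point onto the estimated ground plane
signed_dist = points @ normal + d
projected = points - np.outer(signed_dist, normal)
# angle of each projected point relative to the vehicle's positive X axis
angles = np.arctan2(projected[:, 1], projected[:, 0])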
Differential data, i.e., normal vectors, of the point cloud data are calculated.
Specifically, assume a point cloud consisting of three points P1, P2 and P3.
Calculate the nearest neighbors of each point: for each point, its nearest K neighbor points must be found, and the value of K is chosen according to the specific situation. In this example we choose K = 3, i.e. the nearest neighbors of each point are itself and the two other points closest to it in Euclidean distance. Thus, the nearest neighbors of point P1 are P1, P2 and P3, the nearest neighbors of point P2 are P2, P1 and P3, and the nearest neighbors of point P3 are P3, P1 and P2.
Calculate the normal vector of the plane in which each point lies: for each point, the normal vector of its local plane must be computed. Assuming that the nearest neighbors of point i are points j1, j2 and j3, vectors formed from these neighbors can be used to represent the normal vector of the plane containing the point. The normal vector of the plane can be calculated with the vector cross product.
For example, the nearest neighbors of point P1 are P1, P2 and P3, so the vectors P2 - P1 and P3 - P1 can be used to calculate the plane normal vector N1 by:
N1 = (P2 - P1) × (P3 - P1)
where × denotes the cross product. Similarly, for points P2 and P3 the normal vectors can be calculated as:
N2 = (P1 - P2) × (P3 - P2)
N3 = (P2 - P3) × (P1 - P3)
Normalize the normal vector of each point: after the normal vector of the plane in which each point lies has been calculated, it must be normalized so that its length is 1, i.e. it becomes the unit normal vector of that plane.
For example, for the normal vector N1 of the plane in which point P1 lies, the normalization can be performed as follows:
N1' = N1 / || N1 ||
where || N1 || denotes the length of vector N1.
In this way the normal vector of the plane in which each point lies is obtained. In practical applications, the accuracy and efficiency of the normal vector calculation can be tuned by computing the distances between neighboring points and choosing suitable parameters such as the K value and the neighborhood radius.
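A small sketch of the cross-product calculation above (the coordinates are illustrative values chosen for this sketch, not taken from the embodiment):
import numpy as np
# illustrative coordinates for the three points (assumed values)
P1 = np.array([1.0, 0.0, 1.0])
P2 = np.array([0.0, 1.0, 1.0])
P3 = np.array([1.0, 1.0, 0.0])
# normal vector of the plane through the neighbors of P1: N1 = (P2 - P1) × (P3 - P1)
N1 = np.cross(P2 - P1, P3 - P1)
# normalize to unit length: N1' = N1 / ||N1||
N1_unit = N1 / np.linalg.norm(N1)
print(N1_unit)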
The method provides information about surface shape and curvature variations by estimating the normal vector for each point in the point cloud data. A normal vector for each point is calculated using a normal estimation algorithm, such as nearest neighbor search or a surface fitting based method.
S4, determining the fuzzy region according to the obtained angle differential data of the point cloud; the region corresponding to the point cloud data whose angle differential value lies in (-0.5, 0.5) is defined as the fuzzy region.
Based on the normal vectors of the point cloud data, the three-dimensional normal vector of each point is converted into a scalar value, namely the modulus (length) of the normal vector. By computing the modulus of each point's normal vector, the differential data is reduced to one-dimensional data.
The point cloud data is loaded with the Open3D library, and a normal vector for each point is calculated with the estimate_normals function. The normal vector of each point is then reduced to one-dimensional data by computing its modulus. Finally, the dimension-reduced data is saved to an output file.
The specific operation method comprises the following steps:
import open3d as o3d
import numpy as np
# 1. Load the point cloud data
pointcloud = o3d.io.read_point_cloud("input.ply")
# 2. Estimate the normal vector of each point
pointcloud.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=10))
# 3. Extract the normal vectors
normals = np.asarray(pointcloud.normals)
# 4. Reduce each normal vector to one-dimensional data
normals_length = np.linalg.norm(normals, axis=1)  # modulus (length) of each normal vector
# 5. Create a new point cloud object
output_pointcloud = o3d.geometry.PointCloud()
output_pointcloud.points = pointcloud.points  # keep the original point positions
# 6. Store the dimension-reduced value as the color of each point in the output point cloud
output_pointcloud.colors = o3d.utility.Vector3dVector(np.tile(normals_length[:, None], (1, 3)))
# 7. Save the output point cloud object to a file
o3d.io.write_point_cloud("output.ply", output_pointcloud)
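Continuing from the variables of the listing above, the fuzzy region of step S4 can then be selected by thresholding the one-dimensional values against the (-0.5, 0.5) interval given by the embodiment; this is only a brief illustrative sketch:
# select the fuzzy region: points whose one-dimensional differential value lies in (-0.5, 0.5)
fuzzy_mask = (normals_length > -0.5) & (normals_length < 0.5)
fuzzy_points = np.asarray(pointcloud.points)[fuzzy_mask]
print("Number of points in the fuzzy region:", len(fuzzy_points))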
S5, classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground; the fuzzy region is Sobel-characterized, converted into 4096-dimensional feature vectors, and the feature vectors are fed into a classification feature classifier, where a value of 1 denotes the ground.
The horizontal and vertical gradients of the data are calculated with the Sobel operator, and the gradient magnitude and direction of each pixel are computed from these horizontal and vertical gradients.
Vector conversion and classification:
The gradient magnitude and direction are converted into feature vectors, which are then classified with a classifier.
The specific operation method comprises the following steps:
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split
# 1. Assuming that the point cloud data has been loaded, an N×3 array points is obtained, representing the three-dimensional coordinates of each point
# 2. Calculate gradients of Point cloud data
gradient = np.gradient(points)
# 3 calculating gradient magnitude and direction
magnitude = np.linalg.norm(gradient, axis=0)
angle = np.arctan2(gradient[1], gradient[0])
# 4 converting gradient direction into eigenvectors
feature_vectors = np.column_stack((magnitude, angle))
# 5 defining tags (classifications) and partitioning training and test sets
labels = np.array([0, 0, 1, 1])  # assume four points with two classes, 0 and 1
X_train, X_test, y_train, y_test = train_test_split(feature_vectors, labels, test_size=0.2, random_state=42)
# 6. Create classifier, here using Support Vector Machine (SVM) as an example
classifier = svm.SVC()
# 7 training classifier
classifier.fit(X_train, y_train)
# 8 prediction on test set
y_pred = classifier.predict(X_test)
print("Predicted labels:", y_pred)
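The embodiment does not spell out how the fuzzy region becomes a 4096-dimensional vector. One plausible reading, sketched below under the assumption that the region is first rasterized onto a 64 × 64 grid (64 × 64 = 4096), uses scipy's Sobel operator; the rasterization and grid size are assumptions, only the 4096-dimensional result is stated by the embodiment:
import numpy as np
from scipy import ndimage
# grid: 64 x 64 raster of the fuzzy region (e.g. a height or intensity image); placeholder data
grid = np.random.rand(64, 64)
gx = ndimage.sobel(grid, axis=1)  # horizontal gradient
gy = ndimage.sobel(grid, axis=0)  # vertical gradient
magnitude = np.hypot(gx, gy)
feature_vector = magnitude.flatten()  # 64 * 64 = 4096-dimensional feature vector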
S6, removing the radar point cloud data corresponding to the ground from the radar point cloud data.
Specifically:
1. Preprocessing: the input point cloud data is preprocessed, including outlier removal, downsampling and filtering, to reduce noise and data redundancy.
2. Plane fitting: a plane model conforming to the ground characteristics is sought in the point cloud with a method such as random sample consensus (RANSAC) or least squares. In each iteration, a set of points is randomly selected from the point cloud as a candidate ground point set, and a plane model is fitted to these points. The fitted plane model is used to calculate the distance from each point to the plane, and points whose fitting error with respect to the model is below a threshold are classified as ground points.
3. Ground point extraction: according to the result of the plane fitting, points whose fitting error with respect to the plane model is below the threshold are marked as ground points and can be stored in a separate ground point cloud.
4. Non-ground point extraction: the remaining points that are not marked as ground points are treated as non-ground points and can be saved in another separate non-ground point cloud.
5. Post-processing: the ground points and non-ground points are processed further, for example by removing isolated points and filling holes, to obtain more accurate ground and non-ground point cloud data.
The specific operation method comprises the following steps:
import numpy as np
import open3d as o3d
# 1. Assume that the point cloud data has been loaded into an N×3 array points, giving the three-dimensional coordinates of each point
# 2. Compute the range of the point cloud data
x_range = np.max(points[:, 0]) - np.min(points[:, 0])
y_range = np.max(points[:, 1]) - np.min(points[:, 1])
z_range = np.max(points[:, 2]) - np.min(points[:, 2])
# 3. Create a cuboid mesh and register it with the point cloud data
voxel_size = max(x_range, y_range, z_range) / 50  # set the grid size
mesh_box = o3d.geometry.TriangleMesh.create_box(width=x_range, height=y_range, depth=voxel_size)
mesh_box.translate((np.mean(points[:, 0]), np.mean(points[:, 1]), np.min(points[:, 2])))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud("pcd.pcd", pcd)
o3d.io.write_triangle_mesh("mesh_box.ply", mesh_box)
# 4. Project the mesh onto the point cloud data and extract the height values
voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(mesh_box, voxel_size=voxel_size)
occupancy = voxel_grid.get_voxels()
heights = []
for point in points:
    voxel_index = voxel_grid.get_voxel(point)  # 3-element voxel index (i, j, k)
    if len(voxel_index) > 0:
        height = point[2] - voxel_index[2] * voxel_size
        heights.append(height)
# 5. Cluster the point cloud data using the height values
height_threshold = np.std(heights) / 2  # set the height threshold
labels = np.zeros(len(points))
clusters = []
current_label = 1
for i, point in enumerate(points):
    if labels[i] == 0:
        cluster = [i]
        for j, point2 in enumerate(points[i+1:], i+1):
            if labels[j] == 0:
                if abs(point2[2] - point[2]) < height_threshold:
                    cluster.append(j)
        if len(cluster) >= 3:
            clusters.append(cluster)
            for k in cluster:
                labels[k] = current_label
            current_label += 1
# 6. Calculate the plane-fitting normal vector of each cluster
planes = []
for cluster in clusters:
    pcd_cluster = pcd.select_down_sample(cluster)
    plane_model, _ = pcd_cluster.segment_plane(distance_threshold=0.01, ransac_n=3, num_iterations=1000)
    planes.append(plane_model[:3])
# 7. Check whether each point should be regarded as a ground point
ground_threshold = 0.2  # set the ground point threshold
is_ground = np.zeros(len(points), dtype=bool)
for i, point in enumerate(points):
    for plane in planes:
        if abs(np.dot(point, plane)) < ground_threshold:
            is_ground[i] = True
            break
# 8. Extract the non-ground point cloud data
points_nonground = points[~is_ground]
# 9. If visualization is needed, the non-ground and ground point clouds can be drawn
pcd_ground = pcd.select_down_sample(np.where(is_ground)[0])
pcd_ground.paint_uniform_color([1, 0, 0])
pcd.points = o3d.utility.Vector3dVector(points_nonground)
pcd.colors = o3d.utility.Vector3dVector(np.ones((len(points_nonground), 3)))
o3d.visualization.draw_geometries([pcd, pcd_ground])
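For comparison, a much shorter sketch of steps 2 and 3 above (plane fitting plus point-to-plane distance) can be written directly with Open3D's RANSAC plane segmentation; this is an illustrative alternative under assumed parameters (input.ply, 0.2 m inlier threshold), not the embodiment's exact procedure:
import numpy as np
import open3d as o3d
pcd = o3d.io.read_point_cloud("input.ply")  # assumed input file
# fit a single ground plane with RANSAC; points within 0.2 m of it count as ground
plane_model, ground_idx = pcd.segment_plane(distance_threshold=0.2, ransac_n=3, num_iterations=1000)
points = np.asarray(pcd.points)
ground_mask = np.zeros(len(points), dtype=bool)
ground_mask[ground_idx] = True
ground_points = points[ground_mask]
nonground_points = points[~ground_mask]  # point cloud with the ground removed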
Example 2
The embodiment provides an intelligent mapping system based on an outdoor laser radar, which comprises:
the data acquisition module is configured to acquire radar point cloud data and preprocess the radar point cloud data;
the differentiation module is configured to perform angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud;
the region determination module is configured to determine a fuzzy region according to the obtained angle differential data of the point cloud, classify the fuzzy region and then find the radar point cloud data corresponding to the ground;
and the removal module is configured to remove the radar point cloud data corresponding to the ground from the radar point cloud data.
A computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the above intelligent mapping method based on an outdoor laser radar.
A terminal device comprising a processor and a computer readable storage medium, wherein the processor is configured to implement the instructions, and the computer readable storage medium is used for storing a plurality of instructions adapted to be loaded by the processor and to perform the above intelligent mapping method based on an outdoor laser radar.
The above embodiments do not limit the scope of the present invention; all equivalent changes in structure, shape and principle of the invention shall fall within its scope of protection.

Claims (10)

1. An intelligent mapping method based on an outdoor laser radar is characterized by comprising the following steps:
acquiring radar point cloud data;
preprocessing the radar point cloud data;
performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud;
determining a fuzzy region according to the obtained angle differential data of the point cloud;
classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground;
and removing the radar point cloud data corresponding to the ground from the radar point cloud data.
2. The intelligent mapping method based on an outdoor laser radar according to claim 1, wherein the preprocessing of the radar point cloud data comprises denoising, simplification, registration and hole filling.
3. The intelligent mapping method based on an outdoor laser radar according to claim 2, wherein the preprocessing of the radar point cloud data further comprises cropping the preprocessed radar point cloud data, including cropping over-height point clouds and sparse point clouds far from the radar, so as to reduce the data volume.
4. The intelligent mapping method based on an outdoor laser radar according to claim 3, wherein performing angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud comprises projecting the radar point cloud data from three-dimensional space down to a two-dimensional plane, calculating the angle between each point and the positive X direction of the vehicle, and obtaining the angle differential data of the point cloud based on the radar point cloud data corresponding to each angle.
5. The intelligent mapping method based on an outdoor laser radar according to claim 4, wherein determining the fuzzy region according to the obtained angle differential data of the point cloud comprises converting the angle differential data of the point cloud into one-dimensional data and defining the region corresponding to the point cloud data whose one-dimensional value lies in (-0.5, 0.5) as the fuzzy region.
6. The intelligent mapping method based on an outdoor laser radar according to claim 5, wherein classifying the fuzzy region and then finding the radar point cloud data corresponding to the ground comprises performing Sobel characterization on the fuzzy region, converting it into 4096-dimensional feature vectors, and feeding the feature vectors into a classification feature classifier, where a value of 1 denotes the ground.
7. The intelligent mapping method based on an outdoor laser radar according to claim 6, wherein removing the radar point cloud data corresponding to the ground from the radar point cloud data comprises constructing a plane model by plane fitting and extracting ground points by calculating the point-to-plane distance with the plane model.
8. Intelligent mapping system based on outdoor laser radar, characterized by comprising:
the data acquisition module is configured to acquire radar point cloud data and preprocess the radar point cloud data;
the differentiation module is configured to perform angle differentiation on the preprocessed radar point cloud data to obtain angle differential data of the point cloud;
the region determination module is configured to determine a fuzzy region according to the obtained angle differential data of the point cloud, classify the fuzzy region and then find the radar point cloud data corresponding to the ground;
and the removal module is configured to remove the radar point cloud data corresponding to the ground from the radar point cloud data.
9. A computer readable storage medium in which a plurality of instructions are stored, characterized in that the instructions are adapted to be loaded by a processor of a terminal device and to perform the intelligent mapping method based on an outdoor laser radar according to claim 1.
10. A terminal device comprising a processor and a computer readable storage medium, the processor being configured to implement instructions; the computer readable storage medium being used for storing a plurality of instructions adapted to be loaded by the processor and to perform the intelligent mapping method based on an outdoor laser radar according to claim 1.
CN202311203176.2A 2023-09-19 2023-09-19 Intelligent mapping method and system based on outdoor laser radar Pending CN116958303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311203176.2A CN116958303A (en) 2023-09-19 2023-09-19 Intelligent mapping method and system based on outdoor laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311203176.2A CN116958303A (en) 2023-09-19 2023-09-19 Intelligent mapping method and system based on outdoor laser radar

Publications (1)

Publication Number Publication Date
CN116958303A true CN116958303A (en) 2023-10-27

Family

ID=88449564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311203176.2A Pending CN116958303A (en) 2023-09-19 2023-09-19 Intelligent mapping method and system based on outdoor laser radar

Country Status (1)

Country Link
CN (1) CN116958303A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210090263A1 (en) * 2019-09-24 2021-03-25 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for detecting ground point cloud points
WO2022141116A1 (en) * 2020-12-29 2022-07-07 深圳市大疆创新科技有限公司 Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN113920134A (en) * 2021-09-27 2022-01-11 山东大学 Slope ground point cloud segmentation method and system based on multi-line laser radar
CN116109601A (en) * 2023-02-20 2023-05-12 重庆邮电大学 Real-time target detection method based on three-dimensional laser radar point cloud
CN116682080A (en) * 2023-05-04 2023-09-01 陕西法士特汽车传动集团有限责任公司 Vehicle drivable area target recognition method and system based on three-dimensional point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Zhangfei; Liu Chunyang; Sui Xin; Yang Fang; Ma Xiqiang; Chen Lihai: "3D point cloud target segmentation and collision detection based on depth projection", Optics and Precision Engineering, no. 07, pages 191-199 *

Similar Documents

Publication Publication Date Title
CN111680542B (en) Steel coil point cloud identification and classification method based on multi-scale feature extraction and Pointnet neural network
CN108152831B (en) Laser radar obstacle identification method and system
CN112529874B (en) Obstacle detection method and device based on three-dimensional radar, medium and robot
CN108509820B (en) Obstacle segmentation method and device, computer equipment and readable medium
Melzer et al. Extraction and modeling of power lines from ALS point clouds
CN108470174B (en) Obstacle segmentation method and device, computer equipment and readable medium
CN113345008B (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
CN115372989A (en) Laser radar-based long-distance real-time positioning system and method for cross-country automatic trolley
CN115049700A (en) Target detection method and device
CN115240149A (en) Three-dimensional point cloud detection and identification method and device, electronic equipment and storage medium
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN114022760B (en) Railway tunnel barrier monitoring and early warning method, system, equipment and storage medium
CN116482711A (en) Local static environment sensing method and device for autonomous selection of landing zone
CN115147798A (en) Method, model and device for predicting travelable area and vehicle
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
Yazdanpanah et al. Sky segmentation by fusing clustering with neural networks
CN113536959A (en) Dynamic obstacle detection method based on stereoscopic vision
CN113111787A (en) Target detection method, device, equipment and storage medium
Sun et al. Automated segmentation of LiDAR point clouds for building rooftop extraction
CN116540206A (en) Foot-type robot elevation estimation method, device and system
CN111507341A (en) Method, device and equipment for adjusting target bounding box and storage medium
CN113077473A (en) Three-dimensional laser point cloud pavement segmentation method, system, computer equipment and medium
CN116958303A (en) Intelligent mapping method and system based on outdoor laser radar
CN112651986B (en) Environment recognition method, recognition device, recognition system, electronic equipment and medium
CN112884026A (en) Image recognition assisted power transmission line laser LiDAR point cloud classification method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination