CN111985322B - Road environment element sensing method based on laser radar - Google Patents


Info

Publication number
CN111985322B
Authority
CN
China
Prior art keywords
data
points
point
grid
road
Prior art date
Legal status
Active
Application number
CN202010674121.XA
Other languages
Chinese (zh)
Other versions
CN111985322A (en
Inventor
宁小娟
马婷
张金磊
王映辉
金海燕
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202010674121.XA priority Critical patent/CN111985322B/en
Publication of CN111985322A publication Critical patent/CN111985322A/en
Application granted granted Critical
Publication of CN111985322B publication Critical patent/CN111985322B/en


Classifications

    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06F 18/2321: Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • Y02T 10/40: Engine management systems (climate change mitigation technologies related to transportation)


Abstract

The invention discloses a road environment element sensing method based on a laser radar, comprising the following steps. Step 1: downsample the point cloud data of the original laser radar, acquire the region of interest to obtain simplified point cloud data, and segment the ground data. Step 2: segment the obstacles in the non-ground data with a voxelization-based DBSCAN clustering algorithm, then classify and recognize the obstacles based on multiple features. Step 3: design a road boundary extraction method suited to vehicle-mounted laser radar data that exploits abrupt changes in point elevation; after extracting the road boundary points, screen them with prior knowledge and fit the road boundary line by the least squares method; finally, mark the obstacle-free areas on a grid and detect the passable area in the laser radar road scene. The method of the invention uses a simple algorithm, produces small data errors, and is safe and reliable.

Description

Road environment element sensing method based on laser radar
Technical Field
The invention belongs to the technical fields of computer vision and artificial intelligence, and relates to a road environment element sensing method based on a laser radar.
Background
Road environment perception technology is an important technical support for unmanned driving, is one of the core research topics in computer vision and artificial intelligence, and has broad application prospects.
In the field of computer vision, the complexity and real-time variability of the road environment usually require acquiring multiple data sets from different viewpoints at different moments, each containing a large amount of data and real-time features. Important tasks such as real-time vehicle control, path planning and obstacle avoidance are realized by analyzing and processing road conditions in real time. At present, the scene data adopted in much road environment perception research is relatively simple: a scene contains only a few obstacles of few types, and perception methods realized under such conditions often achieve good results but cannot adapt to complex scenes with more obstacle elements. In addition, most work realizes only a single perception task: one set of scene data is used to solve only one class of perception task, and perception that processes multiple tasks on the same scene data has not been realized.
Therefore, point cloud data acquired by a vehicle-mounted laser radar is selected as the research object, and the research is developed around several key technologies of unmanned vehicle road environment perception: ground segmentation, obstacle segmentation and recognition, road boundary detection, and passable area detection. Realizing these key technologies together completes the perception tasks of the road environment and overcomes the shortcomings of existing environment perception research, namely few obstacles and a single perception task. Completing the detection and perception of the surroundings of an unmanned vehicle through laser radar environment perception is of great significance for the subsequent analysis of real-time road conditions and the realization of vehicle control, path planning and other operations.
Disclosure of Invention
The invention aims to provide a road environment element sensing method based on a laser radar that solves the problem that the prior art cannot perform multiple perception tasks on the same scene data, and that senses road environment elements efficiently and in real time.
The technical scheme adopted by the invention is a road environment element sensing method based on a laser radar, implemented according to the following steps:
step 1, downsample the point cloud data of the original laser radar, acquire the region of interest to obtain simplified point cloud data, and segment the ground data; the specific process is as follows:
1.1) implement a voxel-grid-based downsampling algorithm to simplify the data;
1.2) eliminate discrete points and distant useless points, and acquire the region of interest for subsequent processing;
1.3) segment the ground data from the simplified data obtained in steps 1.1) and 1.2): on the basis of a RANSAC plane detection algorithm, apply segmented calibration processing to the data to realize ground segmentation that generalizes to extracting the ground from complex scenes;
step 2, implement obstacle segmentation and obstacle recognition:
2.1) adopt a voxelization-based DBSCAN clustering algorithm to segment the obstacles in the non-ground data;
2.2) classify and recognize the obstacles based on multiple features;
step 3, detect the road boundary and analyze the passable area:
3.1) design a road boundary extraction method suited to vehicle-mounted laser radar data that exploits abrupt changes in point elevation; after extracting the road boundary points, screen them with prior knowledge and fit the road boundary line by the least squares method;
3.2) mark the obstacle-free areas on a grid and detect the passable area in the laser radar road scene.
The invention has the advantages of a simple algorithm, small data errors, safety and reliability; it enriches the method systems of artificial intelligence and computer vision, supports the development of machine environment perception, and provides an option for road environment perception as a basic technology of unmanned driving.
Drawings
FIG. 1a is the input of raw data in step 1 of an embodiment of the method of the present invention;
FIG. 1b is the data obtained after downsampling and acquiring the region of interest in step 1 of an embodiment of the method of the present invention;
FIG. 1c is a view of the segmented scene ground data obtained in step 1 of the method embodiment of the present invention;
FIG. 2a is input non-ground point cloud raw data in step 2 of an embodiment of the method of the present invention;
FIG. 2b is data after obstacle segmentation in step 2 of an embodiment of the method of the present invention;
FIG. 2c is data obtained after obstacle classification recognition in step 2 of an embodiment of the method of the present invention;
FIG. 3a shows a road boundary detection result obtained in step 3 of the method embodiment of the present invention;
FIG. 3b shows the result of the passable area analysis obtained in step 3 of the method embodiment of the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
The invention discloses a road environment element sensing method based on a laser radar, implemented according to the following steps:
step 1, as shown in fig. 1a, downsample the point cloud data of the original laser radar and acquire the region of interest to obtain the simplified point cloud data shown in fig. 1b, then segment the ground data to obtain the result shown in fig. 1c; the specific process is as follows:
1.1) implement a voxel-grid-based downsampling algorithm to simplify the data.
The point cloud data obtained by a laser radar is a set of scattered, unordered points, and voxel-grid-based downsampling needs no complex topological structure, so it is simple to realize and algorithmically efficient. According to the density of the point cloud distribution, select suitable side lengths for the grid cells, split the point cloud into small voxel grids, collect all points contained in each cell, and compute their centroid as the representative of all points in that cell; the centroid points of all grid cells are combined into the new data, completing the downsampling.
1.1.1) read the point cloud data of the original laser radar (shown in fig. 1a);
1.1.2) compute the maximum and minimum points of the point cloud, i.e. the upper-right point maxPoint and the lower-left point minPoint of the point cloud bounding box, and from them the length Lx, width Ly and height Lz of the bounding box;
1.1.3) set the length Vx, width Vy and height Vz of the voxel grid cell, and divide the point cloud data into m x n x s voxel grid cells, where m = [Lx/Vx], n = [Ly/Vy], s = [Lz/Vz] and the symbol "[ ]" denotes rounding, ensuring an integer number of grid cells;
1.1.4) randomly select any point p; the grid cell corresponding to the point p is given by its cell indices along the three axes:

I_index_x = [(p_x - minPoint_x)/Vx], I_index_y = [(p_y - minPoint_y)/Vy], I_index_z = [(p_z - minPoint_z)/Vz] (1)

and the one-dimensional grid cell number Index to which the point p belongs is:

Index = I_index_x * 1 + I_index_y * (m+1) + I_index_z * (m+1) * (n+1) (2)
1.1.5) traverse all points of the original point cloud and insert points with the same grid cell number Index into the same index_vector;
1.1.6) for the L points in each grid cell, compute the centroid G(x, y, z) as the representative point to realize the downsampling, with the calculation formula:

x_G = (1/L) * (x_1 + ... + x_L), y_G = (1/L) * (y_1 + ... + y_L), z_G = (1/L) * (z_1 + ... + z_L) (3)
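As an illustration, steps 1.1.1)-1.1.6) can be sketched in Python. This is a minimal sketch of the voxel-grid downsampling described above, not the patented implementation; the function name and parameters are chosen for illustration, and the cell numbering follows equations (1)-(3).

```python
from collections import defaultdict

def voxel_downsample(points, vx, vy, vz):
    """Voxel-grid downsampling: split the points into cells of size vx*vy*vz
    and replace the points of each cell by their centroid (steps 1.1.1-1.1.6)."""
    min_x = min(p[0] for p in points)
    min_y = min(p[1] for p in points)
    min_z = min(p[2] for p in points)
    max_x = max(p[0] for p in points)
    max_y = max(p[1] for p in points)
    # number of cell layers along X and Y, as the m and n of equation (2)
    m = int((max_x - min_x) / vx)
    n = int((max_y - min_y) / vy)
    cells = defaultdict(list)          # one-dimensional Index -> points of that cell
    for x, y, z in points:
        ix = int((x - min_x) / vx)     # per-axis cell indices, equation (1)
        iy = int((y - min_y) / vy)
        iz = int((z - min_z) / vz)
        index = ix + iy * (m + 1) + iz * (m + 1) * (n + 1)   # equation (2)
        cells[index].append((x, y, z))
    # centroid of each cell as its representative point, equation (3)
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in cells.values()]
```

With 1 m cells, two nearby points collapse into one centroid while a distant point stays separate.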
1.2) eliminate discrete points and distant useless points, and acquire the region of interest for subsequent processing:
After analyzing the point cloud data obtained by the vehicle-mounted laser radar, in order to eliminate discrete points and distant useless points and thereby obtain the region of interest for subsequent processing, this step proposes a region-of-interest acquisition method based on point density statistics. Fig. 1a is one frame of vehicle-mounted laser radar point cloud data; the blank, roughly circular central area is the position of the laser radar device. In this data the forward direction of the vehicle is the positive X axis, the right of the vehicle the positive Y axis, and upward the positive Z axis. The density of the acquired points on both sides of the vehicle's direction of travel, i.e. along the Y axis, is counted and compared with a set threshold to decide whether the points belong to the region of interest. The specific process is as follows:
1.2.1) project the downsampled point cloud data onto the YOZ plane;
1.2.2) obtain the minimum Y_min and maximum Y_max of the projected data along the Y axis and divide this interval into M segments (M = 20 is set according to the size of the tested data);
1.2.3) count the point density D_i of each data segment and draw the density curve of the whole point cloud;
1.2.4) compute the ratio r_i of each segment's density to the total density according to formula (4), and set a threshold r_t (chosen according to the data and the corresponding density curve; here a segment density of 2000 is preferred):

r_i = D_i / (D_1 + D_2 + ... + D_M) (4)

Starting from the leftmost segment, if r_i < r_t the segment has low density and carries little information, so delete it and continue with the next segment; once r_i >= r_t, stop the judgment. Then judge from the rightmost end in the same way: delete segments while r_i < r_t, and end the judgment when r_i >= r_t.
1.2.5) along the forward direction of the vehicle, select the 30 m in front of and behind the laser radar device as the region of interest in the X-axis direction and divide the region with a pass-through filter. The downsampled region of interest is the final simplified scene data, as shown in fig. 1b.
1.3) segment the ground data from the simplified data obtained in steps 1.1) and 1.2) (shown in fig. 1b): on the basis of a RANSAC plane detection algorithm, apply segmented calibration processing to the data to realize ground segmentation that generalizes to extracting the ground from complex scenes. The specific process is as follows:
1.3.1) compute the minimum min_x and maximum max_x along the X coordinate axis of the vehicle, and divide the simplified point cloud data along the X axis into N segments of sub point cloud data of length 1 m, where N = ROUNDUP(max_x - min_x) and ROUNDUP(x) is the round-up function;
1.3.2) for each segment of sub point cloud data, detect its ground part with a RANSAC plane detection algorithm and obtain the general equation of the fitted plane:
Ax + By + Cz + D = 0, where A, B, C, D are known constants and A, B, C are not all zero; this equation gives the normal vector V = (A, B, C) of the fitted plane. In addition, when the laser radar collects data, the Z axis of the radar coordinate system is kept vertical, with vertical vector Q = (0, 0, 1). Compute the calibration matrix between the fitted ground plane normal V and the vertical vector Q, called the rotation matrix T, and use T to calibrate the sub point cloud data. The specific calibration operation is as follows:
given that the vector before calibration is V and the vector after calibration is Q, then:

V · Q = |V||Q| cos θ (5)

and the angle θ between V and Q is:

θ = arccos( V · Q / (|V||Q|) ) (6)

Assume the pre-calibration vector V is (a1, a2, a3) (here a formal parameter standing for the actual vector) and the post-calibration vector Q is (b1, b2, b3); the rotation axis C = (c1, c2, c3) is obtained from the definition of the vector cross product:

C = V x Q = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1) (7)

Let the rotation axis C correspond to the unit vector k = (k_x, k_y, k_z). Using the rotation angle θ, the unit vector k and the Rodrigues rotation formula, the vector w obtained by rotating an arbitrary vector u in space about k by the angle θ is:

w = u cos θ + (k x u) sin θ + k (k · u)(1 - cos θ) (8)

thereby yielding the corresponding rotation matrix T:

T = E + sin θ * K + (1 - cos θ) * K^2, where E is the identity matrix and K is the skew-symmetric matrix of k:
K = [ 0, -k_z, k_y ; k_z, 0, -k_x ; -k_y, k_x, 0 ] (9)
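The calibration of step 1.3.2) can be sketched in Python. This is an illustrative sketch of equations (5)-(9), not the patented implementation: it builds the rotation matrix that takes a fitted plane normal v onto the vertical Q = (0, 0, 1).

```python
import math

def rotation_to_vertical(v):
    """Rotation matrix T (Rodrigues formula, eq. (9)) that rotates the fitted
    plane normal v onto the vertical vector Q = (0, 0, 1), as in step 1.3.2)."""
    q = (0.0, 0.0, 1.0)
    nv = math.sqrt(sum(c * c for c in v))
    v = tuple(c / nv for c in v)
    cos_t = v[0] * q[0] + v[1] * q[1] + v[2] * q[2]        # eqs. (5)-(6)
    # rotation axis: cross product v x q, eq. (7)
    c = (v[1] * q[2] - v[2] * q[1],
         v[2] * q[0] - v[0] * q[2],
         v[0] * q[1] - v[1] * q[0])
    nc = math.sqrt(sum(x * x for x in c))
    if nc < 1e-12:                                         # already vertical
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    kx, ky, kz = (x / nc for x in c)                       # unit axis k
    sin_t = nc                                             # |v x q| = sin(theta) for unit vectors
    K = [[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]]         # skew matrix of k
    K2 = [[sum(K[i][t] * K[t][j] for t in range(3)) for j in range(3)] for i in range(3)]
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    # T = E + sin(theta) K + (1 - cos(theta)) K^2, eq. (9)
    return [[I[i][j] + sin_t * K[i][j] + (1 - cos_t) * K2[i][j] for j in range(3)]
            for i in range(3)]
```

Applying T to the unit normal itself should return the vertical vector, which is how the calibration is verified.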
1.3.3) realize the ground segmentation of the sub point cloud data based on a RANSAC plane detection algorithm; the specific steps are as follows:
1.3.3.1) input the calibrated sub point cloud data CloudPoint, give a plane model and set a distance threshold t (here t = 0.1 m according to empirical data);
1.3.3.2) randomly sample three points of the point cloud data such that they are not on the same straight line, and solve the plane model equation formed by the three random sample points;
1.3.3.3) compute the distance dis between each other point of the point cloud data and the plane, and compare it with the set distance threshold t to obtain the inlier point set InCloudSet of the plane model;
1.3.3.4) count the number of points in the inlier set InCloudSet of the plane model, compare it with the current maximum number, and keep the larger of the two;
1.3.3.5) re-estimate a new plane model using all the inliers of the model with the largest retained count; these inliers are the candidate ground points;
1.3.3.6) resample and repeat the steps above; when the number of iterations reaches the maximum, exit the loop and output the optimal plane model found during iteration together with its largest inlier set, whose points are the ground data detected in the point cloud.
1.3.4) merge the ground data detected in each segment of sub point cloud data to obtain the final ground data, realizing the segmentation of ground and non-ground data as shown in fig. 1c.
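The RANSAC loop of steps 1.3.3.1)-1.3.3.6) can be sketched as follows. This is a minimal illustration of the plane-detection idea (random three-point planes, inlier counting against threshold t), not the patented implementation; parameter names and iteration count are chosen for illustration.

```python
import random

def ransac_ground(points, t=0.1, max_iter=200, seed=0):
    """Sketch of 1.3.3): RANSAC plane detection. Repeatedly fit a plane to
    three random non-collinear points and keep the model with the most
    inliers (points within distance threshold t of the plane)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(max_iter):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        # plane normal = u x v; skip degenerate (collinear) samples
        n = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if norm < 1e-12:
            continue
        d = -(n[0] * p1[0] + n[1] * p1[1] + n[2] * p1[2])
        # inlier set InCloudSet: points within distance t of the plane
        inliers = [p for p in points
                   if abs(n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d) / norm <= t]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

On a flat patch of ground points plus a vertical stack of obstacle points, the largest inlier set is the ground plane.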
step 2, implement obstacle segmentation and obstacle recognition:
2.1) adopt a voxelization-based DBSCAN clustering algorithm to segment the obstacles in the non-ground data:
2.1.1) as shown in fig. 2a, take the non-ground laser radar point cloud scene data as input and divide it into voxel grid cells with cell side length d = 15 cm. Compute the maximum and minimum of the point cloud along the X, Y and Z coordinate axes, then use the set side length d to compute the number of cell layers m, n and s along each axis (these m, n and s, like the min_x and max_x below, reuse earlier symbols because the same operation is applied to a different data body). Taking the X coordinate axis as an example, with min_x the minimum and max_x the maximum X coordinate:

m = [(max_x - min_x)/d] (10)

The Y and Z axes are handled in the same way as the X axis; these steps yield the division of the scene data into voxel grid cells of unit side length d.
2.1.2) count the number of points contained in every grid cell and record all non-empty cells, labelling them so that the overall grid structure preserves the position information of the original data and the cells in the neighborhood of any given cell correspond to the same neighborhood in the original data;
judge each non-empty grid cell against a set parameter threshold MinPts (MinPts = 20 according to the point cloud data used and experimental experience): if the number of points in the cell is greater than MinPts, set the cell as a core cell and its points as core points; if the number of points is smaller than MinPts, a secondary judgment is needed;
perform an ε-neighborhood search on the points of such a cell and judge whether the neighborhood contains points belonging to a core cell. If so, assign the cell to the computation of that core cell and judge no further points; if not, select other points of the cell to judge, and if after judging all points none lies in any core cell, mark the points of the cell directly as noise points;
2.1.3) compute, as in step 1.1.6), the centroid Q of the R points within each non-noise grid cell:

Q = (1/R) * (p_1 + p_2 + ... + p_R) (11)
2.1.4) define the set of computed centroid points as the core point set S and select any core point P from it. If the ε-radius neighborhood of P contains no fewer than MinPts points, treat P as a core point; otherwise treat P as a noise point. Compute in turn the other centroid points contained in the ε-neighborhood of P and add them to a candidate set E; create a cluster, make P and the other centroid points in its neighborhood members of the same cluster, assign the grid cells they represent (and the points those cells contain) to the same cluster, and label all points of the cluster to distinguish it from other clusters. Select a centroid point from the candidate set E, likewise compute the centroid points in its ε-neighborhood and assign them to the cluster, until no element remains in E; this preliminarily completes one cluster. Then remove the already-processed core point objects from the core point set S, select any remaining point and continue the operation until S is empty;
2.1.5) for the ε-neighborhood point searches of step 2.1.4), construct a KD-Tree to speed up the nearest-neighbor search:
2.1.5.1) judge whether the input data set is empty; if so, return an empty KD-Tree, otherwise perform the following operations;
2.1.5.2) node generation, i.e. determination of the split domain and of the Node-data: first compute the variance of the descriptor data (feature vectors) along each dimension, select the dimension with the largest variance as the value of the split domain, then sort the data set along the determined split dimension; the Node-data is the middle point;
2.1.5.3) divide the left and right subspaces according to the determined Node-data;
2.1.5.4) construct the KD-Tree recursively;
2.1.6) select the smaller clusters F in the cluster set (a cluster with fewer than 100 points is a smaller cluster) and analyze the ε-neighborhood search of all points in each such cluster, judging whether the neighborhood of a point contains points labelled with another cluster's label. If so, merge the cluster with the detected cluster; if no point of the cluster has such neighbors, keep it as an independent cluster. Continue until all smaller clusters have been judged, then store the label of each point's cluster. Finally, traverse all points, output the points of the same cluster according to their label, and assign different colors to distinguish the clusters.
This yields the segmentation result for the non-ground obstacles, represented as independent clusters in different colors, as shown in fig. 2b.
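The voxelized DBSCAN of step 2.1) can be sketched in simplified form. This illustration voxelizes the points, keeps the centroid of each sufficiently dense cell as a core point, and grows clusters over ε-neighborhoods of the centroids; a brute-force neighbor search stands in for the KD-Tree of step 2.1.5), and the small-cluster merging of 2.1.6) is omitted. Names and defaults are illustrative.

```python
from collections import defaultdict, deque

def voxel_dbscan(points, d=0.15, eps=0.3, min_pts=20):
    """Simplified sketch of step 2.1): voxelize the non-ground points with cell
    side d, keep the centroid of each cell holding at least min_pts points as a
    core point, and grow clusters by expanding eps-neighborhoods of centroids."""
    min_c = [min(p[i] for p in points) for i in range(3)]
    cells = defaultdict(list)
    for p in points:
        key = tuple(int((p[i] - min_c[i]) / d) for i in range(3))
        cells[key].append(p)
    # centroid of each dense (core) cell, as in equation (11)
    cents = [tuple(sum(c) / len(ps) for c in zip(*ps))
             for ps in cells.values() if len(ps) >= min_pts]
    labels, cluster = {}, 0
    for i in range(len(cents)):
        if i in labels:
            continue
        cluster += 1
        labels[i] = cluster
        queue = deque([i])
        while queue:                       # expand the eps-neighborhood
            j = queue.popleft()
            for k, other in enumerate(cents):
                if k not in labels and sum((cents[j][t] - other[t]) ** 2
                                           for t in range(3)) <= eps ** 2:
                    labels[k] = cluster
                    queue.append(k)
    return cents, labels
```

Two dense, well-separated blobs of points come out as two distinct clusters.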
2.2) obstacle classification and recognition based on multiple features:
2.2.1) acquire the following six types of features of the clusters obtained by the clustering segmentation of step 2.1):
feature f1: the length, width and height information of the bounding box;
feature f2: the number of data points the cluster contains;
feature f3: the eigenvalue information of the three-dimensional covariance matrix of the cluster data after clustering segmentation. For a three-dimensional point cloud, the covariance matrix H is expressed as:

H = [ cov(x,x), cov(x,y), cov(x,z) ; cov(y,x), cov(y,y), cov(y,z) ; cov(z,x), cov(z,y), cov(z,z) ] (12)

In formula (12), taking cov(x, y) as an example:

cov(x, y) = E{[x - E(x)][y - E(y)]} (13)

In formula (13), taking E(x) as an example, E(x) is the expectation (mean) of x. Assume the cluster data after segmentation contains j points (x_1, y_1, z_1) ... (x_j, y_j, z_j); then for the x coordinates of any such points:

E(x) = (1/j) * (x_1 + x_2 + ... + x_j) (14)

Then all eigenvalues λ_i of the covariance matrix are obtained from formula (15), where E here denotes the identity matrix:

|H - λE| = 0 (15)
feature f4: describes the distance information between the cluster data after clustering segmentation and the unmanned vehicle's laser radar device. Assuming the cluster data after clustering segmentation contains T points, solve the average coordinates of the points in the cluster with formula (16) and use them as the position coordinates of the obstacle cluster:

(x̄, ȳ, z̄) = ( (1/T) * Σ x_i, (1/T) * Σ y_i, (1/T) * Σ z_i ) (16)
feature f5: describes the main-direction trend information of the cluster data after clustering segmentation. Different obstacle types have different point cloud distribution trends. From the maximum and minimum coordinate values along each axis of the cluster data, use an AABB bounding box to obtain the data height, width and length, and determine the main-direction trend of the obstacle by combining information such as the cluster's area, length-to-height ratio and aspect ratio;
feature f6: describes the dual-view shape contour information of the cluster data after clustering segmentation. By analyzing the shape contours of the left view and the front view of the cluster data, the obstacle type represented by the cluster point cloud can be roughly judged.
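The covariance feature f3 can be sketched as follows, following equations (12)-(14); a numeric eigensolver would then supply the eigenvalues of formula (15). This is an illustrative sketch, not the patented implementation.

```python
def covariance_matrix(points):
    """Feature f3 (eqs. (12)-(14)): 3x3 covariance matrix of a cluster's points.
    Its eigenvalues, the solutions of |H - lambda*E| = 0 (eq. (15)), describe
    the spatial spread of the cluster (linear, planar or volumetric)."""
    j = len(points)
    # E(x), E(y), E(z): the per-axis means, eq. (14)
    mean = [sum(p[a] for p in points) / j for a in range(3)]
    # cov(a, b) = E{[a - E(a)][b - E(b)]}, eq. (13)
    return [[sum((p[a] - mean[a]) * (p[b] - mean[b]) for p in points) / j
             for b in range(3)] for a in range(3)]
```

For points spread along a single axis, only the corresponding diagonal entry (and hence one eigenvalue) is non-zero, which is exactly the shape cue the feature exploits.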
2.2.2) obstacle recognition based on the JointBoost classification method:
2.2.2.1) from the six feature types extracted in step 2.2.1), take the five types other than feature f4 as the features required by the JointBoost classifier, and extract these five feature types;
2.2.2.2) select a suitable sample data set and mark the class labels it belongs to in order to train the JointBoost classifier, then classify and recognize the test data with the trained classifier to obtain an initial classification result.
The JointBoost implementation resembles multi-class SVM in converting the problem into several two-class combinations. During training, m pairs of training point cloud feature vectors and their class labels (z_i, c_i) are used: in each training iteration a weak two-class classifier h_m(z_i, c_i) is trained, the features shared among multiple classes across the two-class problems are found by minimizing the loss function, and the weak classifiers are combined according to these shared features into a final strong classifier H_m(z_i, c_i). During testing, the feature vector of a point cloud cluster to be recognized is input, the strong classifier performs the classification, and the probability values of all possible classes of the cluster are output; the class represented by the maximum probability value is the class determined for the input feature vector of the point cloud cluster.
2.2.2.3) because the classification result of the classifier may contain errors, after the class is preliminarily determined, optimize the classification result by combining other feature information contained in the road scene, such as spatial feature information, to obtain the final recognition result of the cluster.
The specific optimization process is that firstly, the Jointboost classifier is counted to obtain a classification probability result set of the cluster as U, and the probabilities are sequenced from big to small to be U in sequence 1 ,U 2 ,U 3 ,…,U n Wherein U is 1 The represented category can be determined to the category of the cluster on a large probability, but the possibility of errors still exists, and secondary discrimination is needed; then, according to the spatial characteristic information of the obstacle in the actual road scene, f is utilized 4 The characteristic is that the classification result is identified by the distance information between the cluster and the unmanned vehicle laser radar equipment.
When the unmanned vehicle runs on the road, some basic information of the road, such as the road width, can be obtained in advance according to the actual situation; the recognition result of the cluster classifier is then secondarily discriminated by combining the road width with the distance information from the cluster to the radar device.
The average distance of the cluster from the origin in the three coordinate directions is obtained through formula (16); according to the experimental data, the X-axis direction is the road extension direction and the Y-axis direction is the road width direction. Let the road width be W; for a certain obstacle cluster, its distance from the origin in the road width direction is the average value ȳ of the y coordinates of all points of the cluster. Comparing the two magnitudes serves as a judgment condition for category information, with the following rule: if ȳ > W/2, the obstacle is already far from the road area, and the probability that it belongs to the vehicle class is low; if ȳ ≤ W/2, the probability that the obstacle cluster belongs to other obstacle types such as buildings is low;
According to the probability set obtained by JointBoost classification and the secondary discrimination rule, the final classification recognition result is determined as follows:
If ȳ ≤ W/2: if the category represented by the maximum probability value is the vehicle obstacle class, the point cloud cluster is a vehicle obstacle; if the category represented by the maximum probability value is the pole/tree/pedestrian obstacle class, the point cloud cluster is a pole, tree, or pedestrian belonging to the road; if the category represented by the maximum probability value is the building obstacle class, the identification may be wrong, so classification is carried out again, and if the result is the same it is taken as the identification result;
If ȳ > W/2: if the category represented by the maximum probability value is the building obstacle class, the cluster is a building obstacle; if the category represented by the maximum probability value is the pole/tree/pedestrian obstacle class, the point cloud cluster is a pole, tree, or pedestrian belonging to the road; if the category represented by the maximum probability value is the vehicle obstacle class, the identification may be wrong, so classification is carried out again, and if the result is the same it is taken as the identification result.
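The secondary discrimination rule above can be sketched as follows; this is a minimal illustration, in which the function name, the class-label strings, and the dict-of-probabilities interface are assumptions rather than the patent's actual implementation:

```python
def secondary_discrimination(probs, y_mean, road_width):
    """probs: class label -> probability from the (JointBoost-style) classifier;
    y_mean: mean y coordinate of the cluster, i.e., its distance from the radar
    origin in the road-width direction; road_width: the known width W."""
    best = max(probs, key=probs.get)
    outside = y_mean > road_width / 2  # cluster lies beyond the road edge
    if outside and best == "vehicle":
        return "recheck"   # a vehicle far off the road is unlikely: reclassify
    if not outside and best == "building":
        return "recheck"   # a building inside the road area is unlikely
    return best
```

Clusters flagged "recheck" would be classified again and the repeated result accepted if it matches, mirroring the rule in the text.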
The obstacle classification recognition results are obtained through steps 2.2.1) and 2.2.2), as shown in fig. 2 c.
Step 3, detecting road boundary and analyzing passable area,
3.1 Designing a road boundary extraction method suitable for vehicle-mounted laser radar data by utilizing data point elevation mutation, screening boundary points by using priori knowledge after extracting the road boundary points, and fitting a road boundary line by adopting a least square method;
3.1.1 ) Counting the Z-axis coordinate values of all points in the laser radar point cloud data, and determining the height threshold G of the road boundary according to the road environment; then, using the idea of a sliding window from image processing, counting the points in the window whose Z coordinate changes abruptly (namely, the change of the Z coordinate is larger than G) and marking them, the marked mutation points being candidate points of the road boundary; regarding the determination of the height threshold G, since in actual scenes the height differences of road curbs, road-surface vehicles, green belts, buildings, etc. vary greatly, it must be judged per road scene; according to experimental experience, the height threshold G of a straight road is about 1 m, and that of a curved road scene is about 0.5 m; according to the elevation distribution and the determined height threshold G, the elevation-mutation regions in the data are located using the sliding window idea, and the mutation points are then determined,
Firstly, carrying out road transverse cutting on laser radar point cloud data according to a road extending direction (X-axis direction), cutting the data into a plurality of sub point cloud data with the width of 0.5m along the X direction, and then taking out one of the data to carry out the following operations:
operation 1: carrying out grid processing on the sub-point cloud segments, dividing the point cloud data into grid units with the length and the width of 0.5m along the Y-axis direction, and recording the grid units in a grid unit sequence F;
operation 2: a grid cell F_i is taken from the grid cell sequence F in order, the maximum elevation H of the grid cell is calculated and compared with the determined height threshold G: if H < G, the grid cell is judged to contain near-ground data and road boundary candidate points are not extracted from it; if H > G, the grid cell may contain boundary candidate points, and the sequence numbers of such grid cells are recorded in order to form a new grid cell sequence K from which candidate points are to be extracted;
operation 3: extracting road boundary candidate points contained in each grid in the grid unit sequence K by adopting a sliding window idea, respectively processing the grids from left to right simultaneously, traversing all points in the whole grid, taking the point with the minimum Y coordinate value of the grid unit in the left advancing direction as the candidate point on the premise of the minimum Z coordinate value, and taking the point with the maximum Y coordinate value of the grid unit in the right advancing direction as the candidate point on the premise of the minimum Z coordinate value;
Operation 4: and selecting another sub-point cloud segment to execute the operation to find the road boundary candidate points until all the sub-point clouds are processed to obtain all the road boundary candidate points.
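Operations 1-4 can be sketched as below; this is a simplified, assumed representation in which points are (x, y, z) tuples and the 0.5 m x-segments and y-cells are folded into a single dictionary key:

```python
from collections import defaultdict

def curb_candidates(points, cell=0.5, G=0.5):
    """Collect road-boundary candidate points: skip near-ground cells
    (max elevation <= G), then take the lowest point at the extreme Y
    of each remaining cell (operations 2 and 3, simplified)."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append((x, y, z))
    left, right = [], []
    for pts in cells.values():
        if max(p[2] for p in pts) <= G:
            continue  # near-ground cell: no boundary candidates extracted
        left.append(min(pts, key=lambda p: (p[1], p[2])))    # min Y, then min Z
        right.append(max(pts, key=lambda p: (p[1], -p[2])))  # max Y, then min Z
    return left, right
```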
3.1.2 Screening candidate points of the road boundary, removing candidate points of the non-road edge area,
extracting effective priori knowledge by using the ground data obtained in the step 1, regarding the extracted ground data as a road surface, obtaining two constraint lines by calculating average Y coordinate values of two edge area points of the road surface along the extending direction of the road as constraint conditions, selecting candidate points of which the left and right distances are respectively within 0.5m, removing other candidate points falling outside the constraint line area, finally obtaining a point sequence of two road boundary points, taking the points in the respective sequences as final road boundary points, and fitting the road boundary lines;
for a curved road scene, a constraint line is obtained, a segmentation mode in ground segmentation in the step 1.3) is adopted, after preliminary ground data of each segment is extracted in the step 1), the approximate width of the segment of data is recorded as a constraint condition of the segment of data, then the mutation point obtained by each sub-point cloud segment in the candidate point extraction stage is screened to obtain a correct road boundary point of the sub-point cloud segment, and finally the correct boundary points of the sub-point cloud segments are combined to obtain a complete scene boundary point;
3.1.3 Road boundary fitting by least square method, fitting the target data by determining a function model, calculating the function model data such that the sum of squares of errors with the target data is minimized, obtaining the parameter value of the function model,
Suppose n target data points (x_1, y_1), (x_2, y_2), …, (x_n, y_n) need to be fitted; the fitted curve model equation is:
y = Ax^2 + Bx + C (17)
where A, B, C are arbitrary real numbers; the sum of squared errors to be minimized is:
S = Σ_{i=1}^{n} [y_i − (Ax_i^2 + Bx_i + C)]^2 (18)
When the error is minimal, A, B, C are the best estimated parameters, so taking partial derivatives of S with respect to the three parameters A, B, C and setting them to zero yields:
∂S/∂A = −2 Σ x_i^2 [y_i − (Ax_i^2 + Bx_i + C)] = 0
∂S/∂B = −2 Σ x_i [y_i − (Ax_i^2 + Bx_i + C)] = 0
∂S/∂C = −2 Σ [y_i − (Ax_i^2 + Bx_i + C)] = 0 (19)
and thus a system of equations for the parameters A, B, C, expressed in matrix form as:
| Σx_i^4  Σx_i^3  Σx_i^2 | |A|   | Σx_i^2 y_i |
| Σx_i^3  Σx_i^2  Σx_i   | |B| = | Σx_i y_i   |
| Σx_i^2  Σx_i    n      | |C|   | Σy_i       |  (20)
solving this matrix equation yields the parameter values A, B, C, whereby the function model is determined and the curve represented by the model is the curve fitted to the target data; to perform the fitting, the X and Y coordinate values of the screened candidate points are used as input, so the road boundary line is fitted and the road boundary detection result is obtained, as shown in fig. 3 a.
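The normal-equation solution of step 3.1.3 can be sketched as follows; this is a self-contained illustration with a small Gaussian-elimination solver, not the patent's code:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = A x^2 + B x + C by forming and solving the
    3x3 normal equations of step 3.1.3 with Gaussian elimination."""
    n = len(xs)
    Sx = lambda k: sum(x ** k for x in xs)             # power sums of x
    Sxy = lambda k: sum((x ** k) * y for x, y in zip(xs, ys))
    M = [[Sx(4), Sx(3), Sx(2)], [Sx(3), Sx(2), Sx(1)], [Sx(2), Sx(1), n]]
    b = [Sxy(2), Sxy(1), Sxy(0)]
    for i in range(3):  # forward elimination with partial pivoting
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= f * M[i][c]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (b[i] - sum(M[i][j] * coef[j] for j in range(i + 1, 3))) / M[i][i]
    return tuple(coef)  # (A, B, C)
```

For curb fitting, xs and ys would be the X and Y coordinates of the screened boundary candidate points.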
3.2 Using the grid to mark the unobstructed area to realize the detection of the passable area in the laser radar road scene;
3.2.1 ) Taking the laser radar point cloud scene after ground data removal as input, and assuming the two boundary lines obtained through road boundary detection are l_1 and l_2, the area between the two curves is the road area; since the laser radar acquires real road-condition information in real time, other vehicles, pedestrians, billboards, etc. on the road are obstacles relative to the radar vehicle, and the unmanned vehicle must avoid them so as not to cause accidents, so the area where an obstacle is located is an area where the radar vehicle cannot travel; the non-road area outside the two road boundary curves l_1 and l_2 is excluded, and only the road area between the boundary lines is considered; the road area data is rasterized, dividing the three-dimensional point cloud scene, without considering elevation, into a series of grid blocks of equal size, selecting regular quadrilaterals with 0.5 m spacing in the X and Y directions for the division;
3.2.2 After the road data is processed by the grids, judging each grid, determining whether the grid is a grid unit containing obstacle data points, dividing the type of the grid unit by counting whether the grid unit contains point cloud data, if the grid contains the point cloud data, judging the grid as an obstacle grid, and if the grid does not contain the point cloud data, judging the grid as a passable area grid;
3.2.3 Judging by adopting an eight-neighborhood marking method, marking and expanding the grid units determined as passable areas,
firstly, setting all grid unit sequences as GridSeq, determining that the grid unit sequence of a passable area is PassSeq, then arbitrarily selecting a grid as a seed unit in all grid unit sequences, and circularly searching eight adjacent grid units of the upper, lower, left, right, upper left, upper right, lower left and lower right of the grid seed; if the searched grid unit meets the condition of the passable area grid, marking the searched grid unit as a passable area tag '1', and adding the grid unit into a passSeq of the passable area grid sequence; if not, then mark "0" and then remove the grid cell from the sequence GridSeq; circularly processing until the grid sequence GridSeq is empty, and initially determining grid units of a passable area in the data at the moment;
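The eight-neighborhood marking of steps 3.2.2-3.2.3 can be sketched as follows, with grid cells represented by (row, column) index pairs (an assumed representation):

```python
def mark_passable(obstacle_cells, all_cells):
    """Label every grid cell '1' (passable, contains no points) or '0'
    (obstacle), expanding from arbitrary seed cells through the eight
    neighbors until the cell sequence is empty (step 3.2.3)."""
    labels = {}
    todo = set(all_cells)  # the sequence GridSeq
    while todo:
        stack = [todo.pop()]  # arbitrary seed cell
        while stack:
            cx, cy = stack.pop()
            labels[(cx, cy)] = "0" if (cx, cy) in obstacle_cells else "1"
            for dx in (-1, 0, 1):   # eight-neighborhood search
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in todo:
                        todo.discard(nb)
                        stack.append(nb)
    return labels
```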
3.2.4 Finding an obstacle on two boundary lines according to the obstacle segmentation realized in the step 2, projecting the obstacle along the extending direction of the road, determining the outline shape of the distribution of the data points of the point cloud of the obstacle,
According to actual scenes in real life, for billboards on both sides of a road and overhanging branches, the main outline shapes are of two types, an inverted-'L' ('Γ') shape or an 'L' shape; if the distribution shape of the obstacle points on the road boundary conforms to these two obstacle types, the obstacle is determined to be a suspended obstacle under which the unmanned vehicle can pass, so the grid cell corresponding to an obstacle determined to be passable is marked green, and that grid cell is then a passable-area grid;
3.2.5 By utilizing the characteristics of the obstacle position information, the structural information and the like obtained by the obstacle segmentation and recognition, and combining the determined passable road area, the prediction and evaluation of the accessibility path of the unmanned vehicle in the road scene are completed,
setting the width of the vehicle of the unmanned vehicle to be 1.7m, and increasing the reserved space by 0.3m on the left and right sides of the unmanned vehicle in order to ensure smooth passing without contacting other obstacles such as vehicles, so that the whole passable width is ensured to be greater than the distance of L=2.3m; on the basis of completing obstacle segmentation, determining the positions of the obstacles on the road surface, calculating the volumes of the obstacles by using AABB bounding boxes of the respective obstacles, obtaining the width of a passable area between the obstacles or between the obstacles and the boundary line by using the two detected boundary lines, and comparing the width with the determined minimum safe passable distance L to obtain whether the width can ensure the safe passing of the unmanned vehicle.
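The passable-width test of step 3.2.5 reduces to a simple comparison; the sketch below assumes obstacle extents along the road-width direction are given as (y_min, y_max) intervals from their AABB bounding boxes:

```python
def gap_between(span_a, span_b):
    """Free width along Y between two obstacle AABB spans (y_min, y_max)."""
    (a0, a1), (b0, b1) = sorted([span_a, span_b])
    return max(0.0, b0 - a1)

def passable(gap_width, vehicle_width=1.7, margin=0.3):
    """Safe passage requires the gap to exceed L = 1.7 + 2 * 0.3 = 2.3 m."""
    return gap_width > vehicle_width + 2 * margin
```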

Claims (1)

1. The road environment element sensing method based on the laser radar is characterized by comprising the following steps of:
step 1, performing downsampling on point cloud data of an original laser radar, acquiring an interested region, obtaining simplified point cloud data, and dividing ground data to obtain a result, wherein the specific process is as follows:
1.1 A downsampling algorithm based on a voxel grid is realized to conduct data simplifying operation;
selecting three side lengths of a proper grid unit according to the density degree of the distribution of the point cloud data, splitting the point cloud data into small voxel grids, counting all points contained in the unit, calculating the mass centers of the points as representative of all points in the whole grid unit, combining the mass center points of all the grid units into new data, completing downsampling,
1.1.1 Reading point cloud data of an original laser radar;
1.1.2 Calculating the maximum point and the minimum point in the point cloud data, namely the upper right point maxPoint and the lower left point minPoint of the point cloud bounding box, so as to calculate the length Lx, the width Ly and the height Lz of the point cloud bounding box;
1.1.3 Setting the length Vx, the width Vy and the height Vz of the voxel grid unit; dividing the point cloud data into voxel grids to obtain m multiplied by n multiplied by s units, wherein m= [ Lx/Vx ], n= [ Ly/Vy ], s= [ Lz/Vz ], and the symbol "[ ]" is a rounded symbol, and the number of the grid units is ensured to be an integer;
1.1.4 ) Randomly selecting any point p, the grid cell indices corresponding to the point p are:
I_index_x = [(p_x − minPoint_x)/Vx], I_index_y = [(p_y − minPoint_y)/Vy], I_index_z = [(p_z − minPoint_z)/Vz] (1)
The one-dimensional grid cell number Index to which the point p belongs is:
Index = I_index_x · 1 + I_index_y · (m+1) + I_index_z · (m+1) · (n+1) (2)
1.1.5 ) Traversing all points of the original point cloud and inserting points with the same grid cell number Index into the same Index_vector vector;
1.1.6 ) Calculating the centroid G(x, y, z) of the L points in each grid cell with the same number as the representative point to realize downsampling, with the calculation formula:
x = (1/L) Σ_{i=1}^{L} x_i, y = (1/L) Σ_{i=1}^{L} y_i, z = (1/L) Σ_{i=1}^{L} z_i (3)
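Steps 1.1.1)-1.1.6) amount to bucketing points by voxel index and replacing each voxel's points with their centroid; a minimal sketch, assuming a list-of-tuples point representation:

```python
from collections import defaultdict

def voxel_downsample(points, vx=0.5, vy=0.5, vz=0.5):
    """Voxel-grid downsampling: bucket points by voxel index relative to
    the bounding-box minimum (minPoint), then emit each voxel's centroid."""
    xs, ys, zs = zip(*points)
    x0, y0, z0 = min(xs), min(ys), min(zs)  # lower-left point minPoint
    buckets = defaultdict(list)
    for p in points:
        idx = (int((p[0] - x0) // vx), int((p[1] - y0) // vy),
               int((p[2] - z0) // vz))
        buckets[idx].append(p)
    out = []
    for pts in buckets.values():
        L = len(pts)
        out.append(tuple(sum(c[i] for c in pts) / L for i in range(3)))  # centroid G
    return out
```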
1.2 Eliminating discrete points and remote dead points, acquiring a region of interest for subsequent processing,
The origin of the point cloud data obtained by the vehicle-mounted laser radar is the position of the laser radar device; the data takes the forward direction of the vehicle as the positive X-axis direction, the right side of the vehicle as the positive Y-axis direction, and upward as the positive Z-axis direction; the density of acquired points on the two sides of the vehicle's travel direction, i.e., the Y-axis direction, is counted and compared with a set threshold, thereby deciding whether points belong to the region of interest; the specific process is as follows:
1.2.1 Projecting the down-sampled point cloud data to a yoz plane;
1.2.2 ) Obtaining the minimum value Y_min and the maximum value Y_max of the projected data in the Y-axis direction, and dividing the data in this region into M segments;
1.2.3 ) Counting the point density D_i of each data segment and drawing the density curve of the whole point cloud data;
1.2.4 ) Calculating the ratio r_i of each segment's density to the total density according to formula (4) and comparing it with a set threshold ε: judging from the leftmost segment, if r_i < ε the segment is considered to have low density and little information, and the segment is deleted; the next segment is then compared, and when r_i ≥ ε the judgment stops; judging likewise from the rightmost end, segments are deleted while r_i < ε, and the judgment ends when r_i ≥ ε,
1.2.5 For the forward direction of the vehicle, selecting the front and rear 30m of the distance laser radar equipment as an interested region in the X-axis direction, and performing region division by using a straight-through filtering method to obtain the downsampled interested region as scene data after final reduction processing;
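The density screening of step 1.2.4) can be sketched as follows, assuming the per-segment densities D_i have already been counted; r_i is each segment's share of the total density, compared against the threshold:

```python
def trim_low_density(densities, eps):
    """Drop leading and trailing Y-segments whose share of the total point
    density is below the threshold eps (formula (4): r_i = D_i / sum D)."""
    total = float(sum(densities))
    r = [d / total for d in densities]
    lo = 0
    while lo < len(r) and r[lo] < eps:   # judge from the leftmost end
        lo += 1
    hi = len(r)
    while hi > lo and r[hi - 1] < eps:   # judge from the rightmost end
        hi -= 1
    return lo, hi  # keep segments in the half-open range [lo, hi)
```

The returned (lo, hi) pair gives the range of Y-segments retained as the region of interest.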
1.3 Dividing the simplified data obtained in the steps 1.1) and 1.2) into ground data, carrying out segmentation calibration processing on the data on the basis of a RANSAC plane detection algorithm, then realizing ground division so as to achieve the universality of extracting the ground from complex scenes,
1.3.1 Calculating a minimum value min_x and a maximum value max_x along the X coordinate axis of the vehicle, dividing the point cloud data subjected to the simplification processing into N sections of sub point cloud data along the X coordinate axis direction by a length of 1m, wherein N=ROUNDUP (max_x-min_x), and ROUNDUP (X) is an upward rounding function;
1.3.2 For each segment of sub-point cloud data, detecting the ground part in the segment of sub-point cloud data by adopting a RANSAC plane detection algorithm, and obtaining a general equation of a fitting plane, wherein the general equation is as follows:
Ax + By + Cz + D = 0, where A, B, C, D are known constants and A, B, C are not all zero; the normal vector V = (A, B, C) of the fitting plane is obtained from this equation; in addition, when the laser radar collects data, the Z axis of the radar coordinate system is kept vertical, with the vertical vector Q = (0, 0, 1); the calibration matrix between the normal vector V of the fitted ground plane and the vertical vector Q, called the rotation matrix T, is calculated, and the obtained rotation matrix T is used to calibrate the sub-point cloud data; the specific calibration operation is as follows:
given that the vector before calibration is V and the vector after calibration is Q, then:
V·Q=|V||Q|cosθ (5)
The angle between V and Q is:
θ = arccos( V·Q / (|V||Q|) ) (6)
Assuming the vector before calibration is V = (a1, a2, a3) and the vector after calibration is Q = (b1, b2, b3), the rotation axis C = (c1, c2, c3) is obtained from the cross-product definition of vectors:
C = V × Q = (a2 b3 − a3 b2, a3 b1 − a1 b3, a1 b2 − a2 b1) (7)
Let the unit vector corresponding to the rotation axis C be k = (k_x, k_y, k_z); using the rotation angle θ and the unit vector k, by the Rodrigues rotation formula, the vector w obtained by rotating an arbitrary vector u in space about the unit vector k by the angle θ is:
w = u cosθ + k (k·u)(1 − cosθ) + (k × u) sinθ (8)
thereby yielding the corresponding rotation matrix T:
T = I cosθ + (1 − cosθ) k k^T + sinθ [k]_× (9)
where I is the 3×3 identity matrix and [k]_× is the skew-symmetric cross-product matrix of k;
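Formulas (5)-(9) can be sketched as follows; the function builds the rotation matrix T that aligns the fitted ground normal V with the vertical vector Q via the Rodrigues formula (pure-Python, list-of-lists matrices for illustration only):

```python
import math

def rodrigues_matrix(v, q):
    """Rotation matrix T = I cos(t) + (1 - cos(t)) k k^T + sin(t) [k]x
    rotating direction v onto direction q about axis k = (v x q)/|v x q|."""
    dot = sum(a * b for a, b in zip(v, q))
    nv = math.sqrt(sum(a * a for a in v))
    nq = math.sqrt(sum(a * a for a in q))
    cos_t = dot / (nv * nq)
    # rotation axis C = v x q, normalized to the unit vector k
    c = (v[1]*q[2] - v[2]*q[1], v[2]*q[0] - v[0]*q[2], v[0]*q[1] - v[1]*q[0])
    nc = math.sqrt(sum(a * a for a in c)) or 1.0  # parallel vectors: identity
    k = tuple(a / nc for a in c)
    sin_t = math.sqrt(sum(a * a for a in c)) / (nv * nq)
    K = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]  # [k]x
    return [[cos_t * (i == j) + (1 - cos_t) * k[i] * k[j] + sin_t * K[i][j]
             for j in range(3)] for i in range(3)]

def apply(T, u):
    """Rotate vector u by matrix T."""
    return tuple(sum(T[i][j] * u[j] for j in range(3)) for i in range(3))
```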
1.3.3 The ground segmentation of the sub-point cloud data is realized based on a RANSAC plane detection algorithm, and the specific steps are as follows:
1.3.3.1 Inputting calibrated sub-point cloud data CloudPoint, giving a plane model and setting a distance threshold t;
1.3.3.2 Randomly sampling three points in the point cloud data, so that the three points are not on the same straight line, and solving a plane model equation formed by the three random sample points;
1.3.3.3 ) Calculating the distance dis between the other points of the point cloud data and the plane, and comparing it with the set distance threshold t to obtain the inlier point set IncloudSet of the plane model;
1.3.3.4 Counting the number of the local point set IncloudSet of the plane model, comparing with the current maximum number, and reserving the maximum number in the two;
1.3.3.5 Re-estimating a new plane model by using all the local point sets with the largest reserved number, and outputting the local points as the calculated ground points;
1.3.3.6 ) Resampling and repeating the above steps; when the number of iterative calculations reaches the maximum iteration count, the iteration loop is exited, and the optimal plane model and the inlier ground points with the largest number of points output during the iterations are selected; the obtained ground points are the ground data detected from the point cloud;
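Steps 1.3.3.1)-1.3.3.6) can be sketched as a plain RANSAC loop; this is an illustrative implementation, with the parameter names and the fixed random seed being assumptions:

```python
import random

def ransac_plane(points, t=0.05, iters=200, seed=0):
    """RANSAC plane detection: repeatedly fit a plane through 3 random
    non-collinear points and keep the model with the most inliers
    (points within distance t of the plane)."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        u = tuple(b - a for a, b in zip(p1, p2))
        v = tuple(b - a for a, b in zip(p1, p3))
        n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
        norm = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
        if norm < 1e-12:
            continue  # the three samples are collinear: resample
        d = -sum(a * b for a, b in zip(n, p1))
        inliers = [p for p in points
                   if abs(sum(a * b for a, b in zip(n, p)) + d) / norm <= t]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```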
1.3.4 Combining the ground data detected by each segment of sub-point cloud data to obtain final ground data, and realizing the segmentation of ground and non-ground data;
Step 2, implementing obstacle segmentation and obstacle recognition,
2.1 Adopting DBSCAN clustering algorithm based on voxelization to realize barrier segmentation of non-ground data;
2.1.1 ) Dividing voxel grid cells with the non-ground laser radar point cloud scene data as input, setting the side length d of a grid cell to 15 cm; the maximum and minimum values of the point cloud data along the three coordinate axes X, Y, Z are obtained by calculation, and the number of layers m, n, s of the divided grid cells along each coordinate axis is calculated using the set side length d; taking the X coordinate axis as an example, with min_x the minimum X coordinate and max_x the maximum X coordinate:
m = ROUNDUP((max_x − min_x)/d) (10)
the Y axis and the Z axis are consistent with the X axis processing mode, and the voxel grid unit division with the unit side length d of the scene data can be obtained through the steps;
2.1.2 Counting the number of points contained in all grid units, recording all non-empty grid units, reasonably marking the grid units, ensuring that the overall structure of the grid units is the same as the position information of the original data, and determining that the grid units in the neighborhood around a certain grid unit are the same as the original data;
judging each non-empty grid unit according to a set parameter threshold value MinPts, and if the number of points contained in the grid unit is greater than MinPts, setting the grid unit as a core unit and setting the contained points as core points; if the number of points contained in the grid unit is smaller than MinPts, secondary judgment is needed;
Performing epsilon-neighborhood search on the points in the grid unit, judging whether neighborhood points in the core unit exist in the neighborhood, if so, dividing the grid unit into calculation after the core unit participates, and judging other points no longer; if the neighborhood point does not exist, selecting other points in the grid unit to judge, and directly judging the point in the grid unit as a noise point until all the points still do not exist in other core units after judging;
2.1.3 ) Calculating, as in 1.1.6), the centroid Q of the R points contained within each non-noise grid cell:
Q = (1/R) Σ_{i=1}^{R} P_i (11)
2.1.4 ) Defining the calculated centroid point set as the core point set S and randomly selecting any core point P from the set; if the ε-radius neighborhood of point P contains no fewer than MinPts points, P is taken as a core point, otherwise as a noise point; the other centroid points contained in the ε-neighborhood of P are computed in turn and added to a candidate set E; a cluster is created, P and the other centroid points in its neighborhood are taken as members of the same cluster, the grid cells represented by P and the points they contain are taken as the same cluster, and all points of the cluster are labeled to distinguish it from other clusters; a centroid point is then selected from the candidate set E, the centroid points contained in its ε-neighborhood are likewise computed and assigned to the cluster, until no element remains in the candidate set E, preliminarily completing this cluster; then, the core point objects already processed are removed from the core point set S, and a point is arbitrarily selected from the remainder to continue the operation until the core point set S is empty;
2.1.5 ) For the search of points contained in the ε-neighborhood in step 2.1.4), constructing a KD-Tree to search for the nearest neighbor points:
2.1.5.1 Judging whether the input data set is empty, if so, returning to the empty KD-Tree, otherwise, executing the following operation;
2.1.5.2 Node generation, namely, determination of split domain and determination of Node-data; firstly, counting the data variance of all the descriptor data on each dimension, selecting a dimension domain corresponding to the largest one of the values as the value of the split domain, and then sequencing the data set according to the determined value of the split domain, wherein the Node-data is the point in the middle; dividing a left subspace and a right subspace according to the determined Node-data; recursively constructing KD-Tree;
2.1.6 Selecting a smaller cluster F in cluster aggregation, wherein the cluster with the set point number smaller than 100 is the smaller cluster, analyzing epsilon-neighborhood search of all points in the cluster, judging whether points marked as label of other clusters exist in the neighborhood of the point, merging the cluster with the detected cluster if the points exist, and independently dividing the cluster into one cluster if all the points do not exist until all the smaller clusters are judged, and then storing the points marked as label of the cluster; finally, traversing all the points, outputting the same cluster according to label, and configuring different colors to distinguish;
The segmentation result of the non-ground obstacle is obtained, and the segmentation result is reflected by independent clustering clusters with different colors;
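The clustering of step 2.1 can be illustrated with a plain DBSCAN over point tuples; this sketch omits the voxel-centroid stage and uses a linear neighbor scan in place of the KD-Tree of step 2.1.5:

```python
import math

def dbscan(points, eps=1.0, min_pts=3):
    """DBSCAN over 2-D or 3-D point tuples: core points have at least
    min_pts neighbors (self included) within eps; clusters grow from
    core points; unreachable points are labeled -1 (noise)."""
    n = len(points)
    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]
    labels = [None] * n  # None = unvisited, -1 = noise, else cluster id
    cid = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        labels[i] = cid
        seeds = [j for j in nb if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid  # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:  # j is itself a core point: expand
                seeds.extend(k for k in nb_j if labels[k] is None)
        cid += 1
    return labels
```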
2.2 A multi-feature based obstacle classification identification,
2.2.1 Acquiring the following 6 types of characteristics of the cluster after cluster segmentation obtained by the step 2.1):
f 1 is characterized by the length, width and height information of the bounding box;
f 2 is characterized by the data amount of the data containing points;
f_3 feature: the eigenvalue information of the three-dimensional covariance matrix of the cluster data after cluster segmentation; for a three-dimensional point cloud, the covariance matrix H is expressed as follows:
    | cov(x,x)  cov(x,y)  cov(x,z) |
H = | cov(y,x)  cov(y,y)  cov(y,z) |  (12)
    | cov(z,x)  cov(z,y)  cov(z,z) |
in the formula (12), cov (x, y) is taken as an example:
cov(x,y)=E{[x-E(x)][y-E(y)]} (13)
In equation (13), taking E(x) as an example, E(x) is the expectation (mean) of x; assuming the cluster data after segmentation contains j points (x_1, y_1, z_1), …, (x_j, y_j, z_j), then for any point (x_i, y_i, z_i):
E(x) = (1/j) Σ_{i=1}^{j} x_i (14)
Then all eigenvalues λ_i of the covariance matrix are obtained according to formula (15):
|H-λE|=0 (15)
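Formulas (12)-(14) can be sketched as follows; the f_3 eigenvalues would then be obtained from |H − λE| = 0 (formula (15)), e.g. with a numerical eigenvalue routine:

```python
def covariance_matrix(points):
    """Covariance matrix H of a 3-D cluster (formulas (12)-(14)):
    cov(a, b) = E[(a - E(a)) (b - E(b))], with E the mean over j points."""
    j = len(points)
    mean = [sum(p[k] for p in points) / j for k in range(3)]
    return [[sum((p[a] - mean[a]) * (p[b] - mean[b]) for p in points) / j
             for b in range(3)] for a in range(3)]
```

When H is diagonal (points spread along one axis only), its eigenvalues are simply the diagonal entries.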
f_4 feature: describes the distance information between the cluster data after cluster segmentation and the unmanned vehicle's laser radar device; assuming the segmented cluster contains T points, the average coordinate of the points in the cluster is calculated using formula (16) and taken as the position coordinate of the obstacle cluster;
f_5 feature: describes the main-direction trend information of the cluster data after cluster segmentation; the distribution trends of the point cloud data differ for different obstacle types; from the maximum and minimum coordinate values along each axis of the cluster data, and using the AABB bounding box, the height, width, and length of the data are obtained, and the main-direction trend of the obstacle is determined by combining information such as the area, length-to-height ratio, and aspect ratio of the cluster data;
f_6 feature: describes the dual-view shape outline information of the cluster data after cluster segmentation; by analyzing the shape outline information of the left view and the front view of the cluster data, a fuzzy judgment can be made on the obstacle type represented by the clustered point cloud data;
2.2.2 Obstacle recognition based on the JointBoost classification method,
2.2.2.1 ) Taking, of the six types of features extracted in step 2.2.1), all five types of features other than the f_4 feature as the features required by the JointBoost classifier, and extracting these five types of features;
2.2.2.2 Selecting a proper sample data set, marking the class labels to which the sample data set belong to perform training of a Jointboost classifier, classifying and identifying the test data through the trained classifier to obtain an initial classification result,
The JointBoost implementation is similar to SVM multi-classification in that the problem is converted into a combination of multiple binary classifications. During training, m pairs of feature vectors of the training point clouds and their corresponding class labels (z_i, c_i) are used; in each training iteration, a weak classifier h_m(z_i, c_i) is trained for each binary classification, the features shared by multiple categories across the binary classifications are found by minimizing the loss function, and the weak classifiers are combined according to these shared features into a final strong classifier H(z_i, c_i). During testing, the feature vector of a point cloud cluster to be identified is input and classified by the strong classifier, which outputs the probability values of all possible categories of the cluster; the category represented by the maximum probability value is the category determined for the input feature vector of the point cloud cluster;
2.2.2.3 ) Because the classification result of the classifier may contain errors, after the class is preliminarily determined, the result obtained by the classifier is optimized by combining other characteristic information contained in the road scene, such as spatial characteristic information, to obtain the final recognition result of the cluster,
The specific optimization process is as follows: first, the classification probability result set of the cluster obtained by the JointBoost classifier is denoted U, and its probabilities are sorted from largest to smallest as U_1, U_2, U_3, …, U_n, where the category represented by U_1 determines the category of the cluster with high probability, but the possibility of error still exists and secondary discrimination is needed; then, according to the spatial characteristic information of obstacles in the actual road scene, the f_4 feature, i.e., the distance information between the cluster and the unmanned vehicle's laser radar device, is used to discriminate the classification result;
When the unmanned vehicle runs on the road, basic information about the road, such as the road width, is obtained in advance according to the actual situation, and the recognition result of the cluster classifier is secondarily discriminated by combining the road width with the distance information from the cluster to the radar device;
The average distance of the cluster from the origin in the three coordinate directions is obtained through formula (16); according to the experimental data, the X-axis direction is the road extension direction and the Y-axis direction is the road width direction. Let the road width be W; for a certain obstacle cluster, its distance from the origin in the road width direction is the average value ȳ of the y coordinates of all points of the cluster. Comparing the two magnitudes serves as a judgment condition for category information, with the following rule: if ȳ > W/2, the obstacle is already far from the road area, and the probability that it belongs to the vehicle class is low; if ȳ ≤ W/2, the probability that the obstacle cluster belongs to other obstacle types such as buildings is low;
according to the probability set obtained by the JointBoost classification and the secondary discrimination rule, the final classification recognition result is determined as follows:
if the mean y-direction distance ȳ of the cluster satisfies ȳ ≤ W/2: if the category represented by the maximum probability value is the vehicle class, the point cloud cluster is a vehicle obstacle; if it is the pole/tree/pedestrian class, the point cloud cluster is a pole, tree or pedestrian on the road; if it is the building class, the recognition may be wrong, so the classification is carried out again, and the result is accepted only if the two results are the same;
if ȳ > W/2: if the category represented by the maximum probability value is the building class, the cluster is a building obstacle; if it is the pole/tree/pedestrian class, the point cloud cluster is a pole, tree or pedestrian beside the road; if it is the vehicle class, the recognition may be wrong, so the classification is carried out again, and the result is accepted only if the two results are the same;
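The whole secondary-discrimination decision can be sketched as below (names and the "return None to trigger re-classification" convention are our assumptions; the patent itself repeats the classification when the top category is suspect):

```python
def final_category(sorted_probs, mean_abs_y, road_width):
    """sorted_probs: list of (label, probability) pairs from the
    JointBoost classifier, sorted descending (U1 first).
    mean_abs_y: mean |y| of the cluster; road_width: W."""
    top_label, _ = sorted_probs[0]
    off_road = mean_abs_y > road_width / 2.0
    if off_road:
        suspect = "vehicle"    # off the road surface: vehicles suspect
    else:
        suspect = "building"   # on the road surface: buildings suspect
    if top_label == suspect:
        return None            # possibly wrong: re-classify and compare
    return top_label

probs = [("vehicle", 0.7), ("pedestrian", 0.2), ("building", 0.1)]
print(final_category(probs, 1.2, 7.0))  # on-road: "vehicle" accepted
print(final_category(probs, 6.0, 7.0))  # off-road: vehicle suspect, None
```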
The obstacle classification and recognition results are obtained through steps 2.2.1) and 2.2.2);
Step 3, road boundary detection and passable area analysis,
3.1) A road boundary extraction method suited to vehicle-mounted laser radar data is designed using elevation mutations of the data points; after the road boundary points are extracted, prior knowledge is used to screen them, and the least squares method is adopted to fit the road boundary line,
3.1.1) The Z-axis coordinate values of all points in the laser radar point cloud data are counted, and a height threshold G of the road boundary is determined according to the road environment; then, using the sliding-window idea from image processing, the points whose Z coordinates change abruptly within the window are counted and marked, and these marked mutation points are the road boundary candidate points; according to the elevation distribution and the determined height threshold G, the elevation-mutation regions in the data are located with the sliding window, and the mutation points are then determined,
First, the laser radar point cloud data is cross-cut according to the road extension direction, cutting the data along the X direction into several sub point clouds of width 0.5 m; then one of them is taken out and the following operations are performed:
Operation 1: the sub point cloud segment is rasterized, dividing the point cloud data along the Y-axis direction into grid cells 0.5 m in length and width, recorded in a grid cell sequence F;
Operation 2: a cell Fi is taken from the grid cell sequence F in order, and the maximum elevation H of the cell is calculated and compared with the determined height threshold G: if H < G, the cell is judged to contain only near-ground data, and no road boundary candidate points are extracted from it; if H > G, the cell may contain boundary candidate points, and its sequence number is recorded in order, forming a new grid cell sequence K from which candidate points are to be extracted;
Operation 3: the road boundary candidate points contained in each cell of the sequence K are extracted with the sliding-window idea, processing the cells toward the left and the right simultaneously and traversing all points in each cell: in the leftward direction, the point of the cell with the minimum Y coordinate, subject to having the minimum Z coordinate, is taken as a candidate point; in the rightward direction, the point with the maximum Y coordinate, subject to having the minimum Z coordinate, is taken as a candidate point;
Operation 4: another sub point cloud segment is selected and the above operations are executed to find its road boundary candidate points, until all sub point clouds have been processed and all road boundary candidate points are obtained;
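Operations 1–4 can be sketched as follows (a minimal illustration under assumed values: the threshold G = 0.15 m is hypothetical, and the per-cell extreme-Y selection is simplified to taking the lowest point of each above-threshold cell):

```python
import numpy as np

def boundary_candidates(points, slice_w=0.5, cell_w=0.5, G=0.15):
    """Cut the cloud into X slices, grid each slice along Y, keep cells
    whose maximum elevation exceeds the height threshold G, and take the
    lowest point of each qualifying cell as a road-boundary candidate."""
    cands = []
    x0 = points[:, 0].min()
    n_slices = int(np.ceil((points[:, 0].max() - x0) / slice_w)) or 1
    for s in range(n_slices):                      # 0.5 m road cross-cuts
        seg = points[(points[:, 0] >= x0 + s * slice_w) &
                     (points[:, 0] <  x0 + (s + 1) * slice_w)]
        if seg.size == 0:
            continue
        y0 = seg[:, 1].min()
        n_cells = int(np.ceil((seg[:, 1].max() - y0) / cell_w)) or 1
        for c in range(n_cells):                   # 0.5 m cells along Y
            cell = seg[(seg[:, 1] >= y0 + c * cell_w) &
                       (seg[:, 1] <  y0 + (c + 1) * cell_w)]
            if cell.size == 0 or cell[:, 2].max() < G:
                continue                           # near-ground cell: skip
            # lowest point of an above-threshold cell ~ curb foot
            cands.append(cell[np.argmin(cell[:, 2])])
    return np.array(cands)
```

On a toy cloud with one flat ground point and a 0.25 m curb, only the curb cell yields a candidate.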
3.1.2) The road boundary candidate points are screened to remove candidates from non-road-edge areas,
effective prior knowledge is extracted using the ground data obtained in step 1: the extracted ground data is regarded as the road surface, and two constraint lines are obtained by computing the average Y coordinate of the points in each of the two edge areas of the road surface along the road extension direction; with these as constraint conditions, the candidate points within 0.5 m of the left or right constraint line are selected and the other candidates falling outside the constraint-line region are removed, finally yielding two sequences of road boundary points; the points in the respective sequences are taken as the final road boundary points for fitting the road boundary lines;
for a curved road scene, the constraint lines are obtained with the segmentation scheme of the ground segmentation in step 1.3): after the preliminary ground data of each segment is extracted in step 1, the approximate width of that segment of data is recorded as its constraint condition; the mutation points obtained by each sub point cloud segment in the candidate extraction stage are then screened to obtain the correct road boundary points of that segment, and finally the correct boundary points of all sub point cloud segments are merged to obtain the complete scene boundary points;
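The screening step can be sketched as below (names and the 0.5 m tolerance follow the text; estimating the constraint lines from the minimum and maximum ground Y is our crude stand-in for the averaged edge-area Y coordinates):

```python
import numpy as np

def screen_candidates(candidates, ground_points, tol=0.5):
    """Keep only candidates within `tol` metres of the left or right
    constraint line derived from the extracted ground (road) surface."""
    y = ground_points[:, 1]
    left_line, right_line = y.min(), y.max()  # crude edge-line estimate
    left, right = [], []
    for p in candidates:
        if abs(p[1] - left_line) <= tol:
            left.append(p)                    # left boundary sequence
        elif abs(p[1] - right_line) <= tol:
            right.append(p)                   # right boundary sequence
        # otherwise: non-road-edge candidate, discarded
    return np.array(left), np.array(right)
```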
3.1.3) Road boundary fitting by the least squares method: a function model is determined and its parameters are calculated so that the sum of squared errors between the model and the target data is minimized,
assume the n target data points to be fitted are (x1, y1), (x2, y2), …, (xn, yn); the fitted curve model equation is:
y = Ax² + Bx + C (17)
where A, B, C are arbitrary real numbers, and the sum of squared errors to be minimized is:
E = Σᵢ (Axᵢ² + Bxᵢ + C − yᵢ)², i = 1, …, n (18)
when the error is minimal, A, B, C are the best estimated parameters, so setting the partial derivatives of E with respect to the three parameters A, B, C to zero yields the formulas:
∂E/∂A = 2 Σᵢ (Axᵢ² + Bxᵢ + C − yᵢ) xᵢ² = 0
∂E/∂B = 2 Σᵢ (Axᵢ² + Bxᵢ + C − yᵢ) xᵢ = 0
∂E/∂C = 2 Σᵢ (Axᵢ² + Bxᵢ + C − yᵢ) = 0 (19)
and thus a system of equations for the parameters A, B, C, expressed in matrix form as:
[ Σxᵢ⁴  Σxᵢ³  Σxᵢ² ] [A]   [ Σxᵢ²yᵢ ]
[ Σxᵢ³  Σxᵢ²  Σxᵢ  ] [B] = [ Σxᵢyᵢ  ]
[ Σxᵢ²  Σxᵢ   n    ] [C]   [ Σyᵢ    ] (20)
solving this matrix equation gives the parameter values A, B, C, which determine the function model; the curve represented by the model is the curve fitted to the target data; to realize the fitting, the X and Y coordinate values of the screened candidate points are taken as input, and the road boundary line is fitted to obtain the road boundary detection result;
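The fit of formula (17) can be computed directly as a linear least-squares problem (a sketch; `fit_boundary` is our name, and `np.linalg.lstsq` solves the same normal equations numerically, equivalent to `np.polyfit(x, y, 2)`):

```python
import numpy as np

def fit_boundary(x, y):
    """Least-squares fit of y = A x^2 + B x + C to the candidate points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    M = np.vstack([x**2, x, np.ones_like(x)]).T   # design matrix
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    return A, B, C

# An exact parabola y = 2x^2 - x + 3 is recovered:
A, B, C = fit_boundary([0, 1, 2, 3], [3, 4, 9, 18])
print(round(A, 6), round(B, 6), round(C, 6))  # → 2.0 -1.0 3.0
```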
3.2) The grid is used to mark obstacle-free areas to realize detection of the passable area in the laser radar road scene,
3.2.1) The laser radar point cloud scene with the ground data removed is taken as input; assume the two boundary lines obtained by road boundary detection are l1 and l2, then the area between the two curves is the road area; the two road boundary curves l1 and l2 cut off the outer non-road area, and only the road area between the boundary lines is considered; the road area data is rasterized: without considering elevation, the three-dimensional point cloud scene is divided into a series of equally sized grid blocks, chosen as regular quadrilaterals with 0.5 m spacing in both the X and Y directions;
3.2.2) After the road data has been rasterized, each grid cell is judged to determine whether it contains obstacle data points; the cell type is decided by counting whether the cell contains point cloud data: if it does, the cell is judged an obstacle cell; if not, it is judged a passable-area cell;
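The rasterization and cell classification of 3.2.1)–3.2.2) can be sketched as follows (names are ours; a cell is marked `True` when it contains at least one point, i.e. it is an obstacle cell):

```python
import numpy as np

def occupancy_grid(points, x_range, y_range, cell=0.5):
    """Rasterise the road area into 0.5 m cells, ignoring elevation.
    Returns a boolean grid: True = obstacle cell, False = passable cell."""
    nx = int(np.ceil((x_range[1] - x_range[0]) / cell))
    ny = int(np.ceil((y_range[1] - y_range[0]) / cell))
    grid = np.zeros((nx, ny), dtype=bool)
    for p in points:
        i = int((p[0] - x_range[0]) // cell)
        j = int((p[1] - y_range[0]) // cell)
        if 0 <= i < nx and 0 <= j < ny:  # ignore points outside the road area
            grid[i, j] = True
    return grid
```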
3.2.3) An eight-neighborhood marking method is adopted to judge, mark and expand the grid cells determined to be passable areas,
first, the sequence of all grid cells is denoted GridSeq and the sequence of passable-area grid cells PassSeq; a grid cell is then selected arbitrarily from GridSeq as a seed cell, and its eight neighboring cells (up, down, left, right, upper left, upper right, lower left, lower right) are searched cyclically; if a searched cell satisfies the passable-area condition, it is marked with the passable-area tag "1" and added to the passable-area sequence PassSeq; otherwise it is marked "0" and removed from the sequence GridSeq; this is repeated until GridSeq is empty, at which point the passable-area grid cells in the data are preliminarily determined;
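The eight-neighborhood expansion can be sketched as a flood fill (a minimal illustration; `grid` is a 2D list of booleans with `True` = obstacle, and the "1"/"0" tags become integer labels):

```python
from collections import deque

def grow_passable(grid):
    """Eight-neighbourhood marking: every empty cell reachable from a
    seed through the 8-connected neighbourhood gets passable tag 1;
    obstacle cells keep tag 0."""
    nx, ny = len(grid), len(grid[0])
    label = [[0] * ny for _ in range(nx)]
    seen = [[False] * ny for _ in range(nx)]
    for si in range(nx):
        for sj in range(ny):
            if grid[si][sj] or seen[si][sj]:
                continue
            q = deque([(si, sj)])            # seed cell
            seen[si][sj] = True
            while q:
                i, j = q.popleft()
                label[i][j] = 1              # passable-area tag "1"
                for di in (-1, 0, 1):        # eight neighbours
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < nx and 0 <= nj < ny
                                and not seen[ni][nj] and not grid[ni][nj]):
                            seen[ni][nj] = True
                            q.append((ni, nj))
    return label
```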
3.2.4) According to the obstacle segmentation realized in step 2, the obstacles lying on the two boundary lines are found and projected along the road extension direction to determine the outline shape of the obstacle point cloud data distribution,
the grid cells corresponding to regions determined to be passable are changed to green, indicating passable-area grid cells;
3.2.5) Using the position information and structural features of the obstacles obtained from obstacle segmentation and recognition, combined with the determined passable area of the road, the prediction and evaluation of reachable paths for the unmanned vehicle in the road scene are completed,
the vehicle width of the unmanned vehicle is set to 1.7 m, and a reserved space of 0.3 m is added on each side, so the overall passable width must be greater than L = 2.3 m; on the basis of the completed obstacle segmentation, the positions of the obstacles on the road surface are determined and their volumes calculated from their respective AABB bounding boxes; using the two detected boundary lines, the width of the passable area between obstacles, or between an obstacle and a boundary line, is obtained and compared with the minimum safe passable distance L to determine whether that width guarantees safe passage of the unmanned vehicle.
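The width check can be sketched as below (a simplified illustration assuming non-overlapping obstacles described by their AABB extents across the road width; names and the gap-pairing scheme are ours):

```python
def safe_gaps(obstacle_y_extents, left_line, right_line, L=2.3):
    """obstacle_y_extents: list of (y_min, y_max) AABB extents across
    the road width. Returns each gap between the boundary lines and
    obstacles as (width, width > L), where L = 1.7 m vehicle width
    plus a 0.3 m margin on each side."""
    edges = [left_line]
    for y0, y1 in sorted(obstacle_y_extents):
        edges += [y0, y1]
    edges.append(right_line)
    gaps = []
    for k in range(0, len(edges) - 1, 2):   # boundary/obstacle gap pairs
        width = edges[k + 1] - edges[k]
        gaps.append((width, width > L))
    return gaps

# one obstacle spanning y in [-1.0, 0.5] on a road from -3.5 to 3.5:
print(safe_gaps([(-1.0, 0.5)], -3.5, 3.5))  # → [(2.5, True), (3.0, True)]
```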

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010674121.XA CN111985322B (en) 2020-07-14 2020-07-14 Road environment element sensing method based on laser radar


Publications (2)

Publication Number Publication Date
CN111985322A CN111985322A (en) 2020-11-24
CN111985322B true CN111985322B (en) 2024-02-06

Family

ID=73439427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010674121.XA Active CN111985322B (en) 2020-07-14 2020-07-14 Road environment element sensing method based on laser radar

Country Status (1)

Country Link
CN (1) CN111985322B (en)

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113758484A (en) * 2020-11-30 2021-12-07 北京京东乾石科技有限公司 Path planning method and device
CN112526993B (en) * 2020-11-30 2023-08-08 广州视源电子科技股份有限公司 Grid map updating method, device, robot and storage medium
CN112419505B (en) * 2020-12-07 2023-11-10 苏州工业园区测绘地理信息有限公司 Automatic extraction method for vehicle-mounted point cloud road shaft by combining semantic rules and model matching
CN112699734B (en) * 2020-12-11 2024-04-16 深圳银星智能集团股份有限公司 Threshold detection method, mobile robot and storage medium
CN112634310A (en) * 2020-12-14 2021-04-09 深兰人工智能(深圳)有限公司 Road boundary extraction method, device, electronic equipment and storage medium
CN112560747A (en) * 2020-12-23 2021-03-26 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud data-based lane boundary interactive extraction method
CN112629548B (en) * 2020-12-28 2023-01-24 北京航空航天大学 Three-dimensional grid map creating and updating method based on roadside laser radar
CN114556442A (en) * 2020-12-29 2022-05-27 深圳市大疆创新科技有限公司 Three-dimensional point cloud segmentation method and device and movable platform
CN113156932A (en) * 2020-12-29 2021-07-23 上海市东方海事工程技术有限公司 Obstacle avoidance control method and system for rail flaw detection vehicle
WO2022141912A1 (en) * 2021-01-01 2022-07-07 杜豫川 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
CN112904841B (en) * 2021-01-12 2023-11-03 北京布科思科技有限公司 Non-horizontal single-line positioning obstacle avoidance method, device, equipment and storage medium
CN112784799A (en) * 2021-02-01 2021-05-11 三一机器人科技有限公司 AGV (automatic guided vehicle) backward pallet and obstacle identification method and device and AGV
WO2022165802A1 (en) * 2021-02-07 2022-08-11 华为技术有限公司 Road boundary recognition method and apparatus
CN112801022B (en) * 2021-02-09 2023-05-02 青岛慧拓智能机器有限公司 Method for rapidly detecting and updating road boundary of unmanned mining card operation area
GB2619196A (en) * 2021-03-01 2023-11-29 Du Yuchuan Multi-target vehicle detection and re-identification method based on radar and video fusion
CN113156451B (en) * 2021-03-23 2023-04-18 北京易控智驾科技有限公司 Unstructured road boundary detection method and device, storage medium and electronic equipment
CN113156948B (en) * 2021-04-19 2022-06-28 浙江工业大学 Ground and obstacle distinguishing and identifying method of two-wheeled self-balancing robot
CN113205139A (en) * 2021-05-06 2021-08-03 南京云智控产业技术研究院有限公司 Unmanned ship water sphere detection method based on density clustering
US11741621B2 (en) 2021-05-10 2023-08-29 Qingdao Pico Technology Co., Ltd. Method and system for detecting plane information
CN113240678B (en) * 2021-05-10 2023-05-30 青岛小鸟看看科技有限公司 Plane information detection method and system
CN113191459B (en) * 2021-05-27 2022-09-09 山东高速建设管理集团有限公司 Road-side laser radar-based in-transit target classification method
CN113064135B (en) * 2021-06-01 2022-02-18 北京海天瑞声科技股份有限公司 Method and device for detecting obstacle in 3D radar point cloud continuous frame data
CN113484875B (en) * 2021-07-30 2022-05-24 燕山大学 Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN113592891B (en) * 2021-07-30 2024-03-22 华东理工大学 Unmanned vehicle passable domain analysis method and navigation grid map manufacturing method
EP4377716A1 (en) * 2021-08-25 2024-06-05 Huawei Technologies Co., Ltd. An intrusion detection method and apparatus
CN113734176A (en) * 2021-09-18 2021-12-03 重庆长安汽车股份有限公司 Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium
CN113865488B (en) * 2021-09-24 2023-10-27 北京京东方技术开发有限公司 Distance measuring method, electronic equipment and computer readable storage medium
CN115248447B (en) * 2021-09-29 2023-06-02 上海仙途智能科技有限公司 Laser point cloud-based path edge identification method and system
CN113935425A (en) * 2021-10-21 2022-01-14 中国船舶重工集团公司第七一一研究所 Object identification method, device, terminal and storage medium
CN114063107A (en) * 2021-11-26 2022-02-18 山东大学 Ground point cloud extraction method based on laser beam
CN113989784A (en) * 2021-11-30 2022-01-28 福州大学 Road scene type identification method and system based on vehicle-mounted laser point cloud
US11999352B2 (en) * 2021-12-15 2024-06-04 Industrial Technology Research Institute Method and system for extracting road data and method and system for controlling self-driving car
CN114495026A (en) * 2022-01-07 2022-05-13 武汉市虎联智能科技有限公司 Laser radar identification method and device, electronic equipment and storage medium
CN114543666B (en) * 2022-01-20 2022-11-29 大连理工大学 Stockpile face prediction method based on mine field environment perception
CN114488073A (en) * 2022-02-14 2022-05-13 中国第一汽车股份有限公司 Method for processing point cloud data acquired by laser radar
CN114639024A (en) * 2022-03-03 2022-06-17 江苏方天电力技术有限公司 Automatic laser point cloud classification method for power transmission line
CN114663855B (en) * 2022-03-11 2024-05-24 北京航空航天大学 Unstructured road surface ponding and unevenness detection method
CN114359876B (en) * 2022-03-21 2022-05-31 成都奥伦达科技有限公司 Vehicle target identification method and storage medium
CN114578313B (en) * 2022-05-09 2022-07-22 中国电子科技集团公司信息科学研究院 Centroid lookup-based grid detection mirror image elimination method and device
CN114693855B (en) * 2022-05-31 2022-09-06 中汽创智科技有限公司 Point cloud data processing method and device
CN115267815B (en) * 2022-06-10 2024-06-14 合肥工业大学 Road side laser radar group optimization layout method based on point cloud modeling
CN115236674B (en) * 2022-06-15 2024-06-04 北京踏歌智行科技有限公司 Mining area environment sensing method based on 4D millimeter wave radar
CN114779794B (en) * 2022-06-21 2022-10-11 东风悦享科技有限公司 Street obstacle identification method based on unmanned patrol vehicle system in typhoon scene
CN114898013B (en) * 2022-07-15 2022-11-22 深圳市城市交通规划设计研究中心股份有限公司 Traffic isochronous ring generation method, electronic device and storage medium
CN115248448B (en) * 2022-09-22 2022-12-16 毫末智行科技有限公司 Laser radar-based road edge detection method, device, equipment and storage medium
CN115712298B (en) * 2022-10-25 2023-05-09 太原理工大学 Autonomous navigation method for robot to run in channel based on single-line laser radar
CN115574803B (en) * 2022-11-16 2023-04-25 深圳市信润富联数字科技有限公司 Method, device, equipment and storage medium for determining moving route
CN115540896B (en) * 2022-12-06 2023-03-07 广汽埃安新能源汽车股份有限公司 Path planning method and device, electronic equipment and computer readable medium
CN115641462B (en) * 2022-12-26 2023-03-17 电子科技大学 Radar image target identification method
CN116226697B (en) * 2023-05-06 2023-07-25 北京师范大学 Spatial data clustering method, system, equipment and medium
CN116740101B (en) * 2023-05-16 2024-03-12 中国信息通信研究院 Plane segmentation method for point cloud object
CN116796210B (en) * 2023-08-25 2023-11-28 山东莱恩光电科技股份有限公司 Barrier detection method based on laser radar
CN116901085B (en) * 2023-09-01 2023-12-22 苏州立构机器人有限公司 Intelligent robot obstacle avoidance method and device, intelligent robot and readable storage medium
CN117934324B (en) * 2024-03-25 2024-06-11 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460416A (en) * 2018-02-28 2018-08-28 武汉理工大学 A kind of structured road feasible zone extracting method based on three-dimensional laser radar
WO2018210303A1 (en) * 2017-05-19 2018-11-22 上海蔚来汽车有限公司 Road model construction
CN110244321A (en) * 2019-04-22 2019-09-17 武汉理工大学 A kind of road based on three-dimensional laser radar can traffic areas detection method
CN110320504A (en) * 2019-07-29 2019-10-11 浙江大学 A kind of unstructured road detection method based on laser radar point cloud statistics geometrical model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11170299B2 (en) * 2018-12-28 2021-11-09 Nvidia Corporation Distance estimation to objects and free-space boundaries in autonomous machine applications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018210303A1 (en) * 2017-05-19 2018-11-22 上海蔚来汽车有限公司 Road model construction
CN108460416A (en) * 2018-02-28 2018-08-28 武汉理工大学 A kind of structured road feasible zone extracting method based on three-dimensional laser radar
CN110244321A (en) * 2019-04-22 2019-09-17 武汉理工大学 A kind of road based on three-dimensional laser radar can traffic areas detection method
CN110320504A (en) * 2019-07-29 2019-10-11 浙江大学 A kind of unstructured road detection method based on laser radar point cloud statistics geometrical model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kong Dong; Wang Xiaoyuan; Sun Liang; Wang Fang; Chen Chen. Structured road boundary extraction method for vehicle-mounted radar point clouds. Journal of Henan University of Science and Technology (Natural Science), 2018, No. 4. *
Wang Can; Kong Bin; Yang Jing; Wang Zhiling; Zhu Hui. Road boundary extraction and obstacle detection algorithm based on 3D laser radar. Pattern Recognition and Artificial Intelligence, 2020, No. 4. *
Su Zhiyuan; Xu Youchun; Li Yongle. Vehicle target detection method based on 3D laser radar. Journal of Military Transportation University, 2017, No. 1. *


Similar Documents

Publication Publication Date Title
CN111985322B (en) Road environment element sensing method based on laser radar
CN112801022B (en) Method for rapidly detecting and updating road boundary of unmanned mining card operation area
US20230186647A1 (en) Feature extraction from mobile lidar and imagery data
Xu et al. Multiple-entity based classification of airborne laser scanning data in urban areas
CN111444767B (en) Pedestrian detection and tracking method based on laser radar
Setchell et al. Vision-based road-traffic monitoring sensor
Wang et al. Probabilistic inference for occluded and multiview on-road vehicle detection
CN114359876B (en) Vehicle target identification method and storage medium
Zhang et al. Automated road network extraction from high resolution multi-spectral imagery
CN113484875B (en) Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN113989784A (en) Road scene type identification method and system based on vehicle-mounted laser point cloud
CN114488073A (en) Method for processing point cloud data acquired by laser radar
Li et al. A branch-trunk-constrained hierarchical clustering method for street trees individual extraction from mobile laser scanning point clouds
Guo et al. Classification of airborne laser scanning data using JointBoost
CN115620261A (en) Vehicle environment sensing method, system, equipment and medium based on multiple sensors
CN115063555A (en) Method for extracting vehicle-mounted LiDAR point cloud street tree growing in Gaussian distribution area
CN112200248B (en) Point cloud semantic segmentation method, system and storage medium based on DBSCAN clustering under urban road environment
Luo LiDAR based perception system: Pioneer technology for safety driving
Ma et al. Automated extraction of driving lines from mobile laser scanning point clouds
CN112907574A (en) Method, device and system for searching landing point of aircraft and storage medium
Shu et al. Pairwise-SVM for on-board urban road lidar classification
Mancini et al. Automatic extraction of urban objects from multi-source aerial data
Grote Automatic road network extraction in suburban areas from aerial images
CN115018897B (en) Method for extracting typical surface feature elements of laser point cloud city
Ao et al. Point cloud classification by fusing supervoxel segmentation with multi-scale features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant