CN112561944A - Lane line extraction method based on vehicle-mounted laser point cloud - Google Patents

Lane line extraction method based on vehicle-mounted laser point cloud

Info

Publication number
CN112561944A
Authority
CN
China
Prior art keywords
point cloud
road
cloud data
lane line
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011363963.XA
Other languages
Chinese (zh)
Inventor
马凌飞
李军
陈一平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central University of Finance and Economics
Original Assignee
Central University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central University of Finance and Economics
Priority to CN202011363963.XA
Publication of CN112561944A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of intelligent transportation, and particularly relates to a lane line extraction method based on vehicle-mounted laser point cloud. Existing methods cannot meet the requirements of lane line extraction under large-scale, complex urban road conditions. The application provides a lane line extraction method based on vehicle-mounted laser point cloud, comprising the following steps: 1) performing coordinate conversion on the original point cloud to obtain coordinate-converted point cloud data; 2) processing the point cloud data to remove non-road-surface point cloud data; 3) extracting the road surface point cloud data; 4) denoising the road surface point cloud data and extracting the road sign lines; 5) extracting lane lines from the road surface point cloud data and generating the road center line to obtain complete urban road lane lines. The method can quickly and accurately extract the lane lines of a large-scale, complex urban road network.

Description

Lane line extraction method based on vehicle-mounted laser point cloud
Technical Field
The application belongs to the technical field of intelligent transportation, and particularly relates to a lane line extraction method based on vehicle-mounted laser point cloud.
Background
Urban traffic network lane lines, as important components of intelligent traffic systems and smart cities, require periodic monitoring and maintenance; intelligent supervision effectively guarantees the accuracy of high-definition maps and the reliability of automatic driving. Traditional methods for extracting urban road lane lines mainly comprise two modes: manual field survey and digital photogrammetry. In the manual mode, inspection personnel are assigned to roads to manually check the road lane lines and ensure that the lane line markings are clear, complete and unbroken; however, this mode is strongly limited by factors such as high labor cost, the complex and dangerous urban road environment, and long information update cycles. Digital photogrammetry acquires urban road network lane line information by means of unmanned aerial vehicle images, low-orbit aerial images, vehicle-mounted camera images and the like, but is limited by factors such as rain and snow, occlusion by objects on both sides of the road, and image resolution, so that the accuracy of the extracted road lane lines cannot meet the requirements of automatic driving and high-definition maps.
As a high and new surveying and mapping means which is rapidly developed, the vehicle-mounted laser scanning technology can be used for rapidly and accurately collecting three-dimensional information of large-scale urban roads by utilizing the characteristics of high data acquisition speed, low update period, high data precision, active non-contact measurement and the like, has remarkable advantages for extracting lane lines under the condition of complex urban road conditions, and simultaneously provides necessary data support for real-time monitoring of urban road information.
However, efficiently and accurately extracting road lane lines from high-density, unordered, massive vehicle-mounted laser scanning point cloud data still faces great challenges. The commonly used methods for extracting urban road network lane lines from vehicle-mounted laser point clouds include approaches based on morphological analysis, conditional random fields, deep learning and the like. However, these methods are mainly limited by the following aspects: (1) the variety and shape complexity of urban road network lane lines; (2) interference and occlusion caused by pedestrians and vehicles on the road surface; (3) the limited amount of high-quality labeled road surface point cloud data. Existing methods cannot meet the requirements of lane line extraction under large-scale, complex urban road conditions.
Disclosure of Invention
1. Technical problem to be solved
Efficiently and accurately extracting road lane lines from high-density, unordered, massive vehicle-mounted laser scanning point cloud data still faces great challenges. The commonly used methods for extracting urban road network lane lines from vehicle-mounted laser point clouds include approaches based on morphological analysis, conditional random fields, deep learning and the like. However, these methods are mainly limited by the following aspects: (1) the variety and shape complexity of urban road network lane lines; (2) interference and occlusion caused by pedestrians and vehicles on the road surface; (3) the limited amount of high-quality labeled road surface point cloud data. Existing methods cannot meet the requirements of lane line extraction under large-scale, complex urban road conditions; to address this, the application provides a lane line extraction method based on vehicle-mounted laser point cloud.
2. Technical scheme
In order to achieve the above object, the present application provides a lane line extraction method based on vehicle-mounted laser point cloud, the method including the steps of:
1) carrying out coordinate conversion on the original point cloud to obtain point cloud data subjected to coordinate conversion; 2) processing the point cloud data to remove non-road surface point cloud data; 3) extracting point cloud data of the road surface; 4) denoising the road surface point cloud data, and extracting a road sign line; 5) and extracting lane lines from the road surface point cloud data, and generating a road center line to obtain a complete urban road lane line.
Another embodiment provided by the present application is: the step 1) comprises defining directions of three axes of an original point cloud coordinate system xyz, defining a coordinate conversion relation and carrying out coordinate conversion on the original point cloud.
Another embodiment provided by the present application is: the step 2) performs voxel-based segmentation processing on the point cloud data, which comprises uniformly dividing the coordinate-converted point cloud data into a plurality of point cloud blocks on the xy plane; further dividing each point cloud block into a series of voxels by using an octree spatial index; growing each voxel upwards to the nine voxels adjacent to it, with these nine voxels continuing to grow upwards in the same manner as new starting points, and repeating this process until all voxels are traversed and no adjacent voxel remains above; and determining the voxel with the greatest height value within each point cloud block and then removing the non-ground point cloud data by specifying a height threshold.
Another embodiment provided by the present application is: and 3) detecting a road boundary line and extracting the road surface based on the road design standard to obtain road surface point cloud data.
Another embodiment provided by the present application is: the road surface extraction comprises the steps of uniformly dividing road surface point clouds into a plurality of groups of point cloud clusters based on the direction of a driving trajectory line; segmenting each group of point cloud clusters into a point cloud slice along the direction perpendicular to the driving track, and extracting road boundary points; and denoising the road boundary points, fitting to obtain a road boundary line, and further obtaining road surface point cloud data.
Another embodiment provided by the present application is: the extracting of the road sign line comprises the steps of inputting road surface point cloud data into an algorithm based on self-adaptive point cloud intensity value analysis, and preliminarily extracting the road sign line; denoising the preliminarily extracted point cloud data of the road sign line by adopting a statistical outlier rejection algorithm, and removing noise points from the road sign line; and inputting the denoised point cloud data of the sign line into an Euclidean clustering method based on morphological analysis and conditions to extract the road sign line.
Another embodiment provided by the present application is: the step of generating the road center line comprises the step of dividing the road surface point cloud data into uniform point cloud blocks by taking the denoised road boundary points as control points; determining road lane line points in each point cloud block through a mobile search window; and generating a road center line according to the boundary point coordinates, and superposing and fusing the road center line, the road sign line and the road boundary line to finally obtain the extracted complete urban road network lane line.
Another embodiment provided by the present application is: and 1) carrying out coordinate conversion on the original point cloud based on the traveling track data and a coordinate system conversion function.
Another embodiment provided by the present application is: the road sign lines comprise direction road sign lines and character road sign lines.
Another embodiment provided by the present application is: the lane lines are extracted from the road surface point cloud data based on a distance threshold, wherein the extracted lane lines comprise road lane line points whose distance to the road boundary line lies within a given threshold range.
3. Advantageous effects
Compared with the prior art, the lane line extraction method based on the vehicle-mounted laser point cloud has the beneficial effects that:
the application provides a lane line extraction method based on vehicle-mounted laser point cloud, and relates to urban data science, automatic driving and three-dimensional high-definition maps.
The lane line extraction method based on the vehicle-mounted laser point cloud is accurate, stable and robust.
The lane line extraction method based on the vehicle-mounted laser point cloud can be used for quickly and accurately extracting the lane lines of a large-scale urban complex road network.
By using road boundary detection, road surface segmentation and multi-threshold analysis, the lane line extraction method based on the vehicle-mounted laser point cloud accurately and stably extracts lane lines under the complex road conditions of large urban scenes.
The lane line extraction method based on the vehicle-mounted laser point cloud incorporates the relevant road design standards, effectively reducing the impact of the irregular arrangement of vehicle-mounted laser scanning point clouds, interference from pedestrians and vehicles, and the uneven distribution of point density and intensity values, and making the lane line extraction results more efficient and robust.
The lane line extraction method based on the vehicle-mounted laser point cloud effectively extracts the inherent characteristics of the road surface point cloud through a rule-based and multi-threshold-analysis approach, improves the efficiency of large-scale point cloud data processing, and greatly improves the stability of automatic driving and the safety of urban traffic.
Drawings
FIG. 1 is a schematic diagram of a lane line extraction method based on vehicle-mounted laser point cloud according to the present application;
FIG. 2 is a schematic diagram of a point cloud data processing process of the present application;
FIG. 3 is a schematic of the pavement extraction results of the present application;
FIG. 4 is a schematic diagram of a lane line extraction result of the present application;
fig. 5 is a schematic diagram of a preliminary three-dimensional high-definition map model of the present application.
Detailed Description
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, and it will be apparent to those skilled in the art from this detailed description that the present application can be practiced. Features from different embodiments may be combined to yield new embodiments, or certain features may be substituted for certain embodiments to yield yet further preferred embodiments, without departing from the principles of the present application.
Referring to fig. 1 to 5, the application provides a lane line extraction method based on vehicle-mounted laser point cloud, and the method comprises the following steps:
1) carrying out coordinate conversion on the original point cloud to obtain point cloud data subjected to coordinate conversion;
2) processing the point cloud data to remove non-road surface point cloud data;
3) extracting point cloud data of the road surface;
4) denoising the road surface point cloud data, and extracting a road sign line;
5) and extracting lane lines from the road surface point cloud data, and generating a road center line to obtain a complete urban road lane line.
Further, the step 1) includes defining directions of three axes xyz of an original point cloud coordinate system, defining a coordinate conversion relationship, and performing coordinate conversion on the original point cloud.
Further, the step 2) performs voxel-based segmentation processing on the point cloud data, which comprises uniformly dividing the coordinate-converted point cloud data into a plurality of point cloud blocks on the xy plane; further dividing each point cloud block into a series of voxels by using an octree spatial index; growing each voxel upwards to the nine voxels adjacent to it, with these nine voxels continuing to grow upwards in the same manner as new starting points, and repeating this process until all voxels are traversed and no adjacent voxel remains above; and determining the voxel with the greatest height value within each point cloud block and then removing the non-ground point cloud data by specifying a height threshold. Fig. 2 shows the processing procedure of voxel-based upward-growth point cloud segmentation, with the left part showing the octree spatial index of the point cloud data and the right part showing the voxel-based upward growth mode.
Further, the step 3) detects a road boundary line and performs road surface extraction based on the road design standard to obtain road surface point cloud data.
Further, the road surface extraction comprises uniformly dividing the road surface point cloud into a plurality of groups of point cloud clusters based on the direction of the driving trajectory line; segmenting each group of point cloud clusters into point cloud slices along the direction perpendicular to the driving trajectory and extracting road boundary points; and denoising the road boundary points, fitting the road boundary lines, and thereby obtaining the road surface point cloud data. Fig. 3 shows the road surface extraction result based on the road design standard and the road boundary lines.
Further, the step 4) comprises inputting the road surface point cloud data into an algorithm based on adaptive point cloud intensity value analysis to preliminarily extract the road sign lines; denoising the preliminarily extracted road sign line point cloud data with a statistical outlier rejection algorithm to remove noise points from the road sign lines; and inputting the denoised sign line point cloud data into a Euclidean clustering method based on morphological analysis and conditions to extract the road sign lines. Fig. 4 shows the road lane line extraction result based on distance multi-threshold analysis, in which the points are road boundary line points, the solid lines are road lane lines, and the dotted lines denote the road center line.
Further, the step 5) includes using the de-noised road boundary points as control points, and dividing the road surface point cloud data into uniform point cloud blocks; determining road lane line points in each point cloud block through a mobile search window; and generating a road center line according to the boundary point coordinates, and superposing and fusing the road center line, the road sign line and the road boundary line to finally obtain the extracted complete urban road network lane line.
Further, the step 1) performs coordinate conversion on the original point cloud based on the driving track data and a coordinate system conversion function.
Further, the road marking lines include direction-type road marking lines and character-type road marking lines.
And further, extracting a lane line based on a distance threshold value from the road surface point cloud data, wherein the lane line comprises lane line points, and the distance between the lane line points and the road boundary line meets a given threshold value range.
The method can quickly and accurately extract the lane lines of the urban traffic network, ensures the stability and robustness of point cloud data processing in large-scale complex urban road environments, effectively improves the safety of automatic driving and promotes the development of high-definition maps.
Examples
The application provides a vehicle-mounted laser point cloud-based lane line extraction method for an urban road network. In order to achieve the above purpose, the following technical solutions are adopted in the present application:
a vehicle-mounted laser scanning point cloud-based lane line extraction method for an urban road network based on vehicle-mounted laser point cloud comprises the following steps:
s1, performing coordinate transformation on the original point cloud based on the traveling track data and a coordinate system transformation function to obtain point cloud data subjected to coordinate transformation; s2, carrying out voxel-based segmentation processing on the point cloud data subjected to coordinate conversion, and removing non-road surface point cloud data; s3, detecting a road boundary line and extracting a road surface based on a road design standard to obtain road surface point cloud data; s4, denoising the road surface point cloud, and extracting direction type and character type road sign lines; and S5, further extracting the lane lines of the road surface point cloud based on a distance threshold value, generating a road center line and obtaining a complete urban road lane line.
Further, step S1 specifically includes the following sub-steps:
S11, defining the directions of the three axes of the original point cloud coordinate system xyz: the x axis points directly ahead of the vehicle-mounted laser scanning system, the y axis points to the right side of the system, and the z axis points vertically upward; S12, performing coordinate conversion on the original point cloud, selecting two consecutive trajectory points as reference points and calculating the rotation angles, where the coordinate conversion relation is:

[X′, Y′, Z′]^T = D + k · R(ω_x) · R(ω_y) · R(ω_z) · [X, Y, Z]^T

where [X′, Y′, Z′]^T and [X, Y, Z]^T are the coordinates of a point in the converted coordinate system and in the original coordinate system respectively, D = [ΔX, ΔY, ΔZ]^T is the translation vector, k is the scale factor, expressed as the ratio of the original coordinates to the converted coordinates, ω_x, ω_y, ω_z are the three rotation angles of the three-dimensional coordinate system, and R(ω_x), R(ω_y), R(ω_z) are the corresponding rotation matrices.
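For illustration, a minimal NumPy sketch of such a conversion is given below. It is not the patent's implementation: the heading is assumed to be derived from the two consecutive trajectory points mentioned in S12, only the rotation about the z axis is applied (ω_x = ω_y = 0), the scale factor k is taken as 1, and the function and variable names are placeholders.

```python
import numpy as np

def rotation_z(angle: float) -> np.ndarray:
    """Rotation matrix about the z axis (counter-clockwise by `angle` radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def convert_coordinates(points_xyz: np.ndarray,
                        traj_a: np.ndarray,
                        traj_b: np.ndarray,
                        scale: float = 1.0) -> np.ndarray:
    """Align the point cloud so that the x axis follows the driving direction.

    points_xyz : (N, 3) original point coordinates.
    traj_a, traj_b : two consecutive trajectory points used as reference;
                     their difference defines the heading (rotation about z).
    scale : scale factor k of the similarity transformation (assumed 1 here).
    """
    heading = np.arctan2(traj_b[1] - traj_a[1], traj_b[0] - traj_a[0])
    translation = -traj_a                      # shift the reference point to the origin
    rotated = (rotation_z(-heading) @ (points_xyz + translation).T).T
    return scale * rotated

# usage sketch: two trajectory points and a tiny cloud
pts = np.array([[10.0, 5.0, 0.3], [12.0, 6.0, 0.2]])
p0, p1 = np.array([10.0, 5.0, 0.0]), np.array([11.0, 5.5, 0.0])
local_pts = convert_coordinates(pts, p0, p1)
```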
Further, step S2 specifically includes the following sub-steps:
S21, uniformly dividing the point cloud data after coordinate conversion into a plurality of point cloud blocks of size B_s = 5 m on the xy plane; S22, for each point cloud block, further dividing it into a series of voxels by using an octree spatial index, where the voxel width is calculated from the average point density; S23, growing each voxel V_j upwards to the nine voxels L_1, L_2, …, L_9 adjacent to it, then taking these nine voxels as new starting points to search the nine adjacent voxels above them and continuing to grow upwards in the same manner, repeating this process and traversing all voxels until no new voxel is found; S24, determining the voxel with the maximum height value within each point cloud block, and distinguishing ground voxels from non-ground voxels by specified height thresholds, with the specific conditions:

H_global ≤ H_e and H_local ≤ H_g

where H_global denotes the global height value of a voxel, namely the height difference between the given voxel and the lowest voxel in the whole point cloud data; H_local denotes the local height value of a voxel, namely the height difference between the given voxel and the lowest voxel in the point cloud block; H_e denotes the global ground relief threshold; and H_g denotes the local ground relief threshold, i.e. the maximum height within a particular growth area. The voxels satisfying these conditions are retained as ground voxels, and the non-ground point cloud data are thereby removed.
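A simplified sketch of the ground filtering idea of S21-S24 follows. It is an approximation rather than the patent's exact procedure: a regular voxel grid replaces the octree spatial index, the upward-growth traversal is omitted, and the block size, voxel size and thresholds H_e, H_g are illustrative values.

```python
import numpy as np

def filter_ground_voxels(points: np.ndarray,
                         block_size: float = 5.0,
                         voxel_size: float = 0.2,
                         h_e: float = 0.5,
                         h_g: float = 0.3) -> np.ndarray:
    """Keep points whose voxel satisfies H_global <= H_e and H_local <= H_g.

    points   : (N, 3) array of x, y, z after coordinate conversion.
    H_global : height of a voxel above the lowest voxel of the whole cloud.
    H_local  : height of a voxel above the lowest voxel of its block.
    """
    xyz = points[:, :3]
    z_min_global = xyz[:, 2].min()

    # index of the block on the xy plane and of the voxel on a regular grid
    block_idx = np.floor(xyz[:, :2] / block_size).astype(int)
    voxel_idx = np.floor(xyz / voxel_size).astype(int)

    keep = np.zeros(len(xyz), dtype=bool)
    block_keys, block_inv = np.unique(block_idx, axis=0, return_inverse=True)
    block_inv = block_inv.reshape(-1)
    for b in range(len(block_keys)):
        in_block = np.where(block_inv == b)[0]
        z_min_local = xyz[in_block, 2].min()
        vox_keys, vox_inv = np.unique(voxel_idx[in_block], axis=0, return_inverse=True)
        vox_inv = vox_inv.reshape(-1)
        for v in range(len(vox_keys)):
            in_vox = in_block[vox_inv == v]
            z_vox = xyz[in_vox, 2].min()          # representative voxel height
            h_global = z_vox - z_min_global
            h_local = z_vox - z_min_local
            if h_global <= h_e and h_local <= h_g:
                keep[in_vox] = True               # retain ground voxels only
    return points[keep]
```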
Further, step S3 specifically includes the following sub-steps:
S31, uniformly dividing the road surface point cloud, along the direction of the driving trajectory line, into a plurality of groups of point cloud clusters with a width D_w of 3 m; S32, for each group of point cloud clusters, segmenting it into point cloud slices with a width S_w along the direction perpendicular to the driving trajectory, and extracting road boundary points from each slice under the conditions that the angle difference between the road curb and the road surface is greater than 70° and the height difference is between 7 and 30 cm; and S33, denoising the road boundary points by a random sample consensus (RANSAC) method, fitting a B-spline curve function to obtain the road boundary lines, and extracting the point cloud between the two road boundary lines to obtain the road surface point cloud data.
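The curb rule of S32 can be sketched as follows under simplifying assumptions: the driving trajectory is taken to be roughly aligned with the x axis after the conversion of S1, slices are cut along x, and within each slice the points are scanned across the road (by y), flagging a boundary candidate where the height step between neighbouring points lies in the 7-30 cm range and the local slope exceeds 70°. The slice width and helper names are illustrative, not part of the disclosure.

```python
import numpy as np

def extract_boundary_points(road_points: np.ndarray,
                            slice_width: float = 0.5,
                            dz_min: float = 0.07,
                            dz_max: float = 0.30,
                            min_angle_deg: float = 70.0) -> np.ndarray:
    """Collect candidate road boundary (curb) points slice by slice.

    road_points : (N, 3) ground points; x is assumed to follow the trajectory.
    A point is kept when the height step to its neighbour (ordered by y) is
    between dz_min and dz_max and the step is steeper than min_angle_deg.
    """
    min_tan = np.tan(np.deg2rad(min_angle_deg))
    slice_idx = np.floor(road_points[:, 0] / slice_width).astype(int)
    boundary = []
    for s in np.unique(slice_idx):
        sl = road_points[slice_idx == s]
        sl = sl[np.argsort(sl[:, 1])]              # order points across the road
        dy = np.diff(sl[:, 1])
        dz = np.diff(sl[:, 2])
        steep = np.abs(dz) > min_tan * np.maximum(np.abs(dy), 1e-6)
        jump = (np.abs(dz) >= dz_min) & (np.abs(dz) <= dz_max)
        boundary.append(sl[:-1][steep & jump])
    return np.vstack(boundary) if boundary else np.empty((0, 3))
```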
Further, step S4 specifically includes the following sub-steps: S41, inputting the road surface point cloud data into an algorithm based on adaptive point cloud intensity value analysis, which applies the condition:

I_min ≤ I_i ≤ I_max

where p_i is an arbitrary road surface point, I_i is the intensity value of p_i in the point cloud data, and I_min, I_max are respectively the minimum and maximum intensity thresholds of the point cloud. The points whose intensity values lie between I_min and I_max are retained, preliminarily yielding the road sign line point cloud data; S42, denoising the preliminarily extracted road sign line point cloud data with a statistical outlier rejection algorithm to remove noise points from the road sign lines; and S43, inputting the denoised sign line point cloud data into a Euclidean clustering method based on morphological analysis and conditions, and retaining the point cloud clusters wider than 50 cm to obtain the direction-type and character-type road sign lines.
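A compact sketch of S41-S43 is given below under the following assumptions: the intensity thresholds are fixed illustrative constants rather than adaptively estimated, the statistical outlier rejection is approximated by a k-nearest-neighbour mean-distance test, scikit-learn's DBSCAN stands in for the conditional Euclidean clustering, and the cluster width is checked by its bounding-box extent.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def extract_sign_lines(road_points: np.ndarray,
                       i_min: float, i_max: float,
                       k: int = 16, std_ratio: float = 2.0,
                       cluster_eps: float = 0.3, min_width: float = 0.5):
    """Intensity thresholding (S41), outlier rejection (S42) and clustering with
    a width check (S43) on road surface points given as (x, y, z, intensity)."""
    # S41: keep points whose intensity lies between the two thresholds
    mask = (road_points[:, 3] >= i_min) & (road_points[:, 3] <= i_max)
    marks = road_points[mask]
    if len(marks) == 0:
        return np.empty((0, 4))

    # S42: statistical outlier rejection on the mean k-nearest-neighbour distance
    tree = cKDTree(marks[:, :3])
    dists, _ = tree.query(marks[:, :3], k=k + 1)   # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    marks = marks[mean_d <= mean_d.mean() + std_ratio * mean_d.std()]

    # S43: Euclidean-style clustering (DBSCAN as a stand-in); keep wide clusters
    labels = DBSCAN(eps=cluster_eps, min_samples=10).fit_predict(marks[:, :3])
    kept = []
    for lbl in set(labels) - {-1}:
        cluster = marks[labels == lbl]
        extent = cluster[:, :2].max(axis=0) - cluster[:, :2].min(axis=0)
        if extent.max() >= min_width:              # clusters wider than 50 cm
            kept.append(cluster)
    return np.vstack(kept) if kept else np.empty((0, 4))
```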
Further, step S5 specifically includes the following sub-steps:
S51, taking the denoised road boundary points obtained in S33 as control points, and dividing the road surface point cloud data into uniform point cloud blocks with a width D_z of 2 m; S52, in each point cloud block, moving a search window inwards from the two boundary points of the block, searching for the points whose distance to the road boundary line lies within a given threshold range, and simultaneously extracting the high-intensity points representing the lane lines by an intensity-threshold analysis, with the specific rule:

I_min ≤ I_i ≤ I_max and d_min ≤ d_i ≤ d_max

where d_i is the distance from each point to the road boundary line, and d_min, d_max are respectively the minimum and maximum distance thresholds preset according to the road design standard. The points whose intensity values lie between I_min and I_max and whose distances to the road boundary line lie between d_min and d_max are retained as the road lane line point cloud data, and the road lane lines are then obtained by B-spline function fitting; S53, calculating the coordinates of the road center line points from the boundary point coordinates, specifically:

X_C = (X_max + X_min) / 2,  Y_C = (Y_max + Y_min) / 2,  Z_C = (Z_max + Z_min) / 2

where C(X_C, Y_C, Z_C) denotes the coordinates of a center line point, and X_max, X_min, Y_max, Y_min, Z_max, Z_min denote respectively the maximum and minimum x, y and z coordinates within each point cloud block. The road center line is then obtained by B-spline function fitting and is superimposed and fused with the road lines obtained in S33, S43 and S52 to finally obtain the extracted complete urban road network lane lines.
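To make S52 and S53 concrete, a small sketch follows. Assumptions: each 2 m block is already roughly aligned with the x axis, the left and right road boundary lines are approximated by two lateral offsets per block, the thresholds are illustrative values, and SciPy's splprep/splev is used for the B-spline fitting step.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def lane_points_in_block(block: np.ndarray,
                         y_left: float, y_right: float,
                         i_min: float, i_max: float,
                         d_min: float, d_max: float) -> np.ndarray:
    """S52: keep points of one block satisfying both the intensity rule
    I_min <= I <= I_max and the distance rule d_min <= d <= d_max, where the
    distance is measured to the nearer boundary line (approximated here by
    the lateral offsets y_left and y_right of the block)."""
    d_boundary = np.minimum(np.abs(block[:, 1] - y_left),
                            np.abs(block[:, 1] - y_right))
    keep = ((block[:, 3] >= i_min) & (block[:, 3] <= i_max) &
            (d_boundary >= d_min) & (d_boundary <= d_max))
    return block[keep]

def center_point(block: np.ndarray) -> np.ndarray:
    """S53: center line point of one block as the midpoint of the coordinate extremes."""
    return (block[:, :3].max(axis=0) + block[:, :3].min(axis=0)) / 2.0

def fit_bspline(points_xyz: np.ndarray, smoothing: float = 1.0, n_samples: int = 200):
    """Fit and sample a B-spline through ordered line points (fitting step of S52/S53)."""
    tck, _ = splprep([points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]],
                     s=smoothing)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y, z = splev(u, tck)
    return np.column_stack([x, y, z])
```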
Although the present application has been described above with reference to specific embodiments, those skilled in the art will recognize that many changes may be made in the configuration and details of the present application within the principles and scope of the present application. The scope of protection of the application is determined by the appended claims, and all changes that come within the meaning and range of equivalency of the technical features are intended to be embraced therein.

Claims (10)

1. A lane line extraction method based on vehicle-mounted laser point cloud is characterized by comprising the following steps: the method comprises the following steps:
1) carrying out coordinate conversion on the original point cloud to obtain point cloud data subjected to coordinate conversion;
2) processing the point cloud data to remove non-road surface point cloud data;
3) extracting point cloud data of the road surface;
4) denoising the road surface point cloud data, and extracting a road sign line;
5) and extracting lane lines from the road surface point cloud data, and generating a road center line to obtain a complete urban road lane line.
2. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 1, wherein: the step 1) comprises defining directions of three axes of an original point cloud coordinate system xyz, defining a coordinate conversion relation and carrying out coordinate conversion on the original point cloud.
3. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 1, wherein: the step 2) carries out segmentation processing based on voxels on the point cloud data, wherein the segmentation processing comprises the step of uniformly segmenting the point cloud data after coordinate conversion into a plurality of point cloud blocks on an xy plane; for each cloud block, further dividing the cloud block into a series of voxels by using an octree spatial index; each voxel grows upwards to nine voxels adjacent to the voxel, wherein the nine voxels continue to grow upwards according to the same mode as a new starting point, the process is repeated, and all the voxels are traversed until no adjacent voxel exists above; the voxel with the largest height value is determined within each point cloud block, and the non-ground point cloud points are removed by specifying a height threshold.
4. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 1, wherein: and 3) detecting a road boundary line and extracting the road surface based on the road design standard to obtain road surface point cloud data.
5. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 4, wherein: the road surface extraction comprises the steps of uniformly dividing road surface point clouds into a plurality of groups of point cloud clusters based on the direction of a driving trajectory line; segmenting each group of point cloud clusters into a point cloud slice along the direction perpendicular to the driving track, and extracting road boundary points; and denoising the road boundary points, fitting to obtain a road boundary line, and further obtaining road surface point cloud data.
6. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 1, wherein: the step 4) comprises inputting the road surface point cloud data into an algorithm based on self-adaptive point cloud intensity value analysis, and preliminarily extracting a road sign line; denoising the preliminarily extracted point cloud data of the road sign line by adopting a statistical outlier rejection algorithm, and removing noise points from the road sign line; and inputting the denoised point cloud data of the sign line into an Euclidean clustering method based on morphological analysis and conditions to extract the road sign line.
7. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 5, wherein: the step 5) comprises the steps of taking the de-noised road boundary points as control points, and dividing the road surface point cloud data into uniform point cloud blocks; determining road lane line points in each point cloud block through a mobile search window; and generating a road center line according to the boundary point coordinates, and superposing and fusing the road center line, the road sign line and the road boundary line to finally obtain the extracted complete urban road network lane line.
8. The vehicle-mounted laser point cloud-based lane line extraction method according to any one of claims 1 to 7, wherein: and 1) carrying out coordinate conversion on the original point cloud based on the traveling track data and a coordinate system conversion function.
9. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 8, wherein: the road sign lines comprise direction road sign lines and character road sign lines.
10. The vehicle-mounted laser point cloud-based lane line extraction method according to claim 8, wherein: and extracting the road surface point cloud data based on a distance threshold value, wherein the road surface point cloud data comprises road lane line points, and the distance between the road lane line points and the road boundary line meets a given threshold value range.
CN202011363963.XA 2020-11-27 2020-11-27 Lane line extraction method based on vehicle-mounted laser point cloud Pending CN112561944A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011363963.XA CN112561944A (en) 2020-11-27 2020-11-27 Lane line extraction method based on vehicle-mounted laser point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011363963.XA CN112561944A (en) 2020-11-27 2020-11-27 Lane line extraction method based on vehicle-mounted laser point cloud

Publications (1)

Publication Number Publication Date
CN112561944A true CN112561944A (en) 2021-03-26

Family

ID=75045098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011363963.XA Pending CN112561944A (en) 2020-11-27 2020-11-27 Lane line extraction method based on vehicle-mounted laser point cloud

Country Status (1)

Country Link
CN (1) CN112561944A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113189564A (en) * 2021-07-01 2021-07-30 西南交通大学 Train track point cloud extraction method
CN114084129A (en) * 2021-10-13 2022-02-25 武汉光庭信息技术股份有限公司 Fusion-based vehicle automatic driving control method and system
CN114662600A (en) * 2022-03-25 2022-06-24 南京慧尔视软件科技有限公司 Lane line detection method and device and storage medium
CN116184357A (en) * 2023-03-07 2023-05-30 之江实验室 Ground point cloud data processing method and device, electronic device and storage medium
CN116188334A (en) * 2023-05-04 2023-05-30 四川省公路规划勘察设计研究院有限公司 Automatic repair method and device for lane line point cloud
CN117291845A (en) * 2023-11-27 2023-12-26 成都理工大学 Point cloud ground filtering method, system, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014130404A (en) * 2012-12-28 2014-07-10 Aero Asahi Corp Road surface image generation system, shadow removing apparatus, method and program
CN104197897A (en) * 2014-04-25 2014-12-10 厦门大学 Urban road marker automatic sorting method based on vehicle-mounted laser scanning point cloud
WO2017041396A1 (en) * 2015-09-10 2017-03-16 百度在线网络技术(北京)有限公司 Driving lane data processing method, device, storage medium and apparatus
CN107463918A (en) * 2017-08-17 2017-12-12 武汉大学 Lane line extracting method based on laser point cloud and image data fusion
CN108171131A (en) * 2017-12-15 2018-06-15 湖北大学 Based on the Lidar point cloud data road marking line extracting methods and system for improving MeanShift
CN108831146A (en) * 2018-04-27 2018-11-16 厦门维斯云景信息科技有限公司 Generate semi-automatic cloud method of three-dimensional high-definition mileage chart intersection lane
CN108898672A (en) * 2018-04-27 2018-11-27 厦门维斯云景信息科技有限公司 A kind of semi-automatic cloud method making three-dimensional high-definition mileage chart lane line
US20190370565A1 (en) * 2018-06-01 2019-12-05 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for extracting lane line and computer readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014130404A (en) * 2012-12-28 2014-07-10 Aero Asahi Corp Road surface image generation system, shadow removing apparatus, method and program
CN104197897A (en) * 2014-04-25 2014-12-10 厦门大学 Urban road marker automatic sorting method based on vehicle-mounted laser scanning point cloud
WO2017041396A1 (en) * 2015-09-10 2017-03-16 百度在线网络技术(北京)有限公司 Driving lane data processing method, device, storage medium and apparatus
CN107463918A (en) * 2017-08-17 2017-12-12 武汉大学 Lane line extracting method based on laser point cloud and image data fusion
CN108171131A (en) * 2017-12-15 2018-06-15 湖北大学 Based on the Lidar point cloud data road marking line extracting methods and system for improving MeanShift
CN108831146A (en) * 2018-04-27 2018-11-16 厦门维斯云景信息科技有限公司 Generate semi-automatic cloud method of three-dimensional high-definition mileage chart intersection lane
CN108898672A (en) * 2018-04-27 2018-11-27 厦门维斯云景信息科技有限公司 A kind of semi-automatic cloud method making three-dimensional high-definition mileage chart lane line
US20190370565A1 (en) * 2018-06-01 2019-12-05 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for extracting lane line and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHENGMING YE et al.: "Robust Lane Extraction From MLS Point Clouds Towards HD Maps Especially in Curve Road", IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 2, pages 1505-1518 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113189564A (en) * 2021-07-01 2021-07-30 西南交通大学 Train track point cloud extraction method
CN113189564B (en) * 2021-07-01 2021-09-07 西南交通大学 Train track point cloud extraction method
CN114084129A (en) * 2021-10-13 2022-02-25 武汉光庭信息技术股份有限公司 Fusion-based vehicle automatic driving control method and system
CN114662600A (en) * 2022-03-25 2022-06-24 南京慧尔视软件科技有限公司 Lane line detection method and device and storage medium
CN114662600B (en) * 2022-03-25 2023-11-07 南京慧尔视软件科技有限公司 Lane line detection method, device and storage medium
CN116184357A (en) * 2023-03-07 2023-05-30 之江实验室 Ground point cloud data processing method and device, electronic device and storage medium
CN116184357B (en) * 2023-03-07 2023-08-15 之江实验室 Ground point cloud data processing method and device, electronic device and storage medium
CN116188334A (en) * 2023-05-04 2023-05-30 四川省公路规划勘察设计研究院有限公司 Automatic repair method and device for lane line point cloud
CN116188334B (en) * 2023-05-04 2023-07-18 四川省公路规划勘察设计研究院有限公司 Automatic repair method and device for lane line point cloud
CN117291845A (en) * 2023-11-27 2023-12-26 成都理工大学 Point cloud ground filtering method, system, electronic equipment and storage medium
CN117291845B (en) * 2023-11-27 2024-03-19 成都理工大学 Point cloud ground filtering method, system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112561944A (en) Lane line extraction method based on vehicle-mounted laser point cloud
CN108519605B (en) Road edge detection method based on laser radar and camera
Zai et al. 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts
CN109961440B (en) Three-dimensional laser radar point cloud target segmentation method based on depth map
CN111079611B (en) Automatic extraction method for road surface and marking line thereof
CN108845569A (en) Generate semi-automatic cloud method of the horizontal bend lane of three-dimensional high-definition mileage chart
CN108062517A (en) Unstructured road boundary line extraction method based on vehicle-mounted laser point cloud
EP4120123A1 (en) Scan line-based road point cloud extraction method
CN111580131B (en) Method for identifying vehicles on expressway by three-dimensional laser radar intelligent vehicle
CN110197173B (en) Road edge detection method based on binocular vision
CN112070756B (en) Three-dimensional road surface disease measuring method based on unmanned aerial vehicle oblique photography
CN115294293B (en) Method for automatically compiling high-precision map road reference line based on low-altitude aerial photography result
CN114092658A (en) High-precision map construction method
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
Guo et al. Curb detection and compensation method for autonomous driving via a 3-D-LiDAR sensor
CN114821522A (en) Urban road cross slope and super height value calculation method based on vehicle-mounted laser point cloud data
CN114170149A (en) Road geometric information extraction method based on laser point cloud
CN111861946A (en) Adaptive multi-scale vehicle-mounted laser radar dense point cloud data filtering method
CN116188334A (en) Automatic repair method and device for lane line point cloud
CN115546551A (en) Deep learning-based geographic information extraction method and system
Zhang et al. Research on Pothole Detection Method for Intelligent Driving Vehicle
CN114758323A (en) Urban road sign extraction method based on vehicle-mounted laser point cloud
CN114063107A (en) Ground point cloud extraction method based on laser beam
CN114820931A (en) Virtual reality-based CIM (common information model) visual real-time imaging method for smart city
CN113920483A (en) Method and device for classifying objects in road point cloud, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210326)