CN111445578A - Map three-dimensional road feature identification method and system - Google Patents

Publication number: CN111445578A
Authority: CN (China)
Prior art keywords: dimensional, map, dimensional intensity, intensity, characteristic
Legal status: Granted
Application number: CN202010228450.1A
Other languages: Chinese (zh)
Other versions: CN111445578B
Inventor
杨蒙蒙 (Yang Mengmeng)
杨殿阁 (Yang Diange)
江昆 (Jiang Kun)
Current Assignee: Tsinghua University
Original Assignee: Tsinghua University
Application filed by Tsinghua University
Priority to CN202010228450.1A
Publication of CN111445578A
Application granted
Publication of CN111445578B
Legal status: Active

Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects → G06T17/05 Geographic models
    • G06T17/00 Three dimensional [3D] modelling → G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T7/00 Image analysis → G06T7/50 Depth or shape recovery → G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/10 Image acquisition modality → G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the technical field of map data processing and relates to a map three-dimensional road feature identification method and system, comprising the following steps: S1, generating a two-dimensional intensity characteristic map of a road from laser point cloud data; S2, establishing a conversion relation between two-dimensional space and three-dimensional space, and converting the two-dimensional intensity characteristic map into a three-dimensional intensity virtual characteristic map; S3, acquiring the position and shape of two-dimensional marking elements in the three-dimensional intensity virtual characteristic map based on a deep learning algorithm; and S4, converting the position and shape of the two-dimensional marking elements into three-dimensional marking elements based on a dynamic template matching method, generating a three-dimensional intensity characteristic map carrying the marking elements. Compared with existing two-dimensional intensity characteristic images, the three-dimensional intensity characteristic map has three-dimensional virtual geometric characteristics and records the correspondence between the characteristic map and the three-dimensional laser points, so that marking elements are more prominent in the map and easier to identify.

Description

Map three-dimensional road feature identification method and system
Technical Field
The invention relates to a map three-dimensional road feature identification method and a map three-dimensional road feature identification system, and belongs to the technical field of map data processing.
Background
The high-precision map for automatic driving is one of the core technologies for realizing high-level automatic driving, i.e. unmanned driving. It plays an important role in unmanned positioning, control, navigation and decision-making, and provides an important guarantee for driving safety. Road construction in China is fast and the demand for map updates is correspondingly large, so rapid updating of high-precision maps is currently a major challenge. Road element information is a key component of the high-precision map, and accurate, efficient and fast identification of road markings is the key to automatic-driving map construction and dynamic updating.
At present, most existing research extracts and classifies road marking feature elements from images and videos, but illumination, shadow, weather and surface materials degrade both the extraction accuracy and the geometric accuracy of the road marking elements. Methods based on laser point clouds mainly fall into two types. The first converts the laser point cloud into a two-dimensional characteristic image and then extracts and identifies two-dimensional road markings with traditional image processing or deep learning methods; such extraction is limited to two dimensions and lacks three-dimensional vector characteristics. The second extracts and identifies three-dimensional markings directly from the three-dimensional laser point cloud, but is strongly affected by the intensity values: in particular, the intensity response of ground markings appears weak under long-term exposure, rolling and similar conditions, which seriously degrades the extraction accuracy of road markings. Moreover, computing marking images directly from the three-dimensional point cloud is computationally expensive and extracts only a single type of marking, making it difficult to meet practical application requirements.
Disclosure of Invention
In view of the above deficiencies of the prior art, the present invention provides a map three-dimensional road feature identification method and system. Compared with existing two-dimensional intensity characteristic images, the resulting three-dimensional intensity characteristic map has three-dimensional virtual geometric characteristics and records the correspondence between the characteristic map and the three-dimensional laser points, so that marking elements are more prominent in the map and easier to identify.
In order to achieve the aim, the invention provides a map three-dimensional road feature identification method, which comprises the following steps: s1, generating a two-dimensional intensity characteristic map of a road by using laser radar three-dimensional point cloud data; s2, establishing a conversion relation between a two-dimensional space and a three-dimensional space, and converting the two-dimensional intensity characteristic graph into a three-dimensional intensity virtual characteristic graph; s3, acquiring the position and the shape of a two-dimensional marking element in the three-dimensional strength virtual characteristic diagram based on a deep learning related algorithm; and S4, converting the position and the shape of the two-dimensional marking element into a three-dimensional marking element according to the conversion relation by a dynamic template matching method, and generating a three-dimensional intensity characteristic diagram with the marking element.
Further, the method for generating the two-dimensional intensity characteristic map comprises the following steps: s1.1, filtering the laser points acquired by the vehicle-mounted laser radar detector to acquire ground laser points; s1.2, acquiring a spatial range of a three-dimensional ground laser point of a region to be processed according to the ground laser point, and rasterizing the spatial range; s1.3, calculating the attribute value of each grid; s1.4, normalizing the attribute value of each grid to generate a two-dimensional intensity characteristic diagram.
Further, the attribute value F_ij of each grid is calculated as:

[the two formulas appear only as images in the source]

wherein w_ijk is the weight of the k-th laser point in grid (i, j); I_ijk is the intensity value of the k-th laser point in grid (i, j); n_ij is the total number of laser points in grid (i, j); GSD is the ground laser point spacing; d_ijk is the distance of the k-th laser point in grid (i, j) from the grid centre; Δz_max and Δz_min are the maximum and minimum height differences of the laser points in grid (i, j); z_max and z_min are the maximum and minimum values, in the vertical direction, of the three-dimensional ground spatial range; and α and β are the weighting factors in the horizontal and elevation directions, respectively, whose sum is 1.
Further, the process of converting into the three-dimensional intensity virtual feature map is as follows: establishing a corresponding relation between a two-dimensional space grid and a three-dimensional space point cloud, constructing topological relation data of a two-dimensional space according to a three-dimensional absolute space coordinate value of a starting point position in the grid, grid spacing, image resolution, a rotation angle and a difference value of an average elevation value of each point in the grid and the starting point, and converting a two-dimensional intensity characteristic map into a three-dimensional intensity virtual characteristic map through the topological relation data.
Further, the method of acquiring the position of the two-dimensional reticle element in step S3 is: and extracting partial images from the three-dimensional intensity virtual characteristic image to generate a data training set, and training the data training set through a deep learning algorithm to obtain the position of the two-dimensional marking element in the three-dimensional intensity virtual characteristic image.
Further, before training on the dataset, the images in the dataset need to undergo translation, rotation, shearing, scale adjustment and denoising, and sample enhancement and augmentation are performed on the images through a generative adversarial network (GAN) and model transfer.
Further, in step S3, the shape of the two-dimensional marking element is obtained by using a dynamic template matching algorithm.
Further, a template matching data set needs to be introduced into a dynamic matching algorithm, the template matching data set is matched with the three-dimensional intensity virtual characteristic map, the similarity of the template matching data set and the three-dimensional intensity virtual characteristic map is determined, and the template matching data set comprises images of road marked lines, vector characteristic data based on the images, vector characteristic categories and dynamic angle information of the marked lines.
Further, the similarity MAX(NCC(x, y, θ)) is the maximum, over position and dynamic angle, of the normalized cross-correlation:

NCC(x, y, θ) = Σ_{i=0..M} Σ_{j=0..N} [f_{x,y}(i, j) − E(f_{x,y})] · [T(i, j) − E(T)] / sqrt( Σ_{i=0..M} Σ_{j=0..N} [f_{x,y}(i, j) − E(f_{x,y})]² · Σ_{i=0..M} Σ_{j=0..N} [T(i, j) − E(T)]² )

wherein x and y are respectively the abscissa and ordinate, in the horizontal plane, of a marking in the three-dimensional intensity virtual characteristic map; θ is the dynamic angle value in the three-dimensional intensity virtual characteristic map, applied by rotating the template before matching; i and j index positions within the image of the template matching dataset; NCC(x, y, θ) is the normalized cross-correlation matching function; f_{x,y}(i, j) is the matching image in the three-dimensional intensity virtual characteristic map; T(i, j) is the image in the template matching dataset; E(f_{x,y}) is the mean gray value of the matching image at (x, y); E(T) is the mean gray value of the template image; and M and N are the maxima of the abscissa and ordinate of the template image.
The invention also discloses a map three-dimensional road feature recognition system, comprising: a two-dimensional intensity characteristic map generation module for generating a two-dimensional intensity characteristic map of the road from laser point cloud data; a three-dimensional intensity virtual characteristic map generation module for establishing a conversion relation between two-dimensional and three-dimensional space and converting the two-dimensional intensity characteristic map into a three-dimensional intensity virtual characteristic map; a two-dimensional marking element acquisition module for acquiring the position and shape of two-dimensional marking elements in the three-dimensional intensity virtual characteristic map; and a three-dimensional intensity characteristic map generation module for converting the position and shape of the two-dimensional marking elements into three-dimensional marking elements according to the conversion relation through a dynamic template matching algorithm, generating the three-dimensional intensity characteristic map of the road marking elements.
Due to the adoption of the technical scheme, the invention has the following advantages:
1. the method performs projection conversion on the laser radar data into a three-dimensional virtual intensity characteristic image, and compared with the existing two-dimensional intensity characteristic image, the projection conversion has three-dimensional virtual geometric characteristics and contains the corresponding relation between the three-dimensional intensity characteristic image and a three-dimensional laser point, so that the marking line elements are more obvious in a map and are easier to identify;
2. the invention provides a dynamic template matching method, which is used for enhancing the robustness of a matching algorithm, rapidly obtaining two-dimensional characteristics of a road marking in an image, carrying out depth fusion with the characteristics of a three-dimensional virtual image, and rapidly and robustly obtaining accurate and efficient three-dimensional vector road elements.
Drawings
FIG. 1 is a flow chart of a method for identifying three-dimensional road features of a map according to an embodiment of the invention;
FIG. 2(b) is a three-dimensional intensity feature map with line marking elements finally obtained by the map three-dimensional road feature identification method in an embodiment of the present invention; fig. 2(a) and 2(c) are two-dimensional intensity characteristic diagrams marked with reticle elements corresponding to the outlined positions in fig. 2(b), respectively.
Detailed Description
The present invention is described in detail through specific embodiments so that those skilled in the art can better understand its technical approach. It should be understood, however, that the detailed description is provided only for a better understanding of the invention and should not be taken as limiting it. In describing the present invention, the terminology used is for the purpose of description only and is not intended to indicate or imply relative importance.
Example one
The embodiment discloses a map three-dimensional road feature identification method, as shown in fig. 1, comprising the following steps:
s1, generating a two-dimensional intensity characteristic map of a road through laser point cloud data;
s2, establishing a conversion relation between a two-dimensional space and a three-dimensional space, and converting the two-dimensional intensity characteristic graph into a three-dimensional intensity virtual characteristic graph;
s3, acquiring the position and the shape of a two-dimensional marking element in the three-dimensional intensity virtual characteristic diagram based on a deep learning algorithm;
and S4, converting the positions and the shapes of the two-dimensional marking elements into three-dimensional marking elements according to the conversion relation, and generating a three-dimensional vector diagram with the marking elements.
In existing algorithms based on lidar detector output, most characteristic images are constructed from intensity reflection values, but such images are only two-dimensional intensity characteristic maps and lose the three-dimensional geometric characteristics of the lidar data. Although road element features can be extracted and identified directly from the two-dimensional characteristic map, their three-dimensional geometric characteristics cannot be obtained, so this approach is difficult to apply directly and accurately to the construction of vector element features in a high-precision map. In this embodiment, the lidar data is projected into a three-dimensional virtual intensity characteristic image; compared with existing two-dimensional intensity characteristic images, this image has three-dimensional virtual geometric characteristics and records the correspondence between the characteristic map and the three-dimensional laser points, so marking elements are more prominent in the map and easier to identify.
The method for generating the two-dimensional intensity characteristic diagram comprises the following steps:
s1.1, filtering the laser point of the laser radar detector to obtain a ground laser point.
S1.2, acquiring the spatial range of the three-dimensional ground laser point of the area to be processed according to the ground laser point, and rasterizing the spatial range.
S1.3, calculating the attribute value F_ij of each grid. The attribute value F_ij is determined by the point density, the planar position, the elevation difference and the gray value of all the laser points falling within the three-dimensional space of the grid, and is calculated as:

[the two formulas appear only as images in the source]

wherein w_ijk is the weight of the k-th laser point in grid (i, j); I_ijk is the intensity value of the k-th laser point in grid (i, j); n_ij is the total number of laser points in grid (i, j); GSD is the ground laser point spacing; d_ijk is the distance of the k-th laser point in grid (i, j) from the grid centre; Δz_max and Δz_min are the maximum and minimum height differences of the laser points in grid (i, j); z_max and z_min are the maximum and minimum values, in the vertical direction, of the three-dimensional ground spatial range; and α and β are the weighting factors in the horizontal and elevation directions, respectively, whose sum is 1.
S1.4, normalizing the attribute value of each grid, and unifying the attribute values subjected to normalization processing to a [0,255] interval to finally obtain a two-dimensional intensity characteristic diagram of the area to be processed.
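Because the two equations for F_ij appear only as images in the source, their exact form cannot be reproduced. The sketch below implements one plausible reading consistent with the stated definitions — a weighted mean of intensities in which each point's weight combines its horizontal distance to the cell centre (factor α) and its relative elevation (factor β), with α + β = 1 — followed by the step S1.4 normalization to [0, 255]. Treat the weighting as an assumption, not the patent's formula.

```python
import numpy as np

def cell_attribute(intensities, d, dz, gsd, dz_min, dz_max, alpha=0.5, beta=0.5):
    """Hypothetical weighted attribute value F_ij of one grid cell.
    intensities: I_ijk; d: horizontal distance d_ijk of each point to the
    cell centre; dz: per-point height difference; gsd: grid spacing (GSD);
    alpha/beta: horizontal and elevation weighting factors (sum to 1).
    The weighting below is an assumed form; the source shows the
    equations only as images."""
    intensities = np.asarray(intensities, dtype=float)
    # Points nearer the cell centre get a larger horizontal weight
    w_h = np.clip(1.0 - np.asarray(d) / (np.sqrt(2.0) * gsd / 2.0), 0.0, 1.0)
    # Points nearer the minimum height difference get a larger vertical weight
    span = max(dz_max - dz_min, 1e-9)
    w_v = np.clip(1.0 - (np.asarray(dz) - dz_min) / span, 0.0, 1.0)
    w = alpha * w_h + beta * w_v
    return float(np.sum(w * intensities) / max(np.sum(w), 1e-9))

def normalize_grid(f):
    """Step S1.4: map the grid attribute values onto the [0, 255] interval."""
    f = np.asarray(f, dtype=float)
    lo, hi = f.min(), f.max()
    return np.zeros_like(f) if hi == lo else (f - lo) / (hi - lo) * 255.0
```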
The process of converting into the three-dimensional intensity virtual characteristic map is as follows: establish a correspondence between the two-dimensional space grid and the three-dimensional space point cloud, construct topological relation data of the two-dimensional space from the three-dimensional absolute spatial coordinates of the starting point in the grid, the grid spacing, the image resolution, the rotation angle, and the difference between the average elevation of the points in each grid cell and that of the starting point, and convert the two-dimensional intensity characteristic map into the three-dimensional intensity virtual characteristic map through the topological relation data. The construction of the three-dimensional intensity virtual characteristic map lays the data foundation for fast and accurate extraction of three-dimensional vector road element features. Although the three-dimensional intensity virtual characteristic map has now been established, the marking elements in it are still two-dimensional: establishing the map merely places a two-dimensional planar image within a three-dimensional frame. The marking elements therefore still need to be expanded into a three-dimensional structure within the map, which is the purpose of steps S3 and S4 described above.
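The topological relation described above — start-point coordinates, grid spacing, rotation angle and a per-cell elevation offset — amounts to mapping each grid cell back to an absolute 3-D coordinate. A minimal sketch; the function name and exact parameterization are assumptions:

```python
import math

def grid_cell_to_3d(row, col, origin, gsd, yaw, dz):
    """Map grid cell (row, col) to an absolute 3-D coordinate.
    origin: (x0, y0, z0) of the raster start point; gsd: grid spacing;
    yaw: rotation angle of the raster in radians; dz: mean elevation of
    the cell's points relative to the start point. Illustrative only."""
    # Planar offset of the cell centre in the raster frame
    ex = (col + 0.5) * gsd
    ey = (row + 0.5) * gsd
    # Rotate into the absolute frame and translate by the start point
    x = origin[0] + ex * math.cos(yaw) - ey * math.sin(yaw)
    y = origin[1] + ex * math.sin(yaw) + ey * math.cos(yaw)
    return (x, y, origin[2] + dz)
```

Applying this mapping to every cell of the intensity raster yields the point-wise correspondence that the later 2-D-to-3-D marking conversion relies on.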
In step S3, the position and shape of the two-dimensional reticle element are acquired from the three-dimensional intensity virtual feature map, and the method may be divided into two steps: the position of the two-dimensional reticle element and the shape of the two-dimensional reticle element are acquired.
Namely, the method for acquiring the position of the two-dimensional marking element in the three-dimensional intensity virtual characteristic map is as follows: partial images are extracted from the three-dimensional intensity virtual characteristic map to generate a training dataset, i.e. a sample dataset of road marking data is produced from the generated virtual intensity characteristic map. Because producing such a sample dataset is labour-intensive, only a small amount of sample data is produced; before training, the images in the dataset are therefore translated, rotated, sheared, rescaled and denoised, and sample enhancement and augmentation are performed on the images through a generative adversarial network (GAN) and model transfer.
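The classical augmentations named above can be sketched with simple array operations; GAN-based synthesis and model transfer are beyond a few lines and are omitted. A toy version with invented names, restricted to transforms that preserve the pixel values:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(img):
    """Toy augmentation of one intensity image: random translation
    (circular shift), rotation in 90-degree steps, and mirroring.
    A real pipeline would add shear, rescaling, denoising and
    GAN-generated samples, as the text describes."""
    out = np.roll(img, shift=tuple(rng.integers(-2, 3, size=2)), axis=(0, 1))
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        out = out[::-1]  # vertical mirror
    return out
```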
Model training parameters are then adjusted according to the characteristics of the dataset, and the training dataset is trained with a deep learning algorithm to obtain the positions of the two-dimensional marking elements in the three-dimensional intensity virtual characteristic map. In this embodiment, the deep learning algorithm may be one of the YOLO series, Faster R-CNN, SSD or other related algorithms, preferably Faster R-CNN. In this embodiment, the two-dimensional marking elements are marked with a positioning frame, i.e. a rectangular bounding box is drawn around the marking to be identified. Of course, the positioning frame may have other shapes, and the positions of the marking elements may also be indicated by other means, such as displaying the coordinates of the centre point of each marking element.
The road markings include various types of lane lines, arrows, stop lines, speed bumps, characters and the like. The shape of the two-dimensional marking element is obtained with a dynamic template matching algorithm, in this embodiment preferably the normalized cross-correlation (NCC) matching algorithm. The NCC algorithm determines the degree of matching by calculating the cross-correlation between the template and the image to be matched; the search-window position at which the cross-correlation value is maximal gives the position of the template image within the image to be matched. Specifically, in the scheme of this embodiment, a template matching dataset is introduced into the dynamic matching algorithm and matched against the three-dimensional intensity virtual characteristic map to determine the similarity between the two.
The template matching dataset includes an image of each road marking, image-based vector feature data, vector feature categories, and dynamic angle information of the marking. Existing matching algorithms consider only the aspect ratio of the marking data and the template matching similarity, ignoring the influence of the rotation angle on the matching result. The driving route along which the lidar data is collected is not a straight line but a curve with some uncertainty, so omitting the rotation angle from the template matching dataset seriously degrades the accuracy of the matching result. Therefore, this embodiment introduces dynamic angle information into the template matching dataset, bringing the data closer to the actual situation and improving the accuracy of the matching algorithm.
The matching similarity MAX(NCC(x, y, θ)) of the matching image at coordinates (x, y) in the three-dimensional intensity virtual characteristic map is the maximum, over position and dynamic angle, of the normalized cross-correlation:

NCC(x, y, θ) = Σ_{i=0..M} Σ_{j=0..N} [f_{x,y}(i, j) − E(f_{x,y})] · [T(i, j) − E(T)] / sqrt( Σ_{i=0..M} Σ_{j=0..N} [f_{x,y}(i, j) − E(f_{x,y})]² · Σ_{i=0..M} Σ_{j=0..N} [T(i, j) − E(T)]² )

wherein x and y are respectively the abscissa and ordinate, in the horizontal plane, of a marking in the three-dimensional intensity virtual characteristic map; θ is the dynamic angle value in the three-dimensional intensity virtual characteristic map, applied by rotating the template before matching; i and j index positions within the image of the template matching dataset; NCC(x, y, θ) is the normalized cross-correlation matching function; f_{x,y}(i, j) is the matching image in the three-dimensional intensity virtual characteristic map; T(i, j) is the image in the template matching dataset; E(f_{x,y}) is the mean gray value of the matching image at (x, y); E(T) is the mean gray value of the template image; and M and N are the maxima of the abscissa and ordinate of the template image. The value of the matching similarity MAX(NCC(x, y, θ)) lies in the range 0 to 1.
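A dependency-free sketch of this matching step: compute the normalized cross-correlation at every window position for a set of template rotations and keep the maximum. Here the rotation is restricted to 90-degree steps via np.rot90 so the example stays self-contained; the patent's continuous dynamic angle θ would require an interpolating image rotation. All names are illustrative.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation of two equal-size gray images;
    returns a value in [-1, 1], where 1 means a perfect match."""
    f = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((f ** 2).sum() * (t ** 2).sum())
    return float((f * t).sum() / denom) if denom > 0 else 0.0

def best_match(image, template, rotations=(0, 1, 2, 3)):
    """Slide the template over the image, trying each rotation (in
    90-degree steps) at every position; return (score, x, y, k) of the
    best correlation, i.e. an approximation of MAX(NCC(x, y, theta))."""
    best = (-1.0, 0, 0, 0)
    for k in rotations:
        t = np.rot90(template, k)
        h, w = t.shape
        for y in range(image.shape[0] - h + 1):
            for x in range(image.shape[1] - w + 1):
                s = ncc(image[y:y + h, x:x + w], t)
                if s > best[0]:
                    best = (s, x, y, k)
    return best
```

In practice an FFT-based or library implementation (e.g. OpenCV template matching) would replace the brute-force loops for speed.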
Based on the spatial correspondence between grid coordinates and the three-dimensional laser point cloud contained in the three-dimensional virtual intensity image map, the positions and shapes of the two-dimensional marking elements are converted into three-dimensional marking elements according to the conversion relation, yielding the vector features of the three-dimensional road marking elements. These vector features are then placed into the three-dimensional intensity virtual characteristic map according to the converted three-dimensional position information, producing the three-dimensional intensity characteristic map with marking elements. The result is shown in fig. 2: fig. 2(b) is the three-dimensional intensity characteristic map with marking elements finally obtained by the map three-dimensional road feature identification method in this embodiment, while fig. 2(a) and 2(c) are two-dimensional intensity characteristic maps, marked with marking elements, corresponding to the outlined positions in fig. 2(b). The comparison shows that the markings in fig. 2(b) are more intuitive, three-dimensional and clear, and can be applied well to high-precision maps.
Example two
Based on the same inventive concept, the embodiment discloses a map three-dimensional road feature recognition system, which comprises:
the two-dimensional intensity characteristic map generation module is used for generating a two-dimensional intensity characteristic map of the road through laser point cloud data;
the three-dimensional intensity virtual characteristic diagram generating module is used for establishing a conversion relation between a two-dimensional space and a three-dimensional space and converting the two-dimensional intensity characteristic diagram into a three-dimensional intensity virtual characteristic diagram;
the two-dimensional marking element acquisition module is used for acquiring the position and the shape of the two-dimensional marking element in the three-dimensional intensity virtual characteristic diagram;
and the three-dimensional intensity characteristic map generation module is used for converting the positions and the shapes of the two-dimensional marking elements into the three-dimensional marking elements according to the conversion relation and generating the three-dimensional intensity characteristic map with the marking elements.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A map three-dimensional road feature identification method is characterized by comprising the following steps:
s1, generating a two-dimensional intensity characteristic map of a road through laser point cloud data;
s2, establishing a conversion relation between a two-dimensional space and a three-dimensional space, and converting the two-dimensional intensity characteristic graph into a three-dimensional intensity virtual characteristic graph;
s3, acquiring the position and the shape of a two-dimensional marking element in the three-dimensional intensity virtual characteristic diagram;
and S4, converting the position and the shape of the two-dimensional marking element into a three-dimensional marking element according to the conversion relation, and generating a three-dimensional intensity characteristic diagram with the marking element.
2. The map three-dimensional road feature recognition method of claim 1, wherein the method for generating the two-dimensional intensity feature map is as follows:
S1.1, filtering the laser points obtained by a laser radar detector to obtain ground laser points;
S1.2, acquiring the spatial range of the three-dimensional ground laser points of the region to be processed according to the ground laser points, and rasterizing the spatial range;
S1.3, calculating the attribute value of each grid;
S1.4, normalizing the attribute value of each grid to generate the two-dimensional intensity feature map.
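Steps S1.1–S1.4 above can be sketched in code. This is a minimal illustration, not the patented implementation: the function name make_intensity_map is invented, ground filtering (S1.1) is assumed to have already produced the input points, and a plain per-cell mean intensity stands in for the weighted attribute detailed in claim 3.

```python
import numpy as np

def make_intensity_map(points, gsd):
    """Rasterize ground laser points into a normalized 2-D intensity image.

    points: (N, 4) array of [x, y, z, intensity] ground points (post S1.1)
    gsd:    grid cell size (ground sampling distance), in metres
    """
    x_min, y_min = points[:, :2].min(axis=0)
    # Step S1.2: rasterize the spatial range into grid indices.
    cols = ((points[:, 0] - x_min) / gsd).astype(int)
    rows = ((points[:, 1] - y_min) / gsd).astype(int)
    h, w = rows.max() + 1, cols.max() + 1

    # Step S1.3: per-grid attribute = mean intensity of the points in the
    # cell (the patent uses a distance/elevation-weighted mean instead).
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    np.add.at(acc, (rows, cols), points[:, 3])
    np.add.at(cnt, (rows, cols), 1)
    attr = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

    # Step S1.4: normalize the attribute values to [0, 1].
    rng = attr.max() - attr.min()
    return (attr - attr.min()) / rng if rng > 0 else attr
```

The normalized image can then be fed to the 2D-to-3D conversion of step S2.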
3. The map three-dimensional road feature recognition method according to claim 2, wherein the attribute value F_ij of each grid is calculated by the following formulas (reproduced only as images FDA0002428451990000011 and FDA0002428451990000012 in the original publication):

wherein w_ijk is the weight of the kth laser point in grid (i, j); I_ijk is the intensity feature value of the kth laser point in grid (i, j); n_ij is the total number of laser points in grid (i, j); GSD is the ground sampling distance; the quantity shown in image FDA0002428451990000013 is the distance of the kth laser point in grid (i, j) from the grid center; the quantity shown in image FDA0002428451990000014 is the maximum height difference of the laser points in grid (i, j), and the quantity shown in image FDA0002428451990000015 is the minimum height difference of the laser points in grid (i, j); z_max and z_min are respectively the maximum and minimum values of the three-dimensional ground space range in the vertical direction; in the weight expression (image FDA0002428451990000016), α and β are the weighting factors in the horizontal direction and the elevation direction, respectively, and their sum is 1.
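The attribute formulas of claim 3 appear only as images in this publication, so their exact form is not recoverable from the text. Under the stated definitions, one plausible reading is a weighted mean of the point intensities, with per-point weights combining a horizontal (distance-to-center) term and an elevation term through α and β. The sketch below implements that assumed reading; grid_attribute and the specific weight expressions are illustrative, not the patented formulas.

```python
import numpy as np

def grid_attribute(pts, center, z_range, alpha=0.5, beta=0.5):
    """Weighted intensity attribute of one grid cell (assumed reading).

    pts:     (n, 4) array [x, y, z, intensity] of the points in the cell
    center:  (x, y) of the cell centre
    z_range: (z_min, z_max) of the whole three-dimensional ground space
    alpha, beta: horizontal / elevation weighting factors, alpha + beta = 1
    """
    # Horizontal term: points closer to the cell centre weigh more.
    d = np.hypot(pts[:, 0] - center[0], pts[:, 1] - center[1])
    w_h = 1.0 - d / (d.max() + 1e-9)

    # Elevation term: points with less height spread above the cell
    # minimum weigh more, scaled by the global vertical extent.
    dz = pts[:, 2] - pts[:, 2].min()
    span = (z_range[1] - z_range[0]) or 1.0
    w_z = 1.0 - dz / span

    w = alpha * w_h + beta * w_z
    # Weighted mean of intensities: F_ij = sum(w * I) / sum(w).
    return float(np.sum(w * pts[:, 3]) / np.sum(w))
```

When all points coincide with the cell centre at the same height, the weights are equal and the attribute reduces to the plain mean intensity.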
4. The map three-dimensional road feature recognition method according to any one of claims 1 to 3, wherein the conversion into the three-dimensional intensity virtual feature map proceeds as follows: establishing a correspondence between the two-dimensional space grid and the three-dimensional space point cloud; constructing topological relation data of the two-dimensional space from the three-dimensional absolute space coordinate value of the starting point position in the grid, the grid spacing, the image resolution, the rotation angle, and the difference between the average elevation value of each point in the grid and that of the starting point; and converting the two-dimensional intensity feature map into the three-dimensional intensity virtual feature map through the topological relation data.
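The grid-to-world conversion described in claim 4 (start-point coordinates, grid spacing, rotation angle, elevation offset) can be illustrated with a small planar transform. This is a minimal sketch assuming a single rotation about the vertical axis; pixel_to_world is an invented helper name, not part of the patent.

```python
import math

def pixel_to_world(row, col, origin, gsd, theta, dz):
    """Map a 2-D grid cell back to a 3-D point.

    origin: (x0, y0, z0) absolute coordinates of the grid start point
    gsd:    grid spacing (image resolution on the ground)
    theta:  rotation angle of the grid relative to the world frame, radians
    dz:     difference between the cell's mean elevation and the start point
    """
    # Local planar offsets of the cell within the grid.
    u, v = col * gsd, row * gsd
    # Rotate the local offsets into the world frame, then translate.
    x = origin[0] + u * math.cos(theta) - v * math.sin(theta)
    y = origin[1] + u * math.sin(theta) + v * math.cos(theta)
    z = origin[2] + dz
    return (x, y, z)
```

Applying this mapping to every cell of the two-dimensional intensity feature map yields the three-dimensional intensity virtual feature map; the inverse mapping is what step S4 uses to lift detected 2-D marking elements back into 3-D.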
5. The method for identifying three-dimensional road features on a map according to claim 4, wherein the method for acquiring the position of the two-dimensional marking element in step S3 comprises: extracting partial images from the three-dimensional intensity virtual feature map to generate a data training set, and training a model on the data training set through a deep learning algorithm to obtain the position of the two-dimensional marking element in the three-dimensional intensity virtual feature map.
6. The method as claimed in claim 5, wherein before training on the data training set, the images in the training set need to be translated, rotated, sheared, scaled and denoised, and the samples need to be enhanced and augmented through a generative adversarial network (GAN) and model transfer.
7. The method for identifying three-dimensional road features on a map according to claim 4, wherein in step S3, the shape of the two-dimensional marking element is obtained by using a dynamic simulation matching algorithm.
8. The method for identifying three-dimensional road features on a map according to claim 7, wherein the dynamic simulation matching algorithm introduces a template matching data set, matches it against the three-dimensional intensity virtual feature map, and determines the similarity of the two; the template matching data set comprises an image of each road marking, vector feature data based on the image, the vector feature category, and the dynamic angle information of the marking.
9. The map three-dimensional road feature recognition method according to claim 8, wherein the similarity MAX(NCC(x, y, θ)) is calculated from the normalized cross-correlation (the formula appears as image FDA0002428451990000021 in the original publication; its standard form is):

NCC(x, y, θ) = [ Σ_{i=1..M} Σ_{j=1..N} (f_θ(x+i, y+j) − E_{x,y}(f)) · (t(i, j) − E(t)) ] / sqrt( [ Σ_{i=1..M} Σ_{j=1..N} (f_θ(x+i, y+j) − E_{x,y}(f))² ] · [ Σ_{i=1..M} Σ_{j=1..N} (t(i, j) − E(t))² ] )

wherein x and y are respectively the abscissa and the ordinate, in the horizontal plane, of the marking in the three-dimensional intensity virtual feature map; θ is the dynamic angle value in the three-dimensional intensity virtual feature map; i and j are the position indices of the marking within the image of the template matching data set; NCC(x, y, θ) is the normalized cross-correlation matching function; f_θ(x+i, y+j) (image FDA0002428451990000022) is the matching image in the three-dimensional intensity virtual feature map; t(i, j) is the image in the template matching data set; E_{x,y}(f) (image FDA0002428451990000023) is the mean gray value of the matching image at (x, y); E(t) is the mean gray value of the image in the template matching data set; and M and N are respectively the maxima of the abscissa and ordinate of the template matching data set image.
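A brute-force version of the normalized cross-correlation search of claims 8 and 9 can be sketched as follows, with the dynamic angle θ represented by a list of pre-rotated templates. ncc and best_match are illustrative names only; a production system would use an FFT-based or pyramid search rather than this exhaustive sliding loop.

```python
import numpy as np

def ncc(f_patch, t):
    """Normalized cross-correlation between an image patch and a template."""
    fd = f_patch - f_patch.mean()   # subtract E(f) over the patch
    td = t - t.mean()               # subtract E(t) over the template
    denom = np.sqrt((fd ** 2).sum() * (td ** 2).sum())
    return float((fd * td).sum() / denom) if denom > 0 else 0.0

def best_match(image, templates):
    """Slide every (pre-rotated) template over the image and return the
    (row, col, template_index) triple that maximizes NCC — i.e. the
    argmax behind MAX(NCC(x, y, theta))."""
    th, tw = templates[0].shape
    best = (-2.0, 0, 0, 0)  # NCC lies in [-1, 1], so -2 is a safe floor
    for k, t in enumerate(templates):          # dynamic-angle search
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                s = ncc(image[r:r + th, c:c + tw], t)
                if s > best[0]:
                    best = (s, r, c, k)
    return best[1:]
```

An exact copy of the template embedded in the image scores NCC = 1 and is therefore returned as the best position.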
10. A map three-dimensional road feature recognition system, comprising:
the two-dimensional intensity feature map generation module, used for generating a two-dimensional intensity feature map of the road from laser point cloud data;
the three-dimensional intensity virtual feature map generation module, used for establishing a conversion relation between a two-dimensional space and a three-dimensional space and converting the two-dimensional intensity feature map into a three-dimensional intensity virtual feature map;
the two-dimensional marking element acquisition module, used for acquiring the position and the shape of the two-dimensional marking element in the three-dimensional intensity virtual feature map;
and the three-dimensional intensity feature map generation module, used for converting the position and the shape of the two-dimensional marking element into a three-dimensional marking element according to the conversion relation and generating a three-dimensional intensity feature map with the marking element.
CN202010228450.1A 2020-03-27 2020-03-27 Map three-dimensional road feature identification method and system Active CN111445578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010228450.1A CN111445578B (en) 2020-03-27 2020-03-27 Map three-dimensional road feature identification method and system


Publications (2)

Publication Number Publication Date
CN111445578A true CN111445578A (en) 2020-07-24
CN111445578B CN111445578B (en) 2023-03-10

Family

ID=71650907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010228450.1A Active CN111445578B (en) 2020-03-27 2020-03-27 Map three-dimensional road feature identification method and system

Country Status (1)

Country Link
CN (1) CN111445578B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134475A1 (en) * 2020-12-25 2022-06-30 深圳市慧鲤科技有限公司 Point cloud map construction method and apparatus, electronic device, storage medium and program

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8411080B1 (en) * 2008-06-26 2013-04-02 Disney Enterprises, Inc. Apparatus and method for editing three dimensional objects
CN103500338A (en) * 2013-10-16 2014-01-08 厦门大学 Road zebra crossing automatic extraction method based on vehicle-mounted laser scanning point cloud
CN104766058A (en) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and device for obtaining lane line
CN107993282A (en) * 2017-11-06 2018-05-04 江苏省测绘研究所 One kind can dynamically measure live-action map production method
CN108052869A (en) * 2017-11-23 2018-05-18 深圳市易成自动驾驶技术有限公司 Lane detection method, apparatus and computer readable storage medium
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium
CN110243375A (en) * 2019-06-26 2019-09-17 汕头大学 Method that is a kind of while constructing two-dimensional map and three-dimensional map
CN110320504A (en) * 2019-07-29 2019-10-11 浙江大学 A kind of unstructured road detection method based on laser radar point cloud statistics geometrical model


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DESHENG XIE: "Obstacle detection and tracking method for autonomous vehicle based on three-dimensional LiDAR", 《INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS》 *
KUANG HAIWEI: "Research on Lane Line Detection Algorithms Based on Deep Learning" ("基于深度学习的车道线检测算法研究"), 《信息科技辑》 (Information Science and Technology Series) *
XUAN HANYU ET AL.: "A Robust Multi-Lane Line Detection Algorithm" ("一种鲁棒性的多车道线检测算法"), 《计算机科学》 (Computer Science) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134475A1 (en) * 2020-12-25 2022-06-30 深圳市慧鲤科技有限公司 Point cloud map construction method and apparatus, electronic device, storage medium and program
JP2023510474A (en) * 2020-12-25 2023-03-14 シェンチェン テトラス.エーアイ テクノロジー カンパニー リミテッド POINT CLOUD MAP CONSTRUCTION METHOD AND DEVICE, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM
JP7316456B2 (en) 2020-12-25 2023-07-27 シェンチェン テトラス.エーアイ テクノロジー カンパニー リミテッド POINT CLOUD MAP CONSTRUCTION METHOD AND DEVICE, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM

Also Published As

Publication number Publication date
CN111445578B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN109685066B (en) Mine target detection and identification method based on deep convolutional neural network
CN109544612B (en) Point cloud registration method based on feature point geometric surface description
US9846946B2 (en) Objection recognition in a 3D scene
CN103714541B (en) Method for identifying and positioning building through mountain body contour area constraint
CN108868268B (en) Unmanned parking space posture estimation method based on point-to-surface distance and cross-correlation entropy registration
Chen et al. Building change detection with RGB-D map generated from UAV images
Yang et al. Semiautomated building facade footprint extraction from mobile LiDAR point clouds
CN104536009A (en) Laser infrared composite ground building recognition and navigation method
CN113139453B (en) Orthoimage high-rise building base vector extraction method based on deep learning
CN112257605B (en) Three-dimensional target detection method, system and device based on self-labeling training sample
Wang et al. Window detection from mobile LiDAR data
San et al. Building extraction from high resolution satellite images using Hough transform
CN103136525B (en) High-precision positioning method for special-shaped extended target by utilizing generalized Hough transformation
Cheng et al. Building boundary extraction from high resolution imagery and lidar data
CN104102909B (en) Vehicle characteristics positioning and matching process based on lenticular information
CN108257155B (en) Extended target stable tracking point extraction method based on local and global coupling
CN110674674A (en) Rotary target detection method based on YOLO V3
CN112825192A (en) Object identification system and method based on machine learning
CN107316328A (en) A kind of closed loop detection method based on two dimensional laser scanning instrument Corner Feature
Wang Automatic extraction of building outline from high resolution aerial imagery
CN114549549B (en) Dynamic target modeling tracking method based on instance segmentation in dynamic environment
CN108573280A (en) A kind of unmanned boat independently passes through the method for bridge
Yao et al. Automatic extraction of road markings from mobile laser-point cloud using intensity data
CN111445578B (en) Map three-dimensional road feature identification method and system
CN110276371B (en) Container corner fitting identification method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant