CN117930179A - Laser point cloud data association method and device - Google Patents

Laser point cloud data association method and device

Info

Publication number
CN117930179A
Authority
CN
China
Prior art keywords
point
target
class
point cloud
points
Prior art date
Legal status
Pending
Application number
CN202211255991.9A
Other languages
Chinese (zh)
Inventor
张振林
何世政
王东科
唐培培
Current Assignee
China Automotive Innovation Corp
Original Assignee
China Automotive Innovation Corp
Priority date
Filing date
Publication date
Application filed by China Automotive Innovation Corp filed Critical China Automotive Innovation Corp
Priority to CN202211255991.9A priority Critical patent/CN117930179A/en
Publication of CN117930179A publication Critical patent/CN117930179A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to the technical field of laser radars, and in particular to a laser point cloud data association method and device. The method comprises the following steps: acquiring a current frame point cloud and a history point cloud map of a laser radar, where the history point cloud map comprises at least one frame of history frame point cloud; determining a target point in the current frame point cloud and calculating a feature value of the target point, where the target point is any point in the current frame point cloud; determining a target category of the target point based on the feature value of the target point; and associating the target point with the history point cloud map based on the target category of the target point. Because the feature category (i.e. the target category) of each point is obtained without fitting a large number of points in the current frame point cloud, the time for determining the feature category of each point can be shortened and the real-time performance of extracting feature points from the point cloud improved; the universality of feature point extraction is improved, and the extracted features better reflect the actual environment; and by performing data association based on the feature categories of the points, the accuracy of data association can be improved.

Description

Laser point cloud data association method and device
Technical Field
The application relates to the technical field of laser radar, and in particular to a laser point cloud data association method and device.
Background
The 3D laser radar gradually becomes one of the main sensors in the automatic driving field due to the advantages of long measurement distance, high precision and the like, and plays an important role in sensing, positioning and mapping tasks.
To perform 3D-lidar-based online localization and mapping (Simultaneous Localization and Mapping, SLAM) tasks, the number of laser scanning lines usually needs to be high enough to allow a full understanding of the scene. However, the more laser lines there are, the more laser points the point cloud of the 3D laser radar contains, which delays the processing of the SLAM task and reduces its real-time performance. Therefore, at present, feature points are extracted from the point cloud and the SLAM task is completed based on these feature points, so that the real-time performance of SLAM task processing is guaranteed.
In the prior art, to extract feature points from a point cloud, the curvature of a laser point (i.e. a point in the point cloud) is generally calculated from the distance differences between that point and several adjacent points on the same laser scanning line, and the laser point is classified as an edge point or a surface point based on the curvature. This feature point extraction process requires the laser radar to provide scanning line information, so its universality is low; and since only information from the same scan line is used, the extracted feature points do not sufficiently reflect the actual environment. In other prior art, the feature category of each point in the point cloud, namely edge point or surface point, is determined by fitting the point cloud data. However, fitting a large amount of point cloud data is time-consuming, and the real-time performance of feature point extraction is poor.
In addition, in the SLAM task, after the feature points in the current frame point cloud (i.e. the current feature points) are extracted, they need to be data-associated with the historical feature points. Specifically, a current feature point is projected onto a reference map, and its corresponding point in the reference map is determined through a distance search, where the reference map is built from the feature points of the history frame point clouds (i.e. the historical feature points). When the initial pose of the laser radar at the time the current frame point cloud is acquired deviates greatly from the historical pose, the distance search is prone to erroneous associations, which degrades the subsequent point cloud association.
Therefore, it is necessary to provide a laser point cloud data association method and device that obtain the feature category of each point without fitting a large number of points in the point cloud, thereby shortening the time for determining the feature category of each point and improving the real-time performance of feature point extraction; that avoid determining the feature category of each point based on laser scanning lines, thereby improving the universality of feature point extraction and better reflecting the actual environment; and that improve the accuracy of data association.
Disclosure of Invention
The embodiments of the application provide a laser point cloud data association method and device, which obtain the feature category of each point without fitting a large number of points in the point cloud, shorten the time for determining the feature category of each point, and improve the real-time performance of extracting feature points from the point cloud; which avoid determining the feature category of each point based on laser scanning lines, improve the universality of feature point extraction, and better reflect the actual environment; and which improve the accuracy of data association.
In a first aspect, an embodiment of the present application provides a laser point cloud data association method, where the method includes:
acquiring a current frame point cloud and a history point cloud image of a laser radar; the history point cloud image comprises at least one frame of history frame point cloud;
Determining a target point in the current frame point cloud; calculating the characteristic value of the target point; the target point is any point in the current frame point cloud;
determining a target category of the target point based on the characteristic value of the target point;
and associating the target point with the historical point cloud image based on the target class of the target point.
In some alternative embodiments, determining a target class of the target point based on the characteristic value of the target point includes:
Calculating a shape tolerance of the target point based on the characteristic value of the target point;
And determining the target category of the target point according to the shape tolerance of the target point.
In some alternative embodiments, the shape tolerance includes at least one of straightness, flatness, and sphericity; the target class comprises at least one of a straight line class, a plane class and a spherical class;
determining the target class of the target point according to the shape tolerance of the target point, comprising:
Determining that the target class of the target point includes the sphere class when the sphericity of the target point reaches a sphere threshold;
Determining that the target class of the target point includes the plane class when the flatness of the target point reaches a plane threshold;
when the straightness of the target point reaches a straight line threshold, determining that the target class of the target point includes the straight line class.
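As a concrete illustration of the classification above: one common way to obtain straightness, flatness, and sphericity for a point is from the eigenvalues of the covariance matrix of its local neighborhood. The embodiment does not fix the formulas or thresholds, so the expressions and threshold values below are assumptions, not the claimed implementation:

```python
import numpy as np

def shape_tolerances(neighbors):
    """Straightness, flatness, and sphericity of a point, derived from the
    eigenvalues of the covariance matrix of its (N, 3) neighborhood."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    # Eigenvalues sorted descending: l1 >= l2 >= l3
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    straightness = (l1 - l2) / l1
    flatness = (l2 - l3) / l1
    sphericity = l3 / l1
    return straightness, flatness, sphericity

def classify(straightness, flatness, sphericity,
             line_thr=0.8, plane_thr=0.6, sphere_thr=0.3):
    """Assign target classes by comparing each tolerance to its threshold.
    Threshold values are illustrative, not taken from the patent."""
    classes = []
    if sphericity >= sphere_thr:
        classes.append("sphere")
    if flatness >= plane_thr:
        classes.append("plane")
    if straightness >= line_thr:
        classes.append("line")
    return classes
```

For collinear neighbors the largest eigenvalue dominates and straightness approaches 1; for coplanar neighbors the two largest eigenvalues are comparable and flatness dominates.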
In some alternative embodiments,
The plane class comprises at least one of a vertical plane class and a horizontal plane class;
when the flatness of the target point reaches a plane threshold, determining that the target class of the target point includes the plane class includes:
When the flatness of the target point reaches the plane threshold value, calculating a normal vector of the target point;
determining that the target class of the target point includes the vertical class when the normal vector reaches a first vector threshold;
when the normal vector reaches a second vector threshold, determining that the target class of the target point includes the horizontal plane class.
In some alternative embodiments, the target class further comprises an intersecting line class, the method further comprising:
Determining a plurality of planes based on the normal vector and a first location of a plane point; the plane points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the plane category; the first position is the position of the plane point in the current frame point cloud;
Calculating a center of gravity distance between two planes of the plurality of planes;
Calculating an intersection line of the two planes under the condition that the gravity center distance meets a first distance threshold value;
Determining an intersection line point from the current frame point cloud based on the intersection line; the intersection line points are used for representing points of the current frame point cloud, wherein the target category comprises the intersection line class; the distance between the intersection line point and the intersection line satisfies a second distance threshold.
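The intersection-line steps above can be sketched as follows. Representing each plane by a unit normal and a point (e.g. the centroid of its plane points) is an assumption, as is the distance threshold value:

```python
import numpy as np

def plane_intersection_line(n1, p1, n2, p2):
    """Intersection line of two planes, each given by a unit normal and a
    point on it. Returns (point_on_line, unit_direction), or None when the
    planes are (nearly) parallel."""
    d = np.cross(n1, n2)
    if np.linalg.norm(d) < 1e-8:
        return None  # parallel planes have no intersection line
    # Solve for a point satisfying both plane equations n.x = n.p,
    # pinned along the line direction by d.x = 0.
    A = np.vstack([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    x0 = np.linalg.solve(A, b)
    return x0, d / np.linalg.norm(d)

def points_near_line(points, x0, d, dist_thr=0.05):
    """Select intersection-line points: cloud points whose distance to the
    line is within the (assumed) second distance threshold."""
    rel = points - x0
    dists = np.linalg.norm(np.cross(rel, d), axis=1)
    return points[dists <= dist_thr]
```

For example, the planes z = 0 and x = 0 intersect along the y-axis, and only cloud points near that axis would be labeled intersection-line points.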
In some alternative embodiments, the straight line class includes at least one of a pole class and a beam class;
when the straightness of the target point reaches a straight line threshold, determining that the target class of the target point includes the straight line class includes:
When the straightness of the target point reaches the straight line threshold value, calculating a main direction of the target point;
When the main direction reaches a first direction threshold value, determining that the target class of the target point comprises the upright class;
When the primary direction reaches a second direction threshold, determining that the target class of the target point includes the crossbeam class.
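Analogously to the plane case, the first and second direction thresholds can plausibly be read as tests on the z-component of the unit principal direction (the largest-eigenvalue eigenvector of the neighborhood covariance). The threshold values below are illustrative assumptions:

```python
import numpy as np

def line_subclass(direction, pole_thr=0.9, beam_thr=0.15):
    """Subdivide a line point by the z-component of its unit principal
    direction. Near-vertical -> pole (e.g. a lamp post); near-horizontal
    -> beam. Thresholds are illustrative assumptions."""
    dz = abs(direction[2] / np.linalg.norm(direction))
    if dz >= pole_thr:
        return "pole"
    if dz <= beam_thr:
        return "beam"
    return "line"  # a slanted line keeps the generic class
```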
In some alternative embodiments, the target class further comprises a corner class; the method further comprises the steps of:
Determining a plurality of straight lines based on the primary direction and the second position of the straight line point; the straight line points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the straight line category; the second position is the position of the straight line point in the current frame point cloud;
Calculating the center-of-gravity distance of two straight lines in the plurality of straight lines;
Determining an intersection point of the two straight lines under the condition that the center-of-gravity distance of the straight lines meets a third distance threshold;
Determining an inflection point from the current frame point cloud based on the intersection point; the inflection point is used for representing points in the current frame point cloud, wherein the target category comprises the corner category; the distance between the inflection point and the intersection point satisfies a fourth distance threshold.
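Since two estimated 3D lines rarely intersect exactly, the "intersection point" above can be taken as the midpoint of the shortest segment joining the two lines, a standard construction; this choice is an assumption, as the embodiment does not specify it:

```python
import numpy as np

def line_intersection_point(p1, d1, p2, d2):
    """Pseudo-intersection of two 3D lines, each given by a point p and a
    unit direction d: the midpoint of the shortest segment between them.
    For truly intersecting lines this is the intersection point itself.
    Returns None for (nearly) parallel lines."""
    n = np.cross(d1, d2)
    n2 = n @ n
    if n2 < 1e-12:
        return None
    t1 = np.cross(p2 - p1, d2) @ n / n2
    t2 = np.cross(p2 - p1, d1) @ n / n2
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

Cloud points within the fourth distance threshold of this point would then be labeled corner-class inflection points, mirroring the intersection-line selection.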
In some optional embodiments, determining the target point in the current frame point cloud includes:
extracting a plurality of ground points and a plurality of non-ground points from the current frame point cloud by adopting a fitting technology;
The target point is determined from the plurality of non-ground points.
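RANSAC plane fitting is one common "fitting technology" for separating ground from non-ground points; the embodiment does not name a specific method, so the following is an assumed sketch:

```python
import numpy as np

def split_ground(points, dist_thr=0.2, iters=100, rng=None):
    """Split an (N, 3) cloud into (ground, non_ground) by RANSAC plane
    fitting: repeatedly fit a plane through 3 random points and keep the
    plane with the most inliers within dist_thr."""
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        nn = np.linalg.norm(n)
        if nn < 1e-9:
            continue  # degenerate (collinear) sample
        n /= nn
        mask = np.abs((points - a) @ n) <= dist_thr
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return points[best_mask], points[~best_mask]
```

In a typical road scene the dominant plane is the ground, so the largest inlier set approximates the ground points and the remainder are the non-ground candidates from which target points are drawn.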
In some optional embodiments, associating the target point with the history point cloud map based on the target class of the target point includes:
According to the initial pose of the laser radar when the current frame point cloud is obtained, mapping the current frame point cloud to the historical point cloud map to obtain a mapping point of the target point in the historical point cloud map;
determining a set of adjacent history points of the mapping points in the history point cloud picture;
calculating the feature quantity and the intensity of the mapping points according to the target category of the target point; the feature quantity includes at least one of a curvature, a normal vector, and a principal direction;
Acquiring the characteristic quantity and the intensity of each history point in the adjacent history point set; calculating the characteristic quantity difference and the intensity difference between the mapping point and each history point in the adjacent history point set;
determining a target history point from the set of adjacent history points based on the feature quantity differences and the intensity differences between the map points and each history point in the set of adjacent history points; and associating the target point with the target history point.
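The association steps above can be sketched end-to-end as follows; the neighbor count k, the weighting of feature versus intensity differences, and the flat feature-vector encoding of curvature/normal/principal direction are all illustrative assumptions:

```python
import numpy as np

def associate(target_xyz, target_feat, target_intensity,
              hist_xyz, hist_feat, hist_intensity,
              pose, k=5, w_feat=1.0, w_int=0.1):
    """Associate one target point with a history point. `pose` is a 4x4
    transform (the lidar's initial pose) from the current frame to the map
    frame; `*_feat` are feature vectors bundling the class-dependent
    quantities. Returns the index of the chosen history point."""
    # 1. Map the target point into the history point cloud map.
    mapped = pose[:3, :3] @ target_xyz + pose[:3, 3]
    # 2. Find the k nearest history points of the mapped point (brute force;
    #    a k-d tree would be used at scale).
    d2 = np.sum((hist_xyz - mapped) ** 2, axis=1)
    nbr = np.argsort(d2)[:k]
    # 3. Score the neighbors by feature-quantity and intensity differences.
    feat_diff = np.linalg.norm(hist_feat[nbr] - target_feat, axis=1)
    int_diff = np.abs(hist_intensity[nbr] - target_intensity)
    cost = w_feat * feat_diff + w_int * int_diff
    # 4. Associate with the neighbor of minimal combined difference.
    return nbr[np.argmin(cost)]
```

Note how this differs from a pure distance search: a history point that is slightly farther away but matches the target point's feature quantities and intensity is preferred, which is what makes the association robust to a poor initial pose.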
In a second aspect, an embodiment of the present application provides a laser point cloud data association apparatus, where the apparatus includes:
The first acquisition module is used for acquiring the current frame point cloud and the historical point cloud image of the laser radar; the history point cloud image comprises at least one frame of history frame point cloud;
a processing module, configured to determine a target point in the current frame point cloud; calculating the characteristic value of the target point; the target point is any point in the current frame point cloud;
a determining module, configured to determine a target category of the target point based on the feature value of the target point;
And the association module is used for associating the target point with the history point cloud image based on the target category of the target point.
In some alternative embodiments, the determining module includes:
a first determination submodule for calculating a shape tolerance of the target point based on the characteristic value of the target point;
And the second determination submodule is used for determining the target category of the target point according to the shape tolerance of the target point.
In some alternative embodiments, the shape tolerance includes at least one of straightness, flatness, and sphericity; the target class comprises at least one of a straight line class, a plane class and a spherical class;
A second determination sub-module, comprising:
a sphere class determination module, configured to determine that the target class of the target point includes the sphere class when the sphericity of the target point reaches a sphere threshold;
A plane class determination module, configured to determine that the target class of the target point includes the plane class when the flatness of the target point reaches a plane threshold;
And the straight line type determining module is used for determining that the target type of the target point comprises the straight line type when the straightness of the target point reaches a straight line threshold value.
In some alternative embodiments, the plane class includes at least one of a vertical plane class and a horizontal plane class;
a plane class determination module comprising:
the normal vector determining module is used for calculating the normal vector of the target point when the flatness of the target point reaches the plane threshold value;
a vertical plane class determination module, configured to determine that the target class of the target point includes the vertical plane class when the normal vector reaches a first vector threshold;
and the horizontal plane class determining module is used for determining that the target class of the target point comprises the horizontal plane class when the normal vector reaches a second vector threshold value.
In some alternative embodiments, the target class further comprises an intersecting line class, the apparatus further comprising:
A plane determination module for determining a plurality of planes based on the normal vector and a first position of a plane point; the plane points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the plane category; the first position is the position of the plane point in the current frame point cloud;
The plane gravity center calculating module is used for calculating the gravity center distance between two planes in the plurality of planes;
the intersection line calculation module is used for calculating an intersection line of the two planes under the condition that the gravity center distance meets a first distance threshold value;
An intersecting line point determining module, configured to determine an intersecting line point from the current frame point cloud based on the intersecting line; the intersection line points are used for representing points of the current frame point cloud, wherein the target category comprises the intersection line class; the distance between the intersection line point and the intersection line satisfies a second distance threshold.
In some alternative embodiments, the straight line class includes at least one of a pole class and a beam class;
a straight line class determination module comprising:
A main direction calculation module, configured to calculate a main direction of the target point when the straightness of the target point reaches the straight line threshold;
The pole setting class determining module is used for determining that the target class of the target point comprises the pole setting class when the main direction reaches a first direction threshold value;
And the beam class determining module is used for determining that the target class of the target point comprises the beam class when the main direction reaches a second direction threshold value.
In some alternative embodiments, the target class further comprises a corner class; the apparatus further comprises:
A straight line determining module, configured to determine a plurality of straight lines based on the main direction and the second position of the straight line point; the straight line points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the straight line category; the second position is the position of the straight line point in the current frame point cloud;
The straight line center of gravity calculation module is used for calculating the straight line center of gravity distance of two straight lines in the plurality of straight lines;
The intersection point determining module is used for determining an intersection point of the two straight lines under the condition that the center-of-gravity distance of the straight lines meets a third distance threshold value;
The inflection point determining module is used for determining an inflection point from the current frame point cloud based on the intersection point; the inflection point is used for representing points in the current frame point cloud, wherein the target category comprises the corner category; the distance between the inflection point and the intersection point satisfies a fourth distance threshold.
In some alternative embodiments, the processing module includes:
the non-ground point extraction module is used for extracting a plurality of ground points and a plurality of non-ground points from the current frame point cloud by adopting a fitting technology;
and the target point determining module is used for determining the target point from the plurality of non-ground points.
In some alternative embodiments, the association module includes:
The mapping module is used for mapping the current frame point cloud to the history point cloud graph according to the initial pose of the laser radar when the current frame point cloud is acquired, so as to obtain the mapping point of the target point in the history point cloud graph;
The adjacent point determining module is used for determining an adjacent historical point set of the mapping points in the historical point cloud picture;
The feature calculation module is used for calculating the feature quantity and the intensity of the mapping points according to the target category of the target point; the feature quantity includes at least one of a curvature, a normal vector, and a principal direction;
A fourth obtaining module, configured to obtain a feature quantity and an intensity of each history point in the adjacent history point set; calculating the characteristic quantity difference and the intensity difference between the mapping point and each history point in the adjacent history point set;
A target history point determining module configured to determine a target history point from the set of adjacent history points based on the feature quantity differences and the intensity differences between the map points and each history point in the set of adjacent history points; and associating the target point with the target history point.
In a third aspect, an embodiment of the present application provides a computer storage medium, where at least one instruction or at least one section of program is stored, where the at least one instruction or at least one section of program is loaded and executed by a processor to implement the laser point cloud data association method described above.
In a fourth aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor and a memory, where at least one instruction or at least one section of program is stored in the memory, where the at least one instruction or the at least one section of program is loaded by the processor and executes the laser point cloud data association method described above.
In a fifth aspect, embodiments of the present application provide a computer program product, the computer program product comprising a computer program, the computer program being stored in a readable storage medium, at least one processor of the computer device reading and executing the computer program from the readable storage medium, causing the computer device to perform the laser point cloud data association method described above.
The method acquires a current frame point cloud and a history point cloud map of a laser radar, where the history point cloud map comprises at least one frame of history frame point cloud; determines a target point in the current frame point cloud and calculates the feature value of the target point, where the target point is any point in the current frame point cloud; determines a target category of the target point based on the feature value of the target point; and associates the target point with the history point cloud map based on the target category of the target point. In this way, the target category of each point, i.e. its feature category, is determined from the feature values of the points in the current frame point cloud, so the feature category is obtained without fitting a large number of points in the current frame point cloud; this shortens the time for determining the feature category of each point and improves the real-time performance of extracting feature points from the point cloud. Determining the feature category of each point based on laser scanning lines is also avoided, which improves the universality of feature point extraction and better reflects the actual environment; and by performing data association based on the feature categories of the points, the accuracy of data association can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario diagram of a laser point cloud data association method provided by an embodiment of the present application;
Fig. 2 is a schematic flow chart of a laser point cloud data association method according to an embodiment of the present application;
fig. 3 is a schematic diagram of feature classification in a laser point cloud data association method according to an embodiment of the present application;
Fig. 4 is a schematic flow chart of determining a target class based on a feature value of a target point in the laser point cloud data association method according to the embodiment of the present application;
Fig. 5 is a schematic flow chart of determining a target class based on a shape tolerance of a target point in a laser point cloud data association method according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of determining intersection line type target points in a laser point cloud data association method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of determining corner class target points in a laser point cloud data association method according to an embodiment of the present application;
fig. 8 is a schematic flow chart of data association in a laser point cloud data association method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a laser point cloud data association device according to an embodiment of the present application;
Fig. 10 is a block diagram of a hardware structure of an electronic device for implementing a laser point cloud data association method according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. In the description of the present invention, it should be understood that the terms "first", "second", "third", "fourth", etc. in the description, the claims, and the above figures are used for distinguishing between different objects rather than describing a particular sequential order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus.
Before the method of the embodiment of the application is introduced, an application scene of the laser point cloud data association method is first introduced by way of example.
Referring to fig. 1, fig. 1 is an application scenario diagram of a laser point cloud data association method according to an embodiment of the present application. The point cloud image output by the laser radar is obtained after characteristic points in multi-frame point cloud are associated. Specifically, extracting characteristic points in the current frame point cloud T1; and matching the characteristic points in the current frame point cloud T1 with the characteristic points in the history point cloud image T2.
Extracting the feature points in the current frame point cloud T1 specifically comprises two stages: region segmentation and feature extraction. The region segmentation stage mainly completes feature classification and identification, such as determining whether a point belongs to the edge points or the surface points; the feature extraction stage mainly completes the extraction of the various feature points.
In the prior art, to extract feature points from a point cloud, the curvature of a laser point is generally calculated from the distance differences between that point and several adjacent points on the same laser scanning line, and the laser point is classified as an edge point or a surface point based on the curvature. This requires the laser radar to provide scanning line information, so the universality of feature point extraction is low; and since only information from the same scan line is used, the extracted feature points do not sufficiently reflect the actual environment. In other prior art, the feature category of each point in the point cloud is determined by fitting the point cloud data, i.e. the feature classification is completed by fitting. However, fitting a large amount of point cloud data is time-consuming, and the real-time performance of feature point extraction is poor.
In addition, the feature points in the current frame point cloud T1 shown in fig. 1 (i.e., the current feature points) are data-associated with the feature points in the history point cloud T2 (i.e., the history feature points). In the prior art, when the initial pose of the laser radar deviates greatly from the historical pose when the point cloud of the current frame is acquired, the situation of error association is easy to occur through distance searching, and the subsequent point cloud association effect is influenced.
In order to solve the above problems, the method acquires a current frame point cloud and a historical point cloud image of a laser radar, the historical point cloud image comprising at least one frame of historical frame point cloud; determines a target point in the current frame point cloud, the target point being any point in the current frame point cloud; calculates a feature value of the target point; determines a target class of the target point based on the feature value of the target point; and associates the target point with the historical point cloud image based on the target class of the target point. Specifically, because the target class of each point, namely its feature class, is determined from the feature value of each point in the current frame point cloud, fitting a large number of points in the current frame point cloud is avoided, which shortens the time for determining the feature class of each point and improves the real-time performance of feature point extraction. Because the feature classes are not determined from laser scanning lines, the universality of feature point extraction is improved and the extracted feature points better reflect the actual environment. Finally, performing data association based on the feature classes of the points improves the accuracy of the data association.
The following describes a specific embodiment of a laser point cloud data association method provided by the present application. Fig. 2 is a schematic flow chart of a laser point cloud data association method provided by an embodiment of the present application. The present specification provides method operational steps as an example or flowchart, but an implementation may include more or fewer operational steps based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the unique order of execution. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be performed sequentially or in parallel (e.g., in a parallel processor or multithreaded environment). As shown in fig. 2, the method may include:
s202: acquiring a current frame point cloud and a history point cloud image of a laser radar; the history point cloud image includes at least one frame of history frame point cloud.
Specifically, the historical frame point cloud only includes historical feature points; that is, the historical frame point cloud may be a historical frame point cloud obtained after feature point extraction. It will be appreciated that the historical feature points contained in the historical frame point cloud can characterize the shapes of the objects contained in that frame. This avoids matching the target points in the current frame point cloud against an excessive number of points, which would consume time and computing power.
S204: determining a target point in the current frame point cloud; calculating the characteristic value of the target point; the target point is any point in the current frame point cloud.
Specifically, the feature value may be calculated as follows: search the current frame point cloud for points within a radius r around the target point; when the number n of points within the radius r of the target point is larger than a point-count threshold n_m, calculate the feature value of the target point based on those points. The feature value of the target point comprises μ1, μ2 and μ3, with μ1 ≤ μ2 ≤ μ3. The feature value of the target point characterizes the three-dimensional shape formed by the n points within the radius r around the target point.
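The feature value computation described above can be sketched as follows. This sketch assumes that μ1 ≤ μ2 ≤ μ3 are the eigenvalues of the covariance matrix of the neighborhood, consistent with μ1, μ2, μ3 being called eigenvalues later in this description; the function name and the default radius and threshold are illustrative.

```python
import numpy as np

def local_eigenvalues(cloud, target, r=0.5, n_m=5):
    """Eigenvalues mu1 <= mu2 <= mu3 of the covariance of the neighbors
    of `target` within radius r, or None when n <= n_m neighbors."""
    d = np.linalg.norm(cloud - target, axis=1)
    nbrs = cloud[d < r]
    if len(nbrs) <= n_m:          # too few neighbors: no feature value
        return None
    cov = np.cov(nbrs.T)          # 3x3 covariance of the neighborhood
    return np.linalg.eigvalsh(cov)  # ascending: mu1 <= mu2 <= mu3

# A neighborhood sampled from a plane: the smallest eigenvalue is ~0.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       np.zeros(200)])
mu = local_eigenvalues(pts, np.zeros(3), r=1.0)
```

For a planar neighborhood the variance normal to the plane vanishes, so μ1 is near zero while μ2 and μ3 stay large.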
Since it is relatively easy to distinguish ground points from non-ground points by fitting, while accurately determining the feature classes of non-ground points is the difficult part of laser point cloud data association, in order to improve the accuracy of determining the feature classes of non-ground points, in some alternative embodiments, determining the target point in the current frame point cloud in step S204 includes:
extracting a plurality of ground points and a plurality of non-ground points from the current frame point cloud by adopting a fitting technology;
The target point is determined from the plurality of non-ground points.
Specifically, fig. 3 is a schematic diagram of feature classification in a laser point cloud data association method according to an embodiment of the present application. As shown in fig. 3, the current frame point cloud is first divided into ground points and non-ground points, and subsequent feature classification is then performed on the non-ground points. The normal vectors of the ground points are calculated and stored, which facilitates subsequent data association based on these normal vectors. The fitting technique may specifically fit the largest plane in the current frame point cloud and determine the points on that plane as ground points; alternatively, the current frame point cloud may be mapped onto a two-dimensional plane, segmented into regions there, and the point cloud data of each region fitted separately. It will be appreciated that the feature class of a ground point is the horizontal plane class.
In this embodiment, the ground points are first separated out by the fitting technique; the remaining non-ground points are, for example, the points in the non-ground area S shown in fig. 1. The target point is then determined from the distinguished non-ground points, which include, for example, the points on the sphere Q, the plane M and the straight line G shown in fig. 1. This facilitates the subsequent feature classification of the non-ground points and improves both the accuracy and the efficiency of that classification.
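A minimal sketch of the "fit the largest plane" variant of the fitting technique, using a RANSAC-style loop; the function name, thresholds, and iteration count are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def split_ground(cloud, dist_thresh=0.05, iters=100, seed=0):
    """Fit the largest plane in the cloud (RANSAC sketch) and treat
    its inliers as ground points; everything else is non-ground."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(cloud), dtype=bool)
    for _ in range(iters):
        p = cloud[rng.choice(len(cloud), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        mask = np.abs((cloud - p[0]) @ n) < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask  # keep the plane with the most inliers
    return cloud[best_mask], cloud[~best_mask]

# A flat ground grid plus points above it: the plane fit picks the ground.
g = np.column_stack([np.tile(np.linspace(0, 9, 10), 10),
                     np.repeat(np.linspace(0, 9, 10), 10),
                     np.zeros(100)])
w = np.column_stack([np.zeros(30), np.linspace(0, 9, 30),
                     np.linspace(0.5, 3, 30)])
ground, nonground = split_ground(np.vstack([g, w]))
```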
After the feature value of the target point is determined in step S204, the process proceeds to step S206, where the target class of the target point is determined.
S206: and determining a target category of the target point based on the characteristic value of the target point.
In order to quickly determine the target class of the target point, fig. 4 is a schematic flow chart of determining the target class based on the characteristic value of the target point in the laser point cloud data association method according to the embodiment of the present application, in some optional embodiments, the determining the target class of the target point in step S206 based on the characteristic value of the target point includes the following steps shown in fig. 4:
S2061: and calculating the shape tolerance of the target point based on the characteristic value of the target point.
Specifically, a shape tolerance is the variation allowed in the shape of a single actual element, such as flatness, roundness, cylindricity, straightness or profile; it is the tolerance on the geometry of the measured element, i.e., the accuracy of its geometric shape. The shape tolerance therefore reflects the geometric class, such as a plane, formed by the target point and the n points around it.
S2062: and determining the target category of the target point according to the shape tolerance of the target point.
Classifying the points of the current frame point cloud into detailed feature classes improves the accuracy of the subsequent data association of those points. Fig. 5 is a schematic flow chart of determining a target class based on the shape tolerance of a target point in a laser point cloud data association method according to an embodiment of the present application. In some alternative embodiments, the shape tolerance includes at least one of straightness, flatness, and sphericity; the target class comprises at least one of a straight line class, a plane class and a sphere class; and determining the target class of the target point according to the shape tolerance of the target point comprises:
S20621: when the sphericity of the target point reaches a sphere threshold, determining that the target class of the target point includes the sphere class.
For example, the sphericity Φ is calculated by the following formula (1):
Φ = μ1 / μ3 (1)
wherein μ3 is the maximum value among the feature values of the target point and μ1 is the minimum value among the feature values of the target point.
In some alternative embodiments, as shown in fig. 3, step 3.1 determines whether a target point among the non-ground points belongs to the sphere class, that is, whether the non-ground point is a spherical point. This determination is made with the sphere threshold described above.
For example, when Φ > Φ_thre, the current point is considered to be a spherical point, wherein Φ is the sphericity of the target point and Φ_thre is the sphere threshold.
In some alternative embodiments, the curvature σc of each spherical point is calculated and saved by the following formula (2):
σc = μ1 / (μ1 + μ2 + μ3) (2)
in this embodiment, by recording the curvature of each spherical point, the spherical points can be accurately associated based on the curvature in the subsequent data association.
S20622: when the flatness of the target point reaches a plane threshold, determining that the target class of the target point includes the plane class.
For example, the flatness ρ2 is calculated by the following formula (3):
ρ2 = (μ2 - μ1) / μ3 (3)
wherein μ2 is the middle value among the eigenvalues of the target point, i.e., μ1 ≤ μ2 ≤ μ3.
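The shape tolerances can be computed from the sorted eigenvalues in a few lines. Sphericity, curvature and flatness follow formulas (1), (2) and (3); the description does not give a formula for straightness, so the analogous eigenvalue-based definition is assumed here.

```python
def shape_tolerances(mu1, mu2, mu3):
    """Shape tolerances from the sorted feature values mu1 <= mu2 <= mu3."""
    sphericity = mu1 / mu3                 # formula (1)
    curvature = mu1 / (mu1 + mu2 + mu3)    # formula (2)
    flatness = (mu2 - mu1) / mu3           # formula (3)
    straightness = (mu3 - mu2) / mu3       # assumed analogous definition
    return sphericity, flatness, straightness, curvature

# A plane-like neighborhood: mu1 ~ 0 and mu2 ~ mu3 give high flatness.
s, f, l, c = shape_tolerances(0.001, 0.9, 1.0)
```

Note that under these definitions sphericity, flatness and straightness sum to 1, so exactly one of them dominates for a clearly shaped neighborhood.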
In some alternative embodiments, as shown in fig. 3, when the target point in the non-ground point is determined not to belong to the sphere class through step 3.1, step 3.2 is entered to determine whether the target point is a plane class, that is, whether the target point is a plane point, where the determination in step 3.2 is performed through the plane threshold.
For example, when ρ2 > ρ2_thre, the current point is considered to be a planar point, wherein ρ2 is the flatness of the target point and ρ2_thre is the plane threshold.
In order to extract as few feature points as possible while ensuring that feature points can characterize the shape of the object, in some alternative embodiments, the plane class includes at least one of a vertical plane class and a horizontal plane class;
when the flatness of the target point reaches a plane threshold, determining that the target class of the target point includes the plane class includes:
When the flatness of the target point reaches the plane threshold value, calculating a normal vector of the target point;
determining that the target class of the target point includes the vertical class when the normal vector reaches a first vector threshold;
when the normal vector reaches a second vector threshold, determining that the target class of the target point includes the horizontal plane class.
Specifically, the first vector threshold corresponds to a normal vector along the horizontal direction; the second vector threshold corresponds to a normal vector along the vertical direction. The normal vector of each plane point can be recorded so that the plane points are accurately associated in the subsequent data association.
In some alternative embodiments, as shown in fig. 3, when it is determined in step 3.2 that the target point belongs to a plane class, step 3.3 is entered, and it is determined whether the target point is a specific plane class, that is, whether the target point is a vertical plane point (i.e., a vertical plane class point) or a horizontal plane point (i.e., a horizontal plane class point). Wherein, whether the target point is a specific plane class can be judged by the normal vector of the target point.
In the above embodiment, the vertical plane class point and the horizontal plane class point are selected from the plane class points by the first vector threshold and the second vector threshold. When the characteristic points of the current frame point cloud are extracted, only the characteristic points in the vertical plane class points and the horizontal plane class points can be extracted to reflect the object shape in the current frame point cloud.
S20623: when the straightness of the target point reaches a straight line threshold, determining that the target class of the target point includes the straight line class.
In some alternative embodiments, as shown in fig. 3, when it is determined in step 3.2 that the target point does not belong to a plane point, step 3.4 is entered to determine whether the target point is a straight line type, that is, whether the target point is a straight line point (i.e., a straight line type point).
For example, when ρ1 > ρ1_thre, the current point is considered to be a straight-line point, wherein ρ1 is the straightness of the target point and ρ1_thre is the straight line threshold.
In some alternative embodiments, the straight line class includes at least one of a pole class and a beam class;
when the straightness of the target point reaches a straight line threshold, determining that the target class of the target point includes the straight line class includes:
When the straightness of the target point reaches the straight line threshold value, calculating a main direction of the target point;
When the main direction reaches a first direction threshold value, determining that the target class of the target point comprises the upright class;
When the primary direction reaches a second direction threshold, determining that the target class of the target point includes the crossbeam class.
Specifically, the first direction threshold is a direction threshold along the vertical direction; the second direction threshold is a direction threshold along the horizontal direction.
In some alternative embodiments, as shown in fig. 3, when it is determined in step 3.4 that the target point belongs to a straight line point, step 3.5 is entered to determine whether the target point is a specific straight line class, that is, whether the target point is a vertical pole point (i.e., a vertical pole class point) or a horizontal beam point (i.e., a horizontal beam class point). Wherein, whether the straight line point belongs to the specific straight line class can be judged through the main direction of the straight line point.
In the above embodiment, the vertical rod class point and the horizontal beam class point are selected from the straight line class points by the first direction threshold value and the second direction threshold value. When the characteristic points of the current frame point cloud are extracted, only the characteristic points in the vertical rod type points and the cross beam type points can be extracted to reflect the shape of the object in the current frame point cloud. Thus, as few feature points as possible can be extracted, and the extracted feature points can characterize the shape of the object.
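The full decision cascade of steps 3.1 through 3.5 can be sketched as a chain of threshold tests. All threshold values, the function name, and the use of the z-component to test vertical versus horizontal orientation are illustrative assumptions.

```python
def classify(mu, normal=None, direction=None,
             phi_thre=0.2, rho2_thre=0.6, rho1_thre=0.6, axis_thre=0.9):
    """Cascade: sphere (3.1), then plane with vertical/horizontal
    sub-class via the normal (3.2-3.3), then line with pole/beam
    sub-class via the main direction (3.4-3.5)."""
    mu1, mu2, mu3 = mu
    if mu1 / mu3 > phi_thre:              # step 3.1: spherical point
        return "sphere"
    if (mu2 - mu1) / mu3 > rho2_thre:     # step 3.2: planar point
        nz = abs(normal[2])               # step 3.3: normal orientation
        if nz < 1 - axis_thre:
            return "vertical plane"       # horizontal normal
        if nz > axis_thre:
            return "horizontal plane"     # vertical normal
        return "plane"
    if (mu3 - mu2) / mu3 > rho1_thre:     # step 3.4: straight-line point
        dz = abs(direction[2])            # step 3.5: main direction
        if dz > axis_thre:
            return "pole"                 # vertical main direction
        if dz < 1 - axis_thre:
            return "beam"                 # horizontal main direction
        return "line"
    return "unclassified"
```

A wall-like neighborhood (small μ1, μ2 ≈ μ3, horizontal normal) lands in the vertical plane class; a thin vertical structure (μ1 ≈ μ2 ≪ μ3, vertical main direction) lands in the pole class.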
Identifying and extracting points near the intersection line of two planes in the point cloud is beneficial to identifying the shape of an object. As shown in fig. 3, after determining that the points in the point cloud belong to the plane class, the intersection line points may be determined according to the points in the plane class, and in some alternative embodiments, the target class further includes the intersection line class, and fig. 6 is a schematic flow chart of determining the intersection line class target point in the laser point cloud data association method provided by the embodiment of the present application, where the method further includes the following steps shown in fig. 6:
S602: determining a plurality of planes based on the normal vector and a first location of a plane point; the plane points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the plane category; the first position is the position of the plane point in the current frame point cloud.
Specifically, it is first assumed that there are a plurality of different planes in space, denoted P_i (i ∈ R). The plane points are then attributed to the different planes as follows: for an empty plane P_i, a plane point is added, and the plane normal vector of P_i is thereafter represented by the average normal vector of the plane points it contains; for a new plane point, the similarity between its normal vector and the plane normal vector of each existing plane is judged; when no plane with a similar normal vector can be found, a new plane is created, the new plane point is added to it, and the plane normal vector of the new plane is recorded; when a plane with a similar normal vector is found and the number of plane points in that plane is smaller than x1, the center of gravity of the plane's points is calculated, the distance between the new plane point and that center of gravity is calculated, and the new plane point is attributed to the plane if it satisfies a certain distance threshold; when the number of plane points is greater than or equal to x1, the neighboring points of the new plane point are searched within the plane's point set, and the new plane point is attributed to the existing plane when the number of neighboring points satisfies a certain count threshold.
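The greedy grouping of plane points into planes described above can be sketched as follows; the thresholds, the function name, and the running-average normal update are illustrative assumptions (the neighboring-point test for large planes is omitted for brevity).

```python
import numpy as np

def assign_to_planes(points, normals, ang_thresh=0.95, dist_thresh=1.0):
    """Attribute each plane point to an existing plane with a similar
    average normal and nearby centroid, or start a new plane."""
    planes = []  # each plane: average normal "n", points, member normals
    for p, n in zip(points, normals):
        placed = False
        for pl in planes:
            if abs(n @ pl["n"]) > ang_thresh:          # similar normal
                centroid = np.mean(pl["pts"], axis=0)  # center of gravity
                if np.linalg.norm(p - centroid) < dist_thresh:
                    pl["pts"].append(p)
                    pl["ns"].append(n)
                    avg = np.mean(pl["ns"], axis=0)    # refresh avg normal
                    pl["n"] = avg / np.linalg.norm(avg)
                    placed = True
                    break
        if not placed:
            planes.append({"n": n, "pts": [p], "ns": [n]})
    return planes

# Two floor points (normal up) and one wall point (normal sideways).
up = np.array([0.0, 0.0, 1.0])
side = np.array([1.0, 0.0, 0.0])
planes = assign_to_planes(
    [np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0]),
     np.array([0.0, 0.0, 0.5])],
    [up, up, side])
```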
S604: a center of gravity distance between two planes of the plurality of planes is calculated.
Specifically, two planes at adjacent positions, i.e., two neighboring planes, are identified by the distance between their centers of gravity.
S606: and calculating an intersection line of the two planes in the case that the center-of-gravity distance meets a first distance threshold.
Specifically, two planes whose center-of-gravity distance satisfies the first distance threshold are considered close in space and intersecting; the two plane equations are combined to calculate the intersection line equation.
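Under the assumption that each plane is stored as an equation n · x = d, combining the two plane equations amounts to taking the cross product of the normals for the line direction and solving the two equations for a point on the line:

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of planes n1.x = d1 and n2.x = d2:
    returns (point on line, unit direction), or None if parallel."""
    direction = np.cross(n1, n2)        # line lies in both planes
    if np.linalg.norm(direction) < 1e-9:
        return None                     # parallel planes: no line
    A = np.vstack([n1, n2])
    # Minimum-norm solution of the underdetermined 2x3 system.
    p0 = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)[0]
    return p0, direction / np.linalg.norm(direction)

# Floor z = 0 and wall x = 1 intersect along the line x = 1, z = 0.
p0, d = plane_intersection(np.array([0.0, 0.0, 1.0]), 0.0,
                           np.array([1.0, 0.0, 0.0]), 1.0)
```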
S608: determining an intersection line point from the current frame point cloud based on the intersection line; the intersection line points are used for representing points of the current frame point cloud, wherein the target category comprises the intersection line class; the distance between the intersection line point and the intersection line satisfies a second distance threshold.
Specifically, the points of the current frame point cloud are traversed and the distance from each point to the intersection line is calculated; a point satisfying the threshold is regarded as an intersection line point, and the direction of the intersection line is taken as the main direction of that point. The feature class of an intersection line point is the intersection line class.
Through the embodiment, the points near the intersection line of the two planes in the point cloud of the current frame are identified and extracted, so that the shape of an object can be identified, and the accuracy and the efficiency of carrying out data association on the points in the point cloud of the current frame in the follow-up process are improved.
And identifying and extracting points near the intersection point of the two straight lines in the point cloud is beneficial to identifying the shape of the object. As shown in fig. 3, after determining that points in the point cloud belong to a straight line class, an inflection point may be determined according to points in the straight line class, and fig. 7 is a schematic flow diagram of determining a corner class target point in the laser point cloud data association method according to the embodiment of the present application, where in some optional embodiments, the target class further includes a corner class; the method further comprises the steps of:
s702: determining a plurality of straight lines based on the primary direction and the second position of the straight line point; the straight line points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the straight line category; the second position is the position of the straight line point in the current frame point cloud.
Specifically, it is first assumed that there are a plurality of different straight lines in space, denoted L_i (i ∈ R). The straight line points are then attributed to the different straight lines as follows: for an empty straight line, after a straight line point is added, the main direction of the straight line is represented by the average main direction of the straight line points it contains; for a new straight line point, the similarity between its main direction and the main direction of each existing straight line is judged; when no existing straight line with a similar main direction can be found, a new straight line is created, the new straight line point is added to it, and the main direction of the new straight line is recorded; when an existing straight line with a similar main direction is found and the number of its straight line points is smaller than x2, the center of gravity of the straight line is calculated, the distance between the new straight line point and that center of gravity is calculated, and the point is attributed to the existing straight line if it satisfies the distance threshold; when the number of straight line points is greater than or equal to x2, the neighboring points of the new straight line point are searched within the straight line's point set, and the new straight line point is attributed to the existing straight line when the number of neighboring points satisfies the count threshold.
S704: and calculating the center-of-gravity distance of two straight lines in the plurality of straight lines.
Specifically, based on the different straight lines distinguished by their average main directions, a line equation is first calculated for each straight line whose point count satisfies a specified threshold, and the center of gravity of its straight line points is calculated at the same time.
S706: and determining an intersection point of the two straight lines when the center-of-gravity distance of the straight lines meets a third distance threshold.
Specifically, for two different straight lines, when the distance between their centers of gravity satisfies the threshold, the two straight lines are considered close in space and intersecting, and the intersection point position is calculated by combining the two straight line equations.
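One way to realize "combining the two straight line equations" is the standard closest-point construction for two 3D lines; for nearly intersecting lines, the midpoint of the shortest segment between them serves as the intersection point. This is a sketch, not the patent's prescribed formula.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2, tol=1e-6):
    """Meeting point of lines p1 + t*d1 and p2 + s*d2: the midpoint
    of the shortest segment between them, or None if parallel."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    cross = np.cross(d1, d2)
    denom = np.linalg.norm(cross) ** 2
    if denom < tol:
        return None                     # parallel lines: no crossing
    w = p2 - p1
    t = np.dot(np.cross(w, d2), cross) / denom  # closest point on line 1
    s = np.dot(np.cross(w, d1), cross) / denom  # closest point on line 2
    return ((p1 + t * d1) + (p2 + s * d2)) / 2

# x-axis and a line through (1, -1, 0) along y meet at (1, 0, 0).
pt = line_intersection(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                       np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```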
S708: determining an inflection point from the current frame point cloud based on the intersection point; the inflection point is used for representing points in the current frame point cloud, wherein the target category comprises the corner category; the distance between the inflection point and the intersection point satisfies a fourth distance threshold.
Specifically, the points within a specified range around the intersection point, i.e., the points whose distance from the intersection point satisfies the fourth distance threshold, are regarded as inflection points; when an inflection point is recorded, the main directions of the two straight lines associated with it are recorded at the same time.
Through the embodiment, the points near the intersection point of the two straight lines in the point cloud of the current frame are identified and extracted, so that the shape of an object is favorably identified, and the accuracy and the efficiency of carrying out data association on the points in the point cloud of the current frame in the follow-up process are improved.
Through the above embodiment, after determining the target category of the target point in the current frame point cloud, step S208 is entered to perform data association on the target point.
S208: and associating the target point with the historical point cloud image based on the target class of the target point.
And carrying out data association on the target points based on the target categories, so that the accuracy and the efficiency of the data association can be improved. Fig. 8 is a schematic flow chart of data association in a laser point cloud data association method according to an embodiment of the present application, in some alternative embodiments, the associating the target point with the history point cloud image is performed based on the target category of the target point, including the following steps shown in fig. 8:
S2081: and according to the initial pose of the laser radar when the current frame point cloud is obtained, mapping the current frame point cloud to the historical point cloud image to obtain the mapping point of the target point in the historical point cloud image.
Specifically, the position and angle of each frame point cloud differ as the scanning angle and the position of the laser radar change. Therefore, before data association is performed on the target point, the current frame point cloud must be mapped into the historical point cloud image according to the initial pose.
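The mapping of step S2081 is a rigid transform of the current frame by the initial pose. A minimal sketch, assuming the pose is given as a rotation matrix R and a translation vector t (the representation is an assumption; the patent does not fix one):

```python
import numpy as np

def map_to_history(points, R, t):
    """Transform current-frame points into the historical map frame
    with the initial pose: x_hist = R @ x_cur + t."""
    return points @ R.T + t

# Illustrative pose: 90-degree yaw plus a 2 m forward translation.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0, 0.0, 1.0]])
mapped = map_to_history(np.array([[1.0, 0.0, 0.0]]), R,
                        np.array([2.0, 0.0, 0.0]))
```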
S2082: and determining a set of adjacent history points of the mapping points in the history point cloud picture.
Specifically, the history points near the mapping point in the historical point cloud image are determined, since a nearby history point has a high probability of being associated with the mapping point.
S2083: calculating the feature quantity and the intensity of the mapping points according to the target category of the target point; the feature quantity includes at least one of a curvature, a normal vector, and a principal direction.
For example, if the target class of the target point includes the sphere class, the feature quantity of the mapping point is the curvature; if it includes the plane class, the feature quantity is the normal vector; if it includes the straight line class, the feature quantity is the main direction. The feature quantity of the target point is mapped into the historical point cloud image to obtain the feature quantity of the mapping point. The intensity of the target point is also calculated: intensity characterizes how the object surface reflects the laser light, and therefore characterizes the shape of the object to some extent. For example, the average intensity of the n points within the radius r of the target point is determined as the intensity of the mapping point and stored.
S2084: acquiring the characteristic quantity and the intensity of each history point in the adjacent history point set; and calculating the characteristic quantity difference and the intensity difference between the mapping point and each history point in the adjacent history point set.
Specifically, if a history point in the neighboring history point set has the same type of feature quantity as the mapping point, the directions and magnitudes of the two feature quantities and the intensity difference are compared; if the type of its feature quantity differs, the history point is directly eliminated.
S2085: determining a target history point from the set of adjacent history points based on the feature quantity differences and the intensity differences between the map points and each history point in the set of adjacent history points; and associating the target point with the target history point.
Specifically, the smaller the feature quantity difference and the intensity difference, the stronger the correlation between the target point and the history point. In some alternative embodiments, the correspondence between the target point and the history point may be determined by a feature difference threshold and an intensity difference threshold. For example, for a spherical point Q_s, the history points M_s in its neighboring history point set are first searched in the historical point cloud image; then the curvature difference Δs between Q_s and M_s and the intensity difference ΔI are calculated, and when Δs < Δs_thre and ΔI < ΔI_thre, the spherical point Q_s is associated with the history point M_s, wherein Δs_thre is the curvature difference threshold (a feature difference threshold) and ΔI_thre is the intensity difference threshold. This avoids the poor real-time performance that would result from traversing all history points in each neighboring history point set.
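The threshold-based association described above can be sketched as follows; the tuple representation of a point as (feature class, feature value, intensity) and the default thresholds are illustrative assumptions.

```python
def associate(map_point, candidates, d_feat_thre=0.1, d_int_thre=5.0):
    """Among neighboring history points, reject those whose feature
    class differs, then accept the first candidate whose feature and
    intensity differences both fall under their thresholds."""
    cls, feat, inten = map_point
    for c_cls, c_feat, c_int in candidates:
        if c_cls != cls:
            continue  # different feature class: eliminated directly
        if abs(c_feat - feat) < d_feat_thre and abs(c_int - inten) < d_int_thre:
            return (c_cls, c_feat, c_int)  # no need to traverse the rest
    return None  # no association for this mapping point

# Spherical mapping point (curvature 0.30, intensity 50) against
# a plane point, a sphere with very different curvature, and a match.
q = ("sphere", 0.30, 50.0)
cands = [("plane", 0.30, 50.0),
         ("sphere", 0.95, 50.0),
         ("sphere", 0.32, 48.0)]
m = associate(q, cands)
```

Returning on the first acceptable candidate mirrors the point made above: thresholds let the search stop early instead of scoring every history point in the neighboring set.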
In the above embodiment, the feature quantity type and the feature quantity value of the mapping point corresponding to the target point are determined based on the target class of the target point; the history points in the neighboring history point set can then be screened by the target class and/or the feature quantity, so that the history point associated with the target point is accurately selected.
Some embodiments of the present application further provide a laser point cloud data association device. Fig. 9 is a schematic structural diagram of the laser point cloud data association device provided by an embodiment of the present application; as shown in fig. 9, the laser point cloud data association device includes:
The first acquisition module is used for acquiring the current frame point cloud and the historical point cloud image of the laser radar; the history point cloud image comprises at least one frame of history frame point cloud;
a processing module, configured to determine a target point in the current frame point cloud; calculating the characteristic value of the target point; the target point is any point in the current frame point cloud;
a determining module, configured to determine a target category of the target point based on the feature value of the target point;
And the association module is used for associating the target point with the history point cloud image based on the target category of the target point.
In some alternative embodiments, the determining module includes:
a first determination submodule for calculating a shape tolerance of the target point based on the characteristic value of the target point;
And the second determination submodule is used for determining the target category of the target point according to the shape tolerance of the target point.
In some alternative embodiments, the shape tolerance includes at least one of straightness, flatness, and sphericity; the target class comprises at least one of a straight line class, a plane class and a spherical class;
A second determination sub-module, comprising:
a sphere class determination module, configured to determine that the target class of the target point includes the sphere class when the sphericity of the target point reaches a sphere threshold;
A plane class determination module, configured to determine that the target class of the target point includes the plane class when the flatness of the target point reaches a plane threshold;
And the straight line type determining module is used for determining that the target type of the target point comprises the straight line type when the straightness of the target point reaches a straight line threshold value.
In some alternative embodiments, the plane class includes at least one of a vertical plane class and a horizontal plane class;
a plane class determination module comprising:
the normal vector determining module is used for calculating the normal vector of the target point when the flatness of the target point reaches the plane threshold value;
a vertical plane class determination module, configured to determine that the target class of the target point includes the vertical plane class when the normal vector reaches a first vector threshold;
and the horizontal plane class determining module is used for determining that the target class of the target point comprises the horizontal plane class when the normal vector reaches a second vector threshold value.
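The vertical/horizontal subdivision can be sketched as follows: the normal of a plane point is the eigenvector of the neighborhood covariance with the smallest eigenvalue, and the "first/second vector thresholds" can be read as bounds on the vertical component of that normal. Both the reading and the threshold values (`vert_thr`, `horiz_thr`) are assumptions for illustration.

```python
import numpy as np

def plane_subclass(neighbors, vert_thr=0.2, horiz_thr=0.8):
    """Subdivide a plane point by the direction of its normal vector.

    A near-zero vertical component |n_z| suggests a vertical plane (e.g.
    a wall); a near-one component suggests a horizontal plane (e.g. the
    ground). Thresholds are hypothetical.
    """
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-eigenvalue eigenvector
    nz = abs(normal[2])
    if nz <= vert_thr:
        return normal, "vertical_plane"
    if nz >= horiz_thr:
        return normal, "horizontal_plane"
    return normal, "plane"
```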
In some alternative embodiments, the target class further comprises an intersecting line class, the apparatus further comprising:
A plane determination module for determining a plurality of planes based on the normal vector and a first position of a plane point; the plane points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the plane category; the first position is the position of the plane point in the current frame point cloud;
The plane center-of-gravity calculation module is used for calculating the distance between the centers of gravity of two planes of the plurality of planes;
the intersection line calculation module is used for calculating an intersection line of the two planes under the condition that the gravity center distance meets a first distance threshold value;
An intersection line point determining module, configured to determine an intersection line point from the current frame point cloud based on the intersection line; the intersection line point is used for representing a point in the current frame point cloud whose target category comprises the intersecting line class; the distance between the intersection line point and the intersection line satisfies a second distance threshold.
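The intersection-line step can be sketched with standard analytic geometry: the direction of the line where two planes meet is the cross product of their normals, and candidate intersection line points are cloud points within the second distance threshold of that line. The anchoring construction (the extra `d` row pinning one point of the line) and the tolerance values are assumptions, not taken from the embodiment.

```python
import numpy as np

def plane_intersection(n1, p1, n2, p2):
    """Line of intersection of two planes given as (normal, point-on-plane).

    Returns (point_on_line, unit_direction), or None for near-parallel
    planes whose intersection is undefined.
    """
    d = np.cross(n1, n2)
    if np.linalg.norm(d) < 1e-9:          # parallel planes: no line
        return None
    A = np.stack([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    point = np.linalg.solve(A, b)         # d-row pins the solution near the origin
    return point, d / np.linalg.norm(d)

def points_near_line(points, point, direction, dist_thr):
    """Select cloud points within `dist_thr` of the line (the role of the
    second distance threshold; the value is application-specific)."""
    rel = points - point
    # Point-to-line distance via the cross-product formula.
    dists = np.linalg.norm(np.cross(rel, direction), axis=1)
    return points[dists <= dist_thr]
```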
In some alternative embodiments, the straight line class includes at least one of a pole class and a beam class;
a straight line class determination module comprising:
A main direction calculation module, configured to calculate a main direction of the target point when the straightness of the target point reaches the straight line threshold;
The pole class determining module is used for determining that the target class of the target point comprises the pole class when the main direction reaches a first direction threshold value;
And the beam class determining module is used for determining that the target class of the target point comprises the beam class when the main direction reaches a second direction threshold value.
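By the same pattern as the plane subdivision, the pole/beam split can be read as thresholds on the vertical component of the main direction, which is the largest-eigenvalue eigenvector of the neighborhood covariance. The reading and the threshold values are hypothetical.

```python
import numpy as np

def line_subclass(neighbors, pole_thr=0.9, beam_thr=0.2):
    """Subdivide a line point by its main (principal) direction.

    |d_z| near 1 suggests a pole (vertical structure), near 0 a beam
    (horizontal structure). Thresholds are assumptions.
    """
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    direction = eigvecs[:, -1]               # largest-eigenvalue eigenvector
    dz = abs(direction[2])
    if dz >= pole_thr:
        return direction, "pole"
    if dz <= beam_thr:
        return direction, "beam"
    return direction, "line"
```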
In some alternative embodiments, the target class further comprises a corner class; the apparatus further comprises:
A straight line determining module, configured to determine a plurality of straight lines based on the main direction and a second position of a straight line point; the straight line points are used for representing points in the current frame point cloud whose target category comprises the straight line class; the second position is the position of the straight line point in the current frame point cloud;
The straight-line center-of-gravity calculation module is used for calculating the center-of-gravity distance between two straight lines of the plurality of straight lines;
The intersection point determining module is used for determining an intersection point of the two straight lines under the condition that the center-of-gravity distance of the straight lines meets a third distance threshold value;
The inflection point determining module is used for determining an inflection point from the current frame point cloud based on the intersection point; the inflection point is used for representing a point in the current frame point cloud whose target category comprises the corner class; the distance between the inflection point and the intersection point satisfies a fourth distance threshold.
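The corner step needs an "intersection point" of two 3D lines, which rarely exists exactly in noisy data; a common substitute, assumed here, is the midpoint of the closest points on the two lines (for truly intersecting lines this is the intersection itself). Corner points would then be cloud points within the fourth distance threshold of that point.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Approximate intersection of two 3D lines (point, direction).

    Solves the standard closest-points-between-lines system and returns
    the midpoint of the two closest points, or None for parallel lines.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p2 - p1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b
    if abs(denom) < 1e-9:                 # parallel lines: no intersection
        return None
    t1 = (c * (d1 @ r) - b * (d2 @ r)) / denom
    t2 = (b * (d1 @ r) - a * (d2 @ r)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```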
In some alternative embodiments, the processing module includes:
the non-ground point extraction module is used for extracting a plurality of ground points and a plurality of non-ground points from the current frame point cloud by adopting a fitting technology;
and the target point determining module is used for determining the target point from the plurality of non-ground points.
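The embodiment leaves the "fitting technology" open; RANSAC plane fitting is one common choice for separating ground from non-ground points and is assumed in this sketch. The inlier distance, iteration count, and seed are illustrative values.

```python
import numpy as np

def split_ground(points, dist_thr=0.2, iters=100, seed=0):
    """Split a cloud into (ground, non_ground) via RANSAC plane fitting.

    Repeatedly fits a plane through 3 random points and keeps the plane
    with the most inliers; inliers are treated as ground points.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                   # degenerate (collinear) sample
            continue
        normal /= norm
        dists = np.abs((points - sample[0]) @ normal)
        mask = dists <= dist_thr
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return points[best_mask], points[~best_mask]
```

The non-ground points returned here would then be the candidates from which target points are selected.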
In some alternative embodiments, the association module includes:
The mapping module is used for mapping the current frame point cloud to the historical point cloud map according to the initial pose of the laser radar when the current frame point cloud is acquired, so as to obtain a mapping point of the target point in the historical point cloud map;
The adjacent point determining module is used for determining a set of adjacent history points of the mapping point in the historical point cloud map;
The feature calculation module is used for calculating the feature quantity and the intensity of the mapping points according to the target category of the target point; the feature quantity includes at least one of a curvature, a normal vector, and a principal direction;
A fourth obtaining module, configured to obtain a feature quantity and an intensity of each history point in the adjacent history point set; calculating the characteristic quantity difference and the intensity difference between the mapping point and each history point in the adjacent history point set;
A target history point determining module, configured to determine a target history point from the set of adjacent history points based on the feature quantity difference and the intensity difference between the mapping point and each history point in the set of adjacent history points; and to associate the target point with the target history point.
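The association steps above can be sketched end to end: transform the target point into the map frame with the initial pose, gather nearby history points, then pick the neighbor with the smallest combined feature-quantity and intensity difference. The cost weights, search radius, and brute-force neighbor search are assumptions; a real system would likely use a KD-tree and tuned weights.

```python
import numpy as np

def associate(target_point, target_feature, target_intensity,
              history_points, history_features, history_intensities,
              pose_R, pose_t, radius=1.0, w_feat=1.0, w_int=0.1):
    """Associate one current-frame point with a historical-map point.

    Returns the index of the chosen history point, or None when no
    history point lies within `radius` of the mapped point.
    """
    mapped = pose_R @ target_point + pose_t            # map into history frame
    dists = np.linalg.norm(history_points - mapped, axis=1)
    neighbor_idx = np.flatnonzero(dists <= radius)     # adjacent history point set
    if neighbor_idx.size == 0:
        return None
    feat_diff = np.linalg.norm(
        history_features[neighbor_idx] - target_feature, axis=1)
    int_diff = np.abs(history_intensities[neighbor_idx] - target_intensity)
    cost = w_feat * feat_diff + w_int * int_diff       # combined difference
    return int(neighbor_idx[np.argmin(cost)])          # target history point
```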
The device embodiments and the method embodiments of the present application are based on the same inventive concept.
Fig. 10 is a block diagram of a hardware structure of an electronic device for implementing the laser point cloud data association method according to an embodiment of the present application. The electronic device may be a server or a terminal device, and its internal structure may be as shown in fig. 10. As shown in fig. 10, the electronic device 1000 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 1010 (the processor 1010 may include, but is not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or another processing device), a memory 1030 for storing data, and one or more storage media 1020 (e.g., one or more mass storage devices) for storing applications 1023 or data 1022. The memory 1030 and the storage media 1020 may be transitory or persistent storage. A program stored on the storage medium 1020 may include one or more modules, each of which may include a series of instruction operations on a server. Further, the central processing unit 1010 may be configured to communicate with the storage medium 1020 and execute, on the electronic device 1000, the series of instruction operations in the storage medium 1020. The electronic device 1000 may also include one or more power supplies 1060, one or more wired or wireless network interfaces 1050, one or more input/output interfaces 1040, and/or one or more operating systems 1021, such as Windows, macOS, Unix, Linux, or FreeBSD.
The input/output interface 1040 may be used to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by the communication provider of the electronic device 1000. In one example, the input/output interface 1040 includes a network interface controller (NIC) that may be connected to other network devices via a base station so as to communicate with the internet. In another example, the input/output interface 1040 may be a radio frequency (RF) module used to communicate with the internet wirelessly.
The power supply 1060 may be logically connected to the processor 1010 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 10 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, electronic device 1000 may also include more or fewer components than shown in FIG. 10 or have a different configuration than shown in FIG. 10.
The embodiment of the application also provides a computer storage medium, in which at least one instruction or at least one program is stored; the at least one instruction or the at least one program is loaded and executed by a processor to implement the laser point cloud data association method described above.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The embodiment of the present application further provides an electronic device, which includes at least a processor 1010 and a memory 1030; at least one instruction or at least one program is stored in the memory 1030 and is loaded and executed by the processor 1010 to perform the laser point cloud data association method described above.
Embodiments of the present application further provide a computer program product comprising a computer program stored in a readable storage medium; at least one processor of a computer device reads and executes the computer program from the readable storage medium, causing the computer device to perform the laser point cloud data association method described above.
The method comprises the steps of acquiring a current frame point cloud and a historical point cloud map of the laser radar, wherein the historical point cloud map comprises at least one frame of historical point cloud; determining a target point in the current frame point cloud and calculating its characteristic value, the target point being any point in the current frame point cloud; determining a target category of the target point based on the characteristic value; and associating the target point with the historical point cloud map based on the target category. Because the target category of each point, i.e., its feature category, is determined directly from the characteristic values of the points in the current frame point cloud, fitting a large number of points to obtain the feature categories is avoided; this shortens the time needed to classify each point and improves the real-time performance of feature point extraction. The method also avoids determining feature categories based on the laser scanning lines, which improves the universality of feature point extraction and better reflects the actual environment. Finally, performing data association based on the feature categories of the points improves the accuracy of the association.
It should be noted that the order of the embodiments of the present application is only for description and does not imply any ranking of their merits. The foregoing describes specific embodiments of this specification; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing are merely preferred embodiments of the present application and are not intended to limit it to the precise form disclosed; any modifications, equivalent replacements, and improvements made within the spirit and scope of the present application shall fall within its scope of protection.

Claims (10)

1. A method for associating laser point cloud data, the method comprising:
acquiring a current frame point cloud and a historical point cloud map of a laser radar; the historical point cloud map comprises at least one frame of historical point cloud;
Determining a target point in the current frame point cloud; calculating the characteristic value of the target point; the target point is any point in the current frame point cloud;
determining a target category of the target point based on the characteristic value of the target point;
and associating the target point with the historical point cloud map based on the target category of the target point.
2. The method of claim 1, wherein determining a target category of the target point based on the characteristic value of the target point comprises:
Calculating a shape tolerance of the target point based on the characteristic value of the target point;
And determining the target category of the target point according to the shape tolerance of the target point.
3. The method of claim 2, wherein the shape tolerance comprises at least one of straightness, flatness, and sphericity; the target class comprises at least one of a straight line class, a plane class and a spherical class;
determining the target class of the target point according to the shape tolerance of the target point, comprising:
Determining that the target class of the target point includes the spherical class when the sphericity of the target point reaches a sphere threshold;
Determining that the target class of the target point includes the plane class when the flatness of the target point reaches a plane threshold;
when the straightness of the target point reaches a straight line threshold, determining that the target class of the target point includes the straight line class.
4. A method according to claim 3, wherein the plane class comprises at least one of a vertical plane class and a horizontal plane class;
when the flatness of the target point reaches a plane threshold, determining that the target class of the target point includes the plane class includes:
When the flatness of the target point reaches the plane threshold value, calculating a normal vector of the target point;
determining that the target class of the target point includes the vertical plane class when the normal vector reaches a first vector threshold;
when the normal vector reaches a second vector threshold, determining that the target class of the target point includes the horizontal plane class.
5. The method of claim 4, wherein the target class further comprises an intersecting line class, the method further comprising:
Determining a plurality of planes based on the normal vector and a first location of a plane point; the plane points are used for representing points of the current frame point cloud, wherein the target category of the points comprises the plane category; the first position is the position of the plane point in the current frame point cloud;
Calculating a center of gravity distance between two planes of the plurality of planes;
Calculating an intersection line of the two planes under the condition that the gravity center distance meets a first distance threshold value;
Determining an intersection line point from the current frame point cloud based on the intersection line; the intersection line points are used for representing points of the current frame point cloud, wherein the target category comprises the intersection line class; the distance between the intersection line point and the intersection line satisfies a second distance threshold.
6. A method according to claim 3, wherein the straight line class comprises at least one of a pole class and a beam class;
when the straightness of the target point reaches a straight line threshold, determining that the target class of the target point includes the straight line class includes:
When the straightness of the target point reaches the straight line threshold value, calculating a main direction of the target point;
When the main direction reaches a first direction threshold value, determining that the target class of the target point comprises the pole class;
When the main direction reaches a second direction threshold value, determining that the target class of the target point comprises the beam class.
7. The method of claim 6, wherein the target class further comprises a corner class; the method further comprises the steps of:
Determining a plurality of straight lines based on the main direction and a second position of a straight line point; the straight line points are used for representing points in the current frame point cloud whose target category comprises the straight line class; the second position is the position of the straight line point in the current frame point cloud;
Calculating the center-of-gravity distance between two straight lines of the plurality of straight lines;
Determining an intersection point of the two straight lines under the condition that the center-of-gravity distance of the straight lines meets a third distance threshold;
Determining an inflection point from the current frame point cloud based on the intersection point; the inflection point is used for representing points in the current frame point cloud, wherein the target category comprises the corner category; the distance between the inflection point and the intersection point satisfies a fourth distance threshold.
8. The method of any of claims 1 to 7, wherein determining a target point in the current frame point cloud comprises:
extracting a plurality of ground points and a plurality of non-ground points from the current frame point cloud by adopting a fitting technology;
The target point is determined from the plurality of non-ground points.
9. The method of any of claims 1 to 7, wherein associating the target point with the historical point cloud map based on the target class of the target point comprises:
According to the initial pose of the laser radar when the current frame point cloud is obtained, mapping the current frame point cloud to the historical point cloud map to obtain a mapping point of the target point in the historical point cloud map;
determining a set of adjacent history points of the mapping point in the historical point cloud map;
calculating the feature quantity and the intensity of the mapping points according to the target category of the target point; the feature quantity includes at least one of a curvature, a normal vector, and a principal direction;
Acquiring the characteristic quantity and the intensity of each history point in the adjacent history point set; calculating the characteristic quantity difference and the intensity difference between the mapping point and each history point in the adjacent history point set;
determining a target history point from the set of adjacent history points based on the feature quantity differences and the intensity differences between the map points and each history point in the set of adjacent history points; and associating the target point with the target history point.
10. A laser point cloud data association device, the device comprising:
The first acquisition module is used for acquiring a current frame point cloud and a historical point cloud map of the laser radar; the historical point cloud map comprises at least one frame of historical point cloud;
a processing module, configured to determine a target point in the current frame point cloud and calculate a characteristic value of the target point; the target point is any point in the current frame point cloud;
a determining module, configured to determine a target category of the target point based on the characteristic value of the target point;
And an association module, configured to associate the target point with the historical point cloud map based on the target category of the target point.
CN202211255991.9A 2022-10-13 2022-10-13 Laser point cloud data association method and device Pending CN117930179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211255991.9A CN117930179A (en) 2022-10-13 2022-10-13 Laser point cloud data association method and device


Publications (1)

Publication Number Publication Date
CN117930179A true CN117930179A (en) 2024-04-26

Family

ID=90756122



Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination