CN111461980A - Performance estimation method and device of point cloud splicing algorithm - Google Patents

Performance estimation method and device of point cloud splicing algorithm

Info

Publication number
CN111461980A
CN111461980A (application CN202010238029.9A)
Authority
CN
China
Prior art keywords
point cloud
pose information
information
pose
splicing
Prior art date
Legal status (assumption, not a legal conclusion)
Granted
Application number
CN202010238029.9A
Other languages
Chinese (zh)
Other versions
CN111461980B (en)
Inventor
袁鹏飞 (Yuan Pengfei)
杨坤 (Yang Kun)
蔡仁澜 (Cai Renlan)
宋适宇 (Song Shiyu)
Current Assignee (the listed assignees may be inaccurate)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010238029.9A priority Critical patent/CN111461980B/en
Publication of CN111461980A publication Critical patent/CN111461980A/en
Application granted granted Critical
Publication of CN111461980B publication Critical patent/CN111461980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure provide a performance estimation method and device for a point cloud splicing algorithm, applicable to automatic driving. The method comprises the following steps: acquiring collected point cloud frames and initial pose information of the point cloud frames; splicing the point cloud frames based on the initial pose information, and determining standard pose information of the point cloud frames according to the splicing result; generating at least one piece of candidate pose information based on the initial pose information, wherein the credibility of each piece of candidate pose information is lower than that of the initial pose information; and correcting each piece of candidate pose information of the point cloud frames with at least one splicing algorithm, and determining performance information characterizing the pose optimization capability of each point cloud splicing algorithm according to the difference between the pose correction result of each piece of candidate pose information and the standard pose information of the point cloud frames. The method realizes accurate estimation of the performance of a point cloud splicing algorithm.

Description

Performance estimation method and device of point cloud splicing algorithm
Technical Field
The embodiments of the disclosure relate to the technical field of artificial intelligence, in particular to the field of point cloud data processing, and specifically to a method and a device for estimating the performance of a point cloud splicing algorithm.
Background
In the field of autonomous driving, accurate positioning of the vehicle is crucial for its subsequent decisions. A positioning algorithm based on a reflection value map can locate an autonomous vehicle both accurately and at high speed.
Constructing the reflection value map requires collecting point cloud frames, obtaining the pose information of those frames, and splicing them with a point cloud splicing algorithm. A key step is calibrating the pose information of the point cloud frames and splicing the frames based on the calibrated pose information.
The choice of point cloud splicing algorithm directly affects the splicing result. If the algorithm cannot register the poses of different point cloud frames well, the spliced point cloud cannot accurately represent the real scene, and the resulting reflection value map cannot support accurate positioning of the autonomous vehicle. It is therefore desirable to test and evaluate the performance of a splicing algorithm before using it to splice point cloud frames.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for estimating the performance of a point cloud splicing algorithm, electronic equipment and a computer readable medium.
In a first aspect, an embodiment of the present disclosure provides a method for estimating performance of a point cloud stitching algorithm, including: acquiring the acquired point cloud frame and initial pose information of the point cloud frame; splicing the point cloud frames based on the initial pose information of the point cloud frames, and determining standard pose information of the point cloud frames according to the splicing result of the point cloud frames; generating at least one candidate pose information based on the initial pose information, wherein the credibility of each candidate pose information is lower than that of the initial pose information; and respectively correcting the candidate pose information of the point cloud frame by adopting at least one splicing algorithm, and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame.
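As a concrete illustration, the four steps of the first aspect can be sketched as a small evaluation harness. This is a hedged sketch only: the function name, the (x, y, yaw) pose tuples, and the Gaussian noise model used to generate lower-credibility candidates are assumptions for illustration, not part of the disclosure.

```python
import random

def evaluate_stitching_algorithms(frames, initial_poses, standard_poses,
                                  algorithms, noise_levels):
    """Hypothetical harness for the four claimed steps.

    `algorithms` maps an algorithm name to a correction function
    (frames, candidate_poses) -> corrected_poses; poses are illustrative
    (x, y, yaw) tuples. `noise_levels` are standard deviations used to
    degrade the initial poses into lower-credibility candidates.
    """
    scores = {}
    for name, correct in algorithms.items():
        errors = []
        for sigma in noise_levels:
            # Step 3: generate candidate pose information whose
            # credibility is lower than the initial pose information.
            candidates = [
                tuple(p + random.gauss(0.0, sigma) for p in pose)
                for pose in initial_poses
            ]
            # Step 4: let the splicing algorithm correct the candidates.
            corrected = correct(frames, candidates)
            # Mean Euclidean difference against the standard poses.
            err = sum(
                sum((a - b) ** 2 for a, b in zip(c, s)) ** 0.5
                for c, s in zip(corrected, standard_poses)
            ) / len(standard_poses)
            errors.append(err)
        scores[name] = sum(errors) / len(errors)
    return scores
```

Each algorithm under test is supplied as a correction function; a lower mean error against the standard poses indicates stronger pose optimization capability.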
In some embodiments, the acquiring the acquired point cloud frame and the initial pose information of the point cloud frame includes: acquiring an acquired point cloud frame, device pose information of acquisition equipment when the acquisition equipment acquires the point cloud frame, and pose sensing information of an inertial navigation system; and correcting the pose information of the equipment according to the pose information of the equipment and the pose sensing information of the inertial navigation system to obtain the initial pose information of the point cloud frame.
In some embodiments, the stitching the point cloud frames based on the initial pose information of the point cloud frames and determining the standard pose information of the point cloud frames according to the stitching result of the point cloud frames includes: performing pose optimization on each point cloud frame based on the initial pose information, and splicing each point cloud frame based on the optimized pose information to obtain a splicing result of the point cloud frames; and determining the optimized pose information as the standard pose information of each point cloud frame in response to the fact that the splicing result of the point cloud frames meets the preset condition.
In some embodiments, the generating at least one candidate pose information based on the initial pose information includes: and superposing the initial pose information and at least one piece of noise information to generate at least one piece of candidate pose information.
In some embodiments, the determining the performance information characterizing the pose optimization capability of the point cloud stitching algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame includes: and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame and the reliability of each candidate pose information.
In some embodiments, the above method further comprises: and selecting a point cloud splicing algorithm with performance information meeting preset conditions to splice point clouds to be spliced.
In a second aspect, an embodiment of the present disclosure provides a performance estimation apparatus for a point cloud stitching algorithm, including: an acquisition unit configured to acquire the acquired point cloud frame and initial pose information of the point cloud frame; the first determining unit is configured to splice the point cloud frames based on the initial pose information of the point cloud frames and determine standard pose information of the point cloud frames according to the splicing result of the point cloud frames; a generation unit configured to generate at least one candidate pose information based on the initial pose information, each candidate pose information having a lower degree of reliability than the initial pose information; and the second determining unit is configured to respectively correct each candidate pose information of the point cloud frame by adopting at least one splicing algorithm, and determine performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame.
In some embodiments, the acquiring unit is configured to acquire the acquired point cloud frame and the initial pose information of the point cloud frame as follows: acquiring an acquired point cloud frame, device pose information of acquisition equipment when the acquisition equipment acquires the point cloud frame, and pose sensing information of an inertial navigation system; and correcting the pose information of the equipment according to the pose information of the equipment and the pose sensing information of the inertial navigation system to obtain the initial pose information of the point cloud frame.
In some embodiments, the first determination unit is configured to determine the standard pose information of the point cloud frame as follows: performing pose optimization on each point cloud frame based on the initial pose information, and splicing each point cloud frame based on the optimized pose information to obtain a splicing result of the point cloud frames; and determining the optimized pose information as the standard pose information of each point cloud frame in response to the fact that the splicing result of the point cloud frames meets the preset condition.
In some embodiments, the above-mentioned generating unit is configured to generate the at least one candidate pose information as follows: and superposing the initial pose information and at least one piece of noise information to generate at least one piece of candidate pose information.
In some embodiments, the confidence levels of the candidate pose information are different from each other, and the second determining unit is configured to determine the performance information characterizing the pose optimization capability of the point cloud stitching algorithm as follows: and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame and the reliability of each candidate pose information.
In some embodiments, the above apparatus further comprises: and the splicing unit is configured to select a point cloud splicing algorithm with performance information meeting preset conditions to splice point clouds to be spliced.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement the method of performance estimation of a point cloud stitching algorithm as provided by the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the performance estimation method of the point cloud stitching algorithm provided in the first aspect.
According to the performance estimation method and device of the point cloud splicing algorithm, the collected point cloud frames and their initial pose information are acquired; the point cloud frames are spliced based on the initial pose information, and the standard pose information of the point cloud frames is determined according to the splicing result; at least one piece of candidate pose information is then generated based on the initial pose information, the credibility of each piece of candidate pose information being lower than that of the initial pose information. Finally, each piece of candidate pose information of the point cloud frames is corrected with at least one splicing algorithm, and performance information characterizing the pose optimization capability of each point cloud splicing algorithm is determined according to the difference between the pose correction result of each piece of candidate pose information and the standard pose information of the point cloud frames, thereby realizing accurate estimation of the pose optimization capability of a point cloud splicing algorithm.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which embodiments of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of performance estimation of a point cloud stitching algorithm according to the present disclosure;
FIG. 3 is a flow diagram of another embodiment of a method of performance estimation of a point cloud stitching algorithm according to the present disclosure;
FIG. 4 is a schematic structural diagram of an embodiment of a performance estimation apparatus of the point cloud stitching algorithm of the present disclosure;
FIG. 5 is a schematic block diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which the performance estimation method of the point cloud stitching algorithm or the performance estimation apparatus of the point cloud stitching algorithm of the present disclosure may be applied.
As shown in fig. 1, system architecture 100 may include autonomous vehicle 101, network 102, and server 103. Network 102 is the medium used to provide a communication link between autonomous vehicle 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A lidar 1011 may be mounted on the autonomous vehicle 101, and the lidar 1011 may collect point cloud data of the environment surrounding the autonomous vehicle 101. Specifically, the laser radar 1011 may periodically scan the environment surrounding the autonomous vehicle 101, and the data points scanned during each period form a point cloud frame.
Autonomous vehicle 101 may also be equipped with an electronic control unit 1012. The electronic control unit 1012 may receive the point cloud frame scanned by the laser radar 1011, and may process the point cloud frame, or the electronic control unit 1012 may transmit the point cloud frame to the server 103 via the network 102.
Server 103 may be a server that provides positioning or other services for autonomous vehicle 101. The server 103 may receive the point cloud data sent by the autonomous vehicle 101, estimate the position and attitude of the autonomous vehicle based on the point cloud data and a pre-constructed point cloud map, send the pose estimation result to the autonomous vehicle 101, and the autonomous vehicle 101 may receive the pose estimation result through the electronic control unit 1012 and execute a corresponding driving decision.
In an application scenario of the present disclosure, a laser radar 1011 mounted on the autonomous vehicle 101 may collect a plurality of point cloud frames during driving and transmit to the server 103 via the network 102 through the electronic control unit 1012. The server 103 may estimate the pose of the laser radar 1011 when acquiring each point cloud frame by using a stitching algorithm, and stitch each point cloud frame based on the pose of the laser radar 1011 when acquiring each point cloud frame, and the stitched point cloud may be used to construct a high-precision map or may be used for obstacle detection.
Alternatively, the autonomous vehicle 101 may locally stitch the frames of point clouds collected by the lidar 1011. For example, the electronic control unit 1012 may estimate poses corresponding to the cloud frames of the points acquired by the laser radar 1011, and splice the cloud frames of the points according to the estimated poses.
Or, the server 103 may also perform performance evaluation on various point cloud stitching algorithms based on the point cloud frame acquired by the laser radar 1011, and determine the optimization capability of the point cloud stitching algorithms on the pose of the point cloud frame.
The electronic control unit 1012 may be hardware or software, and when the electronic control unit 1012 is hardware, it may be implemented as various electronic devices including a processor. When the electronic control unit 1012 is software, it may be installed in the operating system of the autonomous vehicle 101, and the electronic control unit 1012 may be implemented as a single software module or a plurality of software modules.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services), or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the performance estimation method of the point cloud stitching algorithm provided by the embodiments of the present disclosure may be executed by the electronic control unit 1012 or the server 103, and accordingly, a performance estimation device of the point cloud stitching algorithm may be disposed in the electronic control unit 1012 or the server 103.
It should be understood that the number of autonomous vehicles, electronic control units, lidar, networks, and servers in fig. 1 are merely illustrative. There may be any number of autonomous vehicles, electronic control units, lidar, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of performance estimation of a point cloud stitching algorithm in accordance with the present disclosure is shown. As shown in FIG. 2, the flow 200 of the performance estimation method of the point cloud splicing algorithm of this embodiment includes the following steps:
step 201, acquiring the collected point cloud frame and the initial pose information of the point cloud frame.
In this embodiment, a plurality of point cloud frames acquired by a laser radar installed on a point cloud data acquisition vehicle may be acquired. The acquired point cloud frame may be a point cloud frame of a preset area. When the laser radar collects the point cloud frame, the pose information of the laser radar can be recorded. An executive body of the performance estimation method of the point cloud splicing algorithm can acquire pose information when the laser radar collects a point cloud frame, and the pose information is optimized by methods such as Kalman filtering and the like to obtain initial pose information of the point cloud frame.
Here, the initial pose information of a point cloud frame is the pose of the laser radar, relative to a standard pose, at the time the frame was acquired. In practice, the laser radar may be calibrated in advance; its pose when acquiring a point cloud frame may be determined from the calibration data and the positioning data of a GNSS (Global Navigation Satellite System), and then refined by optimization operations such as filtering to obtain the corresponding initial pose information.
In some optional implementation manners of this embodiment, the acquired point cloud frame and the initial pose information of the point cloud frame may be acquired as follows:
First, the collected point cloud frame is acquired, together with the device pose information of the acquisition device at the time the frame was collected and the pose sensing information of an inertial navigation system. The device pose information may be obtained from a GNSS; in practice, the laser radar may record pose information in real time from the GNSS positioning data while collecting point cloud data. The pose sensing information of the inertial navigation system is that system's sensing of the laser radar's pose.
And then, correcting the pose information of the equipment according to the pose information of the equipment and the pose sensing information of the inertial navigation system to obtain the initial pose information of the point cloud frame. The device pose information can be optimized based on the sensing information of the inertial navigation system to the pose of the laser radar, for example, a Kalman filter can be set based on the pose sensing information of the inertial navigation system, and the Kalman filter is adopted to filter the device pose information to obtain the initial pose information of the point cloud frame.
Because the pose of the equipment is optimized through the pose sensing information of the inertial navigation system, the reliability of the initial pose information of the point cloud frame is high.
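A deliberately simplified illustration of this correction step: a scalar Kalman update that fuses one component of the GNSS-derived device pose with the corresponding INS-sensed value. The variance values and the independent per-component (x, y, yaw) treatment are illustrative assumptions; a real system would run a full multivariate filter over the pose sequence.

```python
def kalman_fuse(gnss_value, gnss_var, ins_value, ins_var):
    """One scalar Kalman update: the GNSS device pose component is the
    prior, the INS pose sensing is the measurement. Returns the fused
    estimate and its (reduced) variance."""
    k = gnss_var / (gnss_var + ins_var)               # Kalman gain
    fused = gnss_value + k * (ins_value - gnss_value)
    fused_var = (1.0 - k) * gnss_var
    return fused, fused_var

def fuse_pose(gnss_pose, ins_pose, gnss_var=1.0, ins_var=0.25):
    """Fuse each component of an illustrative (x, y, yaw) device pose
    independently; the variances are assumed values."""
    return tuple(kalman_fuse(g, gnss_var, i, ins_var)[0]
                 for g, i in zip(gnss_pose, ins_pose))
```

Because the INS measurement here is given a smaller variance than the GNSS prior, the fused estimate is pulled toward the INS value, mirroring the correction described above.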
And 202, splicing the point cloud frames based on the initial pose information of the point cloud frames, and determining the standard pose information of the point cloud frames according to the splicing result of the point cloud frames.
In this embodiment, the executing body may perform stitching based on the initial pose information of each point cloud frame, specifically, may convert each point cloud frame into a world coordinate system according to the initial pose information of each point cloud frame, perform feature extraction and matching on adjacent point cloud frames, and stitch the adjacent point cloud frames together according to a feature matching result.
In the splicing process, the relative pose information of the adjacent point cloud frames can be determined based on the feature matching result of the adjacent point cloud frames, and then the initial pose information is calibrated according to the splicing result of all the point cloud frames and the relative pose information of each adjacent point cloud frame to obtain the standard pose information of each point cloud frame.
For example, the initial pose information may be calibrated according to the position coordinates of each data point in the point cloud obtained after the splicing and the position coordinates of the corresponding data point in the point cloud frame before the splicing, a transformation matrix for transforming the position coordinates of the data point in the point cloud frame before the splicing to the position coordinates of the corresponding data point in the point cloud obtained after the splicing is determined, the relative pose of the point cloud frame before the splicing with respect to the point cloud after the splicing is determined according to the transformation matrix, and the initial pose information of the point cloud frame before the splicing is corrected according to the relative pose.
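The transformation described above, mapping a data point's pre-splicing coordinates to its post-splicing coordinates, is a standard rigid-alignment problem. The disclosure does not name a solver; one common closed-form choice (an assumption here, not the patent's stated method) is the SVD-based Kabsch method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    computed by the SVD-based Kabsch method.
    src, dst: (N, 3) arrays of corresponding points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    h = (src - cs).T @ (dst - cd)                 # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cd - r @ cs
    return r, t
```

The recovered (R, t) is the relative pose of the pre-splicing frame with respect to the spliced point cloud, which can then be used to correct the frame's initial pose information.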
In some optional implementations of this embodiment, the standard pose information of the point cloud frame may be determined as follows: performing pose optimization on each point cloud frame based on the initial pose information, and splicing each point cloud frame based on the optimized pose information to obtain a splicing result of the point cloud frames; and determining the optimized pose information as the standard pose information of each point cloud frame in response to the fact that the splicing result of the point cloud frames meets the preset condition.
The initial pose information can be optimized by adopting a basic point cloud splicing algorithm to obtain the final optimized pose information of each point cloud frame, for example, the initial pose information can be optimized based on a closest point iteration method. And then, converting each point cloud frame to the same coordinate system based on the finally optimized pose information of each point cloud frame and splicing to obtain a splicing result of the point cloud frames.
Whether the splicing result of the point cloud frame meets a preset condition can be judged, for example, whether the distribution of the data points in the splicing result of the point cloud frame is abnormal can be judged, or obstacle detection can be performed on the splicing result of the point cloud frame, and whether the relative position, the distribution and the like of an obstacle are abnormal can be judged. If the distribution of the data points and the relative positions and the distribution of the obstacles in the splicing result are not abnormal, the splicing result can be determined to meet the preset conditions.
In practice, the point cloud frame splicing result can also be checked manually, and whether it meets the preset condition is determined from the result of the manual check. If the splicing result is manually confirmed to be accurate, it meets the preset condition, and the finally optimized pose information can then be determined as the standard pose information of each point cloud frame.
At step 203, at least one candidate pose information is generated based on the initial pose information.
In this embodiment, the initial pose information of each point cloud frame has high credibility. Its credibility can be reduced by adding noise: specifically, the initial pose information and at least one piece of noise information may be superimposed to generate at least one piece of candidate pose information, the credibility of each piece of candidate pose information being lower than that of the initial pose information.
For example, a plurality of different degrees of noise may be added to the initial pose information, respectively, to generate a plurality of candidate pose information with different degrees of confidence.
Alternatively, candidate pose information may be generated by adding a preset offset amount to the initial pose information, the greater the added offset amount, the poorer the reliability of the generated candidate pose information.
Or in the process of determining the initial pose information of the point cloud frame by performing pose optimization based on the pose information of the point cloud frame acquired by the laser radar, noise signals of different degrees are added, and pose information with different credibility is obtained and used as candidate pose information.
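The noise-superposition variants above can be sketched as follows; the (x, y, yaw) pose tuple, the Gaussian noise model, and the credibility proxy 1 / (1 + sigma) are illustrative assumptions, since the disclosure does not prescribe a specific noise distribution or credibility measure.

```python
import random

def make_candidates(initial_pose, sigmas, seed=None):
    """Perturb an illustrative (x, y, yaw) pose with zero-mean Gaussian
    noise at each level in `sigmas`. A larger sigma yields a candidate
    with lower credibility. Returns (candidate_pose, credibility) pairs,
    where credibility is the simple monotone proxy 1 / (1 + sigma)."""
    rng = random.Random(seed)
    candidates = []
    for sigma in sigmas:
        cand = tuple(p + rng.gauss(0.0, sigma) for p in initial_pose)
        candidates.append((cand, 1.0 / (1.0 + sigma)))
    return candidates
```

Feeding several noise levels produces the set of candidates with mutually different credibility used in the evaluation step below.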
And 204, respectively correcting candidate pose information of the point cloud frame by adopting at least one splicing algorithm, and determining performance information representing pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of the candidate pose information and the standard pose information of the point cloud frame.
In this embodiment, the candidate pose information of each point cloud frame may be corrected by using a stitching algorithm to be evaluated. For each splicing algorithm, the candidate pose information can be used as an initial pose estimation result of the point cloud frame, and the point cloud splicing algorithm is adopted to estimate the relative pose based on the initial pose estimation result of each point cloud frame, so that the pose correction result of each point cloud frame by the point cloud splicing algorithm is obtained.
The performance of each point cloud stitching algorithm is then determined according to its pose correction results. In an optional implementation, the higher the consistency between the pose correction result of a point cloud frame and the standard pose information of that frame, the better the performance of the point cloud stitching algorithm. The error of a point cloud stitching algorithm can be determined from the difference between the pose correction result of a point cloud frame and its standard pose information; obtaining this error for every point cloud stitching algorithm realizes the performance evaluation.
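One simple way to quantify the "difference between the pose correction result and the standard pose information" is a translation error plus a wrapped rotation error, assuming the same illustrative 6-DoF pose vectors as above. These particular metrics are one common choice, not mandated by this disclosure:

```python
import numpy as np

def pose_error(corrected, standard):
    """Translation error (Euclidean distance, metres) and rotation error
    (largest per-axis angle difference wrapped to [-pi, pi], radians)."""
    corrected = np.asarray(corrected, dtype=float)
    standard = np.asarray(standard, dtype=float)
    t_err = float(np.linalg.norm(corrected[:3] - standard[:3]))
    d = corrected[3:] - standard[3:]
    # wrap angle differences into [-pi, pi] before taking magnitudes
    r_err = float(np.abs((d + np.pi) % (2 * np.pi) - np.pi).max())
    return t_err, r_err

def mean_algorithm_error(corrections, standards):
    """Average the per-frame errors of one stitching algorithm."""
    errs = [pose_error(c, s) for c, s in zip(corrections, standards)]
    t = float(np.mean([e[0] for e in errs]))
    r = float(np.mean([e[1] for e in errs]))
    return t, r
```

Averaging the per-frame errors gives a single scalar pair per algorithm, which can then be compared across the candidate stitching algorithms.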
In some optional implementations of this embodiment, the reliability levels of the pieces of candidate pose information differ from one another, and the performance information representing the pose optimization capability of the point cloud stitching algorithm may be determined as follows: the performance information is determined according to both the differences between the pose correction results of the pieces of candidate pose information and the standard pose information of the point cloud frame, and the reliability of each piece of candidate pose information.
Specifically, the optimization capability of a point cloud stitching algorithm can be evaluated according to the following rule: an algorithm for which candidate pose information of low reliability still yields a small difference between the corresponding pose correction result and the standard pose information has strong optimization capability; conversely, an algorithm for which candidate pose information of high reliability yields a large difference between the corresponding pose correction result and the standard pose information has weak optimization capability.
Optionally, if the differences between the pose correction results of all pieces of candidate pose information and the standard pose information are within a preset difference range, the optimization capability of the point cloud stitching algorithm is strong; if the difference between the pose correction result of the candidate pose information with the highest reliability and the corresponding standard pose information exceeds the preset difference range, the optimization capability of the point cloud stitching algorithm is poor.
Alternatively, the candidate pose information with different reliability levels, together with the differences between the corresponding pose correction results and the standard pose information, can be input into a preset performance evaluation model to determine the performance information of each point cloud stitching algorithm. The preset performance evaluation model may evaluate performance based on preset rules relating the reliability of the candidate pose information to the difference between the corresponding pose correction result and the standard pose information. Alternatively, the preset performance evaluation model may be obtained by training on sample data, so as to better distinguish the optimization capabilities of different point cloud stitching algorithms.
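The rule stated above (strong if even low-reliability candidates are corrected to within the preset range, weak if even the most reliable candidate is not) can be sketched as a rule-based scorer. The `max_error` threshold and the three-level output are illustrative assumptions, not values fixed by this disclosure:

```python
def performance_level(results, max_error=0.1):
    """results: (reliability, error) pairs, one per candidate pose, where
    'error' is the difference between the pose correction result and the
    standard pose information for that candidate."""
    if all(err <= max_error for _, err in results):
        return "strong"    # every candidate, even unreliable ones, was recovered
    reliability, err = max(results, key=lambda r: r[0])
    if err > max_error:
        return "weak"      # even the most reliable candidate was not recovered
    return "moderate"
```

A trained evaluation model, as mentioned above, could replace this hand-written rule while consuming the same (reliability, error) inputs.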
According to the performance estimation method of the point cloud stitching algorithm provided by the above embodiment of the present disclosure, the collected point cloud frames and their initial pose information are acquired; the point cloud frames are stitched based on the initial pose information, and the standard pose information of the point cloud frames is determined according to the stitching result; at least one piece of candidate pose information is then generated based on the initial pose information, the reliability of each piece of candidate pose information being lower than that of the initial pose information; finally, each piece of candidate pose information of the point cloud frame is corrected by at least one stitching algorithm, and performance information representing the pose optimization capability of each point cloud stitching algorithm is determined according to the difference between the pose correction result of each piece of candidate pose information and the standard pose information of the point cloud frame. In this way, accurate estimation of the pose optimization capability of a point cloud stitching algorithm is realized. Because the candidate pose information is obtained by reducing the reliability of the initial pose information of the point cloud frame, the optimization difficulty of each piece of candidate pose information is known before performance estimation, so a more accurate performance estimation result can be obtained. In addition, the method does not require collecting a large amount of data to evaluate an algorithm, which reduces the cost and difficulty of performance estimation of point cloud stitching algorithms.
Referring to fig. 3, a flow of another embodiment of a method for estimating the performance of a point cloud stitching algorithm according to the present disclosure is shown. As shown in fig. 3, a process 300 of the performance estimation method of the point cloud stitching algorithm of the present embodiment includes the following steps:
step 301, acquiring the collected point cloud frame and initial pose information of the point cloud frame.
Step 302, splicing the point cloud frames based on the initial pose information of the point cloud frames, and determining the standard pose information of the point cloud frames according to the splicing result of the point cloud frames.
Step 303, generating at least one candidate pose information based on the initial pose information. And the credibility of each candidate pose information is lower than that of the initial pose information.
Step 304, respectively correcting each piece of candidate pose information of the point cloud frame by adopting at least one splicing algorithm, and determining performance information representing the pose optimization capability of each point cloud splicing algorithm according to the difference between the pose correction result of each piece of candidate pose information and the standard pose information of the point cloud frame.
Step 301, step 302, step 303, and step 304 in this embodiment are consistent with step 201, step 202, step 203, and step 204 in the foregoing embodiment, respectively; for their specific implementations, reference may be made to the descriptions of those steps, which are not repeated here.
Step 305, selecting a point cloud splicing algorithm whose performance information meets a preset condition to splice the point clouds to be spliced.
After the point clouds to be spliced are obtained, a suitable point cloud splicing algorithm can be selected according to the performance information of each candidate point cloud splicing algorithm to execute the splicing task of the point clouds to be spliced.
Specifically, it may be determined whether the performance information of a candidate point cloud splicing algorithm meets a preset condition. The preset condition may be that the performance level reaches a preset level, or that the difference between the pose correction result of candidate pose information with reliability lower than a preset reliability threshold and the corresponding standard pose information is smaller than a preset threshold. Alternatively, the preset condition may be that the algorithm's performance is optimal among all candidate point cloud splicing algorithms.
The point clouds to be spliced may comprise multiple frames of point clouds, which may be collected continuously or obtained by sampling a collected point cloud frame sequence; two adjacent frames of the point clouds to be spliced contain an overlapping area. In practice, the initial pose information of each frame of point cloud can be obtained from the GNSS positioning data of each frame to be spliced, and then pose optimization and point cloud splicing are performed by the point cloud splicing algorithm whose performance meets the preset condition. The spliced point cloud can be used for constructing a high-precision map for automatic driving, or for obstacle detection.
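Selection of an algorithm whose performance information meets the preset condition can be sketched as follows; the algorithm names and the scalar mean-error performance summary are assumptions for illustration only:

```python
def select_algorithm(performance, max_error=0.1):
    """performance: {algorithm_name: mean_pose_error}. Prefer algorithms
    whose error meets the preset condition; among those (or among all, if
    none qualifies), return the best-performing one."""
    qualifying = {n: e for n, e in performance.items() if e <= max_error}
    pool = qualifying or performance
    return min(pool, key=pool.get)
```

The selected algorithm would then be run on the frames to be stitched, for example with initial poses derived from GNSS positioning data as described above.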
In this embodiment, by selecting a point cloud splicing algorithm whose performance information meets the preset condition to execute the point cloud splicing task, a better-performing point cloud splicing algorithm can be selected efficiently, which improves the accuracy and reliability of the splicing result.
In some optional implementations of this embodiment, the performance of each point cloud splicing algorithm may be estimated by applying the above process of step 301 to step 304 to point cloud frames of different scene types. Here, scene types may be divided according to the type of geographical location or area, for example indoor, highway, parking lot, and so on. For each scene type, point cloud frames collected in that scene can be acquired to estimate the performance of the different point cloud splicing algorithms under that scene type. Then, in step 305, the preset condition may include that the performance under the scene type corresponding to the point clouds to be spliced satisfies the preset condition; a point cloud splicing algorithm meeting the preset condition is selected according to the scene type of the point clouds to be spliced and the performance estimation results of the point cloud splicing algorithms under the corresponding scene type, so as to execute the splicing task. By separately testing and evaluating the performance of point cloud splicing algorithms in various scene types, an algorithm better suited to the point clouds to be spliced can be selected, further improving the reliability of the splicing result.
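The per-scene-type variant can extend the same selection idea with a nested performance table, filled by running the evaluation of steps 301 to 304 separately on frames collected in each scene type. The scene labels, algorithm names, and error values below are illustrative:

```python
def select_for_scene(scene, per_scene_performance, max_error=0.1):
    """per_scene_performance: {scene_type: {algorithm: mean_pose_error}}.
    Pick, for the given scene type, the best algorithm among those whose
    error meets the preset condition (or among all, if none qualifies)."""
    perf = per_scene_performance[scene]
    qualifying = [a for a, e in perf.items() if e <= max_error]
    pool = qualifying or list(perf)
    return min(pool, key=lambda a: perf[a])

per_scene = {
    "highway": {"icp": 0.04, "ndt": 0.09},
    "parking_lot": {"icp": 0.30, "ndt": 0.07},
}
```

Different algorithms can thus be chosen for different scene types, matching the observation that an algorithm's optimization capability may vary between, say, highways and parking lots.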
Referring to fig. 4, as an implementation of the performance estimation method for the point cloud stitching algorithm, the present disclosure provides an embodiment of a performance estimation apparatus for a point cloud stitching algorithm, where the apparatus embodiment corresponds to the method embodiments shown in fig. 2 and fig. 3, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 4, the performance estimation apparatus 400 of the point cloud stitching algorithm of the present embodiment includes an acquisition unit 401, a first determination unit 402, a generation unit 403, and a second determination unit 404. The acquiring unit 401 is configured to acquire the acquired point cloud frame and the initial pose information of the point cloud frame; the first determining unit 402 is configured to splice the point cloud frames based on the initial pose information of the point cloud frames, and determine standard pose information of the point cloud frames according to the splicing result of the point cloud frames; the generation unit 403 is configured to generate at least one candidate pose information based on the initial pose information, each candidate pose information having a lower degree of reliability than the initial pose information; the second determining unit 404 is configured to respectively correct each candidate pose information of the point cloud frame by using at least one stitching algorithm, and determine performance information representing pose optimization capability of the point cloud stitching algorithm according to a difference between a pose correction result of each candidate pose information and standard pose information of the point cloud frame.
In some embodiments, the acquiring unit 401 is configured to acquire the acquired point cloud frame and the initial pose information of the point cloud frame as follows: acquiring an acquired point cloud frame, device pose information of acquisition equipment when the acquisition equipment acquires the point cloud frame, and pose sensing information of an inertial navigation system; and correcting the pose information of the equipment according to the pose information of the equipment and the pose sensing information of the inertial navigation system to obtain the initial pose information of the point cloud frame.
In some embodiments, the first determining unit 402 is configured to determine the standard pose information of the point cloud frame as follows: performing pose optimization on each point cloud frame based on the initial pose information, and splicing each point cloud frame based on the optimized pose information to obtain a splicing result of the point cloud frames; and determining the optimized pose information as the standard pose information of each point cloud frame in response to the fact that the splicing result of the point cloud frames meets the preset condition.
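The loop performed by the first determining unit, namely optimize the poses, stitch, and accept the optimized poses as the standard pose information once the stitching result meets the preset condition, can be sketched with hypothetical callables standing in for the concrete optimizer, stitcher, and quality check (all names here are illustrative):

```python
def determine_standard_poses(frames, initial_poses, optimize, stitch,
                             quality_ok, max_rounds=10):
    """Repeat pose optimization and stitching until the stitching result
    satisfies the preset condition; the accepted optimized poses serve as
    the standard (reference) pose information. Returns None if the
    condition is never met within max_rounds."""
    poses = initial_poses
    for _ in range(max_rounds):
        poses = optimize(frames, poses)      # refine poses
        merged = stitch(frames, poses)       # stitch with refined poses
        if quality_ok(merged):               # preset condition on the result
            return poses
    return None
```

Any concrete registration method could be plugged in for `optimize` and `stitch`; the structure only encodes the accept-on-quality loop described above.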
In some embodiments, the above-mentioned generating unit 403 is configured to generate at least one candidate pose information as follows: and superposing the initial pose information and at least one piece of noise information to generate at least one piece of candidate pose information.
In some embodiments, the confidence levels of the candidate pose information are different from each other, and the second determining unit 404 is configured to determine the performance information characterizing the pose optimization capability of the point cloud stitching algorithm as follows: and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame and the reliability of each candidate pose information.
In some embodiments, the above apparatus further comprises: and the splicing unit is configured to select a point cloud splicing algorithm with performance information meeting preset conditions to splice point clouds to be spliced.
The units in the apparatus 400 described above correspond to the steps in the method described with reference to fig. 2 and 3. Thus, the operations and features described above for the method for estimating the performance of the point cloud stitching algorithm and the technical effects that can be achieved are also applicable to the apparatus 400 and the units included therein, and are not described herein again.
Referring now to FIG. 5, a schematic diagram of an electronic device (e.g., the server shown in FIG. 1) 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 508 including, for example, a hard disk; and communication devices 509. The communication devices 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring the acquired point cloud frame and initial pose information of the point cloud frame; splicing the point cloud frames based on the initial pose information of the point cloud frames, and determining standard pose information of the point cloud frames according to the splicing result of the point cloud frames; generating at least one candidate pose information based on the initial pose information, wherein the credibility of each candidate pose information is lower than that of the initial pose information; and respectively correcting the candidate pose information of the point cloud frame by adopting at least one splicing algorithm, and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a first determination unit, a generation unit, and a second determination unit. The names of the units do not form a limitation to the unit itself in some cases, and for example, the acquiring unit may also be described as a "unit that acquires the acquired point cloud frame and the initial pose information of the point cloud frame".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is possible without departing from the inventive concept as defined above. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A performance estimation method of a point cloud stitching algorithm comprises the following steps:
acquiring a collected point cloud frame and initial pose information of the point cloud frame;
splicing the point cloud frames based on the initial pose information of the point cloud frames, and determining the standard pose information of the point cloud frames according to the splicing result of the point cloud frames;
generating at least one candidate pose information based on the initial pose information, wherein the credibility of each candidate pose information is lower than that of the initial pose information;
and respectively correcting the candidate pose information of the point cloud frame by adopting at least one splicing algorithm, and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame.
2. The method of claim 1, wherein the acquiring the acquired point cloud frame and the initial pose information of the point cloud frame comprises:
acquiring an acquired point cloud frame, device pose information of acquisition equipment when the acquisition equipment acquires the point cloud frame, and pose sensing information of an inertial navigation system;
and correcting the equipment pose information according to the equipment pose information and the pose sensing information of the inertial navigation system to obtain the initial pose information of the point cloud frame.
3. The method of claim 1, wherein the stitching the point cloud frames based on their initial pose information and determining their standard pose information from the result of the stitching comprises:
performing pose optimization on each point cloud frame based on the initial pose information, and splicing each point cloud frame based on the optimized pose information to obtain a splicing result of the point cloud frames;
and determining the optimized pose information as the standard pose information of each point cloud frame in response to the fact that the splicing result of the point cloud frames meets the preset condition.
4. The method of claim 1, wherein the generating at least one candidate pose information based on the initial pose information comprises:
and superposing the initial pose information and at least one piece of noise information to generate at least one piece of candidate pose information.
5. The method according to claim 1, wherein the confidence levels of the candidate pose information are different from each other, and
the determining performance information representing the pose optimization capability of the point cloud stitching algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame comprises the following steps:
and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame and the reliability of each candidate pose information.
6. The method of any of claims 1-5, wherein the method further comprises:
and selecting a point cloud splicing algorithm with performance information meeting preset conditions to splice point clouds to be spliced.
7. A performance estimation apparatus of a point cloud stitching algorithm, comprising:
an acquisition unit configured to acquire the acquired point cloud frame and initial pose information of the point cloud frame;
a first determining unit configured to splice the point cloud frames based on initial pose information of the point cloud frames and determine standard pose information of the point cloud frames according to a splicing result of the point cloud frames;
a generation unit configured to generate at least one candidate pose information based on the initial pose information, each of the candidate pose information having a lower degree of reliability than the initial pose information;
and the second determining unit is configured to respectively correct each candidate pose information of the point cloud frame by adopting at least one splicing algorithm, and determine performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame.
8. The apparatus according to claim 7, wherein the acquisition unit is configured to acquire the acquired point cloud frame and initial pose information of the point cloud frame as follows:
acquiring an acquired point cloud frame, device pose information of acquisition equipment when the acquisition equipment acquires the point cloud frame, and pose sensing information of an inertial navigation system;
and correcting the equipment pose information according to the equipment pose information and the pose sensing information of the inertial navigation system to obtain the initial pose information of the point cloud frame.
9. The apparatus of claim 7, wherein the first determination unit is configured to determine standard pose information for the point cloud frame as follows:
performing pose optimization on each point cloud frame based on the initial pose information, and splicing each point cloud frame based on the optimized pose information to obtain a splicing result of the point cloud frames;
and determining the optimized pose information as the standard pose information of each point cloud frame in response to the fact that the splicing result of the point cloud frames meets the preset condition.
10. The apparatus according to claim 7, wherein the generating unit is configured to generate at least one candidate pose information as follows:
and superposing the initial pose information and at least one piece of noise information to generate at least one piece of candidate pose information.
11. The apparatus according to claim 7, wherein the confidence levels of the candidate pose information are different from each other, and
the second determination unit is configured to determine performance information characterizing pose optimization capability of the point cloud stitching algorithm as follows:
and determining performance information representing the pose optimization capability of the point cloud splicing algorithm according to the difference between the pose correction result of each candidate pose information and the standard pose information of the point cloud frame and the reliability of each candidate pose information.
12. The apparatus of any of claims 7-11, wherein the apparatus further comprises:
and the splicing unit is configured to select a point cloud splicing algorithm with performance information meeting preset conditions to splice point clouds to be spliced.
13. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
CN202010238029.9A 2020-03-30 2020-03-30 Performance estimation method and device of point cloud stitching algorithm Active CN111461980B (en)

Priority Applications (1)

Application Number: CN202010238029.9A
Priority Date: 2020-03-30
Filing Date: 2020-03-30
Title: Performance estimation method and device of point cloud stitching algorithm

Publications (2)

CN111461980A (en): published 2020-07-28
CN111461980B (en): granted 2023-08-29

Family ID: 71685139
Country status: CN (1) CN111461980B (en), active

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989451A (en) * 2021-10-28 2022-01-28 北京百度网讯科技有限公司 High-precision map construction method and device and electronic equipment
WO2024001916A1 (en) * 2022-06-30 2024-01-04 先临三维科技股份有限公司 Scanner orientation determination method and apparatus, device, and storage medium


Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169576A (en) * 2011-04-02 2011-08-31 Beijing Institute of Technology Quantified evaluation method of image mosaic algorithms
DE102013102528A1 (en) * 2013-03-13 2014-09-18 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Method for determining a mounting position of an interior sensor system in a vehicle
KR20180014677A (en) * 2016-08-01 2018-02-09 Cognex Corporation System and method for improved scoring of 3d poses and spurious point removal in 3d image data
US20180283019A1 (en) * 2017-03-31 2018-10-04 Canvas Construction, Inc. Automated drywalling system and method
WO2018183959A1 (en) * 2017-03-31 2018-10-04 Canvas Construction, Inc. Automated drywalling system and method
US20180308254A1 (en) * 2017-04-25 2018-10-25 Symbol Technologies, Llc Systems and methods for extrinsic calibration of a plurality of sensors
CN109214248A (en) * 2017-07-04 2019-01-15 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for identifying laser point cloud data of an autonomous vehicle
CN109407073A (en) * 2017-08-15 2019-03-01 Baidu Online Network Technology (Beijing) Co., Ltd. Reflection value map construction method and device
CN109410735A (en) * 2017-08-15 2019-03-01 Baidu Online Network Technology (Beijing) Co., Ltd. Reflection value map construction method and device
WO2019035768A1 (en) * 2017-08-17 2019-02-21 Iko Pte. Ltd. Systems and methods for analyzing cutaneous conditions
CN109427046A (en) * 2017-08-30 2019-03-05 Shenzhen Zhongke Feice Technology Co., Ltd. Distortion correction method and apparatus for three-dimensional measurement, and computer-readable storage medium
US20190242711A1 (en) * 2018-02-08 2019-08-08 Raytheon Company Image geo-registration for absolute navigation aiding using uncertainy information from the on-board navigation system
CN108363995A (en) * 2018-03-19 2018-08-03 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating data
CN110400363A (en) * 2018-04-24 2019-11-01 Beijing Jingdong Shangke Information Technology Co., Ltd. Map construction method and device based on laser point cloud
CN108765487A (en) * 2018-06-04 2018-11-06 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus, device, and computer-readable storage medium for reconstructing a three-dimensional scene
CN109061703A (en) * 2018-06-11 2018-12-21 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus, device, and computer-readable storage medium for positioning
CN108921947A (en) * 2018-07-23 2018-11-30 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus, device, storage medium, and acquisition entity for generating an electronic map
CN109813305A (en) * 2018-12-29 2019-05-28 Guangzhou Lanhai Robot *** Co., Ltd. Unmanned forklift based on laser SLAM
CN109887057A (en) * 2019-01-30 2019-06-14 Hangzhou Fabu Technology Co., Ltd. Method and apparatus for generating a high-precision map
CN109993783A (en) * 2019-03-25 2019-07-09 Beihang University Optimized reconstruction method for roofs and facades of complex three-dimensional building point clouds
CN110031825A (en) * 2019-04-17 2019-07-19 Beijing Idriverplus Technology Co., Ltd. Laser positioning initialization method
CN110221276A (en) * 2019-05-31 2019-09-10 WeRide Corp. Calibration method and apparatus for lidar, computer device, and storage medium
CN110849374A (en) * 2019-12-03 2020-02-28 Central South University Underground environment positioning method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ENSHENG LIU: "Application of three-dimensional laser scanning in the protection of multi-dynasty ceramic fragments", pages 99 *
CHEN, XI: "3D stitching and fusion method based on binocular stereo vision", vol. 25, no. 4, pages 119 - 122 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989451A (en) * 2021-10-28 2022-01-28 北京百度网讯科技有限公司 High-precision map construction method and device and electronic equipment
CN113989451B (en) * 2021-10-28 2024-04-09 北京百度网讯科技有限公司 High-precision map construction method and device and electronic equipment
WO2024001916A1 (en) * 2022-06-30 2024-01-04 先临三维科技股份有限公司 Scanner orientation determination method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN111461980B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN110687549B (en) Obstacle detection method and device
CN111461981B (en) Error estimation method and device for point cloud stitching algorithm
CN109459734B (en) Laser radar positioning effect evaluation method, device, equipment and storage medium
CN109435955B (en) Performance evaluation method, device and equipment for automatic driving system and storage medium
CN109410735B (en) Reflection value map construction method and device
CN109407073B (en) Reflection value map construction method and device
CN110163153B (en) Method and device for recognizing traffic sign board boundary
CN109871019B (en) Method and device for acquiring coordinates by automatic driving
CN111353453B (en) Obstacle detection method and device for vehicle
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN111461980B (en) Performance estimation method and device of point cloud stitching algorithm
CN110619666A (en) Method and device for calibrating camera
CN112699765A (en) Method and device for evaluating visual positioning algorithm, electronic equipment and storage medium
CN111469781B (en) For use in output of information processing system method and apparatus of (1)
CN115900712A (en) Information source reliability evaluation combined positioning method
CN115272452A (en) Target detection positioning method and device, unmanned aerial vehicle and storage medium
CN114022561A (en) Urban area monocular mapping method and system based on GPS constraint and dynamic correction
CN111340880A (en) Method and apparatus for generating a predictive model
CN113758492A (en) Map detection method and device
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN112595330B (en) Vehicle positioning method and device, electronic equipment and computer readable medium
CN112630798A (en) Method and apparatus for estimating ground
CN111383337B (en) Method and device for identifying objects
CN109710594B (en) Map data validity judging method and device and readable storage medium
CN116295508A (en) Road side sensor calibration method, device and system based on high-precision map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant