CN116134488A - Point cloud labeling method, point cloud labeling device, computer equipment and storage medium - Google Patents


Info

Publication number
CN116134488A
CN116134488A (application number CN202080103188.6A)
Authority
CN
China
Prior art keywords
frame
point cloud
labeling
annotation
sequence
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080103188.6A
Other languages
Chinese (zh)
Inventor
邹晓艺
Current Assignee
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Application filed by DeepRoute AI Ltd filed Critical DeepRoute AI Ltd
Publication of CN116134488A publication Critical patent/CN116134488A/en
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A point cloud labeling method, a point cloud labeling device, a computer device and a storage medium. The method comprises: acquiring a point cloud sequence to be labeled (S202); pre-labeling each frame of point cloud in the point cloud sequence to obtain pre-labeling information of each frame, the pre-labeling information comprising identification information and first pose information of a labeling frame of each target object in the corresponding frame of point cloud (S204); determining a labeling frame sequence corresponding to each target object according to the identification information, and determining a first point cloud in each labeling frame of the sequence according to the first pose information of each labeling frame (S206); obtaining point cloud registration results between corresponding labeling frames according to the first point clouds in the labeling frames of the sequence, and obtaining second pose information of each labeling frame based on the point cloud registration results (S208); and determining the labeling result of the point cloud sequence according to the second pose information of each labeling frame in each labeling frame sequence (S210). The method improves both labeling efficiency and labeling accuracy.

Description

Point cloud labeling method, point cloud labeling device, computer equipment and storage medium

Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a point cloud labeling method, a point cloud labeling device, a computer device, and a storage medium.
Background
With the development of technologies such as sensors and deep learning, autonomous driving has made great progress. In autonomous driving, the surrounding environment is usually perceived through multi-sensor fusion. Lidar has attracted particular attention from autonomous driving researchers because it directly provides accurate three-dimensional scene information, so point cloud-based methods are currently the mainstream in three-dimensional perception. As the basis of deep learning, the quality of the labeled data directly affects the performance of the final model, which makes point cloud labeling a critical link in autonomous driving.
Most existing point cloud labeling is performed purely manually, which results in both low labeling efficiency and low labeling accuracy.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a point cloud labeling method, a point cloud labeling device, a computer device, and a storage medium that can improve labeling efficiency and labeling accuracy.
A point cloud labeling method, the method comprising:
acquiring a point cloud sequence to be labeled;
pre-labeling each frame of point cloud in the point cloud sequence to be labeled to obtain pre-labeling information of each frame of point cloud, wherein the pre-labeling information comprises: identification information and first pose information of a labeling frame of each target object in the corresponding frame of point cloud;
determining a labeling frame sequence corresponding to each target object according to the identification information, and determining a first point cloud in each labeling frame in the labeling frame sequence according to the first pose information of each labeling frame in the labeling frame sequence;
obtaining point cloud registration results between corresponding labeling frames according to the first point clouds in the labeling frames of the labeling frame sequence, and obtaining second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration results; and
determining a labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence.
A point cloud labeling apparatus, the apparatus comprising:
an acquisition module, configured to acquire a point cloud sequence to be labeled;
a pre-labeling module, configured to pre-label each frame of point cloud in the point cloud sequence to be labeled to obtain pre-labeling information of each frame of point cloud, the pre-labeling information comprising: identification information and first pose information of a labeling frame of each target object in the corresponding frame of point cloud;
a processing module, configured to determine a labeling frame sequence corresponding to each target object according to the identification information, and determine a first point cloud in each labeling frame in the labeling frame sequence according to the first pose information of each labeling frame in the labeling frame sequence;
a registration module, configured to obtain point cloud registration results between corresponding labeling frames according to the first point clouds in the labeling frames of the labeling frame sequence, and obtain second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration results; and
a determining module, configured to determine a labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the steps of:
acquiring a point cloud sequence to be labeled;
pre-labeling each frame of point cloud in the point cloud sequence to be labeled to obtain pre-labeling information of each frame of point cloud, wherein the pre-labeling information comprises: identification information and first pose information of a labeling frame of each target object in the corresponding frame of point cloud;
determining a labeling frame sequence corresponding to each target object according to the identification information, and determining a first point cloud in each labeling frame in the labeling frame sequence according to the first pose information of each labeling frame in the labeling frame sequence;
obtaining point cloud registration results between corresponding labeling frames according to the first point clouds in the labeling frames of the labeling frame sequence, and obtaining second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration results; and
determining a labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a point cloud sequence to be labeled;
pre-labeling each frame of point cloud in the point cloud sequence to be labeled to obtain pre-labeling information of each frame of point cloud, wherein the pre-labeling information comprises: identification information and first pose information of a labeling frame of each target object in the corresponding frame of point cloud;
determining a labeling frame sequence corresponding to each target object according to the identification information, and determining a first point cloud in each labeling frame in the labeling frame sequence according to the first pose information of each labeling frame in the labeling frame sequence;
obtaining point cloud registration results between corresponding labeling frames according to the first point clouds in the labeling frames of the labeling frame sequence, and obtaining second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration results; and
determining a labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence.
According to the point cloud labeling method, device, computer equipment and storage medium, the point cloud sequence is pre-labeled to obtain the labeling frame sequence corresponding to each target object, and for each labeling frame sequence, the pose information of the labeling frames is corrected through the point cloud registration results between corresponding labeling frames. For example, an annotator may manually correct the pose information of only one labeling frame in a sequence, and the pose information of all other labeling frames in the sequence can then be corrected through point cloud registration. Manual intervention is thereby reduced, the efficiency of point cloud labeling is improved, and at the same time a higher labeling accuracy is achieved.
Drawings
FIG. 1 is an internal block diagram of a computer device in one embodiment;
FIG. 2 is a flow chart of a point cloud labeling method according to one embodiment;
FIG. 3 is a flow chart of step S208 in one embodiment;
FIG. 4 is a flowchart illustrating step S306 in one embodiment;
FIG. 5 is a block diagram of a point cloud labeling apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The point cloud labeling method provided by the present application can be applied to the computer device shown in FIG. 1. The computer device may be a terminal, and its internal structure may be as shown in FIG. 1. The computer device includes a processor, a memory, a communication interface, a display screen and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode can be realized through Wi-Fi, an operator network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements the point cloud labeling method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, as shown in FIG. 2, a point cloud labeling method is provided. Taking the application of the method to the computer device in FIG. 1 as an example, the method includes the following steps S202 to S210.
S202, acquiring a point cloud sequence to be labeled.
The point cloud sequence to be labeled comprises consecutive frames of point clouds, which may specifically be point clouds continuously collected for a target scene. For example, in an intelligent vehicle driving application, the target scene is a road scene, a detection device (such as a lidar) is installed on a collection vehicle, and as the collection vehicle moves, the computer device, which is in communication connection with the detection device, obtains the point clouds collected by the detection device.
S204, pre-labeling each frame of point cloud in the point cloud sequence to be labeled to obtain pre-labeling information of each frame of point cloud, wherein the pre-labeling information comprises: identification information and first pose information of a labeling frame of each target object in the corresponding frame of point cloud.
A point cloud target detection algorithm combined with a multi-target tracking algorithm, or a joint detection-and-tracking algorithm, may be used to pre-label each frame of point cloud in the point cloud sequence to obtain the pre-labeling information of each frame. It should be noted that the pre-labeling may be implemented by any suitable existing or future algorithm.
For any frame of point cloud, the pre-labeling information comprises identification information and first pose information of a labeling frame of each target object in that frame. The target object represents an object to be labeled; for example, in a road scene, the target object may be a vehicle. The labeling frame may specifically be a regular three-dimensional bounding box, for example a cuboid. The identification information is used to distinguish labeling frames corresponding to different target objects, and the labeling frames corresponding to the same target object in different frames of point cloud share the same identification information. The first pose information represents the position and orientation of the labeling frame, and may specifically include the coordinates of the center point and the heading of the labeling frame.
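The pre-labeling information described above can be sketched as a simple data structure; all field names below are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LabelingFrame:
    """One pre-labeled 3D bounding box for a target object in one frame.
    Field names are illustrative, not taken from the patent."""
    track_id: str                        # identification info: shared by the same
                                         # target object across frames
    frame_index: int                     # index of the point cloud frame
    center: Tuple[float, float, float]   # first pose info: center point (x, y, z)
    heading: float                       # first pose info: rotation in the xy plane
    size: Tuple[float, float, float]     # first size info: length, width, height

box = LabelingFrame("car-7", 0, (12.0, 3.5, 0.9), 0.1, (4.5, 1.9, 1.6))
```

Two labeling frames with the same `track_id` in different frames would denote the same target object.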
S206, determining a labeling frame sequence corresponding to each target object according to the identification information, and determining a first point cloud in each labeling frame in the labeling frame sequence according to the first pose information of each labeling frame in the labeling frame sequence.
Determining the labeling frame sequence corresponding to each target object according to the identification information comprises: determining the labeling frames with the same identification information as the labeling frames corresponding to the same target object in different frames of point cloud; and forming the labeling frames corresponding to the same target object in different frames into the labeling frame sequence corresponding to that target object.
Because the labeling frames corresponding to the same target object in different frames of point cloud share the same identification information, after the identification information of the labeling frames of each target object in each frame is obtained, all labeling frames can be classified according to the identification information: the labeling frames with the same identification information are grouped into one class, each class corresponds to one target object, and each class forms a labeling frame sequence comprising the labeling frames of that target object in different frames. A labeling frame sequence can be regarded as a labeling frame track, i.e. the motion track of the corresponding target object.
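The grouping step can be sketched as follows, representing labeling frames as plain dictionaries; the `"id"` and `"frame"` keys are illustrative assumptions:

```python
from collections import defaultdict

def group_into_sequences(frames):
    """frames: list of frames, each a list of labeling-frame dicts carrying an
    'id' (identification info) and a 'frame' index. Returns one labeling frame
    sequence per target object, ordered by frame index."""
    sequences = defaultdict(list)
    for frame_boxes in frames:
        for box in frame_boxes:
            sequences[box["id"]].append(box)   # same id -> same target object
    for seq in sequences.values():
        seq.sort(key=lambda b: b["frame"])     # order by collection sequence
    return dict(sequences)

frames = [
    [{"id": "a", "frame": 0}, {"id": "b", "frame": 0}],
    [{"id": "a", "frame": 1}],
]
seqs = group_into_sequences(frames)
```

Here `seqs["a"]` is the two-frame track of object `a`, i.e. one labeling frame sequence.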
For any labeling frame, the spatial range enclosed by the labeling frame can be determined according to its first pose information and size information, and the points within this spatial range are taken as the first point cloud in the labeling frame. The first point cloud may be understood as the point cloud corresponding to the pre-labeled target object.
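A minimal sketch of extracting the first point cloud from a labeling frame's pose and size, assuming the heading is a yaw angle about the z-axis (an assumption consistent with the xy-plane rotation described later):

```python
import math

def points_in_box(points, center, heading, size):
    """Return the points lying inside a yaw-rotated 3D box.
    points: iterable of (x, y, z); center: box center point;
    heading: yaw angle in radians; size: (length, width, height)."""
    cx, cy, cz = center
    length, width, height = size
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    inside = []
    for x, y, z in points:
        dx, dy, dz = x - cx, y - cy, z - cz
        # rotate the offset into the box's local frame (inverse yaw rotation)
        lx = cos_h * dx + sin_h * dy
        ly = -sin_h * dx + cos_h * dy
        if abs(lx) <= length / 2 and abs(ly) <= width / 2 and abs(dz) <= height / 2:
            inside.append((x, y, z))
    return inside

first_cloud = points_in_box(
    [(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)],
    center=(0.0, 0.0, 0.0), heading=0.0, size=(2.0, 2.0, 2.0))
```

With a 2 m cube at the origin, only the first point falls inside the box.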
S208, according to the first point cloud in each labeling frame in the labeling frame sequence, a point cloud registration result between corresponding labeling frames is obtained, and based on the point cloud registration result, second pose information of each labeling frame in the labeling frame sequence is obtained.
The pre-labeled labeling frames may not enclose the target object accurately, so the first pose information of each labeling frame may need to be corrected; the second pose information represents the corrected pose information of a labeling frame. It can be understood that consecutively collected frames of point cloud easily contain overlapping points, so for any labeling frame sequence, the first point clouds in its labeling frames are likely to overlap. On this basis, the point clouds in the labeling frames can be registered against each other to obtain point cloud registration results between corresponding labeling frames, and the pose information of the corresponding labeling frames can be corrected based on these registration results.
S210, determining the labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence.
After the second pose information of each labeling frame in each labeling frame sequence is obtained, the labeling of all frames in the point cloud sequence can be considered complete; the labeling result of the point cloud sequence comprises the identification information and the second pose information of all labeling frames.
In the point cloud labeling method above, the point cloud sequence is pre-labeled to obtain the labeling frame sequence corresponding to each target object, and for each labeling frame sequence, the pose information of the labeling frames is corrected through the point cloud registration results between corresponding labeling frames. For example, an annotator may manually correct the pose information of only one labeling frame in a sequence, and the pose information of all other labeling frames in the sequence can then be corrected through point cloud registration. Manual intervention is thereby reduced, the efficiency of point cloud labeling is improved, and a higher labeling accuracy is achieved.
In one embodiment, the pre-labeling information further includes first size information of the labeling frame, the size information specifically comprising the length, width and height of the labeling frame. For each labeling frame sequence, the following may also be performed: counting the number of points in each labeling frame according to the first size information and the first pose information of each labeling frame in the sequence; and correcting the first size information according to the number of points in each labeling frame to obtain the second size information of each labeling frame in the sequence.
The first size information of a pre-labeled labeling frame may not match the actual size of the target object; for example, if the labeling frame is not fitted tightly to the edges of the target object, it may contain noise points that do not belong to the target object. In addition, although all labeling frames in a sequence correspond to the same target object, their first size information may differ. On this basis, the first size information of each labeling frame can be corrected to obtain second size information that better matches the actual size of the target object. It can be understood that the more points a labeling frame contains, the more in-frame information is reflected, which makes it easier for an annotator to recognize the actual size of the target object in the frame; therefore, the first size information can be corrected according to the number of points in each labeling frame.
In one embodiment, correcting the first size information according to the number of points in each labeling frame to obtain the second size information of each labeling frame in the sequence specifically includes: determining the maximum point count among the point counts of the labeling frames, and acquiring the corrected size information of the labeling frame with the maximum point count; and replacing the first size information with this corrected size information as the second size information of every labeling frame in the sequence.
For each labeling frame sequence, the labeling frame containing the most points can be found by counting the points in each labeling frame; whether its first size information needs to be modified is then determined according to the actual situation, in particular by an annotator visually judging whether the labeling frame fits the target object tightly. If the first size information does not need to be modified, the corrected size information is simply the first size information of the labeling frame with the most points; otherwise, it is the modified size information of that labeling frame. After the corrected size information is obtained, the first size information of all labeling frames in the sequence is uniformly updated to the corrected size information, i.e. the second size information is the corrected size information.
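The size-correction rule just described (take the size of the frame with the most points, optionally overridden by an annotator, and propagate it to the whole sequence) can be sketched as follows; the `"size"` and `"num_points"` keys and the `manual_override` parameter are illustrative:

```python
def propagate_corrected_size(sequence, manual_override=None):
    """sequence: labeling-frame dicts with 'size' (l, w, h) and 'num_points'.
    The size of the frame with the most points (or the annotator's manual
    override of it) becomes the second size info of every frame."""
    densest = max(sequence, key=lambda b: b["num_points"])
    corrected = manual_override if manual_override is not None else densest["size"]
    # return new dicts so the original first size info is left untouched
    return [dict(box, size=corrected) for box in sequence]

seq = [
    {"size": (4.2, 1.8, 1.5), "num_points": 120},
    {"size": (4.6, 1.9, 1.6), "num_points": 540},
]
fixed = propagate_corrected_size(seq)
```

Here every frame in `fixed` carries the size of the second, denser frame.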
In one embodiment, the first point cloud in each labeling frame of the sequence is determined according to the first pose information of each labeling frame; specifically, it may be determined according to both the first pose information and the second size information of each labeling frame in the sequence.
When the size information of a labeling frame is corrected, the points contained in the frame may also change. Determining the first point cloud from the first pose information and the corrected size information (i.e. the second size information) for subsequent registration reduces noise interference during point cloud registration and helps produce more accurate registration results. It can be appreciated that when the size information of a labeling frame is corrected, its pose information may also change; in that case the first point cloud may be determined from the corrected pose information and the corrected size information.
In one embodiment, as shown in FIG. 3, the step of obtaining the point cloud registration results between corresponding labeling frames according to the first point cloud in each labeling frame of the sequence, and obtaining the second pose information of each labeling frame based on the registration results, may specifically include the following steps S302 to S306.
S302, determining a pre-corrected labeling frame from the labeling frame sequence, and acquiring the corrected pose information of the pre-corrected labeling frame as its second pose information.
The pre-corrected labeling frame is a labeling frame whose pose information is relatively accurate; its corrected pose information can be obtained by an annotator modifying its first pose information according to the actual situation. It can be understood that when the first pose information of the pre-corrected labeling frame does not need modification, its corrected pose information is simply its first pose information.
S304, determining the second point cloud in the pre-corrected labeling frame according to its second pose information.
When the pose information of a labeling frame is corrected, the points it contains may also change; the point cloud within the pre-corrected labeling frame is therefore re-determined according to its second pose information and used as the second point cloud in the pre-corrected labeling frame.
S306, obtaining point cloud registration results between corresponding labeling frames according to the second point cloud in the pre-corrected labeling frame and the first point cloud in each labeling frame to be corrected, and obtaining the second pose information of each labeling frame to be corrected based on the registration results, wherein the labeling frames to be corrected are the labeling frames in the sequence other than the pre-corrected labeling frame.
For each labeling frame to be corrected, the annotator does not need to correct its pose information manually; it can be corrected through the point cloud registration result between the labeling frame to be corrected and a corrected labeling frame. The corrected labeling frames include the pre-corrected labeling frame and may also include labeling frames whose pose information has already been corrected based on point cloud registration results.
For example, a labeling frame sequence L1 consists of labeling frame 1, labeling frame 2, …, labeling frame 10, in the order of point cloud collection, corresponding to the same target object in the 1st to 10th frames of point cloud respectively. For the sequence L1, one labeling frame may be selected as the pre-corrected labeling frame, for example labeling frame 1. For any of labeling frames 2 to 10, its pose information may be corrected according to the point cloud registration result between it and labeling frame 1, or according to the point cloud registration result between it and the previously corrected labeling frame.
In one embodiment, the pre-corrected labeling frame is determined from the labeling frame sequence by taking the first labeling frame in the sequence as the pre-corrected labeling frame. As shown in FIG. 4, the step of obtaining point cloud registration results between corresponding labeling frames according to the second point cloud in the pre-corrected labeling frame and the first point cloud in each labeling frame to be corrected, and obtaining the second pose information of each labeling frame to be corrected based on the registration results, may specifically include the following steps S402 to S406.
S402, taking the pre-corrected labeling frame as the reference labeling frame and the labeling frame after it as the current labeling frame to be corrected, registering the first point cloud in the current labeling frame to be corrected with the second point cloud in the reference labeling frame to obtain the point cloud registration result between the two, and obtaining the second pose information of the current labeling frame to be corrected based on the registration result.
The registration between the first point cloud in the current labeling frame to be corrected and the second point cloud in the reference labeling frame can be performed with the Iterative Closest Point (ICP) algorithm. The point cloud registration result between the two frames may specifically comprise the pose transformation parameters of the current labeling frame relative to the reference labeling frame. The second pose information of the current labeling frame to be corrected is then obtained by applying these pose transformation parameters to the second pose information of the reference labeling frame.
The pose information may specifically include the coordinates (x, y, z) of the center point of the labeling frame and its heading, i.e. the rotation angle of the labeling frame in the xy plane. The pose transformation parameters may specifically include a rotation parameter R and a translation parameter T; applying R and T to the second pose information of the reference labeling frame yields the second pose information of the current labeling frame to be corrected.
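Applying the registration result to a pose can be sketched as follows. As a simplifying assumption matching the heading-only pose above, the rotation R is reduced to a single yaw angle about the z-axis and T to a 3D translation:

```python
import math

def apply_transform(center, heading, yaw_delta, translation):
    """Apply a registration result (modeled as a yaw rotation yaw_delta and a
    translation (tx, ty, tz), a simplification of the general R and T) to a
    box pose given as (center, heading)."""
    x, y, z = center
    tx, ty, tz = translation
    cos_d, sin_d = math.cos(yaw_delta), math.sin(yaw_delta)
    # rotate the center about the z-axis, then translate
    new_center = (cos_d * x - sin_d * y + tx,
                  sin_d * x + cos_d * y + ty,
                  z + tz)
    new_heading = heading + yaw_delta
    return new_center, new_heading

corrected_center, corrected_heading = apply_transform(
    (1.0, 0.0, 0.0), 0.0, yaw_delta=0.0, translation=(0.5, 0.0, 0.0))
```

A pure translation shifts the center of the reference pose without changing its heading.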
S404, obtaining the second point cloud in the current labeling frame to be corrected according to its second pose information.
When the pose information of the current annotation frame to be corrected is corrected, the point cloud contained in the frame changes accordingly. The point cloud in the current annotation frame to be corrected is therefore re-determined from its second pose information and used as the second point cloud in the current annotation frame to be corrected.
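Re-determining the point cloud inside a repositioned box amounts to an oriented-box containment test. A minimal sketch follows; the function name, the (l, w, h) size layout, and the yaw-only rotation are illustrative assumptions consistent with the patent's heading definition, not its actual code.

```python
import numpy as np

def points_in_box(points, center, heading, size):
    """Select the points of a lidar frame that fall inside an oriented 3-D
    labeling box given its center, heading (yaw in the xy plane), and size."""
    local = points - center
    # rotate world points into the box frame (inverse yaw rotation)
    c, s = np.cos(-heading), np.sin(-heading)
    x = c * local[:, 0] - s * local[:, 1]
    y = s * local[:, 0] + c * local[:, 1]
    z = local[:, 2]
    l, w, h = size
    mask = (np.abs(x) <= l / 2) & (np.abs(y) <= w / 2) & (np.abs(z) <= h / 2)
    return points[mask]
```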
S406, taking the current annotation frame to be corrected as a new reference annotation frame, and returning to the step of taking the next annotation frame of the reference annotation frame as the current annotation frame to be corrected, and registering the first point cloud in the current annotation frame to be corrected with the second point cloud in the reference annotation frame until the second pose information of all the annotation frames to be corrected in the annotation frame sequence is obtained.
For example, a labeling frame sequence L1 sequentially includes labeling frame 1, labeling frame 2, …, labeling frame 10 according to the corresponding point cloud acquisition order, corresponding to the same target object in the point clouds of frames 1 to 10 respectively. For the labeling frame sequence L1, labeling frame 1 is selected as the pre-correction labeling frame. First, labeling frame 1 is used as the reference labeling frame and labeling frame 2 as the current labeling frame to be corrected; the first point cloud in labeling frame 2 is registered with the second point cloud in labeling frame 1 to obtain the pose transformation parameters (R₂₁, T₂₁) between labeling frame 2 and labeling frame 1. The pose transformation parameters (R₂₁, T₂₁) are used to transform the second pose information of labeling frame 1 to obtain the second pose information of labeling frame 2, and the second point cloud in labeling frame 2 is obtained from the second pose information of labeling frame 2. Then, labeling frame 2 is used as the new reference labeling frame and labeling frame 3 as the current labeling frame to be corrected; the first point cloud in labeling frame 3 is registered with the second point cloud in labeling frame 2 to obtain the pose transformation parameters (R₃₂, T₃₂), which are used to transform the second pose information of labeling frame 2 to obtain the second pose information of labeling frame 3. And so on, until the second pose information of labeling frame 10 is obtained; at this point, the second pose information of all labeling frames in the labeling frame sequence L1 has been obtained.
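The frame-by-frame propagation of S402–S406 can be sketched as a single loop. This is a simplified illustration under stated assumptions: `boxes` is a hypothetical list of dicts holding each frame's point cloud, `register` stands in for any point cloud registration routine (e.g. an ICP solver such as Open3D's) returning (R, T) of the current frame relative to the reference, and the pose is reduced to a (center, heading) tuple.

```python
import numpy as np

def correct_sequence(boxes, register):
    """Propagate the manually corrected pose of the first labeling frame
    through the sequence: each frame is registered against its predecessor,
    and the predecessor's corrected pose is transformed by the result."""
    boxes[0]['pose2'] = boxes[0]['pose']     # pre-correction frame: pose already fixed
    for i in range(1, len(boxes)):
        R, T = register(boxes[i]['points'], boxes[i - 1]['points'])
        center, heading = boxes[i - 1]['pose2']
        boxes[i]['pose2'] = (R @ center + T,
                             heading + np.arctan2(R[1, 0], R[0, 0]))
    return boxes
```

In a full pipeline the second point cloud of frame i would also be recomputed from `pose2` before registering frame i+1, exactly as S404 describes.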
In this embodiment, the pose information of a corrected labeling frame is transformed using the pose transformation parameters between adjacent labeling frames in the sequence (a corrected frame and a frame to be corrected) to obtain the pose information of the frame to be corrected, which serves as its corrected pose information. Because adjacent labeling frames share many overlapping points, the accuracy of the point cloud registration result is guaranteed, which improves the accuracy of the corrected pose.
In one embodiment, the step of determining the labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence may specifically include the following steps: for each marking frame sequence, taking second pose information of each marking frame in the marking frame sequence as a node, taking pose transformation parameters of adjacent marking frames as edges, constructing a pose graph, and optimizing the second pose information of each marking frame in the marking frame sequence based on the pose graph to obtain third pose information of each marking frame in the marking frame sequence; and obtaining the labeling result of the point cloud sequence to be labeled according to the third pose information of each labeling frame in each labeling frame sequence.
For each labeling frame sequence, after the second pose information of each labeling frame is obtained, the pose information of the labeling frame can be further optimized by using a pose graph (pose graph), and the third pose information obtained through the pose graph optimization is used as the pose information of the final labeling of the labeling frame. Through pose graph optimization, accumulated errors in the registration process can be reduced, and more accurate pose information is obtained.
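The pose-graph step can be illustrated with a deliberately reduced toy model. A real implementation would optimize full SE(3) box poses with a library such as g2o or GTSAM; the sketch below restricts poses to 1-D translations so the graph becomes a small linear least-squares problem, with nodes as frame positions and edges as measured relative offsets. All names and the (i, j, delta) edge format are assumptions for illustration.

```python
import numpy as np

def optimize_translations(n, edges, anchor=0.0):
    """Toy pose graph on 1-D translations: nodes are labeling-frame positions,
    edges (i, j, delta) encode the measured constraint x_j - x_i ≈ delta.
    The first node is anchored (the pre-correction frame) and the
    least-squares solution spreads registration error across the graph."""
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    A[0, 0], b[0] = 1.0, anchor                       # anchor node 0
    for row, (i, j, delta) in enumerate(edges, start=1):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, delta
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With chain edges (0→1, 1→2) plus a slightly inconsistent direct edge (0→2), the solver distributes the discrepancy instead of accumulating it at the last frame, which is the point of the pose-graph refinement.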
In this embodiment, automatic correction of the labeling frame pose information is realized using the ICP algorithm and pose graph optimization, and only a small amount of manual labeling is needed, so that labeling efficiency is improved while labeling precision remains high. In addition, the labeled information carries the identification information of the corresponding target object, so a data set labeled in this way can be used both for point cloud target detection and for point cloud target tracking.
In one embodiment, before the point cloud registration result between the corresponding labeling frames is obtained according to the first point cloud in each labeling frame in the labeling frame sequence, the method further comprises the following steps: and dividing the marking frame sequence to obtain continuous marking frame subsequences.
When the annotation frame sequence is long, i.e., includes many annotation frames, the accumulated error in the point cloud registration results is larger, which affects the accuracy of the corrected pose information. To reduce the accumulated error, a long annotation frame sequence can be divided into several consecutive annotation frame sub-sequences, and pose correction performed on each sub-sequence of the sequence separately.
In one embodiment, the step of obtaining the point cloud registration result between the corresponding labeling frames according to the first point cloud in each labeling frame in the labeling frame sequence, and obtaining the second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration result may specifically include the following steps: and for each labeling frame sub-sequence in the labeling frame sequence, acquiring a point cloud registration result between corresponding labeling frames according to the first point cloud in each labeling frame in the labeling frame sub-sequence, and acquiring second pose information of each labeling frame in the labeling frame sub-sequence based on the point cloud registration result. For specific registration procedures, reference may be made to the previous embodiments, and no further description is given here.
For each labeling frame sub-sequence in the labeling frame sequence, after the second pose information of each labeling frame in the labeling frame sub-sequence is obtained, the pose information of the labeling frame can be further optimized by using a pose graph (pose graph), and the third pose information obtained through the pose graph optimization is used as the pose information of the final labeling of the labeling frame. Through pose graph optimization, accumulated errors in the registration process can be reduced, and more accurate pose information is obtained. And after the third pose information of each marking frame in all marking frame subsequences is obtained, merging the third pose information of each marking frame in the marking frame subsequences to obtain the third pose information of each marking frame in the marking frame sequence.
In one embodiment, the labeling frame sequence is segmented to obtain a continuous labeling frame sub-sequence, specifically, the labeling frame sequence is segmented once every preset number of labeling frames from the first labeling frame of the labeling frame sequence to obtain the continuous labeling frame sub-sequence.
The preset number may be set according to the actual situation; for example, it may be set to 10, i.e., every 10 consecutive labeling frames form one labeling frame sub-sequence. For example, if the labeling frame sequence L2 includes 100 labeling frames, namely labeling frame 1, labeling frame 2, …, labeling frame 100 in order, the labeling frame sequence L2 may be divided into 10 labeling frame sub-sequences: labeling frames 1 to 10 form the first sub-sequence, labeling frames 11 to 20 form the second, and so on, with labeling frames 91 to 100 forming the tenth, thereby completing the division of the labeling frame sequence.
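The division described above is a plain chunking of the sequence. A one-line sketch, with the function name as an illustrative assumption:

```python
def split_into_subsequences(frames, size=10):
    """Split a labeling frame sequence into consecutive sub-sequences of
    `size` frames (the preset number, e.g. 10). The last chunk may be
    shorter if the sequence length is not a multiple of `size`."""
    return [frames[i:i + size] for i in range(0, len(frames), size)]
```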
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or in alternation with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a point cloud labeling apparatus 500, including: an acquisition module 510, a pre-labeling module 520, a processing module 530, a registration module 540, and a determination module 550, wherein:
the obtaining module 510 is configured to obtain a point cloud sequence to be annotated.
The pre-labeling module 520 is configured to pre-label each frame of point cloud in the point cloud sequence to be labeled, and obtain pre-labeling information of each frame of point cloud, where the pre-labeling information includes: the method comprises the steps of marking identification information and first pose information of a labeling frame of each target object in a corresponding frame point cloud.
The processing module 530 is configured to determine a sequence of labeling frames corresponding to each target object according to the identification information, and determine a first point cloud in each labeling frame in the sequence of labeling frames according to the first pose information of each labeling frame in the sequence of labeling frames.
The registration module 540 is configured to obtain a point cloud registration result between the corresponding labeling frames according to the first point cloud in each labeling frame in the labeling frame sequence, and obtain second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration result.
The determining module 550 is configured to determine a labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence.
In one embodiment, the processing module 530 is specifically configured to, when determining the sequence of annotation frames corresponding to each target object according to the identification information: determining each annotation frame with the same identification information as the corresponding annotation frame of the same target object in different frame point clouds; and forming the annotation frames corresponding to the same target object in different frame point clouds into an annotation frame sequence corresponding to the same target object.
In one embodiment, the pre-labeling information further includes first size information of the labeling frame; the processing module 530 is further configured to: for each sequence of annotation boxes, the following is done: counting the number of point clouds in each marking frame according to the first size information and the first pose information of each marking frame in the marking frame sequence; and correcting the first size information according to the number of the point clouds in each marking frame to obtain second size information of each marking frame in the marking frame sequence.
In one embodiment, the processing module 530 is specifically configured to, when correcting the first size information according to the number of point clouds in each of the labeling frames to obtain the second size information of each of the labeling frames in the labeling frame sequence: determining the maximum point cloud quantity from the point cloud quantity in each marking frame, and acquiring correction size information of the marking frame corresponding to the maximum point cloud quantity; and replacing the first size information with the corrected size information to serve as second size information of each marking frame in the marking frame sequence.
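The size-correction rule handled by the processing module — keep the size of the box that encloses the most points and reuse it across the sequence — can be sketched directly. The `(points_inside, size)` pair structure and the function name are illustrative assumptions.

```python
import numpy as np

def unify_box_size(boxes):
    """Given one labeling-frame sequence as (points_inside, size) pairs,
    find the frame whose box contains the most points and use that box's
    size as the second size information of every frame in the sequence."""
    counts = [len(points) for points, _ in boxes]
    best = int(np.argmax(counts))             # frame with the maximum point count
    corrected_size = boxes[best][1]
    return [corrected_size for _ in boxes]
```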
In one embodiment, the processing module 530 is specifically configured to, when determining the first point cloud in each annotation frame in the sequence of annotation frames according to the first pose information of each annotation frame in the sequence of annotation frames: and determining a first point cloud in each marking frame in the marking frame sequence according to the first pose information and the second size information of each marking frame in the marking frame sequence.
In one embodiment, the registration module 540 is specifically configured to, when obtaining the point cloud registration result between the corresponding labeling frames according to the first point cloud in each labeling frame in the labeling frame sequence and obtaining the second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration result: determining a pre-correction annotation frame from the annotation frame sequence, and acquiring corrected pose information of the pre-correction annotation frame as second pose information of the pre-correction annotation frame; determining a second point cloud in the pre-correction annotation frame according to the second pose information of the pre-correction annotation frame; and obtaining point cloud registration results among the corresponding labeling frames according to the second point cloud in the pre-correction labeling frame and the first point cloud in each labeling frame to be corrected, and obtaining second pose information of each labeling frame to be corrected based on the point cloud registration results, wherein the labeling frames to be corrected represent the labeling frames in the labeling frame sequence other than the pre-correction labeling frame.
In one embodiment, the registration module 540 is specifically configured to, when determining a pre-revised annotation frame from the sequence of annotation frames: taking a first marking frame in the marking frame sequence as a pre-correction marking frame; the registration module 540 is specifically configured to, when obtaining the point cloud registration result between the corresponding labeling frames according to the second point cloud in the pre-correction labeling frame and the first point cloud in each labeling frame to be corrected, obtain the second pose information of each labeling frame to be corrected based on the point cloud registration result: registering a first point cloud in the current marking frame to be corrected with a second point cloud in the reference marking frame to obtain a point cloud registration result between the current marking frame to be corrected and the reference marking frame, and obtaining second pose information of the current marking frame to be corrected based on the point cloud registration result; obtaining a second point cloud in the current marking frame to be corrected according to the second pose information of the current marking frame to be corrected; and taking the current marking frame to be corrected as a new reference marking frame, returning to the step of taking the next marking frame of the reference marking frame as the current marking frame to be corrected, and registering the first point cloud in the current marking frame to be corrected with the second point cloud in the reference marking frame until the second pose information of all the marking frames to be corrected in the marking frame sequence is obtained.
In one embodiment, the point cloud registration result between the currently to-be-corrected annotation frame and the reference annotation frame comprises: the pose transformation parameters of the current marking frame to be corrected relative to the reference marking frame; the registration module 540 is specifically configured to, when obtaining the second pose information of the current label frame to be corrected based on the point cloud registration result: and transforming the second pose information of the reference annotation frame by using the pose transformation parameters to obtain the second pose information of the current annotation frame to be corrected.
In one embodiment, the pose information includes: the coordinates and the orientation of the center point of the labeling frame, and the pose transformation parameters comprise: rotation parameters and translation parameters.
In one embodiment, the determining module 550 is specifically configured to, when determining the labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence: for each marking frame sequence, taking second pose information of each marking frame in the marking frame sequence as a node, taking pose transformation parameters of adjacent marking frames as edges, constructing a pose graph, and optimizing the second pose information of each marking frame in the marking frame sequence based on the pose graph to obtain third pose information of each marking frame in the marking frame sequence; and obtaining the labeling result of the point cloud sequence to be labeled according to the third pose information of each labeling frame in each labeling frame sequence.
In one embodiment, the processing module 530 is further configured to: and dividing the marking frame sequence to obtain continuous marking frame subsequences.
In one embodiment, the processing module 530 is specifically configured to, when dividing the sequence of labeling frames to obtain a continuous subsequence of labeling frames: and starting from the first marking frame of the marking frame sequence, dividing the marking frame sequence once every other preset number of marking frames to obtain a continuous marking frame sub-sequence.
In one embodiment, the preset number is 10.
In one embodiment, the registration module 540 is specifically configured to, when obtaining the point cloud registration result between the corresponding labeling frames according to the first point cloud in each labeling frame in the labeling frame sequence and obtaining the second pose information of each labeling frame in the labeling frame sequence based on the point cloud registration result: and for each labeling frame sub-sequence of the labeling frame sequence, acquiring a point cloud registration result between corresponding labeling frames according to the first point cloud in each labeling frame in the labeling frame sub-sequence, and acquiring second pose information of each labeling frame in the labeling frame sub-sequence based on the point cloud registration result.
For specific limitations of the point cloud labeling device, reference may be made to the above limitations of the point cloud labeling method, which are not repeated here. Each module in the above point cloud labeling device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the various method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the respective method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps of the respective method embodiments described above.
It should be appreciated that the terms "first," "second," and the like in the above embodiments are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality" means at least two.
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples represent only a few embodiments of the present application; although described in detail, they are not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, all of which fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (17)

  1. A point cloud labeling method, the method comprising:
    acquiring a point cloud sequence to be marked;
    pre-labeling each frame of point cloud in the point cloud sequence to be labeled to obtain pre-labeling information of each frame of point cloud, wherein the pre-labeling information comprises: marking identification information and first pose information of a labeling frame of each target object in the corresponding frame point cloud;
    determining a labeling frame sequence corresponding to each target object according to the identification information, and determining a first point cloud in each labeling frame in the labeling frame sequence according to first pose information of each labeling frame in the labeling frame sequence;
    obtaining point cloud registration results among corresponding annotation frames according to first point clouds in each annotation frame in the annotation frame sequence, and obtaining second pose information of each annotation frame in the annotation frame sequence based on the point cloud registration results;
    And determining the labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence.
  2. The method according to claim 1, wherein determining a sequence of annotation boxes corresponding to each target object according to the identification information comprises:
    determining each annotation frame with the same identification information as the corresponding annotation frame of the same target object in different frame point clouds;
    and forming the annotation frames corresponding to the same target object in different frame point clouds into an annotation frame sequence corresponding to the same target object.
  3. The method of claim 1, wherein the pre-labeling information further comprises first size information of a labeling frame; the method further comprises the steps of:
    for each sequence of annotation boxes, the following is done:
    counting the number of point clouds in each marking frame according to the first size information and the first pose information of each marking frame in the marking frame sequence;
    and correcting the first size information according to the number of the point clouds in each marking frame to obtain second size information of each marking frame in the marking frame sequence.
  4. A method according to claim 3, wherein modifying the first size information according to the number of point clouds in each annotation frame to obtain second size information for each annotation frame in the sequence of annotation frames comprises:
    Determining the maximum point cloud quantity from the point cloud quantity in each marking frame, and acquiring correction size information of the marking frame corresponding to the maximum point cloud quantity;
    and replacing the first size information with the corrected size information to serve as second size information of each marking frame in the marking frame sequence.
  5. A method according to claim 3, wherein determining a first point cloud within each annotation frame in the sequence of annotation frames from the first pose information for each annotation frame in the sequence of annotation frames comprises:
    and determining a first point cloud in each marking frame in the marking frame sequence according to the first pose information and the second size information of each marking frame in the marking frame sequence.
  6. The method of claim 1, wherein obtaining a point cloud registration result between corresponding annotation frames according to a first point cloud in each annotation frame in the annotation frame sequence, and obtaining second pose information of each annotation frame in the annotation frame sequence based on the point cloud registration result, comprises:
    determining a pre-correction annotation frame from the annotation frame sequence, and acquiring corrected pose information of the pre-correction annotation frame as second pose information of the pre-correction annotation frame;
    Determining a second point cloud in the pre-correction annotation frame according to the second pose information of the pre-correction annotation frame;
    and obtaining point cloud registration results among the corresponding marking frames according to the second point cloud in the pre-correction marking frames and the first point cloud in each marking frame to be corrected, and obtaining second pose information of each marking frame to be corrected based on the point cloud registration results, wherein the marking frames to be corrected represent marking frames except the pre-correction marking frames in the marking frame sequence.
  7. The method of claim 6, wherein determining a pre-revised annotation frame from the sequence of annotation frames comprises: taking a first marking frame in the marking frame sequence as a pre-correction marking frame;
    obtaining a point cloud registration result between corresponding marking frames according to the second point cloud in the pre-correction marking frame and the first point cloud in each marking frame to be corrected, and obtaining second pose information of each marking frame to be corrected based on the point cloud registration result, wherein the method comprises the following steps:
    the pre-correction marking frame is used as a reference marking frame, the latter marking frame of the reference marking frame is used as a current marking frame to be corrected, the first point cloud in the current marking frame to be corrected is registered with the second point cloud in the reference marking frame, a point cloud registration result between the current marking frame to be corrected and the reference marking frame is obtained, and second pose information of the current marking frame to be corrected is obtained based on the point cloud registration result;
    Obtaining a second point cloud in the current annotation frame to be corrected according to the second pose information of the current annotation frame to be corrected;
    and taking the current to-be-corrected annotation frame as a new reference annotation frame, returning to the step of taking the next annotation frame of the reference annotation frame as the current to-be-corrected annotation frame, and registering the first point cloud in the current to-be-corrected annotation frame and the second point cloud in the reference annotation frame until the second pose information of all to-be-corrected annotation frames in the annotation frame sequence is obtained.
  8. The method of claim 6, wherein the point cloud registration result between the current to-be-corrected annotation frame and the reference annotation frame comprises: the pose transformation parameters of the current annotation frame to be corrected relative to the reference annotation frame;
    based on the point cloud registration result, obtaining second pose information of the current to-be-corrected annotation frame comprises the following steps:
    and transforming the second pose information of the reference annotation frame by using the pose transformation parameters to obtain the second pose information of the current annotation frame to be corrected.
  9. The method of claim 8, wherein the pose information comprises: and marking coordinates and orientations of central points of the frames, wherein the pose transformation parameters comprise: rotation parameters and translation parameters.
  10. The method according to any one of claims 1 to 9, wherein determining the labeling result of the point cloud sequence to be labeled according to the second pose information of each labeling frame in each labeling frame sequence comprises:
    for each marking frame sequence, taking second pose information of each marking frame in the marking frame sequence as a node, taking pose transformation parameters of adjacent marking frames as edges, constructing a pose graph, and optimizing the second pose information of each marking frame in the marking frame sequence based on the pose graph to obtain third pose information of each marking frame in the marking frame sequence;
    and obtaining the labeling result of the point cloud sequence to be labeled according to the third pose information of each labeling frame in each labeling frame sequence.
  11. The method of claim 1, further comprising, before obtaining the point cloud registration result between the corresponding annotation frames according to the first point cloud within each annotation frame in the annotation frame sequence: partitioning the annotation frame sequence to obtain consecutive annotation frame subsequences.
  12. The method of claim 11, wherein partitioning the annotation frame sequence to obtain consecutive annotation frame subsequences comprises:
    starting from the first annotation frame of the annotation frame sequence, partitioning the annotation frame sequence once every preset number of annotation frames to obtain the consecutive annotation frame subsequences.
  13. The method of claim 12, wherein the preset number is 10.
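The partitioning of claims 12 and 13 is a fixed-stride split of the sequence; a minimal illustrative sketch (function and parameter names are assumptions, not from the patent):

```python
def split_annotation_sequence(frames, preset=10):
    """Split an annotation-frame sequence into consecutive subsequences,
    cutting once every `preset` frames starting from the first frame
    (cf. claims 12-13, where the preset number is 10).
    """
    return [frames[i:i + preset] for i in range(0, len(frames), preset)]
```

The final subsequence may be shorter than `preset` when the sequence length is not an exact multiple.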
  14. The method according to any one of claims 11 to 13, wherein obtaining the point cloud registration result between the corresponding annotation frames according to the first point cloud in each annotation frame in the annotation frame sequence, and obtaining the second pose information of each annotation frame in the annotation frame sequence based on the point cloud registration result, comprises:
    for each annotation frame subsequence of the annotation frame sequence, obtaining a point cloud registration result between the corresponding annotation frames according to the first point cloud in each annotation frame in the annotation frame subsequence, and obtaining the second pose information of each annotation frame in the annotation frame subsequence based on the point cloud registration result.
  15. A point cloud annotation apparatus, the apparatus comprising:
    an acquisition module configured to acquire a point cloud sequence to be annotated;
    a pre-annotation module configured to pre-annotate each frame of point cloud in the point cloud sequence to be annotated to obtain pre-annotation information of each frame of point cloud, the pre-annotation information comprising: identification information and first pose information of an annotation frame of each target object in the corresponding frame of point cloud;
    a processing module configured to determine an annotation frame sequence corresponding to each target object according to the identification information, and to determine a first point cloud within each annotation frame in the annotation frame sequence according to the first pose information of each annotation frame in the annotation frame sequence;
    a registration module configured to obtain point cloud registration results between corresponding annotation frames according to the first point cloud within each annotation frame in the annotation frame sequence, and to obtain second pose information of each annotation frame in the annotation frame sequence based on the point cloud registration results; and
    a determination module configured to determine an annotation result of the point cloud sequence to be annotated according to the second pose information of each annotation frame in each annotation frame sequence.
  16. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 14 when executing the computer program.
  17. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 14.
CN202080103188.6A 2020-12-23 2020-12-23 Point cloud labeling method, point cloud labeling device, computer equipment and storage medium Pending CN116134488A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/138522 WO2022133776A1 (en) 2020-12-23 2020-12-23 Point cloud annotation method and apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN116134488A CN116134488A (en) 2023-05-16

Family

ID=82158545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080103188.6A Pending CN116134488A (en) 2020-12-23 2020-12-23 Point cloud labeling method, point cloud labeling device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN116134488A (en)
WO (1) WO2022133776A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9443297B2 (en) * 2013-07-10 2016-09-13 Cognex Corporation System and method for selective determination of point clouds
CN109727312B (en) * 2018-12-10 2023-07-04 广州景骐科技有限公司 Point cloud labeling method, point cloud labeling device, computer equipment and storage medium
CN110084895B (en) * 2019-04-30 2023-08-22 上海禾赛科技有限公司 Method and equipment for marking point cloud data
CN111931727A (en) * 2020-09-23 2020-11-13 深圳市商汤科技有限公司 Point cloud data labeling method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665212A (en) * 2023-07-31 2023-08-29 福思(杭州)智能科技有限公司 Data labeling method, device, processing equipment and storage medium
CN116665212B (en) * 2023-07-31 2023-10-13 福思(杭州)智能科技有限公司 Data labeling method, device, processing equipment and storage medium

Also Published As

Publication number Publication date
WO2022133776A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
CN111536964B (en) Robot positioning method and device, and storage medium
CN107796397B (en) Robot binocular vision positioning method and device and storage medium
US11288525B2 (en) Object detection for distorted images
CN109470254B (en) Map lane line generation method, device, system and storage medium
US8359156B2 (en) Map generation system and map generation method by using GPS tracks
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN111192331A (en) External parameter calibration method and device for laser radar and camera
CN108638062B (en) Robot positioning method, device, positioning equipment and storage medium
CN111753649B (en) Parking space detection method, device, computer equipment and storage medium
CN105324729B (en) Method for the ambient enviroment for modeling vehicle
CN114236564B (en) Method for positioning robot in dynamic environment, robot, device and storage medium
WO2022016320A1 (en) Map update method and apparatus, computer device, and storage medium
CN114943952A (en) Method, system, device and medium for obstacle fusion under multi-camera overlapped view field
CN115494533A (en) Vehicle positioning method, device, storage medium and positioning system
CN116134488A (en) Point cloud labeling method, point cloud labeling device, computer equipment and storage medium
CN113566817B (en) Vehicle positioning method and device
CN114445794A (en) Parking space detection model training method, parking space detection method and device
KR20220109537A (en) Apparatus and method for calibration of sensor system of autonomous vehicle
CN115647696B (en) Automatic machining device, machining method and machining terminal for large steel structure
CN114705180B (en) Data correction method, device and equipment for high-precision map and storage medium
CN116626700A (en) Robot positioning method and device, electronic equipment and storage medium
CN114358038B (en) Two-dimensional code coordinate calibration method and device based on vehicle high-precision positioning
CN116342745A (en) Editing method and device for lane line data, electronic equipment and storage medium
CN116576868A (en) Multi-sensor fusion accurate positioning and autonomous navigation method
EP4148392A1 (en) Method and apparatus for vehicle positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination