CN112015938A - Point cloud label transmission method, device and system


Info

Publication number
CN112015938A
Authority
CN
China
Prior art keywords
point cloud
target object
current frame
labeled
label
Prior art date
Legal status
Granted
Application number
CN201910453496.0A
Other languages
Chinese (zh)
Other versions
CN112015938B (en)
Inventor
徐建云
孙杰
朱雨时
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910453496.0A
Publication of CN112015938A
Application granted
Publication of CN112015938B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a point cloud label transmission method, device and system, belonging to the field of data processing. The method comprises the following steps: labeling a target object in a current frame point cloud to obtain a label of the target object, and determining the relative pose information between the current frame point cloud and a point cloud to be labeled as well as the state of the target object. Depending on the state of the target object, its position in the current frame point cloud and its position in the point cloud to be labeled may be the same or different. Therefore, according to the state of the target object and the relative pose information between the two point clouds, the label of the target object in the current frame point cloud, whatever the object's state, can be added to the position of the target object in the point cloud to be labeled. In this way, the target object in the point cloud to be labeled can be labeled more conveniently and efficiently.

Description

Point cloud label transmission method, device and system
Technical Field
The present application relates to the field of data processing, and in particular, to a point cloud label transmission method, apparatus, and system.
Background
A point cloud label generally refers to a set of labels obtained by labeling multiple objects in a point cloud. The label of an object typically records information such as the object's size, category, position, and orientation. Since a frame of point cloud and its adjacent or nearby frames usually contain several of the same objects, the labels of these objects in that frame can be transferred to the adjacent or nearby frames to simplify the labeling work there; this process is point cloud label transmission.
Currently, point cloud label transmission is mainly realized by copying the label of a target object in the current frame point cloud into a point cloud to be labeled that also contains the target object. However, the pose information of different point cloud frames generally differs, so after the label is copied, a deviation often exists: the copied label cannot accurately indicate the information of the target object in the point cloud to be labeled, which leads to inaccurate labeling.
Disclosure of Invention
The embodiments of the present application provide a point cloud label transmission method, apparatus, and system, which can solve the problem in the related art that labeling becomes inaccurate when the label of a target object in the current frame point cloud is simply copied into a point cloud to be labeled that contains the target object. The technical scheme is as follows:
in one aspect, a point cloud label transmission method is provided, and the method includes:
labeling a target object in the current frame point cloud to obtain a label of the target object;
determining relative pose information between the current frame point cloud and a point cloud to be labeled and the state of the target object, wherein the point cloud to be labeled comprises the target object;
and adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
Optionally, the determining the state of the target object includes:
projecting point cloud data in N frames of point clouds into the current frame point cloud, wherein the N frames of point clouds comprise the point cloud to be labeled and the point clouds between the point cloud to be labeled and the current frame point cloud;
and determining the state of the target object according to the projected current frame point cloud.
Optionally, the adding, according to the state of the target object and the relative pose information, a label of the target object in the current frame point cloud to a position of the target object in the point cloud to be labeled includes:
if the target object is in a static state, adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information;
if the target object is in a motion state, acquiring position change information of the target object between the current frame point cloud and the point cloud to be labeled;
and adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Optionally, the adding, according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled, a label of the target object in the current frame point cloud to a position of the target object in the point cloud to be labeled includes:
determining a label reference position of the target object in the point cloud to be labeled according to the relative pose information;
and adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the label reference position of the target object in the point cloud to be labeled and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Optionally, before adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information, the method further includes:
if the target object is located in a region of interest in the point cloud to be labeled, executing the step of adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
Optionally, after the label of the target object in the current frame point cloud is added to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information, the method further includes:
determining the label confidence of the target object in the current frame point cloud and the label confidence of the target object in the point cloud to be labeled;
if the label confidence of the target object in the current frame point cloud is lower than the label confidence of the target object in the point cloud to be labeled, modifying the label of the target object in the point cloud to be labeled;
and adding the modified label to the position of the target object in the current frame point cloud according to the state of the target object and the relative pose information so as to update the label of the target object in the current frame point cloud.
In another aspect, a point cloud label transmission apparatus is provided, the apparatus including:
the labeling module is used for labeling a target object in the current frame point cloud to obtain a label of the target object;
the first determination module is used for determining the relative pose information between the current frame point cloud and the point cloud to be labeled and the state of the target object, wherein the point cloud to be labeled comprises the target object;
and the first adding module is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
Optionally, the first determining module includes:
the projection sub-module is used for projecting point cloud data in N frames of point clouds into the current frame of point cloud, wherein the N frames of point clouds comprise the point cloud to be labeled and the point cloud between the point cloud to be labeled and the current frame of point cloud;
and the determining submodule is used for determining the state of the target object according to the projected current frame point cloud.
Optionally, the first adding module includes:
the first adding submodule is used for adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information if the target object is in a static state;
the obtaining sub-module is used for obtaining the position change information of the target object between the current frame point cloud and the point cloud to be labeled if the target object is in a motion state;
and the second adding submodule is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Optionally, the second adding sub-module includes:
the determining unit is used for determining a label reference position of the target object in the point cloud to be labeled according to the relative pose information;
and the adding unit is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the label reference position of the target object in the point cloud to be labeled and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Optionally, the apparatus further comprises:
and the triggering module is used for triggering the first adding module to add the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information if the target object is located in the region of interest in the point cloud to be labeled.
Optionally, the apparatus further comprises:
the second determining module is used for determining the label confidence of the target object in the current frame point cloud and the label confidence of the target object in the point cloud to be labeled;
a modification module, configured to modify the label of the target object in the point cloud to be labeled if the label confidence of the target object in the current frame point cloud is lower than the label confidence of the target object in the point cloud to be labeled;
and the second adding module is used for adding the modified label to the position of the target object in the current frame point cloud according to the state of the target object and the relative pose information so as to update the label of the target object in the current frame point cloud.
In a third aspect, a point cloud label transmission system is provided, where the system includes a point cloud collector and a point cloud annotation device, and the point cloud annotation device is configured to perform the steps of any method of the first aspect.
In a fourth aspect, a point cloud annotation apparatus is provided, the point cloud annotation apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect described above.
In a fifth aspect, there is provided a computer readable storage medium having stored thereon instructions which, when executed by a processor, carry out the steps of any of the methods of the first aspect described above.
In a sixth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of any of the first aspects above.
The technical scheme provided by the embodiment of the application can at least bring the following beneficial effects:
in the embodiments of the present application, a target object in the current frame point cloud is labeled to obtain a label of the target object, and then the relative pose information between the current frame point cloud and a point cloud to be labeled and the state of the target object are determined. When the target object is in a static state, the position of the target object in the current frame point cloud is the same as its position in the point cloud to be labeled; when the target object is in a motion state, the two positions are different. Therefore, according to the state of the target object and the relative pose information between the current frame point cloud and the point cloud to be labeled, the label of the target object in the current frame point cloud can be added to the position of the target object in the point cloud to be labeled, whichever state the object is in. The point cloud label transmission method provided by the embodiments of the present application can thus transfer the label of a target object in the current frame point cloud to the point cloud to be labeled, making the labeling of the target object in the point cloud to be labeled simpler and more efficient.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
Fig. 2 is a schematic diagram of an acquisition scene of a point cloud provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a point cloud provided in an embodiment of the present application.
Fig. 4 is a flowchart of a first point cloud label transmission method according to an embodiment of the present application.
Fig. 5 is a flowchart of a second point cloud label transmission method according to an embodiment of the present application.
Fig. 6 is a block diagram of a point cloud label transmission apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a point cloud annotation device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application.
Before explaining the embodiments of the present application in detail, an implementation environment of the embodiments of the present application is described:
fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application, and referring to fig. 1, the implementation environment includes a mobile device 100 and a point cloud annotation device 200. The mobile device 100 is connected with the point cloud annotation device 200 through a network. The mobile device 100 is mounted with a point cloud collector 110. The point cloud collector 110 may collect the point cloud during the movement of the movable apparatus 100. The point cloud is a set composed of a plurality of three-dimensional points, and point cloud data in the point cloud is information such as a three-dimensional coordinate and a normal coordinate axis of each three-dimensional point included in the point cloud. The point cloud collector 110 may send the collected point cloud to the point cloud annotation device 200. The three-dimensional points included in the point cloud may be points on objects such as vehicles, people or buildings included in the collection scene.
Illustratively, the movable device 100 may be an automobile, a robot, or the like, and the point cloud collector 110 may be a lidar sensor, a binocular camera, a Time of Flight (TOF) camera, a structured light camera, or the like. The point cloud collector 110 may collect point clouds at a certain collection period and send the collected point clouds to the point cloud annotation device 200. The collection period can be preset according to usage requirements and is not limited in the embodiments of the present application. For example, the collection period may be 0.1 second, that is, the point cloud collector 110 collects one frame of point cloud every 0.1 second.
In one possible implementation, the point cloud collector 110 is a lidar sensor that emits laser light into the collection scene; the emitted laser is reflected back into the sensor when it hits the surface of an object. The lidar sensor can determine the distance between the object and itself from the round-trip time of the laser and the propagation speed of the laser. The lidar sensor may emit 4, 8, 16, or more laser lines. The fewer laser lines it emits, the sparser the collected point cloud; the more laser lines, the denser the point cloud. Within the same frame of point cloud, objects farther from the lidar sensor in the corresponding collection scene are represented by sparser three-dimensional points, and objects closer to it by denser points. For convenience of description, the point clouds referred to in the following embodiments may all be collected by a lidar sensor. Of course, the point clouds in the embodiments of the present application may also be collected by other types of point cloud collectors 110, which is not limited here.
In addition, in some examples, the point cloud collector 110 is a mechanically rotating lidar sensor that rotates 360 degrees around an axis perpendicular to the ground. Each time it rotates through a small angle, it produces one point cloud data packet; a full 360-degree rotation yields multiple packets, and a complete frame of point cloud is obtained by stitching the point clouds in these packets. In other words, the mechanically rotating lidar sensor must rotate around the axis within one collection period to cover the collection scene through 360 degrees. However, during the rotation within one collection period, the position and posture of the movable device 100 may change considerably, so stitching the point clouds in the packets collected in that period is prone to measurement deviation. To eliminate this deviation, the relative pose information of the movable device 100 within the collection period can be obtained first, and the point clouds in the packets can then be projected into the same three-dimensional coordinate system according to that relative pose information and the relative pose information between the packets, before being stitched into one frame of point cloud; this is motion compensation. The three-dimensional coordinate system may be the one corresponding to the point cloud in any packet, which is not limited in the embodiments of the present application.
Of course, the point clouds collected by the point cloud collector 110 may also need motion compensation in other situations; only one possible situation is illustrated here. Conversely, when there is no deviation or the deviation is small, motion compensation may be skipped. That is, motion compensation is not a mandatory step for the point clouds collected by the point cloud collector 110.
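As a rough illustration of the motion compensation just described, the following minimal sketch (not part of the patent; the 4x4 homogeneous pose representation and all names are assumptions) merges the packets of one sweep into the coordinate system of a chosen reference packet:

```python
import numpy as np

def transform_points(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pose @ homogeneous.T).T[:, :3]

def motion_compensate(packets, packet_poses, reference_pose):
    """Stitch per-packet point clouds into one frame in the reference packet's system.

    packets: list of (N_i, 3) point arrays, one per data packet of the sweep.
    packet_poses: list of 4x4 world-from-packet poses, e.g. interpolated from
        the movable device's trajectory during the collection period.
    reference_pose: 4x4 world-from-reference pose of the chosen packet.
    """
    world_to_ref = np.linalg.inv(reference_pose)
    merged = [transform_points(pts, world_to_ref @ pose)
              for pts, pose in zip(packets, packet_poses)]
    return np.vstack(merged)  # one motion-compensated frame of point cloud
```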
Furthermore, in a possible implementation, the movable device 100 may directly acquire the pose information of the current frame point cloud and send it to the point cloud annotation device 200. Alternatively, when the point cloud collector 110 is a lidar sensor, the movable device 100 may obtain, through a lidar odometry algorithm, the relative pose information of the current frame point cloud with respect to the first frame point cloud among the currently collected point clouds, thereby determining the pose information of the current frame point cloud, and send the determined pose information to the point cloud annotation device 200. The current frame point cloud is any one of the point clouds currently collected by the point cloud collector 110.
The point cloud annotation device 200 is a device that provides background services for the movable device 100 and includes a user data store. The point cloud annotation device 200 may receive the point clouds sent by the point cloud collector 110 mounted on the movable device 100, and may determine the collection time and the pose information of each received frame of point cloud.
To provide a more intuitive view of the collection scene and of the point clouds collected by the point cloud collector 110, refer to fig. 2 and fig. 3: fig. 2 is a schematic diagram of a collection scene of a point cloud, and fig. 3 is a schematic diagram of a point cloud collected by the point cloud collector 110.
Fig. 4 is a flowchart of a point cloud label transmission method according to an embodiment of the present application. Referring to fig. 4, the method includes:
step 401: and marking the target object in the current frame point cloud to obtain the label of the target object.
Step 402: and determining the relative pose information between the current frame point cloud and the point cloud to be marked and the state of the target object, wherein the point cloud to be marked comprises the target object.
Step 403: and adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
Optionally, determining the state of the target object includes:
projecting point cloud data in N frames of point clouds into the current frame point cloud, wherein the N frames of point clouds comprise the point cloud to be labeled and the point clouds between the point cloud to be labeled and the current frame point cloud;
and determining the state of the target object according to the projected current frame point cloud.
Optionally, adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information includes:
if the target object is in a static state, adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information;
if the target object is in a motion state, acquiring position change information of the target object between the current frame point cloud and the point cloud to be labeled;
and adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Optionally, adding a label of the target object in the current frame point cloud to a position of the target object in the point cloud to be labeled according to the relative pose information and position change information of the target object between the current frame point cloud and the point cloud to be labeled, including:
determining a label reference position of the target object in the point cloud to be labeled according to the relative pose information;
and adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the label reference position of the target object in the point cloud to be labeled and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Optionally, before adding the tag of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information, the method further includes:
and if the target object is positioned in the region of interest in the point cloud to be labeled, executing a step of adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
Optionally, after adding the tag of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information, the method further includes:
determining the label confidence of a target object in the current frame point cloud and the label confidence of the target object in the point cloud to be labeled;
if the label confidence of the target object in the current frame point cloud is lower than the label confidence of the target object in the point cloud to be labeled, modifying the label of the target object in the point cloud to be labeled;
and adding the modified label to the position of the target object in the current frame point cloud according to the state of the target object and the relative pose information so as to update the label of the target object in the current frame point cloud.
All the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present application, which are not described in detail here.
Fig. 5 is a flowchart of a point cloud label transmission method provided in an embodiment of the present application. The method may be executed by the point cloud annotation device 200 shown in fig. 1. Referring to fig. 5, the method includes:
step 501: and marking the target object in the current frame point cloud to obtain the label of the target object.
It should be noted that the current frame point cloud is any one of the currently collected point clouds. The target object in the current frame point cloud can be labeled with a three-dimensional frame, point by point, or in other ways, which is not limited in the embodiments of the present application. The label of the target object may indicate information such as the size, category, position, and orientation of the target object. It should be understood that, when the target object is labeled with a three-dimensional frame, the label may include that three-dimensional frame, which is a geometric frame surrounding the target object; it may be a three-dimensional rectangular frame, a three-dimensional polygonal frame, or the like, which is also not limited here. In this case, the three-dimensional coordinates of the three-dimensional frame included in the label, expressed in the three-dimensional coordinate system corresponding to the current frame point cloud, are the same as the three-dimensional coordinates of the target object in that coordinate system. That is, in the three-dimensional coordinate system corresponding to the current frame point cloud, the three-dimensional coordinates of the label of the target object are the same as those of the target object.
The three-dimensional coordinate system corresponding to each frame of point cloud is the body coordinate system of the movable device at the time that frame was collected. For convenience of description, the three-dimensional coordinate system corresponding to a point cloud is hereinafter referred to simply as the three-dimensional coordinate system of the point cloud.
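For concreteness, a label as described above might be represented as follows; this is a hypothetical sketch, and the field names and the 0-5 confidence scale (introduced later in this description) are illustrative assumptions rather than a structure defined by the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Label:
    """Hypothetical label of one target object in one point cloud frame."""
    category: str          # e.g. "car", "pedestrian"
    center: np.ndarray     # (3,) center of the 3D frame, in the frame's coordinate system
    size: np.ndarray       # (3,) length, width, height of the 3D frame
    yaw: float             # orientation of the 3D frame around the vertical axis
    confidence: float = 0.0  # label confidence, e.g. on the 0-5 scale used below
```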
Step 502: and determining the relative pose information between the current frame point cloud and the point cloud to be marked and the state of the target object.
It should be noted that the point cloud to be labeled includes the target object and is any frame, other than the current frame point cloud, among the currently collected point clouds; that is, the point cloud to be labeled may be located either before or after the current frame point cloud.
In some embodiments, the movable device may acquire the pose information of the current frame point cloud and of the point cloud to be labeled, so the point cloud annotation device can obtain both from the movable device and then determine the relative pose information between the current frame point cloud and the point cloud to be labeled from their pose information. Alternatively, in other embodiments, when the point cloud collector mounted on the movable device is a lidar sensor, the movable device may obtain, through lidar odometry, the relative pose information of the current frame point cloud with respect to the first frame point cloud among the currently collected point clouds, as well as the relative pose information of the point cloud to be labeled with respect to that first frame point cloud. The point cloud annotation device can then acquire these two pieces of relative pose information from the movable device and determine the relative pose information between the current frame point cloud and the point cloud to be labeled. Of course, the relative pose information between the two point clouds can also be determined in other ways, which is not limited in the embodiments of the present application.
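As a small illustration of this last computation (continuing the assumed 4x4 homogeneous pose representation of the earlier sketch, not an API from the patent), the relative pose between the two frames can be composed from their poses in a shared reference such as the first frame:

```python
import numpy as np

def relative_pose(pose_current: np.ndarray, pose_target: np.ndarray) -> np.ndarray:
    """Transform mapping current-frame coordinates into the target frame.

    Both arguments are 4x4 poses expressed in a shared reference (e.g. each
    frame's pose relative to the first frame, as produced by lidar odometry).
    """
    return np.linalg.inv(pose_target) @ pose_current
```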
Next, how to determine the state of the target object in step 502 will be described. In one possible implementation, determining the state of the target object may be achieved through the following steps (1) to (2).
(1): and projecting all point cloud data or part of point cloud data in the N frames of point clouds into the current frame of point cloud.
It should be noted that the point cloud data of any one of the N frames may be data such as the three-dimensional coordinates and normal vector of each three-dimensional point in that frame, where the three-dimensional coordinates are expressed in that frame's three-dimensional coordinate system. Projecting all or part of the point cloud data of the N frames into the current frame point cloud therefore means converting the three-dimensional coordinates of all or part of the three-dimensional points of each of the N frames from their own coordinate systems into the three-dimensional coordinate system of the current frame point cloud.
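Reusing the transform_points and relative_pose helpers sketched above (with the same assumed pose representation), this projection step could look like:

```python
def project_into_current(frames, frame_poses, current_pose):
    """Project the point cloud data of N frames into the current frame point cloud.

    frames: list of (N_i, 3) point arrays, one per frame.
    frame_poses: list of 4x4 poses of those frames in a shared reference.
    current_pose: 4x4 pose of the current frame in the same reference.
    Returns a list of point arrays expressed in the current frame's system.
    """
    return [transform_points(pts, relative_pose(pose, current_pose))
            for pts, pose in zip(frames, frame_poses)]
```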
(2): and determining the state of the target object according to the projected current frame point cloud.
It should be noted that the state of the target object may be a static state or a motion state. If the target object is static, then in the projected current frame point cloud, for each physical point on the target object, the three-dimensional coordinates of its three-dimensional points from the N frames are the same as those of its three-dimensional point in the current frame point cloud, or differ by an amount within a reference threshold range. If the target object is moving, the difference between those coordinates is larger than the reference threshold range. The reference threshold range may be preset according to usage requirements and is not limited in the embodiments of the present application. Under these conditions, whether the target object to be labeled is static or moving can be determined from whether, in the projected current frame point cloud, the difference between the three-dimensional coordinates of the three-dimensional points corresponding to the target object's physical points in the N frames and the corresponding coordinates in the current frame point cloud exceeds the reference threshold range: if the difference exceeds the range, the target object is determined to be in a motion state; otherwise, it is determined to be in a static state.
In addition, based on the above description, when the target object is moving, the coordinate differences in the projected current frame point cloud exceed the reference threshold range, so the target object leaves a relatively obvious track in the projected current frame point cloud. Conversely, when the target object is static, it leaves no obvious track. Accordingly, whether the target object is static or moving can also be determined from the track of the target object in the projected current frame point cloud.
For example, when the track length of the target object in the projected current frame point cloud is greater than a length threshold, the target object may be determined to be in a motion state, otherwise, the target object is determined to be in a static state. The length threshold may be preset, and this is not limited in this application. Of course, whether the target object is in a stationary state or a moving state may also be determined in other ways, which is not limited in this application.
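A minimal sketch of this track-based check (the extraction of the track itself is assumed to be available; the threshold is a free parameter, as in the text):

```python
import numpy as np

def object_state(track: np.ndarray, length_threshold: float) -> str:
    """Classify a target object as moving or static from its projected track.

    track: (K, 3) positions of the object (e.g. its centroid) in the
        projected current frame point cloud, one per contributing frame.
    """
    # Total length of the polyline traced by the object across the N frames.
    track_length = float(np.linalg.norm(np.diff(track, axis=0), axis=1).sum())
    return "moving" if track_length > length_threshold else "static"
```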
Step 503: and adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information between the current frame point cloud and the point cloud to be labeled.
Specifically, step 503 may be implemented through the following steps A to C.
Step A: and if the target object is in a static state, adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information between the current frame point cloud and the point cloud to be labeled.
When the target object is static, if the point cloud data in the current frame point cloud is projected into the point cloud to be labeled according to the relative pose information between the two, then in the projected point cloud to be labeled, the three-dimensional coordinates of the three-dimensional point corresponding to each physical point of the target object in the current frame point cloud are the same as those of the corresponding three-dimensional point in the point cloud to be labeled, or differ within the reference threshold range. Because the three-dimensional coordinates of the label of the target object are the same as those of the target object in the current frame point cloud, converting the label's coordinates from the three-dimensional coordinate system of the current frame point cloud into that of the point cloud to be labeled, according to the relative pose information, yields coordinates that are the same as those of the target object in the point cloud to be labeled, or that differ within the reference threshold range. In other words, the converted position of the label coincides with the position of the target object in the point cloud to be labeled. The label of the target object in the current frame point cloud is therefore added to the position of the target object in the point cloud to be labeled.
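Continuing the Label and pose sketches above (all assumed representations), the static case reduces to transforming the label's pose with the relative pose:

```python
import numpy as np

def transfer_static_label(label, rel_pose):
    """Transfer a static object's label into the point cloud to be labeled.

    rel_pose: 4x4 transform from the current frame's coordinate system into
        the to-be-labeled frame's, i.e. relative_pose(current, target).
    """
    new_center = (rel_pose @ np.append(label.center, 1.0))[:3]
    # Rotate the frame orientation as well; only yaw is tracked in this sketch.
    yaw_offset = float(np.arctan2(rel_pose[1, 0], rel_pose[0, 0]))
    return Label(label.category, new_center, label.size.copy(),
                 label.yaw + yaw_offset, label.confidence)
```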
And B: and if the target object is in a motion state, acquiring the position change information of the target object between the current frame point cloud and the point cloud to be marked.
When the target object is moving, if the point cloud data in the current frame point cloud is projected into the point cloud to be labeled according to the relative pose information between the two, then in the projected point cloud to be labeled, the three-dimensional coordinates of the three-dimensional point corresponding to each physical point of the target object in the current frame point cloud differ from those of the corresponding three-dimensional point in the point cloud to be labeled by more than the reference threshold range. Because the three-dimensional coordinates of the label of the target object are the same as those of the target object in the current frame point cloud, converting the label's coordinates into the three-dimensional coordinate system of the point cloud to be labeled, according to the relative pose information, yields coordinates that differ from those of the target object in the point cloud to be labeled by more than the reference threshold range. In other words, the converted position of the label differs from the position of the target object in the point cloud to be labeled. This difference between the converted coordinates of the label and the coordinates of the target object in the three-dimensional coordinate system of the point cloud to be labeled is exactly the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Next, two possible implementation manners of determining the position change information of the target object between the current frame point cloud and the point cloud to be labeled are described.
In a first possible implementation, after the target object is determined to be in a motion state, the first frame point cloud and the last frame point cloud that contain the target object among the currently collected point clouds are acquired. The first frame point cloud is projected into the last frame point cloud, and, in the three-dimensional coordinate system of the projected last frame point cloud, the three-dimensional coordinates of the target object's centroid from the first frame point cloud and from the last frame point cloud are determined. From these two coordinates, the displacement between the two centroids of the target object in the projected last frame point cloud can be determined and used as the position change information of the target object between the first and last frame point clouds. Since a displacement is a vector including direction information and distance information, this position change information is also such a vector. Then, the collection time interval between the first and last frame point clouds is determined from their collection times. From the centroid displacement and this time interval, the motion speed of the target object between the first and last frame point clouds can be determined. From that motion speed and the collection time interval between the current frame point cloud and the point cloud to be labeled, the position change information of the target object between the current frame point cloud and the point cloud to be labeled can then be determined. Specifically, the point cloud data in the current frame point cloud may be projected into the point cloud to be labeled, and the motion speed multiplied by the collection time interval between the two frames gives the displacement, in the three-dimensional coordinate system of the projected point cloud to be labeled, between the target object's centroid in the current frame point cloud and its centroid in the point cloud to be labeled; this displacement is the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
It should be noted that, once the motion speed of the target object between the first and last frame point clouds has been determined, for any other point cloud to be labeled that contains the target object and lies between the first and last frame point clouds, the position change information of the target object between the current frame point cloud and that point cloud can be determined directly from the motion speed and the collection time interval between the current frame point cloud and that point cloud.
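A compact sketch of this first implementation, under the same assumptions as before (centroids already expressed in a common projected coordinate system, timestamps in seconds):

```python
import numpy as np

def estimate_velocity(centroid_first: np.ndarray, centroid_last: np.ndarray,
                      t_first: float, t_last: float) -> np.ndarray:
    """Motion speed of the object between its first and last frames,
    as a (3,) velocity vector (direction and magnitude)."""
    return (centroid_last - centroid_first) / (t_last - t_first)

def position_change(velocity: np.ndarray, t_current: float, t_target: float) -> np.ndarray:
    """Position change information between the current frame and the frame
    to be labeled: velocity times the collection time interval."""
    return velocity * (t_target - t_current)
```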
In a second possible implementation, after the target object is determined to be in a motion state, the position change information of the target object between the current frame point cloud and the point cloud to be labeled can be determined directly from those two point clouds. Specifically, the current frame point cloud may be projected into the point cloud to be labeled, and, in the three-dimensional coordinate system of the projected point cloud to be labeled, the three-dimensional coordinates of the target object's centroid from the current frame point cloud and from the point cloud to be labeled are determined. From these two coordinates, the displacement between the two centroids of the target object in the projected point cloud to be labeled is determined and used as the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Of course, in the embodiments of the present application, the position change information of the target object between the current frame point cloud and the point cloud to be labeled may also be determined in other ways, which is not limited here.
And C: and adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information between the current frame point cloud and the point cloud to be labeled and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
In a possible implementation, the label reference position of the target object in the point cloud to be labeled can be determined according to the relative pose information. Then, the label of the target object in the current frame point cloud is added to the position of the target object in the point cloud to be labeled according to that label reference position and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Specifically, the three-dimensional coordinates of the label of the target object may be converted from the three-dimensional coordinate system of the current frame point cloud into that of the point cloud to be labeled according to the relative pose information, giving the label reference position of the target object in the point cloud to be labeled, that is, the label reference three-dimensional coordinates. The direction of the target object relative to the label reference coordinates is then determined from the direction information included in the position change information of the target object between the current frame point cloud and the point cloud to be labeled, and the distance of the target object from the label reference coordinates is determined from the distance information included in the position change information. This yields the three-dimensional coordinates of the target object's centroid in the point cloud to be labeled, that is, the position of the target object in the point cloud to be labeled, and the label of the target object in the current frame point cloud is then added at that position.
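Putting the pieces together for the moving case (continuing the assumed Label and pose sketches; the displacement argument is the position change information expressed in the to-be-labeled frame's coordinate system):

```python
import numpy as np

def transfer_moving_label(label, rel_pose, displacement):
    """Transfer a moving object's label: transform into the to-be-labeled
    frame to get the label reference position, then shift by the object's
    own position change between the two frames."""
    reference_center = (rel_pose @ np.append(label.center, 1.0))[:3]
    yaw_offset = float(np.arctan2(rel_pose[1, 0], rel_pose[0, 0]))
    return Label(label.category, reference_center + displacement,
                 label.size.copy(), label.yaw + yaw_offset, label.confidence)
```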
Based on the above description, once the label of the target object in the current frame point cloud has been added to the position of the target object in the point cloud to be labeled, the labeling of the target object in the point cloud to be labeled is complete. This process can also be understood as transferring the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled. For any other currently collected point cloud to be labeled that includes the target object, the label of the target object in the current frame point cloud can likewise be transferred to the position of the target object in that point cloud through the above steps 501-503.
It should be understood that, because the point cloud to be labeled may be located either before or after the current frame point cloud, the label of the target object in the current frame point cloud may be transferred to point clouds to be labeled that contain the target object and precede the current frame, to those that follow it, or to both at the same time, which is not limited in the embodiments of the present application.
In addition, in a possible case, the label of the target object in the current frame point cloud may be transferred successively, frame by frame, to the point clouds to be labeled before the current frame point cloud, or successively to those after it, or successively in both directions at the same time.
Optionally, before step 503, the method may further include: if the target object is located in the region of interest in the point cloud to be labeled, executing the step of adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
It should be noted that the Region of Interest (ROI) is the region of the point cloud to be labeled that needs attention. For example, in a possible case where the collection scene corresponding to the currently collected point clouds is an outdoor parking lot, the region of interest may be the area where the road is located and the areas where the parking spaces are located, while the areas occupied by other buildings in the scene are non-regions of interest.
To reduce the labeling workload for the point cloud to be labeled, a region of interest may be preset, and before the label of the target object in the current frame point cloud is added to the position of the target object in the point cloud to be labeled, it may be determined whether the target object is located in the region of interest in the point cloud to be labeled. If it is, the position of the target object in the point cloud to be labeled needs attention, and the step of adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled can be executed. If it is not, that position does not need attention, and the adding step can be omitted, which simplifies the labeling process of the point cloud to be labeled and improves its labeling efficiency.
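As an illustration only, with an axis-aligned box as an assumed ROI representation (real ROIs could equally be polygons on the ground plane):

```python
import numpy as np

def in_region_of_interest(center: np.ndarray, roi_min: np.ndarray,
                          roi_max: np.ndarray) -> bool:
    """Is the object's center inside the axis-aligned ROI box?

    roi_min, roi_max: (3,) opposite corners of the ROI in the coordinate
    system of the point cloud to be labeled.
    """
    return bool(np.all(center >= roi_min) and np.all(center <= roi_max))
```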
Optionally, after step 503, the three-dimensional frame included in the label added at the position of the target object in the point cloud to be labeled may be checked and adjusted so that it surrounds the target object more accurately.
Optionally, after step 503, the following steps (1) to (3) may also be included.
(1): and determining the label confidence of the target object in the current frame point cloud and the label confidence of the target object in the point cloud to be labeled.
It should be noted that the label confidence indicates how trustworthy the information of the target object recorded by the label is. The higher the label confidence, the more credible, that is, the more accurate, the recorded information of the target object; the lower the label confidence, the less credible, that is, the less accurate, that information.
In a possible implementation manner, the label confidence of the target object may be estimated automatically from the number and position distribution of the three-dimensional points enclosed by the three-dimensional frame included in the label of the target object, or the label confidence sent by the user may be received directly. The label confidence may take a value from 0 to 5: the smaller the value, the lower the confidence, and the larger the value, the higher the confidence.
For example, suppose the target object is an automobile. If the target object in the current frame point cloud includes only a few three-dimensional points, concentrated mainly on the head of the automobile, the determined length, width, and similar information of the automobile is relatively inaccurate, and the label confidence may be set to 2, that is, relatively low. If the target object includes many three-dimensional points with a uniform position distribution, the outline of the automobile is presented more completely, the determined length, width, and similar information is more accurate, and the label confidence may be set to 4, that is, relatively high.
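One way such an automatic estimate might be computed is sketched below: the score rises with the number of enclosed three-dimensional points and with how evenly they cover the box. The 0-5 scale follows the text above, but the thresholds and the spread measure are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def estimate_label_confidence(points_in_box, box_dims):
    """points_in_box: (N, 3) array of points inside the label's 3D frame;
    box_dims: (length, width, height) of that frame."""
    pts = np.asarray(points_in_box, dtype=float).reshape(-1, 3)
    if len(pts) == 0:
        return 0
    # A uniform spread over an interval of length L has std L/sqrt(12) ~ 0.29L,
    # so near-uniform coverage pushes this ratio toward 1.
    spread = np.std(pts, axis=0) / np.maximum(np.asarray(box_dims, float), 1e-6)
    coverage = float(np.clip(spread.mean() / 0.29, 0.0, 1.0))
    density = min(len(pts) / 200.0, 1.0)  # assume ~200 points marks a dense hit
    return int(round(5 * (0.5 * density + 0.5 * coverage)))
```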
(2): and if the label confidence coefficient of the target object in the current frame point cloud is lower than that of the target object in the point cloud to be labeled, modifying the label of the target object in the point cloud to be labeled.
In a possible case, the visibility of the target object in the current frame point cloud is low, or the target object includes only a few, unevenly distributed three-dimensional points, so the label confidence of the target object in the current frame point cloud is low. After the label is added to the position of the target object in the point cloud to be labeled, the target object there may have higher visibility or include more, evenly distributed three-dimensional points; that is, the label confidence of the target object in the point cloud to be labeled is higher than that in the current frame point cloud. In that case, the label added to the target object in the point cloud to be labeled can be modified.
The operation of modifying the label added to the target object in the point cloud to be labeled may be: adjusting the length, width, height, and similar information of the three-dimensional frame included in the label so that the frame completely surrounds the target object, and then modifying the length, width, height, and similar information of the target object recorded in the label according to the adjusted frame. Of course, the label added to the target object in the point cloud to be labeled may also be modified in other ways, which is not limited in this embodiment of the application.
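A minimal refit under these assumptions could stretch the frame to the extent of the object's points and write the new dimensions back into the label; the dictionary layout of the label is illustrative only.

```python
import numpy as np

def refit_label_box(label, points_in_box):
    """Enlarge/shrink the label's 3D frame to enclose the object's points."""
    pts = np.asarray(points_in_box, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    label["center"] = tuple((lo + hi) / 2.0)  # recenter on the point extent
    label["size"] = tuple(hi - lo)            # adjusted length, width, height
    return label
```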
(3): and adding the modified label to the position of the target object in the current frame point cloud according to the state and the relative pose information of the target object so as to update the label of the target object in the current frame point cloud.
Because the label confidence of the target object in the current frame point cloud is lower than that in the point cloud to be labeled, after the label of the target object in the point cloud to be labeled is modified, the modified label can be added to the position of the target object in the current frame point cloud according to the state of the target object and the relative pose information between the point cloud to be labeled and the current frame point cloud, so as to update the label of the target object in the current frame point cloud and thereby raise its label confidence.
The operation of adding the modified label to the position of the target object in the current frame point cloud to update the label may be: first deleting the label of the target object in the current frame point cloud, and then adding the modified label at the position of the target object in the current frame point cloud, thereby updating the label. For adding the modified label, reference may be made to the description in step 503 of adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled, or other manners may be used.
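As a sketch of this inverse transfer, the modified label's center can be carried back through the inverse of a 4x4 relative pose. T_lab_from_cur is assumed here, for illustration only, to map current-frame coordinates into the point cloud to be labeled; for a moving target object, the object's own displacement between the two frames would have to be undone as well, mirroring the forward transfer.

```python
import numpy as np

def update_current_frame_label(modified_center, T_lab_from_cur):
    """Map a label center from the point cloud to be labeled back into the
    current frame point cloud."""
    T_cur_from_lab = np.linalg.inv(np.asarray(T_lab_from_cur, dtype=float))
    c = np.append(np.asarray(modified_center, dtype=float), 1.0)  # homogeneous
    return tuple((T_cur_from_lab @ c)[:3])
```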
In this embodiment of the application, the target object in the current frame point cloud is labeled to obtain its label, and then the relative pose information between the current frame point cloud and the point cloud to be labeled and the state of the target object are determined. When the target object is in a static state, its position in the current frame point cloud is the same as its position in the point cloud to be labeled; when the target object is in a motion state, the two positions differ. Therefore, according to the state of the target object and the relative pose information between the current frame point cloud and the point cloud to be labeled, the label of the target object in the current frame point cloud can be added, whatever the object's state, to the position of the target object in the point cloud to be labeled. The point cloud label transfer method provided by this embodiment can transfer the label of the target object in the current frame point cloud to the point cloud to be labeled, making the labeling of the target object in the point cloud to be labeled simpler and more efficient.
Fig. 6 is a block diagram of a point cloud label transfer apparatus provided in an embodiment of the present application, which may be applied to a point cloud annotation device. Referring to Fig. 6, the apparatus includes a labeling module 601, a first determining module 602, and a first adding module 603.
A labeling module 601, configured to label a target object in a current frame point cloud to obtain a label of the target object;
a first determining module 602, configured to determine relative pose information between a current frame point cloud and a point cloud to be labeled, and a state of a target object, where the point cloud to be labeled includes the target object;
the first adding module 603 is configured to add a label of the target object in the current frame point cloud to a position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
Optionally, the first determining module 602 includes:
the projection submodule is used for projecting point cloud data in N frames of point clouds into the current frame point cloud, where the N frames of point clouds include the point cloud to be labeled and the point clouds between the point cloud to be labeled and the current frame point cloud;
and the determining submodule is used for determining the state of the target object according to the projected current frame point cloud.
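A sketch of how the determining submodule might use the projection is given below: each frame's view of the object is aligned into the current frame's coordinates, and the object is called static if its cluster barely drifts after alignment. The names, the 4x4 pose convention, and the 0.2 m threshold are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def determine_state(object_centers, poses_to_current, drift_thresh=0.2):
    """object_centers[i]: object center observed in frame i;
    poses_to_current[i]: 4x4 transform from frame i into the current frame."""
    projected = []
    for c, T in zip(object_centers, poses_to_current):
        ch = np.append(np.asarray(c, dtype=float), 1.0)      # homogeneous
        projected.append((np.asarray(T, dtype=float) @ ch)[:3])
    drift = float(np.ptp(np.asarray(projected), axis=0).max())  # max span
    return "static" if drift < drift_thresh else "moving"
```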
Optionally, the first adding module 603 includes:
the first adding submodule is used for adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information if the target object is in a static state;
the acquisition submodule is used for acquiring position change information of the target object between the current frame point cloud and the point cloud to be labeled if the target object is in a motion state;
and the second adding submodule is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
Optionally, the second adding submodule includes:
the determining unit is used for determining the label reference position of the target object in the point cloud to be labeled according to the relative pose information;
and the adding unit is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the label reference position of the target object in the point cloud to be labeled and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
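The two-step transfer performed by the determining unit and the adding unit could be sketched as follows: map the label's center through the relative pose to obtain the label reference position, then shift it by the object's own displacement between the two frames. The pose convention and the names are illustrative assumptions; with a zero displacement, this reduces to the static-state branch.

```python
import numpy as np

def transfer_moving_label(center_cur, T_lab_from_cur, displacement):
    """center_cur: label center in the current frame; T_lab_from_cur: 4x4 pose
    into the point cloud to be labeled; displacement: the object's own motion."""
    ch = np.append(np.asarray(center_cur, dtype=float), 1.0)   # homogeneous
    reference = (np.asarray(T_lab_from_cur, dtype=float) @ ch)[:3]
    return tuple(reference + np.asarray(displacement, dtype=float))
```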
Optionally, the apparatus further comprises:
and the triggering module is used for triggering the first adding module to add the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information if the target object is located in the region of interest in the point cloud to be labeled.
Optionally, the apparatus further comprises:
the second determining module is used for determining the label confidence coefficient of the target object in the current frame point cloud and the label confidence coefficient of the target object in the point cloud to be labeled;
the modifying module is used for modifying the label of the target object in the point cloud to be labeled if the label confidence coefficient of the target object in the current frame point cloud is lower than that of the target object in the point cloud to be labeled;
and the second adding module is used for adding the modified label to the position of the target object in the current frame point cloud according to the state of the target object and the relative pose information so as to update the label of the target object in the current frame point cloud.
In this embodiment of the application, the target object in the current frame point cloud is labeled to obtain its label, and then the relative pose information between the current frame point cloud and the point cloud to be labeled and the state of the target object are determined. When the target object is in a static state, its position in the current frame point cloud is the same as its position in the point cloud to be labeled; when the target object is in a motion state, the two positions differ. Therefore, according to the state of the target object and the relative pose information between the current frame point cloud and the point cloud to be labeled, the label of the target object in the current frame point cloud can be added, whatever the object's state, to the position of the target object in the point cloud to be labeled. The point cloud label transfer apparatus provided by this embodiment can transfer the label of the target object in the current frame point cloud to the point cloud to be labeled, making the labeling of the target object in the point cloud to be labeled simpler and more efficient.
It should be noted that when the point cloud label transfer apparatus provided in the above embodiment transfers a point cloud label, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the point cloud label transfer apparatus and the point cloud label transfer method provided by the above embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not described here again.
Fig. 7 is a schematic structural diagram of a point cloud annotation device 700 according to an embodiment of the present application. The point cloud annotation device 700 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 701 and one or more memories 702, where the memory 702 stores at least one instruction that is loaded and executed by the processor 701. Of course, the annotation device 700 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may include other components for implementing device functions, which are not described here again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions, is also provided, where the instructions are executable by a processor in the point cloud annotation device to perform the point cloud label transfer method of the above embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a point cloud label transfer system is further provided. The system includes a point cloud collector and a point cloud annotation device, and the point cloud annotation device is configured to execute the steps of the above point cloud label transfer method.
In an exemplary embodiment, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the point cloud label transfer method described above.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. A point cloud label transfer method, the method comprising:
labeling a target object in a current frame point cloud to obtain a label of the target object;
determining relative pose information between the current frame point cloud and a point cloud to be labeled, and a state of the target object, wherein the point cloud to be labeled comprises the target object;
and adding the label of the target object in the current frame point cloud to a position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
2. The method of claim 1, wherein said determining the state of the target object comprises:
projecting point cloud data in N frames of point clouds into the current frame point cloud, wherein the N frames of point clouds comprise the point cloud to be labeled and the point clouds between the point cloud to be labeled and the current frame point cloud;
and determining the state of the target object according to the projected current frame point cloud.
3. The method of claim 1, wherein the adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information comprises:
if the target object is in a static state, adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information;
if the target object is in a motion state, acquiring position change information of the target object between the current frame point cloud and the point cloud to be labeled;
and adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
4. The method of claim 3, wherein the adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled comprises:
determining a label reference position of the target object in the point cloud to be labeled according to the relative pose information;
and adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the label reference position of the target object in the point cloud to be labeled and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
5. The method of any one of claims 1 to 4, wherein before the step of adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information, the method further comprises:
and if the target object is positioned in the region of interest in the point cloud to be labeled, executing a step of adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
6. The method of any one of claims 1 to 4, wherein after the step of adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information, the method further comprises:
determining the label confidence of the target object in the current frame point cloud and the label confidence of the target object in the point cloud to be labeled;
if the label confidence coefficient of the target object in the current frame point cloud is lower than that of the target object in the point cloud to be labeled, modifying the label of the target object in the point cloud to be labeled;
and adding the modified label to the position of the target object in the current frame point cloud according to the state of the target object and the relative pose information so as to update the label of the target object in the current frame point cloud.
7. A point cloud label transfer apparatus, the apparatus comprising:
the labeling module is used for labeling a target object in a current frame point cloud to obtain a label of the target object;
the first determining module is used for determining relative pose information between the current frame point cloud and a point cloud to be labeled and a state of the target object, wherein the point cloud to be labeled comprises the target object;
and the first adding module is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information.
8. The apparatus of claim 7, wherein the first determining module comprises:
the projection sub-module is used for projecting point cloud data in N frames of point clouds into the current frame of point cloud, wherein the N frames of point clouds comprise the point cloud to be labeled and the point cloud between the point cloud to be labeled and the current frame of point cloud;
and the determining submodule is used for determining the state of the target object according to the projected current frame point cloud.
9. The apparatus of claim 7, wherein the first adding module comprises:
the first adding submodule is used for adding a label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information if the target object is in a static state;
the obtaining sub-module is used for obtaining position change information of the target object between the current frame point cloud and the point cloud to be labeled if the target object is in a motion state;
and the second adding submodule is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the relative pose information and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
10. The apparatus of claim 9, wherein the second add submodule comprises:
the determining unit is used for determining a label reference position of the target object in the point cloud to be labeled according to the relative pose information;
and the adding unit is used for adding the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the label reference position of the target object in the point cloud to be labeled and the position change information of the target object between the current frame point cloud and the point cloud to be labeled.
11. The apparatus of any of claims 7-10, wherein the apparatus further comprises:
and the triggering module is used for triggering the first adding module to add the label of the target object in the current frame point cloud to the position of the target object in the point cloud to be labeled according to the state of the target object and the relative pose information if the target object is located in the region of interest in the point cloud to be labeled.
12. The apparatus of any of claims 7-10, wherein the apparatus further comprises:
the second determining module is used for determining the label confidence of the target object in the current frame point cloud and the label confidence of the target object in the point cloud to be labeled;
a modification module, configured to modify a tag of the target object in the point cloud to be labeled if the tag confidence of the target object in the current frame point cloud is lower than the tag confidence of the target object in the point cloud to be labeled;
and the second adding module is used for adding the modified label to the position of the target object in the current frame point cloud according to the state of the target object and the relative pose information so as to update the label of the target object in the current frame point cloud.
13. A point cloud label transfer system, comprising a point cloud collector and a point cloud annotation device, wherein the point cloud annotation device is used for executing the point cloud label transfer method of any one of claims 1 to 6.
CN201910453496.0A 2019-05-28 2019-05-28 Point cloud label transfer method, device and system Active CN112015938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910453496.0A CN112015938B (en) 2019-05-28 2019-05-28 Point cloud label transfer method, device and system


Publications (2)

Publication Number Publication Date
CN112015938A true CN112015938A (en) 2020-12-01
CN112015938B CN112015938B (en) 2024-06-14

Family

ID=73501695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910453496.0A Active CN112015938B (en) 2019-05-28 2019-05-28 Point cloud label transfer method, device and system

Country Status (1)

Country Link
CN (1) CN112015938B (en)



Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016170330A1 (en) * 2015-04-24 2016-10-27 Oxford University Innovation Limited Processing a series of images to identify at least a portion of an object
WO2017087201A1 (en) * 2015-11-18 2017-05-26 Faro Technologies, Inc. Automated generation of a three-dimensional scanner video
US20190147220A1 (en) * 2016-06-24 2019-05-16 Imperial College Of Science, Technology And Medicine Detecting objects in video data
US20190011566A1 (en) * 2017-07-04 2019-01-10 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for identifying laser point cloud data of autonomous vehicle
US10152771B1 (en) * 2017-07-31 2018-12-11 SZ DJI Technology Co., Ltd. Correction of motion-based inaccuracy in point clouds
CN109509260A (en) * 2017-09-14 2019-03-22 百度在线网络技术(北京)有限公司 Mask method, equipment and the readable medium of dynamic disorder object point cloud
CN108152831A (en) * 2017-12-06 2018-06-12 中国农业大学 A kind of laser radar obstacle recognition method and system
CN109271880A (en) * 2018-08-27 2019-01-25 深圳清创新科技有限公司 Vehicle checking method, device, computer equipment and storage medium
CN109471128A (en) * 2018-08-30 2019-03-15 福瑞泰克智能***有限公司 A kind of positive sample production method and device
CN109633685A (en) * 2018-11-22 2019-04-16 浙江中车电车有限公司 A kind of method and system based on laser radar obstruction detection state
CN109727312A (en) * 2018-12-10 2019-05-07 广州景骐科技有限公司 Point cloud mask method, device, computer equipment and storage medium
CN109726647A (en) * 2018-12-14 2019-05-07 广州文远知行科技有限公司 Mask method, device, computer equipment and the storage medium of point cloud

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
OLE SCHUMANN et al.: "Semantic Segmentation on Radar Point Clouds", 2018 21st International Conference on Information Fusion (FUSION) *
JIANG Wenting; GONG Xiaojin; LIU Jilin: "Dense semantic map construction for large-scale scenes based on incremental computation", Journal of Zhejiang University (Engineering Science), no. 02, 15 February 2016 (2016-02-15), pages 198-204 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792653A (en) * 2021-09-13 2021-12-14 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image
CN113792653B (en) * 2021-09-13 2023-10-20 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image
CN114937144A (en) * 2022-05-17 2022-08-23 苏州思卡智能科技有限公司 Projection classification method for 3D contour of vehicle

Also Published As

Publication number Publication date
CN112015938B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
US11030803B2 (en) Method and apparatus for generating raster map
CN108694882B (en) Method, device and equipment for labeling map
US11227395B2 (en) Method and apparatus for determining motion vector field, device, storage medium and vehicle
US11067669B2 (en) Method and apparatus for adjusting point cloud data acquisition trajectory, and computer readable medium
EP3570253B1 (en) Method and device for reconstructing three-dimensional point cloud
CN109490825B (en) Positioning navigation method, device, equipment, system and storage medium
CN111784836B (en) High-precision map generation method, device, equipment and readable storage medium
KR20210036317A (en) Mobile edge computing based visual positioning method and device
CN109814137B (en) Positioning method, positioning device and computing equipment
CN112101209A (en) Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN116105742B (en) Composite scene inspection navigation method, system and related equipment
CN112015938B (en) Point cloud label transfer method, device and system
CN113379748B (en) Point cloud panorama segmentation method and device
CN114140592A (en) High-precision map generation method, device, equipment, medium and automatic driving vehicle
CN114140527A (en) Dynamic environment binocular vision SLAM method based on semantic segmentation
CN112381873B (en) Data labeling method and device
WO2022126380A1 (en) Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN112017202B (en) Point cloud labeling method, device and system
An et al. Image-based positioning system using LED Beacon based on IoT central management
CN111380529B (en) Mobile device positioning method, device and system and mobile device
CN112639822A (en) Data processing method and device
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN116626700A (en) Robot positioning method and device, electronic equipment and storage medium
CN110853098A (en) Robot positioning method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant