CN112116698A - Method and device for point cloud fusion - Google Patents

Method and device for point cloud fusion

Info

Publication number
CN112116698A
Authority
CN
China
Prior art keywords
point
point cloud
frame
weighted
cell
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910540418.4A
Other languages
Chinese (zh)
Inventor
M·多梅林
李千山
田文鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG filed Critical Bayerische Motoren Werke AG
Priority to CN201910540418.4A
Publication of CN112116698A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention provides a method and a device for point cloud fusion. The method may comprise, and the device may be configured to perform, the following: receiving a first frame point cloud from a sensor, each point in the first frame point cloud having an associated weight; mapping the first frame point cloud into a grid comprising a plurality of cells; performing weighted averaging on the points of the first frame point cloud mapped into each cell of the grid to obtain a first weighted point of the cell; receiving a second frame point cloud from the sensor, each point in the second frame point cloud having an associated weight; mapping the second frame point cloud into the grid comprising the first weighted points; and performing weighted averaging on the points of the second frame point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point of the cell. By adopting the method and the device disclosed by the invention, the processing efficiency of point cloud data can be significantly improved and processing resources can be saved.

Description

Method and device for point cloud fusion
Technical Field
The present invention relates to processing of point cloud data, and more particularly, to a method and apparatus for point cloud fusion.
Background
Autonomous driving has long been the subject of research efforts aimed at improving the safety and efficiency of automotive transportation. In recent years, increasingly sophisticated sensors have made autonomous driving systems increasingly practical. For example, 3D scanners (e.g., lidar, stereo cameras, time-of-flight cameras, etc.) are now widely used in autonomous driving systems. Such 3D scanners measure a large number of points on the surface of an object and often output the result as a point cloud data file. A point cloud is the set of points measured by the 3D scanner. As is known in the art, point clouds may be used for many purposes, including creating 3D maps, object recognition, object tracking, and the like.
Typically, a 3D scanner such as a lidar scans surrounding objects at a rate of tens to hundreds of frames per second. When an autonomous vehicle travels on a road, successive frames of point clouds are obtained by a lidar mounted on the vehicle to capture the vehicle's surroundings in real time. To this end, each frame of point cloud needs to be segmented to extract the structure of the objects it represents. Traditionally, segmentation is performed directly on each frame of point cloud. However, since each frame of point cloud may contain thousands of points, this approach requires a significant amount of processing resources. Furthermore, since two adjacent frames of point clouds are obtained within a relatively short time interval, they may contain a large amount of overlapping information. Segmenting each frame of point cloud directly may also lose potential association information between frames of point clouds.
Accordingly, there is a need to fuse successive frames of point clouds for further processing.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
According to an embodiment of the invention, there is provided a method for point cloud fusion, the method comprising: receiving a first frame point cloud from a sensor, each point in the first frame point cloud having an associated weight; mapping the first frame point cloud into a grid comprising a plurality of cells; performing a weighted average on the points of the first frame point cloud mapped into each cell of the grid to obtain a first weighted point of the cell; receiving a second frame point cloud from the sensor, each point in the second frame point cloud having an associated weight; mapping the second frame point cloud into the grid comprising the first weighted points; and performing a weighted average on the points of the second frame point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point of the cell.
According to an embodiment of the present invention, there is provided an apparatus for point cloud fusion, the apparatus including: a receiving unit configured to receive a first frame point cloud from a sensor, each point in the first frame point cloud having an associated weight; a mapping unit configured to map the first frame point cloud into a grid comprising a plurality of cells; and a computing unit configured to perform a weighted average of the points of the first frame point cloud mapped into each cell of the grid to obtain a first weighted point of the cell; wherein the receiving unit is further configured to receive a second frame point cloud from the sensor, each point in the second frame point cloud having an associated weight; wherein the mapping unit is further configured to map the second frame point cloud into the grid comprising the first weighted points; and wherein the computing unit is further configured to perform a weighted average of the points of the second frame point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point of the cell.
According to an embodiment of the present invention, there is provided an apparatus for point cloud fusion, the apparatus including: a memory storing a computer program; and a processor coupled to the memory, the computer program when executed by the processor implementing the steps of: receiving a first frame point cloud from a sensor, each point in the first frame point cloud having an associated weight; mapping the first frame point cloud into a grid comprising a plurality of cells; performing a weighted average on the points of the first frame point cloud mapped into each cell of the grid to obtain a first weighted point of the cell; receiving a second frame point cloud from the sensor, each point in the second frame point cloud having an associated weight; mapping the second frame point cloud into the grid comprising the first weighted points; and performing a weighted average on the points of the second frame point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point of the cell.
According to an embodiment of the present invention, there is provided a vehicle including: a sensor; and an apparatus for point cloud fusion according to the invention.
According to an embodiment of the invention, a non-transitory computer-readable medium is provided, storing a computer program which, when executed by a processor, performs a method for point cloud fusion according to the invention.
By adopting the method and the device disclosed by the invention, the processing efficiency of point cloud data can be significantly improved and processing resources can be saved. In addition, the correlation between adjacent point cloud frames can be fully exploited to extract the object structures represented by the point cloud data.
These and other features and advantages will become apparent upon reading the following detailed description and upon reference to the accompanying drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.
Drawings
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only some typical aspects of this invention and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.
FIG. 1 shows a schematic diagram of an exemplary road environment.
FIG. 2 shows a schematic diagram of a point cloud of an exemplary road environment obtained by a lidar.
FIG. 3 shows a schematic diagram of an exemplary three-dimensional mesh, according to one embodiment of the invention.
FIGS. 4A-4B illustrate a schematic diagram of point cloud fusion of two adjacent frames of point clouds using a grid, according to one embodiment of the invention.
FIG. 5 shows a flow diagram of a method for point cloud fusion according to one embodiment of the invention.
FIG. 6 shows a block diagram of an apparatus for point cloud fusion according to one embodiment of the invention.
FIG. 7 shows a block diagram of an exemplary computing device, according to an embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the attached drawings, and the features of the present invention will be further apparent from the following detailed description.
In the present invention, point cloud fusion refers to a technique that combines frames of point clouds for further processing. Through point cloud fusion, the association between the individual frames of point clouds can be exploited effectively, and the efficiency of point cloud data processing is improved. The invention is mainly based on the following concept: space is first divided by a grid comprising a plurality of cells (e.g., a two-dimensional space by a two-dimensional grid, or a three-dimensional space by a three-dimensional grid); the obtained point clouds are then mapped into the cells of the grid; and the points mapped into each cell of the grid are weighted-averaged to obtain the weighted point of that cell (also referred to as the center of gravity of the cell), so that the points mapped into the cell are represented by the weighted point alone.
FIG. 1 shows a schematic view of an exemplary road environment 100. The road environment 100 includes lane lines, lane edges, guardrails, road signs, speed limit signs, street lamps, trees, and the like. Vehicles (e.g., autonomous vehicles) may travel in such road environments.
FIG. 2 shows a schematic diagram of a point cloud 200 of an exemplary road environment obtained by a lidar. A lidar is an active remote sensing device that uses a laser as its emitting light source and employs photoelectric detection as its receiving means. With the laser as signal source, pulsed laser light emitted by the laser strikes trees, roads, bridges, buildings and the like on the ground, causing scattering, and part of the light waves are reflected back to the receiver of the lidar. The distance from the lidar to a target point is then calculated according to the laser ranging principle, and by continuously scanning the target object with the pulsed laser, data for all target points on the object are obtained. Such data is known in the art as a point cloud. Each point in the point cloud may include three-dimensional coordinates (x, y, z), color information, and/or reflection intensity information (e.g., reflectivity), among others. The point cloud 200 obtained by the lidar may be stored for further processing.
FIG. 3 shows a schematic diagram of an exemplary three-dimensional grid 300, according to one embodiment of the invention. The size of the grid 300 may correspond to the size of the space covered by a frame of point cloud data (e.g., 200 meters) or more. The grid 300 may have a plurality of cells. In the embodiment shown in FIG. 3, the grid 300 may have a plurality of equally sized cells (e.g., cell 301, cell 302), each of which may have a cubic shape. The side length of each cube can be selected according to practical needs (e.g., 2-10 cm). In another embodiment, not shown, the grid 300 may have a plurality of differently sized cells, each of which may also have a different geometric shape (e.g., a cuboid, a cube, etc.). It should be noted that the exemplary grid 300 shown in FIG. 3 is provided for ease of illustration only; the present invention is not limited to the exemplary grid 300 of FIG. 3 but may use any grid suitable for dividing a space, and the grid may have any number, any size, and/or any shape of cells.
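By way of a non-limiting illustration (the patent text does not prescribe any particular implementation), mapping a point to a cell of such a uniform grid reduces to dividing its coordinates by the cell side length and taking the floor. The Python sketch below assumes cubic cells of side `cell_size` anchored at a hypothetical `origin`; both parameter names are illustrative.

```python
import numpy as np

def cell_index(point, cell_size=0.05, origin=np.zeros(3)):
    """Return the (i, j, k) index of the grid cell containing `point`.

    Assumes a uniform grid of cubic cells with side length `cell_size`
    (e.g., 2-10 cm, as in the embodiment above) anchored at `origin`.
    """
    return tuple(np.floor((np.asarray(point, dtype=float) - origin) / cell_size).astype(int))

# With 5 cm cells, the point (1.23, 0.04, -0.31) falls into cell (24, 0, -7).
print(cell_index([1.23, 0.04, -0.31]))
```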
FIGS. 4A-4B illustrate a schematic diagram 400 of point cloud fusion of two adjacent frames of point clouds using a grid, according to one embodiment of the invention. For ease of illustration, the principles of the present invention are illustrated in FIGS. 4A-4B with a two-dimensional grid 400; they can easily be generalized to a three-dimensional grid. Grid 400 may include a plurality of cells (e.g., cell 401), each of which may have a square shape.
At time t1, a first frame point cloud may be received from a lidar, where each point in the first frame point cloud may have coordinates, color information, and/or reflectivity. The coordinates are relative coordinates in a coordinate system that takes the lidar as its reference point. Because the lidar moves along with the vehicle, the coordinate system adopted by the next frame point cloud often differs from the coordinate system adopted by the previous frame point cloud. In one embodiment, a corresponding weight may be assigned to each point based on the color information and/or reflectivity of the point. For example, if red is of interest, a higher weight may be assigned to a red point. Further, higher weights may be assigned to points with higher reflectivity.
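The patent leaves the exact weighting function open; purely as an illustrative sketch consistent with the paragraph above, one might derive a weight from reflectivity and boost points whose color is close to a color of interest (red by default). The function name, the normalization, and the `color_boost` threshold here are all assumptions.

```python
import numpy as np

def assign_weight(reflectivity, rgb=None, color_of_interest=(1.0, 0.0, 0.0), color_boost=0.5):
    """Assign a weight in (0, 1] to a point (illustrative scheme only).

    `reflectivity` is assumed normalized to [0, 1]; higher reflectivity
    yields a higher weight. If `rgb` is given (components in [0, 1]),
    points close to `color_of_interest` receive an additional boost.
    """
    weight = float(np.clip(reflectivity, 0.0, 1.0))
    if rgb is not None:
        # Closeness is 1 at an exact color match and 0 at the opposite
        # corner of the RGB cube (distance sqrt(3)).
        closeness = 1.0 - np.linalg.norm(np.subtract(rgb, color_of_interest)) / np.sqrt(3)
        weight = min(1.0, weight + color_boost * max(0.0, closeness))
    return weight

# A highly reflective, strongly red point receives a weight near 1.
print(assign_weight(0.6, rgb=(0.9, 0.1, 0.1)))
```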
A first frame point cloud may be mapped into the grid 400. The mapping may include populating the cells of the grid 400 with the points in the first frame point cloud. In one embodiment, the coordinate system employed by the grid 400 may be the same as the coordinate system employed by the first frame point cloud, thereby eliminating the need for coordinate conversion. In another embodiment, the grid 400 may use an absolute coordinate system with a fixed point as the origin, in which case the coordinates of the points in the first frame point cloud must be converted into the absolute coordinate system. In FIG. 4A, it is assumed that five points P11(x11, y11), P12(x12, y12), P13(x13, y13), P14(x14, y14) and P15(x15, y15) fall into cell 401, with weights W11, W12, W13, W14 and W15, respectively. Each of the weights W11, W12, W13, W14 and W15 may have a value in the range 0-1, or any other suitable value. The five points are weighted-averaged:
x1w = (W11·x11 + W12·x12 + W13·x13 + W14·x14 + W15·x15) / (W11 + W12 + W13 + W14 + W15)
y1w = (W11·y11 + W12·y12 + W13·y13 + W14·y14 + W15·y15) / (W11 + W12 + W13 + W14 + W15)
thereby obtaining a first weighted point P1w(x1w, y1w) for cell 401, with the weight of the first weighted point P1w(x1w, y1w) being W1w = W11 + W12 + W13 + W14 + W15. In the same manner, similar operations are performed on the other cells in the grid 400, so that each cell obtains a first weighted point, and in the following operations only these first weighted points are used.
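Numerically, the first weighted point is simply the weight-normalized mean of the points that fell into the cell, and its weight is the sum of their weights. A minimal sketch follows; the coordinates and weights are invented for illustration, not taken from the figures.

```python
import numpy as np

# Five hypothetical 2D points of the first frame falling into cell 401,
# with their associated weights (all values invented for illustration).
pts = np.array([[1.02, 0.97], [1.04, 1.01], [0.99, 1.03], [1.01, 0.98], [1.03, 1.02]])
w = np.array([0.9, 0.8, 0.7, 0.6, 0.5])

p1w = (w[:, None] * pts).sum(axis=0) / w.sum()  # first weighted point P1w
w1w = w.sum()                                   # its weight W1w
print(p1w, w1w)
```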
At time t2, a second frame point cloud may be received from the lidar. Similarly, each point in the second frame point cloud may have coordinates, color information, and/or reflectivity, and a corresponding weight may be assigned to each point based on its color information and/or reflectivity.
The second frame point cloud may be mapped into the grid 400. The mapping may include populating the cells of the grid 400, which already contain the first weighted points, with the points in the second frame point cloud. As described above, the coordinate system used for the second frame point cloud may differ from the coordinate system used by the grid due to the movement of the lidar, and coordinate conversion is therefore required. In one embodiment, the coordinates of the points in the second frame point cloud may be transformed into the coordinate system employed by the grid 400. For example, if the grid 400 employs an absolute coordinate system, the coordinates of the points in the second frame point cloud may be transformed into the absolute coordinate system. Alternatively, if the grid 400 employs the same coordinate system as the first frame point cloud, the coordinates of the points in the second frame point cloud may be transformed into the coordinate system employed by the first frame point cloud (e.g., according to the movement of the lidar between acquiring the first frame point cloud and acquiring the second frame point cloud). In another embodiment, the coordinates of the grid 400 may be converted into the coordinate system employed by the second frame point cloud. In FIG. 4B, it is assumed that five further points P21(x21, y21), P22(x22, y22), P23(x23, y23), P24(x24, y24) and P25(x25, y25) fall into cell 401, with weights W21, W22, W23, W24 and W25, respectively. Each of the weights W21, W22, W23, W24 and W25 may have a value in the range 0-1, or any other suitable value. These five points and the first weighted point P1w(x1w, y1w) located in cell 401 are weighted-averaged:
x2w = (W1w·x1w + W21·x21 + W22·x22 + W23·x23 + W24·x24 + W25·x25) / (W1w + W21 + W22 + W23 + W24 + W25)
y2w = (W1w·y1w + W21·y21 + W22·y22 + W23·y23 + W24·y24 + W25·y25) / (W1w + W21 + W22 + W23 + W24 + W25)
thereby obtaining a second weighted point P2w(x2w, y2w) for cell 401, with the weight of the second weighted point P2w(x2w, y2w) being W2w = W1w + W21 + W22 + W23 + W24 + W25. In the same manner, similar operations are performed on the other cells in the grid 400, so that each cell obtains one second weighted point, and in the following operations only these second weighted points are used.
At time t3, a third frame point cloud may be received from the lidar, and the process described above may be performed in the same manner, resulting in third weighted points. And so on: after each frame point cloud is received, it is mapped into the cells of the grid, and the points falling into each cell are weighted-averaged together with the weighted point obtained from processing the previous frame, thereby obtaining a new weighted point. Performing point cloud fusion in this manner can significantly save processing resources and take full advantage of the correlation between frames of point clouds.
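Taken together, the scheme amounts to maintaining one running weighted mean and one accumulated weight per occupied cell, updated frame by frame. The sketch below is one possible reading under the same simplifying assumptions as before (cubic cells, points already expressed in the grid's coordinate system); the class name and storage layout are assumptions. Note that folding points in one at a time gives the same result as averaging a frame's points first and then fusing, because the weighted mean is associative in this form.

```python
import numpy as np

class FusionGrid:
    """Per-cell running weighted mean over successive point cloud frames
    (illustrative sketch; storage layout and cell shape are assumptions)."""

    def __init__(self, cell_size=0.05):
        self.cell_size = cell_size
        self.cells = {}  # cell index -> (weighted point, accumulated weight)

    def fuse_frame(self, points, weights):
        """Fold one frame (an N x 3 array and N weights) into the grid."""
        for p, w in zip(np.asarray(points, dtype=float), weights):
            idx = tuple(np.floor(p / self.cell_size).astype(int))
            if idx in self.cells:
                q, wq = self.cells[idx]
                # New weighted point: weighted average of the previous
                # weighted point and the incoming point; weights accumulate.
                self.cells[idx] = ((wq * q + w * p) / (wq + w), wq + w)
            else:
                self.cells[idx] = (p, w)

# After two frames, each occupied cell holds its second weighted point.
g = FusionGrid(cell_size=0.05)
g.fuse_frame([[1.02, 0.97, 0.0]], [0.9])  # first frame -> first weighted points
g.fuse_frame([[1.03, 0.96, 0.0]], [0.8])  # second frame -> second weighted points
print(g.cells)
```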
It should be noted that the embodiments shown in FIGS. 4A-4B are for illustrative purposes only; in actual operation, the number of points mapped into each cell may be much greater.
FIG. 5 shows a flow diagram of a method for point cloud fusion according to one embodiment of the invention. For example, method 500 may be implemented within at least one processor (e.g., processor 704 of FIG. 7), which may be located in an on-board computer system, a remote server, or a combination thereof. Of course, in various aspects of the invention, the method 500 may be implemented by any suitable apparatus capable of performing the relevant operations.
The method 500 begins at step 510. At step 510, the method 500 may include receiving a first frame point cloud from a sensor, each point in the first frame point cloud having an associated weight. Here, the sensor may include any sensor (e.g., a lidar, a stereo camera, or a time-of-flight camera) capable of scanning an object at a particular frame rate to obtain point cloud data. Each point in the point cloud may include three-dimensional coordinates (x, y, z), color information, and/or reflection intensity information (e.g., reflectivity), among others. In one embodiment, a corresponding weight may be assigned to each point based on the color information and/or reflectivity of the point. For example, if red is of interest, a higher weight may be assigned to a red point. Further, higher weights may be assigned to points with higher reflectivity.
At step 520, the method 500 may include mapping the first frame point cloud into a grid including a plurality of cells. The size of the grid may correspond to the size of the space covered by a frame of point cloud data (e.g., 200 meters). In one embodiment, each cell in the grid may have the same size and the same shape (e.g., a cube with a side length of 2-10 cm). In another embodiment, the cells in the grid may have different sizes and different shapes. Mapping may include populating the cells of the grid with the individual points in the first frame point cloud. In one embodiment, the coordinate system employed by the grid may be the same as the coordinate system employed by the first frame point cloud, thereby eliminating the need for coordinate conversion when populating the cells of the grid with the points in the first frame point cloud. In another embodiment, the grid may use an absolute coordinate system with a fixed point as the origin, in which case the coordinates of each point in the first frame point cloud must first be converted into the absolute coordinate system before the points are filled into the cells of the grid.
At step 530, the method 500 may include performing a weighted average of the points of the first frame point cloud mapped into each cell of the grid to obtain a first weighted point for the cell. An example of weighted averaging of the points in each cell to obtain a first weighted point for that cell is given in the description of FIG. 4A.
At step 540, the method 500 may include receiving a second frame of point cloud from the sensor, each point in the second frame of point cloud having an associated weight. In one embodiment, the first frame point cloud and the second frame point cloud may be frames of point clouds continuously acquired by the sensor.
At step 550, the method 500 may include mapping the second frame point cloud into the grid including the first weighted points. In one embodiment, the mapping may include converting the coordinates of the second frame point cloud into the coordinate system used by the grid, or converting the coordinates of the grid into the coordinate system used by the second frame point cloud, and populating the cells of the grid with the points in the second frame point cloud.
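One common concrete choice for this conversion (an assumption; the patent does not prescribe how the transform is obtained) is a rigid transform derived from the sensor's ego-motion between the two frames:

```python
import numpy as np

def transform_points(points, rotation, translation):
    """Express second frame points in the grid's coordinate system.

    `rotation` (3x3) and `translation` (3,) describe the sensor's rigid
    motion between the two frames, e.g., from odometry or GNSS/IMU;
    estimating this pose is outside the scope of this sketch.
    """
    return np.asarray(points, dtype=float) @ np.asarray(rotation).T + np.asarray(translation)
```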
At step 560, the method 500 may include performing a weighted average of the points of the second frame point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point for the cell. An example of weighted averaging of the points of the second frame point cloud in each cell together with the first weighted point located in that cell to obtain the second weighted point is given in the description of FIG. 4B.
In an optional step, the method 500 may include performing point cloud segmentation using the second weighted point in each cell of the grid. Various segmentation algorithms known in the art may be employed, including the K-means algorithm, the K-nearest neighbor algorithm, the GMM algorithm, a region growing algorithm, and/or other well-known algorithms. In one embodiment, the point cloud segmentation is performed based on a region growing algorithm, and the growth criterion of the region growing algorithm is defined based on the attributes of each second weighted point. These attributes include at least one of: the distance between two adjacent second weighted points, the similarity of the normal directions of two adjacent second weighted points, or the similarity of the reflectivities of the second weighted points. In the present invention, by representing the points of the first frame point cloud and the second frame point cloud that fall into each cell of the grid by second weighted points, the number of points that need to be processed when performing point cloud segmentation is greatly reduced. In addition, because the second weighted points encode the association between the first frame point cloud and the second frame point cloud, the accuracy of point cloud segmentation can be improved.
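As one deliberately simplified reading of such a growth criterion, the sketch below grows regions over the second weighted points by flood fill across neighboring cells, merging a neighbor when its weighted point lies within a distance threshold; the normal direction and reflectivity similarity tests mentioned above could be added at the marked comparison. The threshold and the 26-neighborhood choice are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(weighted_points, max_dist=0.08):
    """Label second weighted points by region growing (illustrative only).

    `weighted_points` maps a cell index (i, j, k) to its weighted point as
    a 3-element numpy array. Two weighted points join the same region when
    their cells are adjacent (26-neighborhood) and their Euclidean distance
    is below `max_dist` (an invented threshold).
    """
    offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
               for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)]
    labels, next_label = {}, 0
    for seed in weighted_points:
        if seed in labels:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            cur = queue.popleft()
            for off in offsets:
                nb = tuple(c + o for c, o in zip(cur, off))
                # Growth criterion: adjacency plus distance (normal and
                # reflectivity similarity could be tested here as well).
                if (nb in weighted_points and nb not in labels and
                        np.linalg.norm(weighted_points[nb] - weighted_points[cur]) < max_dist):
                    labels[nb] = next_label
                    queue.append(nb)
        next_label += 1
    return labels
```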
FIG. 6 shows a block diagram of an apparatus for point cloud fusion according to one embodiment of the invention. All of the functional blocks of the apparatus 600 (including the respective units in the apparatus 600) may be implemented by hardware, software, or a combination of hardware and software. Those skilled in the art will appreciate that the functional blocks depicted in fig. 6 may be combined into a single functional block or divided into multiple sub-functional blocks.
The apparatus 600 may include a receiving unit 610 configured to receive a first frame point cloud from a sensor, each point in the first frame point cloud having an associated weight. The apparatus 600 may further comprise a mapping unit 620 configured to map the first frame point cloud into a grid comprising a plurality of cells. The apparatus 600 may further include a computing unit 630 configured to perform a weighted average of the points of the first frame point cloud mapped into each cell of the grid to obtain a first weighted point for the cell. The receiving unit 610 may be further configured to receive a second frame point cloud from the sensor, each point in the second frame point cloud having an associated weight. The mapping unit 620 may be further configured to map the second frame point cloud into the grid including the first weighted points. Furthermore, the computing unit 630 may be further configured to perform a weighted average of the points of the second frame point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point for the cell. In another embodiment, the apparatus 600 may optionally include a point cloud segmentation unit configured to perform point cloud segmentation using the second weighted point in each cell of the grid.
FIG. 7 shows a block diagram of an exemplary computing device, which is one example of a hardware device that may be applied to aspects of the present invention, according to one embodiment of the present invention.
With reference to FIG. 7, a computing device 700, which is one example of a hardware device that may be employed in connection with aspects of the present invention, will now be described. Computing device 700 may be any machine configured to implement processing and/or computing, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a smart phone, an in-vehicle computer, or any combination thereof. The various methods/apparatus/servers/client devices described above may be implemented in whole or at least in part by computing device 700 or a similar device or system.
Computing device 700 may include components that are connected to or communicate with one another via one or more interfaces and a bus 702. For example, computing device 700 may include a bus 702, one or more processors 704, one or more input devices 706, and one or more output devices 708. The one or more processors 704 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., dedicated processing chips). Input device 706 may be any type of device capable of inputting information to the computing device and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote controller. Output device 708 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. Computing device 700 may also include or be connected with a non-transitory storage device 710, which may be any storage device that is non-transitory and enables data storage, and which may include, but is not limited to, a disk drive, an optical storage device, solid-state memory, a floppy disk, a flexible disk, a hard disk, a tape or any other magnetic medium, an optical disk or any other optical medium, a ROM (read-only memory), a RAM (random access memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer can read data, instructions, and/or code. The non-transitory storage device 710 may be detachable from an interface. The non-transitory storage device 710 may store data/instructions/code for implementing the above-described methods and steps. Computing device 700 may also include a communication device 712. The communication device 712 may be any type of device or system capable of communicating with internal apparatus and/or with a network and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as a Bluetooth device, an IEEE 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
When the computing device 700 is used as an in-vehicle device, it may also be connected with external devices, such as a GPS receiver and sensors for sensing different environmental data, for example acceleration sensors, wheel speed sensors, gyroscopes, etc. In this manner, the computing device 700 may receive, for example, positioning data and sensor data indicative of the vehicle's driving condition. When the computing device 700 is used as an in-vehicle device, it may also be connected with other devices for controlling the travel and operation of the vehicle (e.g., engine systems, wipers, anti-lock brake systems, etc.).
Further, the non-transitory storage device 710 may store map information and software components so that the processor 704 may implement route guidance processing. Further, the output device 708 may include a display for displaying a map, displaying a location marker of the vehicle, and displaying images indicating the running condition of the vehicle. The output device 708 may also include a speaker or a headphone interface for audio guidance.
The bus 702 may include, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus. In particular, for an in-vehicle device, the bus 702 may also include a Controller Area Network (CAN) bus or another structure designed for application in an automobile.
Computing device 700 may also include a working memory 714, which working memory 714 may be any type of working memory capable of storing instructions and/or data that facilitate the operation of processor 704 and may include, but is not limited to, random access memory and/or read only memory devices.
Software components may be located in the working memory 714, including, but not limited to, an operating system 716, one or more application programs 718, drivers, and/or other data and code. Instructions for implementing the above-described methods and steps may be included in the one or more application programs 718, and the aforementioned modules/units/components of the various apparatus/server/client devices may be implemented by the processor 704 reading and executing the instructions of the one or more application programs 718.
It should also be appreciated that variations may be made according to particular needs. For example, customized hardware might be used, and/or particular components might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. In addition, connections to other computing devices, such as network input/output devices and the like, may be employed. For example, some or all of the disclosed methods and apparatus can be implemented, in accordance with the logic and algorithms of the present invention, by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) using assembly language or hardware programming languages (e.g., Verilog, VHDL, C++).
Although the various aspects of the present invention have been described thus far with reference to the accompanying drawings, the above-described methods, systems, and apparatuses are merely examples, and the scope of the present invention is not limited to these aspects but is defined only by the appended claims and their equivalents. Various components may be omitted or replaced with equivalent components. In addition, the steps may be performed in a different order than described in the present invention. Furthermore, the various components may be combined in various ways. It is also important to note that, as technology develops, many of the described components may be replaced by equivalent components that appear later.

Claims (15)

1. A method for point cloud fusion, the method comprising:
receiving a first frame of point cloud from a sensor, each point in the first frame of point cloud having an associated weight;
mapping the first frame of point cloud into a grid comprising a plurality of cells;
performing weighted averaging on the points of the first frame of point cloud mapped into each cell of the grid to obtain a first weighted point of the cell;
receiving a second frame of point cloud from the sensor, each point in the second frame of point cloud having an associated weight;
mapping the second frame of point cloud into the grid comprising the first weighted points; and
performing weighted averaging on the points of the second frame of point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point of the cell.
2. The method of claim 1, wherein the sensor comprises at least one of: a lidar, a stereo camera, or a time-of-flight camera.
3. The method of claim 1, wherein each cell in the grid has the shape of a cube and the same size, the cube having a side length of 2-10 cm.
4. The method of claim 1, wherein each cell in the grid has a different size.
5. The method of claim 1, wherein the weights are based on the reflectivity and/or color of each point.
6. The method of claim 1, further comprising: performing point cloud segmentation using the second weighted point in each cell of the grid.
7. The method of claim 6, wherein the point cloud segmentation is performed based on a region growing algorithm, and wherein a growing criterion of the region growing algorithm is defined based on an attribute of each second weighted point.
8. The method of claim 7, wherein the attribute comprises at least one of: a distance of two adjacent second weighted points, a similarity of normal directions of the two adjacent second weighted points, or a similarity of reflectivities of the second weighted points.
9. The method of claim 1, wherein mapping the second frame of point cloud into the grid including the first weighted points comprises: transforming the coordinates of the second frame of point cloud to be consistent with the coordinate system used by the grid.
10. An apparatus for point cloud fusion, the apparatus comprising:
a receiving unit configured to receive a first frame of point cloud from a sensor, each point in the first frame of point cloud having an associated weight;
a mapping unit configured to map the first frame point cloud into a grid comprising a plurality of cells; and
a computing unit configured to perform a weighted average of the points of the first frame of point cloud mapped into each cell of the grid to obtain a first weighted point of the cell;
wherein the receiving unit is further configured to receive a second frame of point cloud from the sensor, each point in the second frame of point cloud having an associated weight;
wherein the mapping unit is further configured to map the second frame of point cloud into the grid comprising the first weighted points; and
wherein the computing unit is further configured to perform a weighted average of the points of the second frame of point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point of the cell.
11. The apparatus of claim 10, further comprising: a point cloud segmentation unit configured to perform point cloud segmentation using the second weighted points in each cell of the grid.
12. An apparatus for point cloud fusion, the apparatus comprising:
a memory storing a computer program; and
a processor coupled to the memory, the computer program when executed by the processor implementing the steps of:
receiving a first frame of point cloud from a sensor, each point in the first frame of point cloud having an associated weight;
mapping the first frame of point cloud into a grid comprising a plurality of cells;
performing weighted averaging on the points of the first frame of point cloud mapped into each cell of the grid to obtain a first weighted point of the cell;
receiving a second frame of point cloud from the sensor, each point in the second frame of point cloud having an associated weight;
mapping the second frame of point cloud into the grid comprising the first weighted points; and
performing weighted averaging on the points of the second frame of point cloud mapped into each cell of the grid and the first weighted point located in the cell to obtain a second weighted point of the cell.
13. The apparatus of claim 12, wherein the computer program, when executed by the processor, further performs the step of: performing point cloud segmentation using the second weighted point in each cell of the grid.
14. A vehicle, comprising:
a sensor; and
the apparatus for point cloud fusion of any of claims 10-11.
15. A non-transitory computer readable medium storing a computer program which, when executed by a processor, performs the method of any of claims 1-9.
CN201910540418.4A 2019-06-21 2019-06-21 Method and device for point cloud fusion Pending CN112116698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910540418.4A CN112116698A (en) 2019-06-21 2019-06-21 Method and device for point cloud fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910540418.4A CN112116698A (en) 2019-06-21 2019-06-21 Method and device for point cloud fusion

Publications (1)

Publication Number Publication Date
CN112116698A (en) 2020-12-22

Family

ID=73796036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910540418.4A Pending CN112116698A (en) 2019-06-21 2019-06-21 Method and device for point cloud fusion

Country Status (1)

Country Link
CN (1) CN112116698A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination