CN113160398A - Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle - Google Patents

Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle

Info

Publication number
CN113160398A
CN113160398A (application CN202011562154.1A)
Authority
CN
China
Prior art keywords
three-dimensional grid
depth image
pillar
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011562154.1A
Other languages
Chinese (zh)
Other versions
CN113160398B (en)
Inventor
付浩
薛含章
任瑞科
吴涛
孙振平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202011562154.1A
Publication of CN113160398A
Application granted
Publication of CN113160398B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the technical field of unmanned driving and discloses a rapid three-dimensional grid construction system, method, medium, equipment, and unmanned vehicle. The system first projects the three-dimensional point cloud generated by a multi-line lidar into a depth image; it then combines vertically consecutive three-dimensional grids into a pillar, takes the pillar as the processing element, and maps each pillar to a rectangular region in the depth image; finally, it compares the expected depth of each three-dimensional grid in the pillar with the observed depth of the corresponding pixels in that rectangular region and updates the state attribute of each three-dimensional grid in the pillar. The invention selects pillars in the three-dimensional grid map as processing elements and accelerates the processing of the three-dimensional grid map with the two-dimensional depth image, thereby updating the three-dimensional grid state around the unmanned vehicle in real time. Experiments on a public data set show that the method eliminates the influence of dynamic objects well.

Description

Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle
Technical Field
The invention belongs to the technical field of unmanned driving and particularly relates to a rapid three-dimensional grid construction system, method, medium, device, and unmanned vehicle.
Background
At present, the environment perception module of an autonomous vehicle mainly performs two tasks: mapping the static parts of the environment (terrain, static obstacles, etc.) and detecting moving objects in the environment. The two tasks are intertwined, and a prerequisite for performing both is an efficient environment representation model.
Among the existing environment representation methods, the occupancy grid map is the most popular one because of its advantages of high efficiency and simplicity. This method discretizes the environment into equidistant grids, and each grid cell stores the probability that it is occupied.
For sensors such as two-dimensional lidar and millimeter-wave radar, an inverse sensor model is typically used to generate the occupancy grid map. In this model, a ray is cast from the sensor origin to each beam reading. The state attribute of the grid cells near the beam reading is updated as occupied, while that of the cells the ray passes through is updated as free. This ray casting operation can be implemented efficiently using the line-tracing algorithm proposed by Bresenham.
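For illustration, the following minimal Python sketch (the grid size, the log-odds-style increments, and the beam endpoint are assumptions for the example, not taken from the patent) casts one such ray with Bresenham's algorithm, marking traversed cells as free and the endpoint cell as occupied:

import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer cells on the line from (x0, y0) to (x1, y1), Bresenham style."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if x == x1 and y == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x += sx
        if e2 <= dx:
            err += dx
            y += sy
    return cells

# Hypothetical 2D occupancy grid: 0.5 = unknown, lower = free, higher = occupied.
grid = np.full((100, 100), 0.5)
sensor_cell, hit_cell = (50, 50), (63, 72)       # assumed beam origin and reading
for (x, y) in bresenham(*sensor_cell, *hit_cell)[:-1]:
    grid[x, y] = max(0.0, grid[x, y] - 0.1)      # cells the ray passes through -> free
grid[hit_cell] = min(1.0, grid[hit_cell] + 0.3)  # cell near the reading -> occupied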
Today, although three-dimensional sensors such as multi-line lidar are ubiquitous on unmanned vehicles, two-dimensional occupancy grid maps are still in common use, and few studies have attempted to use three-dimensional occupancy grid maps directly. There are three main reasons for this. First, since the vehicle mostly travels on a plane, a two-dimensional occupancy grid map is sufficient in most cases. Second, lidar data is very sparse in three dimensions, especially in the vertical direction, so some three-dimensional grid cells may never be updated. Third, the computational cost of a three-dimensional occupancy grid map is so high that many works must resort to a GPU to achieve real-time computation. Despite the higher computational complexity, the three-dimensional occupancy grid map is the more desirable choice for autonomous vehicles: in practice the vehicle travels in three-dimensional space, and a three-dimensional representation retains most of the environment information. This is of great help to subsequent environment perception modules, such as terrain modeling or dynamic target tracking.
Most existing unmanned automobiles adopt a two-dimensional grid map as the environment representation model. A two-dimensional grid map is a simplification of the three-dimensional environment: it can hardly reflect the pitch and roll changes of the vehicle, and a large amount of information is lost when the three-dimensional environment is projected onto a two-dimensional grid. Compared with a two-dimensional grid map, a three-dimensional grid map describes the environment more accurately, but owing to its large computational cost it has rarely been studied and applied on unmanned automobiles.
Through the above analysis, the problems and defects of the prior art are as follows: the existing environment representation models of unmanned automobiles adopt two-dimensional grids, which can hardly reflect the pitch and roll changes of the vehicle and lose a large amount of information when the three-dimensional environment is projected onto them; a three-dimensional grid description of the environment is lacking.
The difficulty in solving the above problems and defects is: the computational cost of a three-dimensional grid map is very high, and updating the state attribute of every three-dimensional grid is so slow that the real-time requirement of unmanned driving cannot be met.
The significance of solving the above problems and defects is: after the environment is represented three-dimensionally by a three-dimensional grid map, most of the environment information acquired by the sensors is retained. This helps both accurate modeling of the environment and subsequent environment perception modules, such as terrain modeling or dynamic target tracking.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a rapid three-dimensional grid construction system, method, medium, equipment, and unmanned vehicle.
The invention is realized in such a way that a rapid three-dimensional grid construction method comprises the following steps:
step one, projecting the three-dimensional point cloud generated by a multi-line lidar to generate a depth image;
step two, combining vertically consecutive three-dimensional grids into a pillar, taking the pillar as the processing element, and mapping each pillar to a rectangular region in the depth image;
step three, comparing the expected depth of each three-dimensional grid in the pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, and updating the state attribute of each three-dimensional grid in the pillar.
Further, the rapid three-dimensional grid construction method specifically comprises the following steps:
Each frame of the three-dimensional point cloud generated by the multi-line lidar is projected and converted into a depth image, where each three-dimensional point p_i = {x_i, y_i, z_i} maps to the pixel coordinates (u, v) in the depth image given by:
u = (1/2)·(1 - arctan2(y_i, x_i)/π)·w
v = (f_up - arcsin(z_i/D_i))/(f_up + f_down)·h
where w and h are the width and height of the depth image, respectively, f_down and f_up respectively represent the minimum and maximum vertical field-of-view angles of the lidar, and D_i = ||p_i||_2 is the distance from point p_i to the lidar origin; D_i is stored as the pixel value in the depth image. The resolution of each axis of the three-dimensional grid map is defined as r_x, r_y, and r_z, the installation height of the lidar is h_l, and the actual vertical range of the three-dimensional grid map is [Z_min, Z_max].
Each vertical column of the three-dimensional grid map is defined as a pillar; each pillar corresponds to a rectangular region S_i = [v_up, v_down] × [u_left, u_right] in the depth image.
For each pillar, whose XY coordinates are expressed as (x_i, y_i), first the maximum expected depth of the three-dimensional grids in the pillar is calculated as
D_1 = sqrt(x_i^2 + y_i^2 + Z_max^2)
and the minimum expected depth as
D_2 = sqrt(x_i^2 + y_i^2 + Z_min^2).
Then the ordinate upper bound v_1 and lower bound v_2 of the rectangular region corresponding to the current pillar in the depth image are calculated as:
v_1 = (-arcsin(Z_max/D_1) + f_up)/(f_down + f_up)·h
v_2 = (-arcsin(Z_min/D_2) + f_up)/(f_down + f_up)·h.
Since v_1 may be less than 0 and v_2 may be greater than h (in which case the current pillar is not completely observed), the final ordinate upper bound v_up and lower bound v_down of the region corresponding to the current pillar in the depth image are calculated as:
v_up = max(0, v_1),
v_down = min(h, v_2).
The projection of the pillar onto the XY plane is regarded as a square, and the upper-left and lower-right vertices of the square are selected to calculate the azimuth range. Let (m_v, n_v) denote the coordinates of the lidar origin in the XY plane and (m_i, n_i) the coordinates of the pillar in the XY plane; the relative azimuth angles θ_l and θ_r are calculated as:
θ_l = arctan2(n_i + r_y/2 - n_v, m_i - r_x/2 - m_v)
θ_r = arctan2(n_i - r_y/2 - n_v, m_i + r_x/2 - m_v)
where (m_i - r_x/2, n_i + r_y/2) and (m_i + r_x/2, n_i - r_y/2) are the upper-left and lower-right vertices of the square.
From the calculated θ_l and θ_r, the abscissa left boundary u_left and right boundary u_right of the rectangular region corresponding to the current pillar in the depth image can be further calculated:
u_left = (1/2)·(1 - θ_l/π)·w
u_right = (1/2)·(1 - θ_r/π)·w.
Finally, the observed depth values of the pixels contained in the rectangular region S_i = [v_up, v_down] × [u_left, u_right] are combined into a one-dimensional vector:
D = (D_1, D_2, …, D_N).
For each three-dimensional grid in the pillar, the predicted depth value is
d_i = sqrt(x_i^2 + y_i^2 + z_i^2)
where z_i is the height of the grid. The predicted depth is then compared with the observed depth D_i: if |d_i - D_i| < ε, the state attribute of the current three-dimensional grid is updated to occupied; if d_i < D_i, it is updated to free; otherwise it is updated to occluded.
It is another object of the present invention to provide a rapid three-dimensional grid construction system, comprising:
the depth image generation module is used for projecting the three-dimensional point cloud generated by the multi-line lidar to generate a depth image;
the processing element construction module is used for combining vertically consecutive three-dimensional grids into pillars, taking the pillars as processing elements, and finding the rectangular region corresponding to each pillar in the depth image;
and the state update module is used for comparing the expected depth of each three-dimensional grid in a pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, so that the state attribute of each three-dimensional grid in the pillar is updated.
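As a structural sketch only, the three modules could be organized as follows in Python (class and method names are illustrative assumptions, not mandated by the patent; the numeric details appear in the embodiment sketches later in this description):

class RapidGridBuilder:
    """Sketch of the three modules of the rapid 3D grid construction system."""

    def __init__(self, grid_map, lidar_params):
        self.grid_map = grid_map      # 3D occupancy grid with per-pillar access
        self.lidar = lidar_params     # assumed: w, h, f_up, f_down, mount height

    def generate_depth_image(self, point_cloud):
        """Depth image generation module: project the lidar point cloud."""
        raise NotImplementedError     # see the projection sketch below

    def pillar_region(self, pillar):
        """Processing element construction module: rectangle S_i for a pillar."""
        raise NotImplementedError     # see the bounds sketches below

    def update_states(self, depth_image):
        """State update module: expected vs. observed depth for every pillar."""
        for pillar in self.grid_map.pillars():
            region = self.pillar_region(pillar)
            # compare predicted depths of the pillar's cells with the observed
            # depths inside `region`, then update each cell's state attribute
            ...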
By combining all the technical schemes, the invention has the following advantages and positive effects: the invention selects pillars in the three-dimensional grid map as processing elements and accelerates the processing of the three-dimensional grid map with the two-dimensional depth image, thereby updating the three-dimensional grid state around the unmanned vehicle in real time. Experiments on the KITTI data set show that the method eliminates the influence of dynamic objects well.
The present invention can efficiently update a three-dimensional occupancy grid map by taking advantage of the depth image, an intrinsic and compact representation of lidar data. Experiments on actual data sets show that the method of the present invention is faster and more efficient than conventional three-dimensional ray casting methods.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for building a fast three-dimensional grid according to an embodiment of the present invention.
Fig. 2 shows a three-dimensional point cloud generated by the multi-line lidar, the two-dimensional depth image generated by projecting the point cloud, and the finally generated three-dimensional occupancy grid map, according to an embodiment of the invention.
Fig. 3 is a schematic diagram of a relationship between an original point cloud and a two-dimensional depth image and a three-dimensional occupancy grid map provided by an embodiment of the present invention.
Fig. 4 is a schematic diagram of the calculation of the relative azimuth angle according to the embodiment of the present invention.
Fig. 5 shows maps generated by splicing multi-frame point cloud data with different methods according to an embodiment of the present invention: the leftmost is the result of directly splicing the multi-frame point cloud data; the middle is the result of splicing with a three-dimensional ray casting algorithm; the rightmost is the result of splicing with the method provided by the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a system, a method, a medium, a device, and an unmanned vehicle for rapid three-dimensional grid construction, and the present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the rapid three-dimensional grid construction method provided by the embodiment of the present invention includes:
S101, projecting the three-dimensional point cloud generated by a multi-line lidar to generate a depth image;
S102, combining vertically consecutive three-dimensional grids into a pillar, taking the pillar as the processing element, and mapping each pillar to a rectangular region in the depth image;
S103, comparing the expected depth of each three-dimensional grid in the pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, and updating the state attribute of each three-dimensional grid in the pillar.
Further, the rapid three-dimensional grid construction method specifically comprises the following steps:
Each frame of the three-dimensional point cloud generated by the multi-line lidar is projected and converted into a depth image, where each three-dimensional point p_i = {x_i, y_i, z_i} maps to the pixel coordinates (u, v) in the depth image given by:
u = (1/2)·(1 - arctan2(y_i, x_i)/π)·w
v = (f_up - arcsin(z_i/D_i))/(f_up + f_down)·h
where w and h are the width and height of the depth image, respectively, f_down and f_up respectively represent the minimum and maximum vertical field-of-view angles of the lidar, and D_i = ||p_i||_2 is the distance from point p_i to the lidar origin; D_i is stored as the pixel value in the depth image. The resolution of each axis of the three-dimensional grid map is defined as r_x, r_y, and r_z, the installation height of the lidar is h_l, and the actual vertical range of the three-dimensional grid map is [Z_min, Z_max].
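A minimal NumPy transcription of this projection follows; it is a sketch only, and the image size 64×2048 and the field-of-view values are assumptions in the style of a 64-beam lidar, not parameters taken from the patent:

import numpy as np

def project_to_depth_image(points, w=2048, h=64,
                           f_up=np.radians(3.0), f_down=np.radians(25.0)):
    """Project an (N, 3) point cloud into an (h, w) depth image."""
    points = points[np.linalg.norm(points, axis=1) > 1e-6]  # drop degenerate points
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d = np.linalg.norm(points, axis=1)                      # D_i = ||p_i||_2
    u = (0.5 * (1.0 - np.arctan2(y, x) / np.pi) * w).astype(int)
    v = ((f_up - np.arcsin(z / d)) / (f_up + f_down) * h).astype(int)
    depth = np.full((h, w), np.inf)                         # inf marks "no return"
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    np.minimum.at(depth, (v[ok], u[ok]), d[ok])             # keep nearest return per pixel
    return depth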
Each vertical column of the three-dimensional grid map is defined as a pillar; each pillar corresponds to a rectangular region S_i = [v_up, v_down] × [u_left, u_right] in the depth image.
For each pillar, whose XY coordinates are expressed as (x_i, y_i), first the maximum expected depth of the three-dimensional grids in the pillar is calculated as
D_1 = sqrt(x_i^2 + y_i^2 + Z_max^2)
and the minimum expected depth as
D_2 = sqrt(x_i^2 + y_i^2 + Z_min^2).
Then the ordinate upper bound v_1 and lower bound v_2 of the rectangular region corresponding to the current pillar in the depth image are calculated as:
v_1 = (-arcsin(Z_max/D_1) + f_up)/(f_down + f_up)·h
v_2 = (-arcsin(Z_min/D_2) + f_up)/(f_down + f_up)·h.
Since v_1 may be less than 0 and v_2 may be greater than h (in which case the current pillar is not completely observed), the final ordinate upper bound v_up and lower bound v_down of the region corresponding to the current pillar in the depth image are calculated as:
v_up = max(0, v_1),
v_down = min(h, v_2).
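Transcribed into Python, the vertical bounds of a pillar's region might be computed as follows (a sketch under the same assumed parameters as the projection sketch above; D_1 and D_2 are taken as the distances to the top and bottom of the pillar):

import numpy as np

def pillar_vertical_bounds(x_i, y_i, z_min, z_max, h=64,
                           f_up=np.radians(3.0), f_down=np.radians(25.0)):
    """Ordinate bounds [v_up, v_down] of the depth-image region for one pillar."""
    d1 = np.sqrt(x_i**2 + y_i**2 + z_max**2)  # assumed maximum expected depth D_1
    d2 = np.sqrt(x_i**2 + y_i**2 + z_min**2)  # assumed minimum expected depth D_2
    v1 = (-np.arcsin(z_max / d1) + f_up) / (f_down + f_up) * h
    v2 = (-np.arcsin(z_min / d2) + f_up) / (f_down + f_up) * h
    # v1 may be < 0 and v2 may be > h when the pillar is only partly observed.
    return max(0, int(v1)), min(h, int(v2))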
The projection of the pillar onto the XY plane is regarded as a square, and the upper-left and lower-right vertices of the square are selected to calculate the azimuth range. Let (m_v, n_v) denote the coordinates of the lidar origin in the XY plane and (m_i, n_i) the coordinates of the pillar in the XY plane; the relative azimuth angles θ_l and θ_r are calculated as:
θ_l = arctan2(n_i + r_y/2 - n_v, m_i - r_x/2 - m_v)
θ_r = arctan2(n_i - r_y/2 - n_v, m_i + r_x/2 - m_v)
where (m_i - r_x/2, n_i + r_y/2) and (m_i + r_x/2, n_i - r_y/2) are the upper-left and lower-right vertices of the square.
From the calculated θ_l and θ_r, the abscissa left boundary u_left and right boundary u_right of the rectangular region corresponding to the current pillar in the depth image can be further calculated:
u_left = (1/2)·(1 - θ_l/π)·w
u_right = (1/2)·(1 - θ_r/π)·w.
Finally, the observed depth values of the pixels contained in the rectangular region S_i = [v_up, v_down] × [u_left, u_right] are combined into a one-dimensional vector:
D = (D_1, D_2, …, D_N).
For each three-dimensional grid in the pillar, the predicted depth value is
d_i = sqrt(x_i^2 + y_i^2 + z_i^2)
where z_i is the height of the grid. The predicted depth is then compared with the observed depth D_i: if |d_i - D_i| < ε, the state attribute of the current three-dimensional grid is updated to occupied; if d_i < D_i, it is updated to free; otherwise it is updated to occluded.
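The per-pillar state update could then be sketched as follows. How each grid cell is paired with a pixel of S_i is not fully spelled out above, so this sketch conservatively compares every cell against the nearest observed return in the region, and the threshold value is an assumption:

import numpy as np

OCCUPIED, FREE, OCCLUDED = 2, 1, 0  # illustrative state encodings

def update_pillar(states, x_i, y_i, z_centers, region_depths, eps=0.3):
    """Update the state attribute of every 3D grid cell in one pillar.

    states:        per-cell state array for this pillar (modified in place)
    z_centers:     heights of the pillar's grid cells
    region_depths: observed depths of the pixels in S_i, as a 1-D vector
    eps:           assumed occupancy tolerance (the patent's ε)"""
    observed = region_depths[np.isfinite(region_depths)]
    if observed.size == 0:
        return                       # pillar not observed in this frame
    d_obs = observed.min()           # nearest observed return in the region
    for k, z in enumerate(z_centers):
        d_pred = np.sqrt(x_i**2 + y_i**2 + z**2)  # expected depth of this cell
        if abs(d_pred - d_obs) < eps:
            states[k] = OCCUPIED     # return lies on this cell
        elif d_pred < d_obs:
            states[k] = FREE         # ray passes through before the return
        else:
            states[k] = OCCLUDED     # cell lies behind the observed surface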
The rapid three-dimensional grid construction system provided by the embodiment of the invention comprises:
the depth image generation module is used for projecting the three-dimensional point cloud generated by the multi-line lidar to generate a depth image;
the processing element construction module is used for combining vertically consecutive three-dimensional grids into pillars, taking the pillars as processing elements, and finding the rectangular region corresponding to each pillar in the depth image;
and the state update module is used for comparing the expected depth of each three-dimensional grid in a pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, so that the state attribute of each three-dimensional grid in the pillar is updated.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD- or DVD-ROM, programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips or transistors, or programmable hardware devices such as field-programmable gate arrays or programmable logic devices; by software executed by various types of processors; or by a combination of hardware circuits and software, such as firmware. The above description is only for the purpose of illustrating the present invention; it is not intended to limit the scope of the invention, which is defined by the appended claims and covers all modifications, equivalents, and improvements within the spirit and scope of the invention.

Claims (10)

1. A rapid three-dimensional grid construction method is characterized by comprising the following steps:
projecting the three-dimensional point cloud generated by a multi-line lidar to generate a depth image;
combining vertically consecutive three-dimensional grids into a pillar, taking the pillar as the processing element, and mapping each pillar to a rectangular region in the depth image;
and comparing the expected depth of each three-dimensional grid in the pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, so that the state attribute of each three-dimensional grid in the pillar can be updated.
2. The rapid three-dimensional grid construction method according to claim 1, specifically comprising: projecting each frame of the three-dimensional point cloud generated by the multi-line lidar and converting it into a depth image, wherein each three-dimensional point p_i = {x_i, y_i, z_i} maps to the pixel coordinates (u, v) in the depth image given by:
u = (1/2)·(1 - arctan2(y_i, x_i)/π)·w
v = (f_up - arcsin(z_i/D_i))/(f_up + f_down)·h
where w and h are the width and height of the depth image, respectively, f_down and f_up respectively represent the minimum and maximum vertical field-of-view angles of the lidar, and D_i = ||p_i||_2 is the distance from point p_i to the lidar origin; D_i is stored as the pixel value in the depth image; the resolution of each axis of the three-dimensional grid map is defined as r_x, r_y, and r_z, the installation height of the lidar is h_l, and the actual vertical range of the three-dimensional grid map is [Z_min, Z_max].
3. The rapid three-dimensional grid construction method according to claim 2, wherein each vertical column of the three-dimensional grid map is defined as a pillar, each pillar corresponding to a rectangular region S_i = [v_up, v_down] × [u_left, u_right] in the depth image;
for each pillar, whose XY coordinates are expressed as (x_i, y_i), first the maximum expected depth of the three-dimensional grids in the pillar is calculated as
D_1 = sqrt(x_i^2 + y_i^2 + Z_max^2)
and the minimum expected depth as
D_2 = sqrt(x_i^2 + y_i^2 + Z_min^2);
then the ordinate upper bound v_1 and lower bound v_2 of the rectangular region corresponding to the current pillar in the depth image are calculated as:
v_1 = (-arcsin(Z_max/D_1) + f_up)/(f_down + f_up)·h
v_2 = (-arcsin(Z_min/D_2) + f_up)/(f_down + f_up)·h;
since v_1 may be less than 0 and v_2 may be greater than h (in which case the current pillar is not completely observed), the final ordinate upper bound v_up and lower bound v_down of the region corresponding to the current pillar in the depth image are calculated as:
v_up = max(0, v_1),
v_down = min(h, v_2).
4. the method of claim 3, wherein the projection of the vertical posts onto the XY plane is considered as a positiveSquare, selecting the upper left vertex and the lower right vertex of the square to calculate the azimuth angle range, and (m)v,nv) Coordinates representing the origin of the lidar in the XY plane, (m)i,ni) Representing the coordinates of the column in the XY plane, the relative azimuth angle θlAnd thetarCalculated from the following formula:
Figure FDA0002859670400000021
according to calculated thetalAnd thetarThe left boundary u of the abscissa of the rectangular region corresponding to the current upright in the depth image can be further calculatedleftAnd the right boundary uright
Figure FDA0002859670400000022
Finally, the rectangular area S may be divided intoi=[vup,vdown]×[uleft,uright]The observation depth values corresponding to the pixels included in (1) are combined into a one-dimensional vector:
Figure FDA0002859670400000031
5. the method of rapid three-dimensional grid construction according to claim 4, wherein for each three-dimensional grid in the pillars, the predicted depth value is
Figure FDA0002859670400000032
Then the predicted depth value is compared with DiIf d is greater thani-DiIf | <e, the state attribute of the current three-dimensional grid can be updated to be occupied; if d isi<DiIf the data is not in the idle state, the data is updated to be idle, otherwise, the data is updated to be shielding.
6. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of:
projecting the three-dimensional point cloud generated by a multi-line lidar to generate a depth image;
combining vertically consecutive three-dimensional grids into a pillar, taking the pillar as the processing element, and mapping each pillar to a rectangular region in the depth image;
and comparing the expected depth of each three-dimensional grid in the pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, so that the state attribute of each three-dimensional grid in the pillar can be updated.
7. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
projecting the three-dimensional point cloud generated by a multi-line lidar to generate a depth image;
combining vertically consecutive three-dimensional grids into a pillar, taking the pillar as the processing element, and mapping each pillar to a rectangular region in the depth image;
and comparing the expected depth of each three-dimensional grid in the pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, so that the state attribute of each three-dimensional grid in the pillar can be updated.
8. An information data processing terminal, characterized in that the information data processing terminal is used for implementing the rapid three-dimensional grid construction method of any one of claims 1 to 5.
9. A rapid three-dimensional grid construction system for use in the rapid three-dimensional grid construction method according to any one of claims 1 to 5, the rapid three-dimensional grid construction system comprising:
the depth image generation module is used for projecting the three-dimensional point cloud generated by the multi-line laser radar to generate a depth image;
the processing element construction module is used for combining vertically consecutive three-dimensional grids into pillars, taking the pillars as processing elements, and finding the rectangular region corresponding to each pillar in the depth image;
and the state update module is used for comparing the expected depth of each three-dimensional grid in a pillar with the observed depth of the corresponding pixels in the rectangular region of the depth image, so that the state attribute of each three-dimensional grid in the pillar is updated.
10. An unmanned vehicle equipped with the rapid three-dimensional grid construction system according to claim 9.
CN202011562154.1A 2020-12-25 2020-12-25 Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle Active CN113160398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011562154.1A CN113160398B (en) 2020-12-25 2020-12-25 Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011562154.1A CN113160398B (en) 2020-12-25 2020-12-25 Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle

Publications (2)

Publication Number Publication Date
CN113160398A true CN113160398A (en) 2021-07-23
CN113160398B CN113160398B (en) 2023-03-28

Family

ID=76878031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011562154.1A Active CN113160398B (en) 2020-12-25 2020-12-25 Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle

Country Status (1)

Country Link
CN (1) CN113160398B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050285876A1 (en) * 2004-06-29 2005-12-29 Piotr Balaga Composition of raster and vector graphics in geographical information systems
WO2011152895A2 (en) * 2010-02-12 2011-12-08 The University Of North Carolina At Chapel Hill Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information
CN102855663A (en) * 2012-05-04 2013-01-02 北京建筑工程学院 Method for building CSG (Constructive Solid Geometry) model according to laser radar grid point cloud
US9361412B1 (en) * 2012-03-26 2016-06-07 The United States Of America As Represented By The Secretary Of The Navy Method for the simulation of LADAR sensor range data
US20170116487A1 (en) * 2015-10-22 2017-04-27 Kabushiki Kaisha Toshiba Apparatus, method and program for generating occupancy grid map
CA3000134A1 (en) * 2017-11-27 2018-06-07 Cae Inc. Method and system for simulating a radar image
US20180205941A1 (en) * 2017-01-17 2018-07-19 Facebook, Inc. Three-dimensional scene reconstruction from set of two dimensional images for consumption in virtual reality
US20190004534A1 (en) * 2017-07-03 2019-01-03 Baidu Usa Llc High resolution 3d point clouds generation from upsampled low resolution lidar 3d point clouds and camera images
US20190004535A1 (en) * 2017-07-03 2019-01-03 Baidu Usa Llc High resolution 3d point clouds generation based on cnn and crf models
CN109214982A (en) * 2018-09-11 2019-01-15 大连理工大学 A kind of three-dimensional point cloud imaging method based on bicylindrical projection model
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
US20200286247A1 (en) * 2019-03-06 2020-09-10 Qualcomm Incorporated Radar-aided single image three-dimensional depth reconstruction
CN111694011A (en) * 2020-06-19 2020-09-22 安徽卡思普智能科技有限公司 Road edge detection method based on data fusion of camera and three-dimensional laser radar

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050285876A1 (en) * 2004-06-29 2005-12-29 Piotr Balaga Composition of raster and vector graphics in geographical information systems
WO2011152895A2 (en) * 2010-02-12 2011-12-08 The University Of North Carolina At Chapel Hill Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information
US9361412B1 (en) * 2012-03-26 2016-06-07 The United States Of America As Represented By The Secretary Of The Navy Method for the simulation of LADAR sensor range data
CN102855663A (en) * 2012-05-04 2013-01-02 北京建筑工程学院 Method for building CSG (Constructive Solid Geometry) model according to laser radar grid point cloud
US20170116487A1 (en) * 2015-10-22 2017-04-27 Kabushiki Kaisha Toshiba Apparatus, method and program for generating occupancy grid map
US20180205941A1 (en) * 2017-01-17 2018-07-19 Facebook, Inc. Three-dimensional scene reconstruction from set of two dimensional images for consumption in virtual reality
US20190004534A1 (en) * 2017-07-03 2019-01-03 Baidu Usa Llc High resolution 3d point clouds generation from upsampled low resolution lidar 3d point clouds and camera images
US20190004535A1 (en) * 2017-07-03 2019-01-03 Baidu Usa Llc High resolution 3d point clouds generation based on cnn and crf models
CA3000134A1 (en) * 2017-11-27 2018-06-07 Cae Inc. Method and system for simulating a radar image
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
CN109214982A (en) * 2018-09-11 2019-01-15 大连理工大学 A kind of three-dimensional point cloud imaging method based on bicylindrical projection model
US20200286247A1 (en) * 2019-03-06 2020-09-10 Qualcomm Incorporated Radar-aided single image three-dimensional depth reconstruction
CN111694011A (en) * 2020-06-19 2020-09-22 安徽卡思普智能科技有限公司 Road edge detection method based on data fusion of camera and three-dimensional laser radar

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HANZHANG XUE et al.: "LiDAR-based Ground Segmentation Using Stixel Features", IEEE *
LI Yingli et al.: "Environment modeling method for manipulators based on depth images and offline three-dimensional grid mapping", Control and Decision *
WANG Zhangfei et al.: "Three-dimensional point cloud object segmentation and collision detection based on depth projection", Optics and Precision Engineering *

Also Published As

Publication number Publication date
CN113160398B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN113486797B (en) Unmanned vehicle position detection method, unmanned vehicle position detection device, unmanned vehicle position detection equipment, storage medium and vehicle
US20220180642A1 (en) Object detection for distorted images
US10579065B2 (en) Algorithm and infrastructure for robust and efficient vehicle localization
EP4213068A1 (en) Target detection method and apparatus based on monocular image
WO2020216315A1 (en) Method and system for rapid generation of reference driving route, terminal and storage medium
CN110221616A (en) A kind of method, apparatus, equipment and medium that map generates
JP2022003508A (en) Trajectory planing model training method and device, electronic apparatus, computer-readable storage medium, and computer program
US11466992B2 (en) Method, apparatus, device and medium for detecting environmental change
CN111582189A (en) Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
CN111402414A (en) Point cloud map construction method, device, equipment and storage medium
CN113761999A (en) Target detection method and device, electronic equipment and storage medium
CN111860072A (en) Parking control method and device, computer equipment and computer readable storage medium
US20220254062A1 (en) Method, device and storage medium for road slope predicating
CN111913183A (en) Vehicle lateral obstacle avoidance method, device and equipment and vehicle
CN116258826A (en) Semantic map construction and boundary real-time extraction method for open-air mining area
WO2022099620A1 (en) Three-dimensional point cloud segmentation method and apparatus, and mobile platform
CN113160398B (en) Rapid three-dimensional grid construction system, method, medium, equipment and unmanned vehicle
CN116523970B (en) Dynamic three-dimensional target tracking method and device based on secondary implicit matching
CN113255555A (en) Method, system, processing equipment and storage medium for identifying Chinese traffic sign board
CN116215520A (en) Vehicle collision early warning and processing method and device based on ultrasonic waves and 3D looking around
CN110770540B (en) Method and device for constructing environment model
US20230182743A1 (en) Method and system for extracting road data and method and system for controlling self-driving car
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN112800873A (en) Method, device and system for determining target direction angle and storage medium
CN112183378A (en) Road slope estimation method and device based on color and depth image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant