CN114930401A - Point cloud-based three-dimensional reconstruction method and device and computer equipment - Google Patents

Point cloud-based three-dimensional reconstruction method and device and computer equipment

Info

Publication number
CN114930401A
CN114930401A (application CN202080092974.0A)
Authority
CN
China
Prior art keywords
point cloud
data
map data
parameterization
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080092974.0A
Other languages
Chinese (zh)
Inventor
李煊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DeepRoute AI Ltd filed Critical DeepRoute AI Ltd
Publication of CN114930401A publication Critical patent/CN114930401A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

A point cloud-based three-dimensional reconstruction method comprises the following steps: acquiring point cloud data and vehicle track data; generating point cloud map data according to the point cloud data and the vehicle track data; extracting geometric information and semantic information from the point cloud map data; determining a parameterization strategy corresponding to the point cloud map data according to the semantic information; carrying out parameterization processing on corresponding point cloud map data according to the determined parameterization strategy and the geometric information; and performing three-dimensional reconstruction according to the point cloud map data after the parameterization processing.

Description

Three-dimensional reconstruction method and device based on point cloud and computer equipment

Technical Field
The application relates to a point cloud-based three-dimensional reconstruction method and device, computer equipment and a storage medium.
Background
Three-dimensional reconstruction refers to reconstructing the surface of an object from its extracted geometric information. In automatic driving, the traditional approach is to acquire point cloud data from a vehicle-mounted sensor and extract the geometric information in the point cloud data for three-dimensional reconstruction, so that the detailed information of the vehicle's surrounding environment can be completely reproduced.
However, when the vehicle is running normally and driving safety is not affected, the detailed information of the surrounding environment does not need to be completely reproduced. Performing three-dimensional reconstruction in the traditional way therefore processes a large amount of data, which makes the reconstruction inefficient.
Disclosure of Invention
According to various embodiments disclosed herein, a method, an apparatus, a computer device, and a storage medium for point cloud-based three-dimensional reconstruction are provided.
A point cloud-based three-dimensional reconstruction method comprises the following steps:
acquiring point cloud data and vehicle track data;
generating point cloud map data according to the point cloud data and the vehicle track data;
extracting geometric information and semantic information from the point cloud map data;
determining a parameterization strategy corresponding to the point cloud map data according to the semantic information;
carrying out parameterization processing on corresponding point cloud map data according to the determined parameterization strategy and the geometric information; and
performing three-dimensional reconstruction on the point cloud map data after the parameterization processing.
A point cloud-based three-dimensional reconstruction apparatus, comprising:
the acquisition module is used for acquiring point cloud data and vehicle track data;
the generating module is used for generating point cloud map data according to the point cloud data and the vehicle track data;
the extraction module is used for extracting geometric information and semantic information from the point cloud map data;
the determining module is used for determining a parameterization strategy corresponding to the point cloud map data according to the semantic information;
the parameterization module is used for carrying out parameterization processing on corresponding point cloud map data according to the determined parameterization strategy and the geometric information; and
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the point cloud map data after the parameterization processing.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the various method embodiments described above.
One or more non-transitory computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps in the various method embodiments described above.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features and advantages of the application will be apparent from the description and drawings, and from the claims.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a view of an application scenario of a point cloud-based three-dimensional reconstruction method in one or more embodiments.
FIG. 2 is a schematic flow diagram of a point cloud based three-dimensional reconstruction in one or more embodiments.
Fig. 3 is a schematic flowchart illustrating a parameterization processing step performed on corresponding point cloud map data according to a determined parameterization policy and geometric information in one or more embodiments.
FIG. 4 is a schematic flow diagram illustrating a step of generating point cloud map data from vehicle trajectory data and point cloud data in one or more embodiments.
FIG. 5 is a schematic diagram illustrating calculation of trajectory data corresponding to respective frame point cloud data in one or more embodiments.
Fig. 6 is a schematic flow chart of a point cloud-based three-dimensional reconstruction in another embodiment.
FIG. 7 is a block diagram of an apparatus for point cloud based three-dimensional reconstruction in one or more embodiments.
FIG. 8 is a block diagram of a computer device in one or more embodiments.
Detailed Description
In order to make the technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The point cloud-based three-dimensional reconstruction method can be applied to the automatic driving scenario shown in fig. 1. The autonomous vehicle has a first on-board sensor 102, a second on-board sensor 104, and an on-board computer device 106 pre-installed therein. The first vehicle-mounted sensor may be referred to simply as the first sensor, the second vehicle-mounted sensor as the second sensor, and the vehicle-mounted computer device as the computer device. The first sensor 102 and the second sensor 104 are each in communication with the computer device 106. During autonomous driving, the first sensor 102 transmits the collected point cloud data to the computer device 106, and the second sensor 104 transmits the collected vehicle trajectory data to the computer device 106. The computer device 106 generates point cloud map data according to the vehicle trajectory data and the point cloud data, and extracts geometric information and semantic information from the point cloud map data. The computer device 106 then determines a parameterization strategy corresponding to the point cloud map data according to the semantic information, parameterizes the corresponding point cloud map data according to the determined parameterization strategy and the geometric information, and performs three-dimensional reconstruction on the parameterized data. The first sensor 102 may be, but is not limited to, a laser radar, a laser scanner, or the like. The second sensor 104 may be, but is not limited to, an RTK (Real-time kinematic) sensor, an IMU (Inertial measurement unit) sensor, a wheel speed meter, and the like.
In one embodiment, as shown in fig. 2, a method for three-dimensional reconstruction based on point cloud is provided, which is exemplified by the application of the method to the computer device in fig. 1, and includes the following steps:
step 202, point cloud data and vehicle track data are obtained.
In the automatic driving process of the vehicle, the surrounding environment can be scanned through a first sensor arranged on the vehicle, and corresponding point cloud data are obtained. The first sensor transmits the acquired point cloud data to the computer equipment. The second sensor transmits the acquired vehicle trajectory data to the computer device.
The first sensor may be a lidar, a laser scanner, or the like. The point cloud data is acquired by the first sensor within its visible range. The point cloud data records objects in the visible range in point form, as a set of point data corresponding to a plurality of points on the surface of each object. Plural may mean two or more. The point cloud data may be three-dimensional point cloud data, and each frame of point cloud data may include point data corresponding to each of a plurality of points. The point data may specifically include at least one of the three-dimensional coordinates, laser reflection intensity, color information, and the like of the corresponding point. The three-dimensional coordinates may be the coordinates of the point in a Cartesian coordinate system, specifically the x-axis, y-axis, and z-axis coordinates of the point. The Cartesian coordinate system is a three-dimensional space coordinate system established with the first sensor as origin, comprising a horizontal axis (x-axis), a longitudinal axis (y-axis), and a vertical axis (z-axis), and satisfying the right-hand rule.
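A frame of point cloud data as described above can be sketched as a simple container; the field names below are illustrative, not part of the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PointCloudFrame:
    """One frame of point cloud data (illustrative sketch)."""
    timestamp: float              # acquisition time of the frame
    xyz: np.ndarray               # (N, 3) x/y/z coordinates in the sensor frame
    intensity: np.ndarray = None  # (N,) laser reflection intensity, optional
    color: np.ndarray = None      # (N, 3) color information, optional

frame = PointCloudFrame(timestamp=0.1, xyz=np.zeros((4, 3)))
print(frame.xyz.shape)
```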
The second sensor may be an RTK (Real-time kinematic) sensor, an IMU (Inertial measurement unit) sensor, a wheel speed meter, or the like. The vehicle trajectory data may include trajectory points of the vehicle within a preset time period and position information of the trajectory points. The trace points can be arranged according to the time sequence. The position information of the track point may specifically include a longitude of the vehicle at each track point, a latitude of the vehicle at each track point, a time when the vehicle reaches each track point, a speed of the vehicle at each track point, a position coordinate of the vehicle at each track point, and the like.
In one embodiment, the vehicle track data transmitted by the second sensor can be used directly in areas with good satellite signals; in areas with poor satellite signals, the second sensor cannot acquire the corresponding vehicle track data. Therefore, the computer device can optimize the vehicle track data transmitted by the second sensor, predicting the vehicle track data for regions with poor satellite signals, such as areas under an overpass, in an underground garage, or among high-rise buildings, so as to obtain more accurate vehicle track data. For example, the optimization means may include Kalman filtering, factor graph optimization, and the like.
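The Kalman filtering mentioned above can be illustrated with a minimal one-dimensional constant-velocity filter. This is a generic sketch, not the patent's implementation; in it, `NaN` marks track samples lost to poor satellite signal, where the filter falls back to prediction:

```python
import numpy as np

def kalman_smooth(positions, dt=0.1, q=1e-2, r=1.0):
    """Minimal 1-D constant-velocity Kalman filter (illustrative sketch).

    positions: noisy position measurements; NaN marks gaps where the
    satellite signal was lost -- there the filter predicts instead of
    updating.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([[positions[0]], [0.0]])  # initial state
    P = np.eye(2)                          # initial state covariance
    out = []
    for z in positions:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        if not np.isnan(z):                # update only when a fix exists
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

track = [0.0, 0.1, float("nan"), 0.3, 0.4]  # a gap under an overpass, say
print(kalman_smooth(track))
```

The gap sample is filled with a predicted position instead of being dropped, which is the behavior the text describes for regions with poor satellite signals.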
And step 204, generating point cloud map data according to the point cloud data and the vehicle track data.
The point cloud data may contain targets that affect the generation of the point cloud map data. A target refers to a living or non-living object in the surrounding environment of the vehicle, and may be dynamic or static. For example, targets affecting point cloud map data generation may be dynamic targets such as vehicles or pedestrians traveling on the road, or static targets such as temporarily parked vehicles and trash cans. The computer device can therefore remove from the point cloud data the targets that affect the generation of the point cloud map data. Specifically, the computer device may perform target detection on the point cloud data to determine the positions of the targets to be removed, and then remove the corresponding targets according to the detected positions. The computer device can use deep-learning-based target detection for this removal, for example Spatial Pyramid Pooling Networks (SPPNet), Feature Pyramid Networks (FPN), and so on.
The computer device then matches the point cloud data after target removal with the vehicle track data. The point cloud data may include multiple frames. Each frame of point cloud data carries a corresponding timestamp, from which the time sequence of the frames can be determined. Specifically, the computer device may determine, in the vehicle trajectory data, the trajectory data corresponding to each frame of point cloud data according to the frame's timestamp. The computer device accumulates the trajectory data corresponding to the multiple frames of point cloud data in time order to generate the point cloud map data.
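The accumulation step described above can be sketched as transforming each frame into a common world frame using its matched trajectory pose and concatenating the results. The pose representation below (rotation matrix plus translation) is an assumption for illustration:

```python
import numpy as np

def accumulate_map(frames, poses):
    """Accumulate per-frame clouds into one map cloud (illustrative sketch).

    frames: list of (N_i, 3) point arrays in the sensor frame, in time order.
    poses:  list of (R, t) pairs -- rotation (3, 3) and translation (3,) --
            matched to each frame from the vehicle trajectory data.
    """
    world_points = []
    for pts, (R, t) in zip(frames, poses):
        world_points.append(pts @ R.T + t)  # transform into the world frame
    return np.vstack(world_points)

# two frames; identity rotation, vehicle moved 1 m along x between them
I = np.eye(3)
frames = [np.array([[0.0, 0.0, 0.0]]), np.array([[0.0, 0.0, 0.0]])]
poses = [(I, np.zeros(3)), (I, np.array([1.0, 0.0, 0.0]))]
print(accumulate_map(frames, poses))
```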
And step 206, extracting geometric information and semantic information from the point cloud map data.
The geometric information represents the surface characteristics of each object in the point cloud map data. For example, the geometric information may specifically include a bounding box represented by the position coordinates of the object in three-dimensional space, the pose information of the object in three-dimensional space, and the size of the object. The position coordinates may be expressed as (x, y, z). The attitude information may be expressed as (roll angle, pitch angle, yaw angle), i.e., (roll, pitch, yaw). Dimensions may be expressed as (length, width, height). The semantic information may include the category information corresponding to each object in the point cloud map data, such as vehicles, people, trees, etc.
After the point cloud map data is generated, the computer device extracts the geometric information and semantic information of each object from it. There are several ways to extract this information: the geometric and semantic information of each object can be calculated directly from the point cloud map data, or the point cloud map data can be converted into grid map data from which the geometric and semantic information of each object is then extracted.
When the computer device directly calculates the geometric information and semantic information of each object in the point cloud map data, it may perform voxelization processing on the point cloud map data to obtain a feature matrix. Voxelization refers to converting the point cloud map data into a volumetric mesh. The volume grid may be represented by a feature matrix, which may include a plurality of matrix cells, each cell representing one volume grid. The dimensions of a matrix cell may be length × width × height, and the length, width, and height of each matrix cell may be the same. The computer device invokes a pre-trained deep learning model, for example a three-dimensional convolutional neural network model. The computer device inputs the feature matrix into the deep learning model, performs a prediction operation on the feature matrix through the model, and outputs the geometric information and semantic information corresponding to each object in the point cloud map data.
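The voxelization step can be sketched as a binary occupancy volume; the voxel size and grid shape below are arbitrary illustrative choices, not values from the patent:

```python
import numpy as np

def voxelize(points, voxel_size=0.5, grid_shape=(8, 8, 8)):
    """Convert a point cloud into a binary occupancy volume (sketch).

    Each cell of the returned feature matrix represents one volume grid
    of edge length `voxel_size`; points outside the grid are dropped.
    """
    grid = np.zeros(grid_shape, dtype=np.float32)
    idx = np.floor(points / voxel_size).astype(int)          # point -> cell index
    in_bounds = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    for i, j, k in idx[in_bounds]:
        grid[i, j, k] = 1.0                                  # mark cell occupied
    return grid

pts = np.array([[0.1, 0.1, 0.1], [3.9, 0.2, 0.2]])
vol = voxelize(pts)
print(vol.sum())  # number of occupied voxels
```

A real pipeline would typically store per-voxel features (point count, mean intensity) rather than a plain 0/1 flag, but the indexing is the same.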
The point cloud map data generated by the computer device is three-dimensional. When the computer device extracts the geometric and semantic information of each object by converting the point cloud map data into raster map data, it can project the point cloud map data to obtain a two-dimensional grid map. A two-dimensional grid map may include multiple grids, each of size length × width; the length and width of the grids may differ. The computer device then extracts the geometric information and semantic information corresponding to each object in the two-dimensional grid map.
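The projection to a two-dimensional grid map can be sketched as dropping the z coordinate and counting points per cell; cell size and grid shape are again illustrative assumptions:

```python
import numpy as np

def project_to_grid(points, cell=1.0, shape=(4, 4)):
    """Project 3-D map points onto a 2-D grid by dropping z (sketch)."""
    grid = np.zeros(shape, dtype=int)
    ij = np.floor(points[:, :2] / cell).astype(int)  # (x, y) -> cell index
    keep = np.all((ij >= 0) & (ij < np.array(shape)), axis=1)
    for i, j in ij[keep]:
        grid[i, j] += 1                              # count of points per cell
    return grid

pts = np.array([[0.5, 0.5, 2.0], [0.2, 0.7, 5.0], [3.5, 3.5, 1.0]])
print(project_to_grid(pts))
```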
And 208, determining a parameterization strategy corresponding to the point cloud map data according to the semantic information.
The semantic information may include the category information corresponding to each object in the point cloud map data. The computer device determines the parameterization strategy corresponding to each object according to its category information. The parameterization strategy can be to parameterize the object using primitives such as points, lines, planes, and volumes. For example, the computer device can represent a road that is not flat but does not affect driving as a plane, represent fences and walls as cuboids, and represent railings as cylinders. The computer device can determine the parameters corresponding to an object according to its category information; the parameters corresponding to one object can be a single parameter or a combination of multiple parameters. The parameterization strategy is used to determine the parameters that replace the corresponding objects in the point cloud map data. Replacing objects with parameters hides unnecessary point cloud map data, thereby reducing the data volume of the subsequent three-dimensional reconstruction.
The parameterization strategy can be selected according to the precision requirement of the environment where the vehicle is located, and may specifically include a low-precision parameterization strategy and a high-precision parameterization strategy. The low-precision strategy parameterizes the point cloud map data with fewer parameters, while the high-precision strategy uses more parameters. When driving safety is not affected, the vehicle is running normally, and the precision requirement of the environment is low, a low-precision strategy can be adopted: for example, the computer device can treat a traffic light as composed of a light box, represented by a cuboid, and a pole, represented by a cylinder. When the precision requirement of the environment is high, a high-precision strategy can be adopted: for example, the computer device parameterizes the traffic light body with a CAD (Computer Aided Design) model.
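The class-to-primitive selection described above can be sketched as a lookup table. The class names and primitive labels follow the examples in the text; the function and fallback behavior are hypothetical:

```python
# Hypothetical mapping from semantic class to geometric primitive(s); the
# entries follow the examples in the text (road -> plane, wall -> cuboid,
# traffic light -> light box + pole), not any fixed API.
LOW_PRECISION = {
    "road":          ["plane"],
    "fence":         ["cuboid"],
    "wall":          ["cuboid"],
    "railing":       ["cylinder"],
    "traffic_light": ["cuboid", "cylinder"],  # light box + pole
}

HIGH_PRECISION = {
    "traffic_light": ["cad_model"],           # full CAD model of the body
}

def select_strategy(semantic_class, high_precision=False):
    """Pick the primitive(s) used to replace an object of the given class."""
    if high_precision and semantic_class in HIGH_PRECISION:
        return HIGH_PRECISION[semantic_class]
    # fall back to keeping the raw points for classes without a primitive
    return LOW_PRECISION.get(semantic_class, ["point_cloud"])

print(select_strategy("traffic_light"))
print(select_strategy("traffic_light", high_precision=True))
```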
And step 210, carrying out parameterization processing on the corresponding point cloud map data according to the determined parameterization strategy and the geometric information.
And step 212, performing three-dimensional reconstruction on the point cloud map data after the parameterization processing.
After the computer device determines the parameterization strategy corresponding to each object, it can obtain the parameters corresponding to each object. The computer device then determines information such as the size and position of each object from its geometric information, and determines parameter information such as the size and position of each parameter from the shape and size of the object. The computer device parameterizes the corresponding object according to the parameters, that is, models the corresponding object according to the parameters to obtain a parameter model corresponding to the point cloud map data.
When an object corresponds to only one parameter, the computer device parameterizes the object according to that parameter. When an object corresponds to multiple parameters, the computer device parameterizes the corresponding parts of the object with each parameter, so that the object is modeled as a combination, yielding a parameter model corresponding to the object. After the parameterization of all objects in the point cloud map data is completed, three-dimensional reconstruction is performed on the parameterized point cloud map data, and the parameterized data is displayed in the point cloud map corresponding to the point cloud map data to obtain a three-dimensional environment map.
In one embodiment, the computer device may further obtain part of the environmental data around the vehicle from a database during the three-dimensional reconstruction, for example road information and building data around the vehicle. For such data, the computer device does not need to acquire corresponding point cloud data through the first sensor, which further reduces the amount of data for three-dimensional reconstruction.
In this embodiment, the computer device acquires the point cloud data and the vehicle trajectory data, generates point cloud map data from them, and extracts geometric information and semantic information from the point cloud map data. Compared with three-dimensional reconstruction that relies entirely on geometric information, the semantic information expresses the significance of objects in the map more accurately and effectively avoids inaccurate reconstruction caused by noise or data loss. When driving safety is not affected and the vehicle is running normally, the computer device can determine the parameterization strategy corresponding to the point cloud map data according to the semantic information, selecting a strategy for each category present in the data. It then parameterizes the corresponding point cloud map data according to the determined strategy and the geometric information, replacing the corresponding objects with parameters; unnecessary point cloud map data can thus be hidden and the data volume reduced. Finally, the computer device performs three-dimensional reconstruction on the parameterized point cloud map data. Since this data has a reduced volume, the data volume of the three-dimensional reconstruction is reduced and its efficiency is effectively improved.
In one embodiment, as shown in fig. 3, performing parameterization processing on the corresponding point cloud map data according to the determined parameterization strategy and the geometric information specifically comprises the following steps:
step 302, obtaining model parameters corresponding to the corresponding point cloud map data from the historical parameters according to the parameterization strategy.
And step 304, carrying out parameterization processing on corresponding point cloud map data according to the geometric information and the determined model parameters.
The computer device determines the parameterization strategy corresponding to each object according to its category information. The parameterization strategy can be to parameterize the object using primitives such as points, lines, planes, and volumes. The parameters corresponding to one object can be a single parameter or a combination of multiple parameters. The parameterization strategy is used to determine the parameters that replace the corresponding objects in the point cloud map data, and the computer device can obtain the model parameters corresponding to the point cloud map data from historical parameters according to the parameters corresponding to each object. The historical parameters may include the parameter information of objects that have already undergone three-dimensional reconstruction. The model parameters may include the shape, structural information, etc. of the object.
The geometric information represents the surface characteristics of each object in the point cloud map data; for example, it may include a bounding box represented by the position coordinates of the object in three-dimensional space, the pose information of the object, and its size. The computer device determines information such as the position and size of the object from the geometric information, and then parameterizes the corresponding point cloud map data according to the model parameters at the object's position.
In this embodiment, obtaining the model parameters from the historical parameters makes the obtained parameters better fit the current automatic driving environment; moreover, for object parts that appear repeatedly, existing parameters can be reused directly, further improving the accuracy of the three-dimensional reconstruction.
In one embodiment, as shown in fig. 4, generating the point cloud map data according to the point cloud data and the vehicle track data specifically comprises the following steps:
step 402, identifying point cloud data corresponding to a preset object in the point cloud data, and deleting the point cloud data corresponding to the preset object.
And step 404, matching the deleted point cloud data with vehicle track data to obtain track data corresponding to the deleted point cloud data.
And step 406, generating point cloud map data according to the track data corresponding to the deleted point cloud data.
The computer device identifies the point cloud data and determines the bounding box corresponding to the preset object; the point cloud data corresponding to the preset object is the point cloud data within the bounding box. The computer device deletes the point cloud data within the bounding box to obtain the deleted point cloud data. Because the point cloud data carries no track data, the position corresponding to each frame can only be calculated through its timestamp, so the computer device matches the deleted point cloud data with the vehicle track data.
In one embodiment, matching the deleted point cloud data with the vehicle trajectory data to obtain trajectory data corresponding to the deleted point cloud data includes: determining a target track point in the vehicle track data according to the timestamp of each frame of point cloud data; calculating track data corresponding to the corresponding frame point cloud data according to the position coordinates of the target track points, the time stamps of the target track points and a preset relation; and obtaining the track data corresponding to the deleted point cloud data according to the track data corresponding to each frame of point cloud data.
The deleted point cloud data comprises multiple frames of point cloud data. The computer device may determine the target track points in the vehicle track data based on the timestamp of each frame of point cloud data. The target track points can be the two track points adjacent to the timestamp of a frame; each frame of point cloud data lies on the linear function formed by these two track points, so the track data corresponding to the frame can be calculated from the linear relationship among the three. The computer device calculates the track data corresponding to the frame according to the position coordinates of the target track points, their timestamps, and a preset relationship. Specifically, the computer device calculates the abscissa of the track data corresponding to the frame from the abscissas of the target track points, the frame's timestamp, and the preset relationship; for example, the preset relationship may be a linear function. Similarly, the computer device calculates the ordinate of the track data from the ordinates of the target track points, the frame's timestamp, and the preset relationship, thereby obtaining the track data corresponding to the frame. The computer device obtains the track data corresponding to each frame in this way, and thus the track data corresponding to the deleted point cloud data.
As shown in fig. 5, in a two-dimensional coordinate system, a circle represents the position of a target track point, and a triangle represents the position corresponding to a point cloud. The position coordinates of the target track point with timestamp t1 are (x1, y1), the position coordinates of the target track point with timestamp t3 are (x3, y3), and the position coordinates of the specified frame of point cloud data with timestamp t2 can be represented by (x2, y2). (x1, y1) and (x3, y3) are known, and (x2, y2) is unknown. By linear interpolation, x2 = x1 + (x3 - x1) × (t2 - t1)/(t3 - t1) and y2 = y1 + (y3 - y1) × (t2 - t1)/(t3 - t1). The position coordinates of the specified frame of point cloud data with timestamp t2 can thus be obtained.
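The linear interpolation just described can be sketched as follows (a minimal illustration; the function name and coordinate-tuple layout are assumptions for the sketch, not part of the patent):

```python
def interpolate_position(t1, p1, t3, p3, t2):
    """Linearly interpolate a trajectory position at timestamp t2 from
    two adjacent track points (t1, p1) and (t3, p3)."""
    x1, y1 = p1
    x3, y3 = p3
    ratio = (t2 - t1) / (t3 - t1)  # fraction of the interval elapsed at t2
    return (x1 + (x3 - x1) * ratio, y1 + (y3 - y1) * ratio)

# Track points at t=0 and t=2; a point cloud frame stamped t=1 lies midway.
print(interpolate_position(0.0, (0.0, 0.0), 2.0, (4.0, 2.0), 1.0))  # (2.0, 1.0)
```

Repeating this per frame yields the track data for every frame of the deleted point cloud data.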
In this embodiment, the computer device determines the target track points in the vehicle track data according to the timestamp of each frame of point cloud data, and calculates the track data corresponding to the corresponding frame of point cloud data according to the position coordinates of the target track points, the timestamps of the target track points, and the preset relationship, so as to obtain the track data corresponding to the deleted point cloud data from the track data corresponding to each frame of point cloud data. This ensures that each frame of point cloud data is matched with its corresponding track data, which helps improve the accuracy of the point cloud map data.
After the track data corresponding to the deleted point cloud data is obtained, the computer device can generate point cloud map data from it. The point cloud map data includes the track data corresponding to each frame of point cloud data. By deleting the point cloud data corresponding to the preset object, redundant point cloud data is removed, the influence of redundant data on the point cloud map data is avoided, and the accuracy and validity of the point cloud map data are improved. The deleted point cloud data is then matched with the vehicle track data to obtain the corresponding track data, and the point cloud map data is generated from that track data. This further improves the accuracy of the point cloud map data, and in turn the accuracy of the subsequent three-dimensional model data.
In another embodiment, as shown in fig. 6, there is provided a method for point cloud-based three-dimensional reconstruction, including the following steps:
Step 602, point cloud data and vehicle track data are obtained.
Step 604, point cloud map data is generated according to the point cloud data and the vehicle track data.
Step 606, geometric information and semantic information are extracted from the point cloud map data.
Step 608, area division is performed on the point cloud map data according to the geometric information and the semantic information to obtain a plurality of map areas.
Step 610, a parameterization strategy corresponding to each map area is determined according to the semantic information.
Step 612, parameterization processing is performed on the point cloud map data of each map area according to the determined parameterization strategy and the geometric information to obtain area model data corresponding to each map area.
Step 614, three-dimensional reconstruction is performed on the area model data corresponding to the plurality of map areas.
After extracting the geometric information and the semantic information from the point cloud map data, the computer device divides the point cloud map data into a plurality of map areas according to the geometric information and the semantic information. The geometric information may specifically include a bounding box represented by the position coordinates of the object in three-dimensional space, pose information of the object in three-dimensional space, and the size of the object. The semantic information may include the category information corresponding to each object in the point cloud map data, such as vehicles, people, trees, crosswalks, lane lines, and the like.
Specifically, the computer device determines the area category corresponding to each object according to the category information of each object, and divides the point cloud map data according to the area category and the geometric information of each object to obtain a plurality of map areas. The area categories may be divided according to their influence on driving behavior, or according to traffic rules. The area categories divided according to influence on driving behavior may include a line-crossing prohibition area, a deceleration area, and the like. For example, the line-crossing prohibition area may include solid white lines, solid yellow lines, and the like, and the deceleration area may include the road area before a crosswalk. The area categories divided according to traffic rules may include a vehicle driving area, a traffic signal sign area, a non-driving area, a pedestrian crossing area, and the like. Objects of several different classes may belong to the same area category. For example, the vehicle drivable area may include lane lines, vehicles, and the like. To facilitate distinguishing different area categories, the same area category may be represented by the same color. For example, the crosswalk area may be represented by white, the vehicle driving area by green, and the non-driving area by yellow.
The computer device determines the parameterization strategy corresponding to each region type according to the semantic information. The parameterization strategy corresponding to each region type may be the parameterization mode corresponding to that region type. The parameterization modes may include a high-precision parameterization mode and a low-precision parameterization mode. Because different region types influence driving behavior to different degrees, their parameterization precision requirements may also differ, and so may the corresponding parameterization modes. For example, the vehicle driving area and the traffic signal sign area greatly influence driving behavior, so a high-precision parameterization mode can be selected; in the high-precision mode, a traffic light body can be parameterized with a Computer Aided Design (CAD) model. The non-driving area has little influence on driving behavior, so a low-precision parameterization mode can be selected; in the low-precision mode, an object is parameterized with simple primitives such as points, lines, surfaces, and solids. For example, a traffic light can be regarded as consisting of a light box and a pole, where the light box can be represented as a cuboid and the pole as a cylinder.
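The mapping from area category to parameterization mode described above can be sketched as a lookup table (the category names and mode labels here are illustrative assumptions, not taken from the patent):

```python
# Illustrative mapping from region category to parameterization mode.
HIGH_PRECISION = "cad_model"   # e.g., fit a CAD template (traffic lights, signs)
LOW_PRECISION = "primitives"   # e.g., points/lines/surfaces/solids

STRATEGY_BY_REGION = {
    "vehicle_driving_area": HIGH_PRECISION,
    "traffic_signal_sign_area": HIGH_PRECISION,
    "non_driving_area": LOW_PRECISION,
    "pedestrian_crossing_area": LOW_PRECISION,
}

def select_strategy(region_category):
    # Fall back to the cheap mode for categories not listed explicitly.
    return STRATEGY_BY_REGION.get(region_category, LOW_PRECISION)
```

All point cloud map data of one map area is then processed with the single strategy selected for its category.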
The computer device obtains the parameters and the parameterization mode corresponding to each object according to the determined parameterization strategy. It then determines information such as the size and position of each object from the object's geometric information, and determines parameter information such as the size and position of each parameter from the shape and size of the object. The computer device parameterizes the point cloud map data corresponding to the objects in the corresponding map area according to the parameterization mode, i.e., models the objects in the corresponding map area, so as to obtain the area model data corresponding to each map area. The computer device can parameterize the map areas one by one, combine the resulting area model data of the map areas, and then perform three-dimensional reconstruction on the combined area model data.
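The low-precision traffic-light example above (light box as a cuboid, pole as a cylinder) might be sketched as follows; the class names and the (center, size) input layout are assumptions for illustration, derived from the extracted bounding boxes:

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    center: tuple  # (x, y, z)
    size: tuple    # (length, width, height)

@dataclass
class Cylinder:
    base: tuple    # (x, y, z) of the base-circle center
    radius: float
    height: float

def parameterize_traffic_light(light_box, pole):
    """Low-precision parameterization: each input is a (center, size) pair
    taken from the extracted geometric information."""
    box = Cuboid(center=light_box[0], size=light_box[1])
    (cx, cy, cz), (length, width, height) = pole
    pole_prim = Cylinder(
        base=(cx, cy, cz - height / 2),   # drop from box center to the base
        radius=max(length, width) / 2,    # fit the cylinder to the footprint
        height=height,
    )
    return box, pole_prim

box, pole = parameterize_traffic_light(
    light_box=((0.0, 0.0, 5.0), (1.0, 0.5, 0.8)),  # light box centred 5 m up
    pole=((0.0, 0.0, 2.5), (0.2, 0.2, 5.0)),       # 5 m pole, 0.2 m footprint
)
print(pole)  # Cylinder(base=(0.0, 0.0, 0.0), radius=0.1, height=5.0)
```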
In this embodiment, the computer device divides the point cloud map data into a plurality of map areas according to the geometric information and the semantic information, and then determines the parameterization strategy corresponding to each area category according to the semantic information. It parameterizes the point cloud map data of each map area according to the determined parameterization strategy and the geometric information, combines the resulting area model data of the plurality of map areas, and performs three-dimensional reconstruction on the combined area model data. Because the semantic information of each region can influence driving behavior during automatic driving, dividing the point cloud map data into regions helps avoid adverse effects on driving behavior and supports safe driving. Determining a parameterization strategy for each map area and applying the same strategy to all point cloud map data of the same map area improves the efficiency of parameterization. Meanwhile, because the point cloud map data is divided into regions, the corresponding map area can be quickly located by its area identifier when the three-dimensional model data is subsequently updated, so that the area data can be updated.
In one embodiment, the method further includes: acquiring updated point cloud data, and determining the area identifier corresponding to an object identifier according to the object identifier corresponding to the updated point cloud data; extracting the original point cloud data corresponding to the object identifier from the map area corresponding to the area identifier; and replacing the original point cloud data with the updated point cloud data.
After the computer device divides the point cloud map data into a plurality of map areas according to the geometric information and the semantic information, each map area has a corresponding area identifier. The area identifier is a unique identifier used to mark a map area. Since the road environment data around the vehicle changes with a certain frequency, the point cloud data of each object in the corresponding map area is also updated accordingly. When the computer device obtains the update data, it can obtain the corresponding object identifier from the update data. The update data may be point cloud data of objects that change during automatic driving. The update data may be uploaded by a user, or acquired by uploading abnormal information when the road environment data around the vehicle changes and the vehicle cannot run normally. Since the point cloud map data is divided into regions, each map region may include the point cloud data corresponding to a plurality of objects, that is, the point cloud data in the bounding box corresponding to each object. Therefore, the computer device can determine the area identifier of the map area corresponding to the object identifier, extract the point cloud data corresponding to the object identifier from the map area corresponding to the area identifier, and then replace the point cloud data corresponding to the object identifier according to the update data.
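The region-narrowed replacement described above can be sketched with plain dictionaries (the data layout and names are illustrative assumptions):

```python
def replace_object_points(map_regions, object_to_region, object_id, new_points):
    """Replace one object's point cloud, narrowing the search via its region.
    map_regions: {region_id: {object_id: points}}
    object_to_region: {object_id: region_id}"""
    region_id = object_to_region[object_id]   # locate the map area first
    region = map_regions[region_id]           # then search only within that area
    old_points = region[object_id]            # original point cloud to replace
    region[object_id] = new_points
    return old_points

# Example: update the point cloud of "tree_7", which lives in region "R2".
regions = {"R1": {"car_1": [(0, 0, 0)]}, "R2": {"tree_7": [(5, 5, 0)]}}
index = {"car_1": "R1", "tree_7": "R2"}
old = replace_object_points(regions, index, "tree_7", [(5.1, 5.0, 0.0)])
print(old)                      # [(5, 5, 0)]
print(regions["R2"]["tree_7"])  # [(5.1, 5.0, 0.0)]
```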
In this embodiment, because the update data carries the corresponding object identifier, the point cloud data that needs to be changed can be quickly located by that identifier. The computer device determines the area identifier corresponding to the object identifier, which further narrows the search range of the data; it then extracts the point cloud data corresponding to the object identifier from the map area corresponding to the area identifier and changes it according to the update data, thereby further improving data update efficiency.
In one embodiment, extracting geometric information and semantic information from the point cloud map data includes: carrying out voxelization processing on point cloud map data to obtain a feature matrix; calling a pre-trained deep learning model, inputting the feature matrix into the deep learning model, performing prediction operation on the feature matrix through the deep learning model, and outputting geometric information and semantic information corresponding to the feature matrix.
When the computing resources of the computer device are greater than or equal to a preset threshold, the computer device may voxelize the point cloud map data to obtain a feature matrix. Specifically, the computer device calculates, from the point cloud data, the difference between the maximum and minimum coordinate values in each of the X, Y, and Z directions. The computer device determines the length, width, and height of the data area from these three differences. The data area contains all the point cloud data. The computer device may voxelize the point cloud data according to a preset size, which may specify a length, a width, and a height. The computer device divides the data area along the X direction by the length in the preset size, along the Y direction by the width, and along the Z direction by the height, thereby obtaining the feature matrix. The length, width, and height in the preset size may be equal. The order in which the data area is divided along the directions is not limited.
For example, the computer device may divide the data area along the X direction according to the length in the preset size, divide the data area along the Y direction according to the width in the preset size, and divide the data area along the Z direction according to the height in the preset size, so as to obtain the feature matrix. The computer device may also divide the data area along the X direction according to the length in the preset size, divide the data area along the Z direction according to the height in the preset size, and divide the data area along the Y direction according to the width in the preset size to obtain the feature matrix.
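The voxelization of the data area can be sketched as a simple binary occupancy grid (a minimal NumPy illustration under assumed names; a real feature matrix may carry richer per-voxel features than occupancy):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Build a binary occupancy grid over the data area spanned by the points.
    points: (N, 3) array; voxel_size: (length, width, height) of one voxel."""
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins          # length/width/height of the data area
    dims = np.maximum(np.ceil(extent / voxel_size).astype(int), 1)
    grid = np.zeros(dims, dtype=np.float32)
    idx = ((points - mins) / voxel_size).astype(int)
    idx = np.minimum(idx, dims - 1)             # points on the max face -> last cell
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

pts = np.array([[0.0, 0.0, 0.0], [0.9, 0.4, 0.4], [1.5, 0.2, 0.1]])
grid = voxelize(pts, np.array([1.0, 1.0, 1.0]))
print(grid.shape)  # (2, 1, 1)
```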
The computer device then inputs the feature matrix into a pre-trained deep learning model, performs a prediction operation on the feature matrix through the deep learning model, and outputs the geometric information and semantic information corresponding to the feature matrix. For example, the deep learning model may be a three-dimensional convolutional neural network model, which may specifically include an input layer, convolutional layers, pooling layers, fully connected layers, an output layer, and the like. The computer device can sequentially perform the operations corresponding to the network structure of the deep learning model on the feature matrix, and thereby obtain the geometric information and semantic information output by the model. The geometric information may specifically include a bounding box represented by the position coordinates of the object in three-dimensional space, pose information of the object in three-dimensional space, and the size of the object. The semantic information may include the category information corresponding to each object in the point cloud map data, such as vehicles, people, trees, crosswalks, lane lines, and the like.
In this embodiment, when the computing resources of the computer device are greater than or equal to the preset threshold, the computer device voxelizes the data area corresponding to the point cloud data, which makes it possible to classify the point cloud data of an object even when the object is occluded. The feature matrix obtained after voxelization is input into the deep learning model, a prediction operation is performed on it through the model, and the corresponding geometric information and semantic information are output. Because the deep learning model is trained in advance, the efficiency of information extraction is improved.
In one embodiment, extracting geometric information and semantic information from the point cloud map data includes: projecting the point cloud map data to a preset viewing angle to obtain grid map data; and performing feature extraction on the grid map data to obtain the geometric information and semantic information in the grid map data.
The point cloud map data is three-dimensional point cloud data. The computer device projects the acquired point cloud map data to a preset viewing angle, thereby obtaining grid map data corresponding to that viewing angle and converting the three-dimensional point cloud data into two-dimensional data. For example, the preset viewing angle may be a bird's-eye view or a front view. When the computer device projects the point cloud map data at the bird's-eye view, grid map data corresponding to the bird's-eye view is obtained. When the computer device projects the point cloud map data at the front view, grid map data corresponding to the front view is obtained.
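The bird's-eye-view projection can be sketched as follows (a minimal NumPy illustration; the function name, grid conventions, and parameters are assumptions for the sketch):

```python
import numpy as np

def project_to_bev(points, cell_size, x_range, y_range):
    """Project 3-D points to a bird's-eye-view occupancy grid: drop Z, bin X/Y."""
    (x_min, x_max), (y_min, y_max) = x_range, y_range
    cols = int(np.ceil((x_max - x_min) / cell_size))
    rows = int(np.ceil((y_max - y_min) / cell_size))
    grid = np.zeros((rows, cols), dtype=np.uint8)
    # Keep only points inside the mapped extent.
    keep = ((points[:, 0] >= x_min) & (points[:, 0] < x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] < y_max))
    pts = points[keep]
    c = ((pts[:, 0] - x_min) / cell_size).astype(int)
    r = ((pts[:, 1] - y_min) / cell_size).astype(int)
    grid[r, c] = 1
    return grid

pts = np.array([[0.5, 0.5, 2.0], [3.5, 1.5, 0.1], [9.0, 9.0, 0.0]])  # last is outside
bev = project_to_bev(pts, 1.0, (0.0, 4.0), (0.0, 2.0))
print(bev.shape)  # (2, 4)
```

A front-view projection would be analogous, binning along X/Z (or Y/Z) instead of X/Y.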
The computer device can perform image recognition on the grid map data to obtain the geometric information and semantic information corresponding to each object in the grid map data. The geometric information may specifically include a bounding box represented by the position coordinates of the object in three-dimensional space, pose information of the object in three-dimensional space, and the size of the object. The semantic information may include the category information corresponding to each object in the point cloud map data, such as vehicles, people, trees, crosswalks, lane lines, and the like.
Furthermore, the computer device can also project the point cloud map data from multiple viewing angles, thereby obtaining grid map data corresponding to each viewing angle. The multiple viewing angles may include a bird's-eye view and a front view. The computer device performs image recognition on the grid map data corresponding to each viewing angle to obtain its geometric information and semantic information, and then fuses the geometric information and semantic information of the grid map data across the viewing angles, so that the information of the same object under different viewing angles is combined and more accurate geometric information and semantic information can be obtained.
In one embodiment, image recognition may also be performed by calling a pre-established neural network model, inputting the grid map data into the neural network model for computation, and outputting the geometric information and semantic information corresponding to the grid map data. For example, the neural network model may be a two-dimensional convolutional neural network model. Because the neural network model is trained in advance, the efficiency of information extraction is improved.
In this embodiment, the computer device projects the point cloud map data to a preset viewing angle to obtain grid map data, and performs feature extraction on the grid map data to obtain the geometric information and semantic information in the grid map data.
In one embodiment, as shown in fig. 7, there is provided a point cloud-based three-dimensional reconstruction apparatus, including: an obtaining module 702, a generating module 704, an extracting module 706, a determining module 708, a parameterizing module 710, and a three-dimensional reconstruction module 712, wherein:
an obtaining module 702 is configured to obtain the point cloud data and the vehicle track data.
And a generating module 704 for generating point cloud map data according to the point cloud data and the vehicle track data.
And an extracting module 706, configured to extract geometric information and semantic information from the point cloud map data.
The determining module 708 is configured to determine a parameterization policy corresponding to the point cloud map data according to the semantic information.
And the parameterization module 710 is used for carrying out parameterization processing on the corresponding point cloud map data according to the determined parameterization strategy and the geometric information.
And a three-dimensional reconstruction module 712, configured to perform three-dimensional reconstruction on the point cloud map data after the parameterization processing.
In one embodiment, the parameterization module 710 is further configured to obtain model parameters corresponding to the corresponding point cloud map data from the historical parameters according to a parameterization policy; and carrying out parameterization processing on the corresponding point cloud map data according to the geometric information and the determined model parameters.
In one embodiment, the generating module 704 is configured to identify point cloud data corresponding to a preset object in the point cloud data, and delete the point cloud data corresponding to the preset object; matching the deleted point cloud data with vehicle track data to obtain track data corresponding to the deleted point cloud data; and generating point cloud map data according to the track data corresponding to the deleted point cloud data.
In one embodiment, the generating module 704 is further configured to determine a target track point in the vehicle track data according to the timestamp of each frame of point cloud data; calculating track data corresponding to the corresponding frame point cloud data according to the position coordinates of the target track points, the time stamps of the target track points and a preset relation; and obtaining the track data corresponding to the deleted point cloud data according to the track data corresponding to each frame of point cloud data.
In one embodiment, the apparatus further includes: the dividing module is used for carrying out region division on the point cloud map data according to the geometric information and the semantic information to obtain a plurality of map regions; determining a parameterization strategy corresponding to each map area according to the semantic information; carrying out parameterization processing on the point cloud map data of each map area according to the determined parameterization strategy and the geometric information to obtain area model data corresponding to each map area; and performing three-dimensional reconstruction on the area model data corresponding to the plurality of map areas.
In one embodiment, the apparatus further comprises: the updating module is used for acquiring updating data and determining an area identifier corresponding to the object identifier according to the object identifier corresponding to the updating data; extracting point cloud data corresponding to the object identification according to the map area corresponding to the area identification; and replacing the point cloud data corresponding to the object identification according to the updating data.
In one embodiment, the extracting module 706 is further configured to perform voxelization processing on the point cloud map data to obtain a feature matrix; calling a pre-trained deep learning model, inputting the feature matrix into the deep learning model, performing prediction operation on the feature matrix through the deep learning model, and outputting geometric information and semantic information corresponding to the feature matrix.
In one embodiment, the extracting module 706 is further configured to project the point cloud map data to a preset view angle to obtain grid map data; and performing feature extraction on the raster map data to obtain geometric information and semantic information in the raster map data.
For specific limitations of the point cloud-based three-dimensional reconstruction apparatus, reference may be made to the above limitations of the point cloud-based three-dimensional reconstruction method, which are not described herein again. All or part of the modules in the point cloud-based three-dimensional reconstruction device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 8. The computer device includes a processor, a memory, a communication interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing point cloud data. The communication interface of the computer device is used for connecting and communicating with the first sensor and the second sensor. The computer program is executed by a processor to implement a method for point cloud based three-dimensional reconstruction.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the various method embodiments described above.
One or more non-transitory computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the various method embodiments described above.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the embodiments described above can be implemented by computer-readable instructions instructing relevant hardware; the instructions can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

  1. A point cloud-based three-dimensional reconstruction method comprises the following steps:
    acquiring point cloud data and vehicle track data;
    generating point cloud map data according to the point cloud data and the vehicle track data;
    extracting geometric information and semantic information from the point cloud map data;
    determining a parameterization strategy corresponding to the point cloud map data according to the semantic information;
    carrying out parameterization processing on corresponding point cloud map data according to the determined parameterization strategy and the geometric information; and
    and performing three-dimensional reconstruction on the point cloud map data after the parameterization processing.
  2. The method of claim 1, wherein parameterizing the corresponding point cloud map data according to the determined parameterization strategy and the geometric information comprises:
    obtaining model parameters corresponding to corresponding point cloud map data in historical parameters according to the parameterization strategy; and
    and carrying out parameterization processing on corresponding point cloud map data according to the geometric information and the determined model parameters.
  3. The method of claim 1, wherein the generating point cloud map data from the point cloud data and the vehicle trajectory data comprises:
    identifying point cloud data corresponding to a preset object in the point cloud data, and deleting the point cloud data corresponding to the preset object;
    matching the deleted point cloud data with the vehicle track data to obtain track data corresponding to the deleted point cloud data; and
    and generating point cloud map data according to the track data corresponding to the deleted point cloud data.
  4. The method according to claim 3, wherein the deleted point cloud data comprises multi-frame point cloud data, and the step of matching the deleted point cloud data with the vehicle trajectory data to obtain trajectory data corresponding to the deleted point cloud data comprises the steps of:
    determining target track points in the vehicle track data according to the time stamps of the frames of point cloud data;
    calculating track data corresponding to corresponding frame point cloud data according to the position coordinates of the target track points, the time stamps of the target track points and a preset relation; and
    and obtaining track data corresponding to the deleted point cloud data according to the track data corresponding to each frame of point cloud data.
  5. The method of claim 1, further comprising:
    dividing the point cloud map data into a plurality of map areas according to the geometric information and the semantic information;
    determining a parameterization strategy corresponding to each map area according to the semantic information;
    performing parameterization processing on the point cloud map data of each map area according to the determined parameterization strategy and the geometric information to obtain area model data corresponding to each map area; and
    performing three-dimensional reconstruction on the area model data corresponding to the plurality of map areas.
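One way to read the per-area strategy selection in claim 5 is a lookup from a region's dominant semantic class to a fitting strategy. All class and strategy names below are hypothetical, chosen only to make the idea concrete.

```python
# Illustrative mapping from a map area's dominant semantic class to a
# parameterization strategy; names are assumptions, not from the patent.
STRATEGY_BY_CLASS = {
    "ground": "plane_fit",
    "building": "planar_mesh",
    "pole": "cylinder_fit",
    "vegetation": "voxel_blocks",
}

def strategy_for_region(dominant_class, default="triangle_mesh"):
    """Pick a parameterization strategy from a region's semantic label."""
    return STRATEGY_BY_CLASS.get(dominant_class, default)

print(strategy_for_region("ground"))     # plane_fit
print(strategy_for_region("guardrail"))  # triangle_mesh
```

Dispatching on semantics lets simple geometry (roads, poles) be compressed into low-parameter models while irregular regions fall back to a generic mesh.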
  6. The method of claim 5, further comprising:
    acquiring update data, and determining, according to an object identifier corresponding to the update data, an area identifier corresponding to the object identifier;
    extracting point cloud data corresponding to the object identifier from the map area corresponding to the area identifier; and
    replacing the point cloud data corresponding to the object identifier according to the update data.
  7. The method of any one of claims 1 to 6, wherein the extracting geometric information and semantic information in the point cloud map data comprises:
    performing voxelization on the point cloud map data to obtain a feature matrix; and
    invoking a pre-trained deep learning model, inputting the feature matrix into the deep learning model, performing a prediction operation on the feature matrix through the deep learning model, and outputting the geometric information and the semantic information corresponding to the feature matrix.
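A minimal voxelization sketch for claim 7, using per-voxel point counts as the feature matrix. The actual features fed to the deep learning model are not specified by the claim, so counts are an assumption.

```python
import numpy as np

def voxelize(points, voxel_size=1.0):
    """Voxelize a point cloud into a dense grid of per-voxel point counts,
    a simple stand-in for the feature matrix of claim 7."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.int32)
    np.add.at(grid, tuple(idx.T), 1)  # accumulate repeated voxel indices
    return grid

pts = np.array([[0.1, 0.1, 0.1],
                [0.4, 0.2, 0.3],   # falls in the same voxel as the first point
                [1.5, 0.2, 0.1]])
grid = voxelize(pts, voxel_size=1.0)
print(grid.shape, grid[0, 0, 0])  # (2, 1, 1) 2
```

Production pipelines typically use richer per-voxel features (mean intensity, height statistics, learned embeddings) and a sparse rather than dense grid, but the indexing scheme is the same.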
  8. The method of any one of claims 1 to 6, wherein the extracting geometric information and semantic information in the point cloud map data comprises:
    projecting the point cloud map data to a preset viewing angle to obtain grid map data; and
    performing feature extraction on the grid map data to obtain the geometric information and the semantic information in the grid map data.
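The projection to a preset viewing angle in claim 8 can be sketched as a top-down (bird's-eye-view) grid storing the maximum height per cell; both the choice of viewing angle and the stored feature are assumptions here.

```python
import numpy as np

def project_to_grid(points, cell_size=1.0, grid_shape=(4, 4)):
    """Project 3D points onto a top-down grid map, keeping the maximum
    height per cell; a sketch of the projection step in claim 8."""
    grid = np.full(grid_shape, -np.inf)
    ix = np.floor(points[:, 0] / cell_size).astype(int)
    iy = np.floor(points[:, 1] / cell_size).astype(int)
    inside = (ix >= 0) & (ix < grid_shape[0]) & (iy >= 0) & (iy < grid_shape[1])
    for x, y, z in zip(ix[inside], iy[inside], points[inside, 2]):
        grid[x, y] = max(grid[x, y], z)
    return grid

pts = np.array([[0.5, 0.5, 1.0],
                [0.6, 0.4, 2.0],   # same cell, higher point wins
                [2.5, 3.5, 0.3]])
bev = project_to_grid(pts)
print(bev[0, 0], bev[2, 3])  # 2.0 0.3
```

The resulting 2D grid can then be processed with ordinary image feature extractors, which is the usual motivation for projecting a cloud to a fixed view.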
  9. A point cloud based three-dimensional reconstruction apparatus comprising:
    an acquisition module configured to acquire point cloud data and vehicle trajectory data;
    a generation module configured to generate point cloud map data according to the point cloud data and the vehicle trajectory data;
    an extraction module configured to extract geometric information and semantic information from the point cloud map data;
    a determination module configured to determine a parameterization strategy corresponding to the point cloud map data according to the semantic information;
    a parameterization module configured to perform parameterization processing on corresponding point cloud map data according to the determined parameterization strategy and the geometric information; and
    a three-dimensional reconstruction module configured to perform three-dimensional reconstruction on the parameterized point cloud map data.
  10. The apparatus of claim 9, wherein the parameterization module is further configured to obtain, from historical parameters, model parameters corresponding to the point cloud map data according to the parameterization strategy, and to perform parameterization processing on the corresponding point cloud map data according to the geometric information and the determined model parameters.
  11. A computer device comprising one or more processors and memory having computer-readable instructions stored therein, which when executed by the one or more processors, cause the one or more processors to perform the steps of:
    acquiring point cloud data and vehicle track data;
    generating point cloud map data according to the point cloud data and the vehicle track data;
    extracting geometric information and semantic information from the point cloud map data;
    determining a parameterization strategy corresponding to the point cloud map data according to the semantic information;
    performing parameterization processing on corresponding point cloud map data according to the determined parameterization strategy and the geometric information; and
    performing three-dimensional reconstruction on the parameterized point cloud map data.
  12. The computer device of claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    obtaining, according to the parameterization strategy, model parameters corresponding to the point cloud map data from historical parameters; and
    performing parameterization processing on the corresponding point cloud map data according to the geometric information and the determined model parameters.
  13. The computer device of claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    identifying point cloud data corresponding to a preset object in the point cloud data, and deleting the point cloud data corresponding to the preset object;
    matching the deleted point cloud data with the vehicle trajectory data to obtain trajectory data corresponding to the deleted point cloud data; and
    generating the point cloud map data according to the trajectory data corresponding to the deleted point cloud data.
  14. The computer device of claim 13, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    determining target trajectory points in the vehicle trajectory data according to a timestamp of each frame of point cloud data;
    calculating trajectory data corresponding to each frame of point cloud data according to position coordinates of the target trajectory points, timestamps of the target trajectory points, and a preset relation; and
    obtaining the trajectory data corresponding to the deleted point cloud data according to the trajectory data corresponding to each frame of point cloud data.
  15. The computer device of claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    dividing the point cloud map data into a plurality of map areas according to the geometric information and the semantic information;
    determining a parameterization strategy corresponding to each map area according to the semantic information;
    performing parameterization processing on the point cloud map data of each map area according to the determined parameterization strategy and the geometric information to obtain area model data corresponding to each map area; and
    performing three-dimensional reconstruction on the area model data corresponding to the plurality of map areas.
  16. One or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of:
    acquiring point cloud data and vehicle track data;
    generating point cloud map data according to the point cloud data and the vehicle track data;
    extracting geometric information and semantic information from the point cloud map data;
    determining a parameterization strategy corresponding to the point cloud map data according to the semantic information;
    performing parameterization processing on corresponding point cloud map data according to the determined parameterization strategy and the geometric information; and
    performing three-dimensional reconstruction on the parameterized point cloud map data.
  17. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    obtaining, according to the parameterization strategy, model parameters corresponding to the point cloud map data from historical parameters; and
    performing parameterization processing on the corresponding point cloud map data according to the geometric information and the determined model parameters.
  18. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    identifying point cloud data corresponding to a preset object in the point cloud data, and deleting the point cloud data corresponding to the preset object;
    matching the deleted point cloud data with the vehicle trajectory data to obtain trajectory data corresponding to the deleted point cloud data; and
    generating the point cloud map data according to the trajectory data corresponding to the deleted point cloud data.
  19. The storage medium of claim 18, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    determining target trajectory points in the vehicle trajectory data according to a timestamp of each frame of point cloud data;
    calculating trajectory data corresponding to each frame of point cloud data according to position coordinates of the target trajectory points, timestamps of the target trajectory points, and a preset relation; and
    obtaining the trajectory data corresponding to the deleted point cloud data according to the trajectory data corresponding to each frame of point cloud data.
  20. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    dividing the point cloud map data into a plurality of map areas according to the geometric information and the semantic information;
    determining a parameterization strategy corresponding to each map area according to the semantic information;
    performing parameterization processing on the point cloud map data of each map area according to the determined parameterization strategy and the geometric information to obtain area model data corresponding to each map area; and
    performing three-dimensional reconstruction on the area model data corresponding to the plurality of map areas.
CN202080092974.0A 2020-07-20 2020-07-20 Point cloud-based three-dimensional reconstruction method and device and computer equipment Pending CN114930401A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/102961 WO2022016311A1 (en) 2020-07-20 2020-07-20 Point cloud-based three-dimensional reconstruction method and apparatus, and computer device

Publications (1)

Publication Number Publication Date
CN114930401A true CN114930401A (en) 2022-08-19

Family

ID=79729577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080092974.0A Pending CN114930401A (en) 2020-07-20 2020-07-20 Point cloud-based three-dimensional reconstruction method and device and computer equipment

Country Status (2)

Country Link
CN (1) CN114930401A (en)
WO (1) WO2022016311A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140586B (en) * 2022-01-29 2022-05-17 苏州工业园区测绘地理信息有限公司 Three-dimensional modeling method and device for indoor space and storage medium
CN114416764A (en) * 2022-02-24 2022-04-29 上海商汤临港智能科技有限公司 Map updating method, device, equipment and storage medium
CN114842211B (en) * 2022-03-28 2024-07-23 高德软件有限公司 Angular point extraction, precision evaluation and carrier positioning method, equipment, medium and product
CN114646936A (en) * 2022-03-30 2022-06-21 北京洛必德科技有限公司 Point cloud map construction method and device and electronic equipment
CN114754762A (en) * 2022-04-14 2022-07-15 中国第一汽车股份有限公司 Map processing method and device
CN115147612B (en) * 2022-07-28 2024-03-29 苏州轻棹科技有限公司 Processing method for estimating vehicle size in real time based on accumulated point cloud
CN115628734B (en) * 2022-08-31 2024-04-30 白犀牛智达(北京)科技有限公司 Maintenance system of point cloud map
CN116543322A (en) * 2023-05-17 2023-08-04 深圳市保臻社区服务科技有限公司 Intelligent property routing inspection method based on community potential safety hazards
CN116883584B (en) * 2023-05-29 2024-03-26 东莞市捷圣智能科技有限公司 Track generation method and device based on digital-analog, electronic equipment and storage medium
CN117152399A (en) * 2023-10-30 2023-12-01 长沙能川信息科技有限公司 Model making method, device, equipment and storage medium based on transformer substation
CN117274351A (en) * 2023-11-02 2023-12-22 华东师范大学 Semantic-containing three-dimensional reconstruction method for multi-scale feature pyramid
CN117870715B (en) * 2024-03-13 2024-05-31 上海鉴智其迹科技有限公司 Map switching method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10223829B2 (en) * 2016-12-01 2019-03-05 Here Global B.V. Method and apparatus for generating a cleaned object model for an object in a mapping database
CN108205566B (en) * 2016-12-19 2021-09-28 北京四维图新科技股份有限公司 Method and device for managing point cloud based on track and navigation equipment
CN108345822B (en) * 2017-01-22 2022-02-01 腾讯科技(深圳)有限公司 Point cloud data processing method and device
CN109285220B (en) * 2018-08-30 2022-11-15 阿波罗智能技术(北京)有限公司 Three-dimensional scene map generation method, device, equipment and storage medium
CN110009727B (en) * 2019-03-08 2023-04-18 深圳大学 Automatic reconstruction method and system for indoor three-dimensional model with structural semantics

Also Published As

Publication number Publication date
WO2022016311A1 (en) 2022-01-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination