Detailed Description
The technical solutions of the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It will be understood that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
The embodiment of the invention provides a method for processing a point cloud. The method can be applied to vehicles, such as unmanned vehicles and vehicles equipped with advanced driver assistance systems (ADAS). It can be appreciated that the method can also be applied to an unmanned aerial vehicle, for example, one carrying a detection device for acquiring point cloud data. The method can further be applied to real-time three-dimensional ground reconstruction. Ground reconstruction matters because the point cloud obtained by laser radar scanning consists mostly of ground points, and these ground points affect the subsequent classification, identification, and tracking of obstacle point clouds. For example, in a typical application scene, the area in front of a vehicle includes a ground area, other vehicles, buildings, trees, fences, pedestrians, and the like. The bottoms of the wheels of the vehicle in front are in contact with the ground; in other embodiments, there may also be traffic signs and other objects in the front area, whose bottoms likewise contact the ground. Therefore, when identifying objects such as vehicles and traffic signs ahead, because a single frame of laser point cloud is sparse, existing methods for reconstructing a three-dimensional scene from laser point clouds need to accumulate multiple frames of point clouds over a period of time and fuse them in time sequence in order to reconstruct the scene with sufficient quality.
However, in an automatic driving system, the vehicle-mounted laser radar moves with the vehicle. Owing to vehicle positioning error, the accumulated multi-frame point clouds exhibit, after fusion, large jitter of the same surface along the Z axis, so the reconstruction accuracy is not ideal: ground points at the bottom of a vehicle ahead and/or at the bottom of a traffic sign are easily misidentified as three-dimensional points of that vehicle or sign, or bottom points of the vehicle and/or sign are missed. Therefore, when identifying vehicles, traffic signs, buildings, trees, fences, pedestrians, and the like in a three-dimensional point cloud, the ground point cloud must first be identified and the ground points filtered out. However, existing methods for identifying the ground point cloud have low accuracy, so errors exist in the identification, which in turn causes false detection or missed detection of obstacles, particularly short obstacles. The processing method provided by the embodiment of the invention can correct the point cloud, reduce the negative influence of multi-frame accumulation, and thereby obtain a better result.
The embodiment of the invention provides a processing method of point cloud. Fig. 1 is a flowchart of a processing method of a point cloud according to an embodiment of the present invention. As shown in fig. 1, the method in this embodiment may include:
Step S101, acquiring multi-frame three-dimensional point clouds containing a target area.
In the embodiment of the invention, the multi-frame three-dimensional point cloud is in a local coordinate system.
In an alternative embodiment, the multi-frame three-dimensional point cloud including the target area is obtained directly by obtaining the multi-frame three-dimensional point cloud under the local coordinate system. The local coordinate system is a coordinate system established with a carrier on which a detection device for detecting a multi-frame three-dimensional point cloud is mounted as an origin, for example, a coordinate system established with a vehicle as an origin. The carrier may be a vehicle or an unmanned aerial vehicle, which is not particularly limited in the present invention.
In another alternative embodiment, acquiring a multi-frame three-dimensional point cloud including a target area includes: acquiring multi-frame three-dimensional point clouds containing a target area under a coordinate system of detection equipment; and converting the three-dimensional point cloud detected by the detection equipment into a local coordinate system according to the conversion relation between the detection equipment coordinate system and the local coordinate system. Optionally, acquiring a multi-frame three-dimensional point cloud including a target area under a coordinate system of the detection device includes: and acquiring a three-dimensional point cloud containing a target area around the carrier detected by the detection equipment carried on the carrier.
In particular, as shown in fig. 2, a detection device 22 is provided on the vehicle 21, which detection device 22 may in particular be a binocular stereo camera, a TOF camera and/or a lidar. For example, during traveling of the vehicle 21, the traveling direction of the vehicle 21 is the direction indicated by the arrow in fig. 2, and the detection device 22 detects the three-dimensional point cloud of the surrounding environmental information of the vehicle 21 in real time. The detection device 22 is exemplified by a laser radar, and when a beam of laser light emitted by the laser radar irradiates on the surface of an object, the surface of the object will reflect the beam of laser light, and the laser radar can determine information such as the azimuth, the distance and the like of the object relative to the laser radar according to the laser light reflected by the surface of the object. If the laser beam emitted by the laser radar is scanned according to a certain track, for example, 360-degree rotation scanning, a large number of laser points are obtained, so that laser point cloud data of the object, that is, three-dimensional point cloud, can be formed.
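The formation of a laser point from a single lidar return described above can be sketched in code. The following is a minimal illustration, assuming a hypothetical `polar_to_cartesian` helper and a simple single-ring 360-degree scan; the names and scan pattern are illustrative, not part of the claimed embodiment:

```python
import math

def polar_to_cartesian(r, azimuth_deg, elevation_deg):
    """Convert one lidar return (range r in metres, azimuth and
    elevation in degrees) to a Cartesian point (x, y, z) in the
    sensor's own coordinate system."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A full 360-degree sweep yields one frame of the point cloud;
# here a coarse 45-degree step is used for brevity.
frame = [polar_to_cartesian(10.0, a, 0.0) for a in range(0, 360, 45)]
```

Accumulating many such returns along the scan trajectory produces the laser point cloud data, that is, the three-dimensional point cloud.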
The three-dimensional point cloud acquired in step S101 is continuous N-frame sparse point cloud data accumulated in the current time window.
Alternatively, the target area may be an object having a flat surface. The embodiment of the invention is described by taking a ground area as an example, but the invention is not limited thereto; the target area may also be an object such as a wall surface or a desktop. The method of the embodiment of the invention is equally applicable to identifying such flat-surfaced objects.
And step S102, preprocessing multi-frame three-dimensional point clouds.
Because the multi-frame three-dimensional point cloud includes the point cloud or noise point of the non-target area, the multi-frame three-dimensional point cloud needs to be preprocessed to filter the point cloud or noise point of the non-target area.
Optionally, preprocessing the multi-frame three-dimensional point cloud includes: and removing noise points in the multi-frame three-dimensional point cloud, wherein the removed noise points refer to three-dimensional points which do not belong to the target area.
Step S103, determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model.
Specifically, the preprocessed multi-frame three-dimensional point cloud is input into a preset correction model, and the preset correction model outputs a height value correction parameter of the multi-frame three-dimensional point cloud.
And step S104, correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
In the present embodiment, it is assumed that the three-dimensional coordinates of a certain three-dimensional point in the three-dimensional point cloud are (x_i, y_i, z_i), where x_i, y_i, and z_i respectively represent the coordinate values of the three-dimensional point along the X, Y, and Z directions of the local coordinate system. The height value refers to the coordinate value of the three-dimensional point along the Z direction of the local coordinate system, that is, z_i.
Specifically, because misidentification between the ground area and other objects in a laser-radar-scanned three-dimensional point cloud is mainly caused by height value errors of the ground area, correcting the height values of the multi-frame three-dimensional point cloud with the height value correction parameters corrects the identification of the ground area, improves the identification precision of the ground, and enables three-dimensional reconstruction of the ground. Continuing with the typical application scenario above, the area in front of the vehicle 21 includes a ground area, other vehicles, buildings, trees, fences, pedestrians, and the like. As shown in fig. 2, the bottoms of the wheels of the front vehicle 23 are in contact with the ground; in other embodiments there may also be objects such as traffic signs in front of the vehicle 21, whose bottoms likewise contact the ground. Hence, when identifying objects such as the front vehicle 23 or a traffic sign, if the height value of the ground area is not accurate enough, ground points at the bottom of the front vehicle 23 and/or at the bottom of a traffic sign are easily misidentified as three-dimensional points of the vehicle or sign. After the ground area is corrected by the height value correction parameter, the ground points at the bottom of the front vehicle 23 and/or at the bottom of the traffic sign are no longer misidentified as three-dimensional points of the non-ground area, that is, as three-dimensional points of the front vehicle 23 or the traffic sign.
In the embodiment, a multi-frame three-dimensional point cloud containing a target area is obtained; preprocessing multi-frame three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. Because the correction model can determine the height value correction parameters for correcting the height values of the multi-frame three-dimensional point cloud, the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point cloud are corrected according to the height value correction parameters.
The embodiment of the invention provides a processing method of point cloud. Fig. 3 is a flowchart of a processing method of a point cloud according to another embodiment of the present invention. As shown in fig. 3, on the basis of the embodiment shown in fig. 1, the method in this embodiment preprocesses the multi-frame three-dimensional point cloud by projecting the three-dimensional point cloud obtained by laser radar scanning onto the XOY plane of a world coordinate system, and then determining, according to the height range of the three-dimensional points mapped into each grid of the XOY plane, whether the points in the grid belong to the ground area. The method includes the following steps:
Step S301, determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the determined height map comprises a plurality of grids.
Optionally, determining the height map according to the height values of the multi-frame three-dimensional point cloud includes: determining a target plane under a world coordinate system; projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relation between the local coordinate system and the world coordinate system; and determining the height map according to the height values of the multi-frame three-dimensional point cloud projected in the target plane. Specifically, taking a right-handed coordinate system with the Z axis pointing vertically downward as the world coordinate system, the target plane may be the XOY plane of the world coordinate system, divided into a plurality of square grids of uniform size. Similarly, a local coordinate system with a vertically downward Z axis is established with the vehicle as the origin, so that the X, Y, and Z axes of the local coordinate system are respectively aligned with the X, Y, and Z axes of the world coordinate system. If n frames of sparse point clouds need to be accumulated to reconstruct the ground, a height map of the point cloud can be obtained by projecting the accumulated n frames of point clouds onto the XOY plane of the world coordinate system.
Specifically, according to the conversion relationship between the local coordinate system and the world coordinate system, each three-dimensional point in the three-dimensional point cloud under the local coordinate system is projected into the world coordinate system, for example, as follows: let point j denote a three-dimensional point in the three-dimensional point cloud, let its position in the local coordinate system be denoted P_j^L, and let its position after conversion into the world coordinate system be denoted P_j^W. Let R denote the rotation of the conversion relation between the local coordinate system and the world coordinate system, and let t denote the three-dimensional position of the laser radar in the world coordinate system, that is, the translation vector. Then the formula P_j^W = R · P_j^L + t gives the position of point j in the world coordinate system, from which the projected point of point j in the target plane can be calculated.
Similarly, the projection points in the target plane of the three-dimensional points other than point j can be determined, and the height map can then be determined from the height values of point j and the other three-dimensional points projected into the target plane.
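The rigid transformation from the local coordinate system to the world coordinate system described above can be sketched as follows; the rotation R and translation t used here are toy values for illustration only:

```python
import numpy as np

def local_to_world(p_local, R, t):
    """Project a point from the local (vehicle) coordinate system
    into the world coordinate system: p_world = R @ p_local + t."""
    return R @ p_local + t

# Identity rotation, lidar translated 2 m along the world X axis
# (illustrative values, not taken from the embodiment).
R = np.eye(3)
t = np.array([2.0, 0.0, 0.0])
p_world = local_to_world(np.array([1.0, 0.0, 0.5]), R, t)  # → [3.0, 0.0, 0.5]
```

The projected point on the XOY target plane is then simply the (x, y) part of `p_world`, while its z component supplies the height value for the height map.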
Step S302, determining a rough target area in the height map according to a preset target area height value.
In some embodiments, the preset target area height value may be a preset ground area height value. A preliminary ground area height value may be estimated from the height of the vehicle in the local coordinate system: if the maximum height of the vehicle is z_1 and the overall height of the vehicle is 1.5 m, the preliminary ground area height value is z_1 − 1.5, and according to this result the approximate ground area can be determined in the height map. The target area determined here is a rough grid range divided from the height map; it is not accurate and may include three-dimensional points of other objects, so these points need to be further filtered out by subsequent processing.
Step S303, calculating the difference between the maximum height value and the minimum height value in the same grid in the grid where the approximate target area is located.
Assume that w three-dimensional points are mapped into a certain grid of the height map after projection, and that among the height values of these w points the maximum is w_h and the minimum is w_l. The difference between the maximum and minimum height values in the grid is then w_h − w_l.
Step S304, determining a grid with a difference value lower than a difference value threshold and a distance between the grid and a preset target area height value smaller than a preset distance.
Assuming that w_h − w_l is below the difference threshold and that the distance between the grid's height value and the preset ground area height value is less than the preset distance, the grid corresponding to these three-dimensional points is marked. For specific marking methods, reference may be made to the prior art, for example marking with different colors; the invention is not particularly limited herein.
Step S305, removing the three-dimensional point cloud outside the grids whose difference value is lower than the difference threshold and whose distance from the preset target area height value is smaller than the preset distance.
After the grids whose height difference w_h − w_l is below the difference threshold and whose distance from the preset target area height value is less than the preset distance have been marked through the above steps, the unmarked grids within the approximate target area can be removed. The points in the unmarked grids can be regarded as non-ground point clouds or noise points. This completes the removal of the three-dimensional point cloud outside the qualifying grids and realizes a preliminary identification of the target area. At this time, the identified target area still needs to be further corrected to improve the identification accuracy.
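Steps S301 to S305 above amount to a grid-based ground filter. The sketch below illustrates the idea under assumed parameter values (the cell size, difference threshold, preliminary ground height, and preset distance are all placeholders, not values taken from the embodiment):

```python
import math
from collections import defaultdict

def filter_ground_grids(points, cell=0.2, diff_threshold=0.1,
                        ground_height=-1.5, max_distance=0.2):
    """Keep only points in grids whose height spread (max - min) is
    below diff_threshold and whose height is close to the preset
    ground height value. All thresholds are illustrative."""
    grids = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        grids[key].append((x, y, z))
    kept = []
    for cell_points in grids.values():
        zs = [p[2] for p in cell_points]
        flat = (max(zs) - min(zs)) < diff_threshold       # step S303/S304
        near_ground = abs(min(zs) - ground_height) < max_distance
        if flat and near_ground:                          # marked grid
            kept.extend(cell_points)                      # step S305 keeps it
    return kept
```

Points falling in unmarked grids (large height spread, or far from the preset ground height) are treated as non-ground points or noise and discarded.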
The embodiment of the invention provides a processing method of point cloud. Fig. 4 is a flowchart of a processing method of a point cloud according to another embodiment of the present invention. As shown in fig. 4, on the basis of the above embodiment, projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system may include:
Step S401, dividing the target plane into a plurality of grids of equal size, each grid having a grid number.
And step S402, calculating a grid number corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane according to the conversion relation between the local coordinate system and the world coordinate system.
For example, the XOY plane in the local coordinate system is divided into a plurality of grids according to squares of 0.2 × 0.2 m, and the grids are numbered to obtain grid numbers. Similarly, the XOY plane in the world coordinate system is also divided according to squares of 0.2 × 0.2 m and numbered. From a grid number and the 0.2 × 0.2 m size of each grid, the x-axis and y-axis coordinates corresponding to the grid can be obtained. These coordinates are then converted into the world coordinate system according to the conversion relation between the local coordinate system and the world coordinate system, yielding x-axis and y-axis coordinates in the world coordinate system, from which the grid in the world coordinate system corresponding to a given grid in the local coordinate system can be obtained.
Step S403, calculating a corresponding height value of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system.
Similarly, the height value corresponding to the multi-frame three-dimensional point cloud in the target plane under the local coordinate system can also be obtained according to the above description of the example of step S402.
There is no fixed execution order between step S402 and step S403: step S403 may be executed before step S402, or the two steps may be executed in parallel.
Step S404, determining a height map according to the grid number corresponding to the multi-frame three-dimensional point cloud in the target plane and the height value corresponding to the multi-frame three-dimensional point cloud in the target plane.
After the grid numbers corresponding to the multi-frame three-dimensional point cloud under the local coordinate system in the target plane and the corresponding height values have been obtained through the calculations of steps S401 to S403, the grid numbers and the height values can be associated with each other, realizing the mapping of the three-dimensional points from the local coordinate system to the world coordinate system and thereby obtaining the height map.
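The grid-numbering and height-map construction of steps S401 to S404 can be illustrated as follows; the row-major numbering scheme, column count, and 0.2 m cell size follow the example above, while `grid_number` and `build_height_map` are hypothetical helper names:

```python
import math

def grid_number(x, y, cell=0.2, cols=500):
    """Map planar coordinates to a single grid number on the XOY
    plane, assuming row-major numbering over `cols` columns (the
    layout and column count are illustrative)."""
    col = math.floor(x / cell)
    row = math.floor(y / cell)
    return row * cols + col

def build_height_map(points_world):
    """Associate each projected point's height value (its z
    coordinate) with its grid number; the resulting dict is a
    simple height-map representation."""
    height_map = {}
    for x, y, z in points_world:
        height_map.setdefault(grid_number(x, y), []).append(z)
    return height_map
```

Each entry of the returned dict holds the accumulated height values of one grid, which is exactly what the per-grid maximum/minimum computation of step S303 operates on.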
The embodiment of the invention provides a processing method of point cloud. Fig. 5 is a flowchart of a processing method of a point cloud according to another embodiment of the present invention. As shown in fig. 5, on the basis of the foregoing embodiment, if the preset correction model includes an optimization solution model, determining, according to the preprocessed multi-frame three-dimensional point cloud and the preset correction model, a correction parameter of a height value of the multi-frame three-dimensional point cloud may include:
and step S501, inputting the preprocessed three-dimensional point cloud into an optimization solving model.
Optionally, the function equation of the optimization solution model is specifically as follows:

min_{A,B,C} Σ_s Σ_{k=1}^{K} ( z_{i_k j_k} + d_{i_k j_k} − z̄_s )²   (1)

wherein i represents the image frame number corresponding to the three-dimensional point cloud; j represents the number of the three-dimensional point in the three-dimensional point cloud image; P_ij = (x_ij, y_ij, z_ij) represents the three-dimensional coordinate values of the j-th three-dimensional point in the i-th three-dimensional point cloud image; m represents the total number of three-dimensional points in the i-th three-dimensional point cloud image; n represents the total number of three-dimensional point cloud images, that is, the total number of accumulated frames; d_ij represents the height value correction amount of the j-th three-dimensional point in the i-th three-dimensional point cloud image, expressed as d_ij = a_i · x_ij + b_i · y_ij + c_i, wherein a_i denotes a first correction coefficient, b_i denotes a second correction coefficient, and c_i denotes a third correction coefficient; s represents the number of the grid in the height map, and z̄_s represents the mean of the corrected height values of the three-dimensional points accumulated at the s-th grid of the height map, expressed as z̄_s = (1/K) Σ_{k=1}^{K} (z_{i_k j_k} + d_{i_k j_k}), wherein K represents the total number of three-dimensional points accumulated in the grid whose difference is below the difference threshold and whose distance from the ground area is less than the preset distance, and i_k represents the image frame number of the three-dimensional point cloud to which the k-th point belongs; A represents the first correction coefficients of the multi-frame three-dimensional point cloud, B represents the second correction coefficients, and C represents the third correction coefficients, A = [a_1, ..., a_i, ..., a_n]^T, B = [b_1, ..., b_i, ..., b_n]^T, C = [c_1, ..., c_i, ..., c_n]^T.
And step S502, solving an optimization solving model by adopting a linear least square method to obtain a correction coefficient.
Specifically, after the three-dimensional points P_ij are substituted into equation (1), equation (1) is solved by the linear least square method to obtain the correction coefficients (a_i, b_i, c_i) that minimize equation (1); (a_i, b_i, c_i) are the correction coefficients for correcting the i-th frame image.
Similarly, after substituting the other three-dimensional points into the above formula (1), the correction coefficients for correcting the other frame images can be obtained. The correction coefficients of all frame images are A = [a_1, ..., a_i, ..., a_n]^T, B = [b_1, ..., b_i, ..., b_n]^T, C = [c_1, ..., c_i, ..., c_n]^T.
Alternatively, all three-dimensional points of all frame images can be input into the above formula (1), a linear equation set is established, and correction coefficients of all frame images can be obtained simultaneously by solving the linear equation set in parallel. The parallel computing can improve the computing efficiency and well meet the real-time requirement of the vehicle-mounted system.
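The linear least-squares solution of equation (1) can be sketched as follows. This is a simplified reading under stated assumptions: the per-frame plane correction d = a_i·x + b_i·y + c_i and the within-grid mean-centred residual follow the description above, while the function name, the input format, and the use of a single stacked linear system are illustrative:

```python
import numpy as np
from collections import defaultdict

def solve_corrections(points, n_frames):
    """Solve per-frame plane corrections d = a_i*x + b_i*y + c_i by
    linear least squares, minimising the spread of corrected heights
    within each grid. `points` is a list of (frame_i, grid_s, x, y, z)."""
    n_pts = len(points)
    design = np.zeros((n_pts, 3 * n_frames))  # row k maps to (a_i, b_i, c_i)
    zs = np.zeros(n_pts)
    by_grid = defaultdict(list)
    for idx, (i, s, x, y, z) in enumerate(points):
        design[idx, 3 * i:3 * i + 3] = [x, y, 1.0]
        zs[idx] = z
        by_grid[s].append(idx)
    # Residual of point k: (z_k + d_k) minus its grid's mean corrected height.
    M = np.zeros_like(design)
    rhs = np.zeros(n_pts)
    row = 0
    for idxs in by_grid.values():
        mean_design = design[idxs].mean(axis=0)
        mean_z = zs[idxs].mean()
        for idx in idxs:
            M[row] = design[idx] - mean_design
            rhs[row] = mean_z - zs[idx]
            row += 1
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return sol.reshape(n_frames, 3)  # one (a_i, b_i, c_i) row per frame
```

Because all residuals are stacked into one linear system, the coefficients of all frames are obtained in a single solve, consistent with the parallel formulation described above.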
Step S503, determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient. Determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficients includes: calculating the height value correction parameter of each frame of three-dimensional point cloud according to the first, second, and third correction coefficients of that frame and the three-dimensional coordinate values of that frame. Specifically, the height value correction parameter can be calculated according to the following function equation:

d = a_i · x_ij + b_i · y_ij + c_i   (2)
wherein a_i, b_i, and c_i are respectively the first, second, and third correction coefficients of the i-th frame point cloud image; (a_i, b_i, c_i) are the correction coefficients for correcting the i-th frame image; x_ij and y_ij are the x-axis and y-axis coordinate values of the j-th three-dimensional point in the i-th frame image; and d represents the height value correction parameter for correcting the three-dimensional points in the i-th frame image.
After substituting (a_i, b_i, c_i) and the three-dimensional coordinate values of the points into formula (2), the height value correction parameter d for correcting all three-dimensional points in the i-th frame image can be obtained.
Optionally, after the height value correction parameter d for correcting all three-dimensional points in the i-th frame image is obtained, the height values of all three-dimensional points in the i-th frame image can be corrected according to d. For example, assume that the coordinate values of the j-th three-dimensional point in the i-th frame image before correction are (x_ij, y_ij, z_ij); the coordinate values of the j-th three-dimensional point after correction are then (x_ij, y_ij, z_ij + d).
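Applying the height value correction of formula (2) to one frame only changes each point's z value, as the example below illustrates; `correct_frame` and the coefficient values are hypothetical:

```python
def correct_frame(points, a_i, b_i, c_i):
    """Apply the height correction d = a_i*x + b_i*y + c_i from
    formula (2) to every point of one frame: x and y are unchanged,
    and d is added to each z value."""
    return [(x, y, z + a_i * x + b_i * y + c_i) for x, y, z in points]

# With a_i = b_i = 0, every z is simply shifted by c_i = -0.03
# (illustrative coefficient values).
frame = [(1.0, 2.0, -1.45), (3.0, 4.0, -1.52)]
corrected = correct_frame(frame, 0.0, 0.0, -0.03)
```

After correction, the heights of the ground points of all accumulated frames are pulled toward a common surface, reducing the Z-axis jitter introduced by multi-frame accumulation.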
Fig. 6 is an effect diagram before the correction of the ground point cloud.
Fig. 7 is an effect diagram of the ground point cloud corrected by the method according to the embodiment of the present invention.
As shown in fig. 6 and fig. 7, the area formed by the black dots is the ground area. It can be seen that the ground area identified in fig. 6 exhibits large jitter and a wide distribution along the Z axis, while the ground area identified in fig. 7 is distributed more smoothly and compactly, with a narrower spread along the Z axis. The ground area corrected by the method of the embodiment of the invention is therefore identified more accurately.
The embodiment of the invention provides a processing system of point cloud. Fig. 8 is a block diagram of a processing system for point cloud according to an embodiment of the present invention, and as shown in fig. 8, a processing system 80 for point cloud includes a detecting device 81, a memory 82, and a processor 83. Wherein the detection device 81 is used for detecting multi-frame three-dimensional point clouds containing a target area; the memory 82 is used for storing program codes; the processor 83 invokes the program code, which when executed, is operative to: acquiring multi-frame three-dimensional point clouds containing target areas; preprocessing multi-frame three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The detection device 81 in the present embodiment may be the detection device 22 in fig. 2.
Optionally, when preprocessing the multi-frame three-dimensional point cloud, the processor 83 is specifically configured to: remove noise points in the multi-frame three-dimensional point cloud, wherein the noise points refer to three-dimensional points which do not belong to the target area.
Optionally, when removing noise points in the multi-frame three-dimensional point cloud, the processor 83 is specifically configured to: determine a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the height map comprises a plurality of grids; determine a rough target area in the height map according to a preset target area height value; calculate the difference between the maximum height value and the minimum height value within each grid in which the approximate target area is located; determine the grids whose difference is lower than the difference threshold and whose distance from the preset target area height value is smaller than the preset distance; and remove the three-dimensional point cloud outside those grids.
Optionally, when acquiring the multi-frame three-dimensional point clouds, the processor 83 is specifically configured to: acquire the multi-frame three-dimensional point clouds in a local coordinate system, where the local coordinate system takes as its origin a carrier carrying the detection device that detects the multi-frame three-dimensional point clouds. When determining the height map according to the height values of the multi-frame three-dimensional point clouds, the processor 83 is specifically configured to: determine a target plane in a world coordinate system; project the multi-frame three-dimensional point clouds in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and determine the height map according to the height values of the multi-frame three-dimensional point clouds projected onto the target plane.
Optionally, when projecting the multi-frame three-dimensional point clouds in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system, the processor 83 is specifically configured to: divide the target plane into a plurality of grids of equal size, each grid having a grid number; calculate, according to the conversion relationship between the local coordinate system and the world coordinate system, the grid number in the target plane corresponding to each point of the multi-frame three-dimensional point clouds in the local coordinate system; calculate, according to the same conversion relationship, the height value in the target plane corresponding to each point; and determine the height map according to the corresponding grid numbers and height values.
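As one possible sketch of this projection, points can be transformed into the world frame with a homogeneous transform and then binned into numbered cells on the target plane. The 4x4 transform convention, the grid extent, and the row-major numbering scheme are assumptions for illustration.

```python
import numpy as np

def build_height_map(points_local, T_local_to_world, extent=10.0, cell=0.5):
    """Return {grid_number: [heights]} for points projected onto the target
    plane; the plane covers [-extent, extent) in world x and y."""
    # Apply the 4x4 homogeneous transform: local frame -> world frame.
    homo = np.column_stack([points_local, np.ones(len(points_local))])
    world = (T_local_to_world @ homo.T).T[:, :3]
    ncols = int(2 * extent / cell)
    # Shift coordinates so grid indices start at zero.
    col = np.floor((world[:, 0] + extent) / cell).astype(int)
    row = np.floor((world[:, 1] + extent) / cell).astype(int)
    height_map = {}
    for r, c, z in zip(row, col, world[:, 2]):
        if 0 <= r < ncols and 0 <= c < ncols:
            number = int(r * ncols + c)          # row-major grid number
            height_map.setdefault(number, []).append(z)
    return height_map
```

With an identity transform, a point at the local origin with height 1.2 lands in the center cell, and its height value is recorded under that cell's grid number.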
Optionally, the preset correction model includes an optimization solution model. When determining the height value correction parameter of the multi-frame three-dimensional point clouds according to the preprocessed multi-frame three-dimensional point clouds and the preset correction model, the processor 83 is specifically configured to: input the preprocessed three-dimensional point clouds into the optimization solution model; solve the optimization solution model by a linear least squares method to obtain correction coefficients; and determine the height value correction parameter of the multi-frame three-dimensional point clouds according to the correction coefficients.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient. When determining the height value correction parameter of the multi-frame three-dimensional point clouds according to the correction coefficients, the processor 83 is specifically configured to: calculate the height value correction parameter of each frame of the three-dimensional point cloud according to the first, second, and third correction coefficients of the multi-frame three-dimensional point clouds and the three-dimensional coordinate values of that frame of the three-dimensional point cloud.
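One way to read the two embodiments above is as a plane fit: solve z = a*x + b*y + c by linear least squares over the preprocessed points, take (a, b, c) as the three correction coefficients, and compute each point's height value correction parameter from its coordinates. The sketch below is an illustration under that assumption; the patent does not fix the exact form of the model.

```python
import numpy as np

def solve_correction(points):
    """Linear least squares: returns (a, b, c) of the best-fit plane
    z = a*x + b*y + c through the given (x, y, z) points."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def height_correction(point, coeffs):
    """Correction parameter for one 3D point from (a, b, c) and (x, y, z)."""
    a, b, c = coeffs
    return a * point[0] + b * point[1] + c

def correct_frame(points, coeffs):
    # Subtract each point's correction so the fitted plane maps to z = 0.
    out = points.copy()
    out[:, 2] -= np.array([height_correction(p, coeffs) for p in points])
    return out
```

Points that lie exactly on a plane recover its coefficients, and their corrected heights collapse to zero, which is the behavior one would want for flattening a tilted ground estimate.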
Optionally, when acquiring the multi-frame three-dimensional point clouds in the local coordinate system, the processor 83 is specifically configured to: acquire the multi-frame three-dimensional point clouds containing the target area detected by the detection device; and convert the multi-frame three-dimensional point clouds detected by the detection device into the local coordinate system according to the conversion relationship between the detection device coordinate system and the local coordinate system.
Optionally, the detection device includes at least one of the following: a binocular stereo camera, a time-of-flight (TOF) camera, and a lidar.
Optionally, the target area is a ground area.
The specific principle and implementation manner of the processing system for point cloud provided in the embodiment of the present invention are similar to those of the foregoing embodiment, and are not repeated here.
In this embodiment, multi-frame three-dimensional point clouds containing a target area are acquired; the multi-frame three-dimensional point clouds are preprocessed; a height value correction parameter of the multi-frame three-dimensional point clouds is determined according to the preprocessed multi-frame three-dimensional point clouds and a preset correction model; and the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter, so as to correct the identification of the target area. Because the correction model can determine the height value correction parameter used to correct the height values of the multi-frame three-dimensional point clouds, the recognition accuracy of the target area is improved after the height values are corrected according to the height value correction parameter.
An embodiment of the present invention provides a movable platform. Fig. 9 is a block diagram of a movable platform according to an embodiment of the present invention; this embodiment is based on the technical solution provided by the embodiment shown in fig. 8. As shown in fig. 9, the movable platform 90 includes: a fuselage 91, a power system 92, and a point cloud processing system 93. The point cloud processing system 93 in this embodiment may be the point cloud processing system 80 provided in the above embodiment.
The specific principle and implementation manner of the processing system for point cloud provided in the embodiment of the present invention are similar to those of the embodiment shown in fig. 8, and are not repeated here.
In this embodiment, multi-frame three-dimensional point clouds containing a target area are acquired; the multi-frame three-dimensional point clouds are preprocessed; a height value correction parameter of the multi-frame three-dimensional point clouds is determined according to the preprocessed multi-frame three-dimensional point clouds and a preset correction model; and the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter, so as to correct the identification of the target area. Because the correction model can determine the height value correction parameter used to correct the height values of the multi-frame three-dimensional point clouds, the recognition accuracy of the target area is improved after the height values are corrected according to the height value correction parameter.
In addition, this embodiment also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the point cloud processing method of the above embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working process of the above-described device may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.