CN111699410B - Processing method, equipment and computer readable storage medium of point cloud - Google Patents


Info

Publication number
CN111699410B
CN111699410B
Authority
CN
China
Prior art keywords
point cloud
frame
dimensional point
coordinate system
height value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201980012171.7A
Other languages
Chinese (zh)
Other versions
CN111699410A
Inventor
郑杨杨
刘晓洋
张晓炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
Shenzhen Zhuoyu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhuoyu Technology Co ltd filed Critical Shenzhen Zhuoyu Technology Co ltd
Publication of CN111699410A
Application granted
Publication of CN111699410B


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a point cloud processing method, device, and computer readable storage medium, wherein the method comprises the following steps: acquiring a multi-frame three-dimensional point cloud containing a target area; preprocessing the multi-frame three-dimensional point cloud; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height values of the multi-frame three-dimensional point cloud according to the height value correction parameter, so as to correct the identification of the target area. By correcting the multi-frame three-dimensional point cloud, the embodiment of the invention mitigates the surface blurring caused by timing differences between frames of sparse point clouds, improves the recognition accuracy of the target area, and reconstructs a high-quality three-dimensional scene.

Description

Processing method, equipment and computer readable storage medium of point cloud
Technical Field
The embodiment of the invention relates to the field of automatic driving, and in particular to a point cloud processing method, device, and computer readable storage medium.
Background
Lidar is one of the main sensors used in three-dimensional scene reconstruction. Based on the principle of light reflection, it generates sparse point clouds of a three-dimensional scene in real time, and the scene at the current position can then be reconstructed by fusing multiple frames of sparse point clouds.
Because a single frame of laser point cloud is generally sparse, existing methods for reconstructing a three-dimensional scene from laser point clouds must accumulate multiple frames of point clouds over a period of time and fuse them in time sequence in order to reconstruct a higher-quality scene. However, in an autonomous driving system the vehicle-mounted lidar moves with the vehicle; owing to vehicle positioning errors, the accumulated multi-frame point clouds show large jitter on what should be the same surface after fusion, so the recognition accuracy of the target area is low and the accuracy of the three-dimensional scene reconstruction is unsatisfactory. In particular, in three-dimensional scenes reconstructed from the ground, short and small obstacles may be missed or falsely detected.
Disclosure of Invention
The embodiment of the invention provides a point cloud processing method, device, and computer readable storage medium, which are used to improve the identification precision of a target area and reconstruct a high-quality three-dimensional scene.
A first aspect of an embodiment of the present invention provides a method for processing a point cloud, including:
acquiring multi-frame three-dimensional point clouds containing target areas;
preprocessing the multi-frame three-dimensional point cloud;
determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model;
and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
A second aspect of an embodiment of the present invention provides a processing system for a point cloud, including: a detection device, a memory, and a processor;
The detection equipment is used for detecting multi-frame three-dimensional point clouds containing a target area;
The memory is used for storing program codes; the processor invokes the program code, which when executed, is operable to:
acquiring multi-frame three-dimensional point clouds containing target areas;
preprocessing the multi-frame three-dimensional point cloud;
determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model;
and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
A third aspect of an embodiment of the present invention provides a movable platform, including: a body, a power system, and the point cloud processing system according to the second aspect.
A fourth aspect of an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program for execution by a processor to implement the method of the first aspect.
This embodiment provides a point cloud processing method, device, and computer readable storage medium. A multi-frame three-dimensional point cloud containing a target area is acquired; the multi-frame three-dimensional point cloud is preprocessed; a height value correction parameter for the multi-frame three-dimensional point cloud is determined from the preprocessed point cloud and a preset correction model; and the height values of the multi-frame three-dimensional point cloud are corrected according to the height value correction parameter, so as to correct the identification of the target area. Because the correction model can determine the height value correction parameter used to correct the height values of the multi-frame three-dimensional point cloud, correcting the height values according to this parameter improves the recognition accuracy of the target area.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is a flowchart of a method for processing a point cloud according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present invention;
FIG. 3 is a flowchart of a method for processing a point cloud according to another embodiment of the present invention;
FIG. 4 is a flowchart of a method for processing a point cloud according to another embodiment of the present invention;
FIG. 5 is a flowchart of a method for processing a point cloud according to another embodiment of the present invention;
FIG. 6 is an effect diagram before correction of a ground point cloud;
FIG. 7 is an effect diagram after correction of a ground point cloud using the method of the present invention;
FIG. 8 is a block diagram of a processing system for point clouds according to an embodiment of the present invention;
Fig. 9 is a block diagram of a movable platform according to an embodiment of the present invention.
Reference numerals:
21: a vehicle; 22: a detection device; 23: a front vehicle;
80: a processing system of the point cloud; 81: a detection device; 82: a memory; 83: a processor;
90: a movable platform; 91: a body; 92: a power system; 93: a processing system for point clouds.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It will be understood that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
The embodiment of the invention provides a point cloud processing method. The method can be applied to vehicles, such as unmanned vehicles and vehicles equipped with Advanced Driver Assistance Systems (ADAS). It can be appreciated that the method can also be applied to unmanned aerial vehicles, for example a UAV carrying a detection device that acquires point cloud data. The method can further be applied to real-time three-dimensional ground reconstruction. Ground reconstruction matters because the point cloud obtained by lidar scanning contains mostly ground points, and these ground points affect the subsequent classification, identification, and tracking of obstacle point clouds. For example, in a typical application scenario, the area in front of a vehicle includes a ground area, other vehicles, buildings, trees, fences, pedestrians, and the like. The bottoms of the wheels of the vehicle in front are in contact with the ground; in other embodiments, the area in front of the vehicle may also contain objects such as traffic signs, whose bases likewise touch the ground. Therefore, when identifying objects such as the vehicle in front or traffic signs, because a single frame of laser point cloud is sparse, existing methods for reconstructing a three-dimensional scene from laser point clouds must accumulate multiple frames over a period of time and fuse them in time sequence to reconstruct a higher-quality scene.
However, in an autonomous driving system the vehicle-mounted lidar moves with the vehicle. Owing to vehicle positioning errors, the fused multi-frame point clouds show large jitter along the z axis on what should be the same surface, so the reconstruction accuracy is unsatisfactory: ground points at the bottom of the vehicle in front and/or at the base of a traffic sign are easily misidentified as three-dimensional points of the vehicle or sign, or conversely the bottom points of the vehicle or sign are missed. Therefore, when identifying vehicles, traffic signs, buildings, trees, fences, pedestrians, and the like in a three-dimensional point cloud, the ground point cloud must first be identified and the ground points filtered out. Existing methods for identifying the ground point cloud, however, have low accuracy, so errors in ground identification lead to false detection or missed detection of obstacles, especially short ones. The point cloud processing method provided by the embodiment of the invention can correct the point cloud, reduce the negative effect of multi-frame accumulation, and thus obtain a more satisfactory result.
The embodiment of the invention provides a processing method of point cloud. Fig. 1 is a flowchart of a processing method of a point cloud according to an embodiment of the present invention. As shown in fig. 1, the method in this embodiment may include:
Step S101, acquiring multi-frame three-dimensional point clouds containing a target area.
In the embodiment of the invention, the multi-frame three-dimensional point cloud is in a local coordinate system.
In an alternative embodiment, the multi-frame three-dimensional point cloud including the target area is obtained directly by obtaining the multi-frame three-dimensional point cloud under the local coordinate system. The local coordinate system is a coordinate system established with a carrier on which a detection device for detecting a multi-frame three-dimensional point cloud is mounted as an origin, for example, a coordinate system established with a vehicle as an origin. The carrier may be a vehicle or an unmanned aerial vehicle, which is not particularly limited in the present invention.
In another alternative embodiment, acquiring a multi-frame three-dimensional point cloud including a target area includes: acquiring multi-frame three-dimensional point clouds containing a target area under a coordinate system of detection equipment; and converting the three-dimensional point cloud detected by the detection equipment into a local coordinate system according to the conversion relation between the detection equipment coordinate system and the local coordinate system. Optionally, acquiring a multi-frame three-dimensional point cloud including a target area under a coordinate system of the detection device includes: and acquiring a three-dimensional point cloud containing a target area around the carrier detected by the detection equipment carried on the carrier.
In particular, as shown in fig. 2, a detection device 22 is provided on the vehicle 21, which detection device 22 may in particular be a binocular stereo camera, a TOF camera and/or a lidar. For example, during traveling of the vehicle 21, the traveling direction of the vehicle 21 is the direction indicated by the arrow in fig. 2, and the detection device 22 detects the three-dimensional point cloud of the surrounding environmental information of the vehicle 21 in real time. The detection device 22 is exemplified by a laser radar, and when a beam of laser light emitted by the laser radar irradiates on the surface of an object, the surface of the object will reflect the beam of laser light, and the laser radar can determine information such as the azimuth, the distance and the like of the object relative to the laser radar according to the laser light reflected by the surface of the object. If the laser beam emitted by the laser radar is scanned according to a certain track, for example, 360-degree rotation scanning, a large number of laser points are obtained, so that laser point cloud data of the object, that is, three-dimensional point cloud, can be formed.
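As a rough illustration of how the lidar's range-and-bearing returns become three-dimensional points, the conversion from spherical measurements to Cartesian coordinates in the sensor frame can be sketched as follows (the function name and angle convention are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def polar_to_cartesian(ranges, azimuths, elevations):
    """Convert lidar range/angle returns to 3-D points in the sensor frame.

    ranges, azimuths, elevations: 1-D arrays of equal length
    (distances in metres, angles in radians).
    Returns an (N, 3) array of (x, y, z) points.
    """
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=1)
```

Accumulating such per-beam points over one full rotation yields the single-frame point cloud described above.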
The three-dimensional point cloud acquired in step S101 is continuous N-frame sparse point cloud data accumulated in the current time window.
Optionally, the target area may be an object having a flat surface. The embodiments of the invention are described taking the ground area as an example, but the target area may also be an object such as a wall surface or a tabletop; the invention is not limited in this respect. The method of the embodiments is likewise applicable to identifying objects with flat surfaces such as walls or tabletops.
And step S102, preprocessing multi-frame three-dimensional point clouds.
Because the multi-frame three-dimensional point cloud includes points of non-target areas and noise points, it needs to be preprocessed to filter these out.
Optionally, preprocessing the multi-frame three-dimensional point cloud includes: and removing noise points in the multi-frame three-dimensional point cloud, wherein the removed noise points refer to three-dimensional points which do not belong to the target area.
Step S103, determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model.
Specifically, the preprocessed multi-frame three-dimensional point cloud is input into a preset correction model, and the preset correction model outputs a height value correction parameter of the multi-frame three-dimensional point cloud.
And step S104, correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
In the present embodiment, assume that a three-dimensional point in the three-dimensional point cloud has coordinates (x_i, y_i, z_i), where x_i, y_i, and z_i are the coordinate values of the point along the X, Y, and Z axes of the local coordinate system, respectively; the height value refers to the coordinate value of the three-dimensional point along the Z axis of the local coordinate system.
Specifically, because misidentification between the ground area and other objects in the lidar-scanned three-dimensional point cloud is mainly caused by errors in the height values of the ground area, correcting the height values of the multi-frame three-dimensional point cloud with the height value correction parameter corrects the identification of the ground area, improves the identification precision of the ground, and enables three-dimensional reconstruction of the ground. Continuing the typical application scenario above, the area in front of the vehicle 21 includes a ground area, other vehicles, buildings, trees, fences, pedestrians, and the like. As shown in fig. 2, the bottoms of the wheels of the front vehicle 23 of the vehicle 21 are in contact with the ground; in other embodiments there may also be objects such as traffic signs in front of the vehicle 21, whose bases likewise touch the ground. When identifying objects such as the front vehicle 23 or a traffic sign, if the height values of the ground area are not accurate enough, the ground points at the bottom of the front vehicle 23 and/or at the base of the traffic sign are easily misidentified as three-dimensional points of the front vehicle or the traffic sign. After the ground area has been corrected with the height value correction parameter, the points at the bottom of the front vehicle 23 and/or at the base of the traffic sign that do not belong to the ground can be correctly identified as three-dimensional points of the non-ground area, that is, of the front vehicle 23 or the traffic sign.
In the embodiment, a multi-frame three-dimensional point cloud containing a target area is obtained; preprocessing multi-frame three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. Because the correction model can determine the height value correction parameters for correcting the height values of the multi-frame three-dimensional point cloud, the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point cloud are corrected according to the height value correction parameters.
The embodiment of the invention provides a point cloud processing method. Fig. 3 is a flowchart of a processing method of a point cloud according to another embodiment of the present invention. As shown in fig. 3, based on the embodiment shown in fig. 1, the method in this embodiment preprocesses the multi-frame three-dimensional point cloud by projecting the point cloud obtained by lidar scanning onto the XOY plane of the world coordinate system, and then judging, from the height range of the three-dimensional points mapped into each grid of the XOY plane, whether the points in a grid belong to the ground area. The method includes the following steps:
step S301, determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the determined height map comprises a plurality of grids.
Optionally, determining the height map according to the height values of the multi-frame three-dimensional point cloud includes: determining a target plane in the world coordinate system; projecting the multi-frame three-dimensional point cloud from the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and determining the height map from the height values of the projected points. Specifically, taking a right-handed coordinate system with the Z axis pointing vertically downward as the world coordinate system, the target plane may be the XOY plane of the world coordinate system, divided into a plurality of square grids of uniform size. Similarly, a local coordinate system with a vertically downward Z axis is established with the vehicle as the origin, so that the X, Y, and Z axes of the local coordinate system are aligned with those of the world coordinate system. If n frames of sparse point clouds must be accumulated to reconstruct the ground, the height map of the point cloud is obtained by projecting the accumulated n frames onto the XOY plane of the world coordinate system.
Specifically, according to the conversion relationship between the local coordinate system and the world coordinate system, each three-dimensional point in the point cloud is projected from the local coordinate system into the world coordinate system. For example, let point j be a three-dimensional point in the point cloud; denote its position in the local coordinate system by p_l^j and its position after conversion into the world coordinate system by p_w^j. Let R be the conversion (rotation) from the local coordinate system to the world coordinate system, and let t, the translation vector, be the three-dimensional position of the lidar in the world coordinate system. The formula p_w^j = R * p_l^j + t then gives the position of point j in the world coordinate system, from which the projected point of point j in the target plane can be calculated.
Similarly, the projection points in the target plane of the three-dimensional points other than point j can be determined, and the height map is then determined from the height values of point j and the other three-dimensional points projected into the target plane.
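The rigid transformation p_w = R * p_l + t applied to every point of a frame can be written as a minimal sketch, assuming points are stored as rows of an (N, 3) array (names are illustrative):

```python
import numpy as np

def local_to_world(points_local, R, t):
    """Transform an (N, 3) array of points from the local coordinate
    system to the world coordinate system: p_w = R @ p_l + t.
    R is a 3x3 rotation matrix, t a length-3 translation vector."""
    return points_local @ R.T + np.asarray(t)
```

The projected point in the XOY target plane is then just the (x, y) part of each transformed row, with z kept as the height value.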
Step S302, determining a rough target area in the height map according to a preset target area height value.
In some embodiments, the preset target area height value may be a preset ground area height value. A preliminary ground height can be estimated from the height of the vehicle in the local coordinate system: if the maximum height of the vehicle is z_1 and the overall height of the vehicle is 1.5 m, the preliminary ground area height value is obtained as z_1 - 1.5, and from this result the approximate ground area can be determined in the height map. The target area determined here is only the rough grid range divided out of the height map; it is not accurate and may still include three-dimensional points of other objects, so these points must be filtered out in subsequent processing.
Step S303, for each grid within the approximate target area, calculating the difference between the maximum height value and the minimum height value in that grid.
Assume that w three-dimensional points are mapped into a certain grid of the height map after projection, and that among the height values of these w points the maximum is w_h and the minimum is w_l. The difference between the maximum and minimum height values in the grid is then obtained by calculating w_h - w_l.
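The per-grid height spread of step S303 can be sketched as follows; the cell size and function names are illustrative assumptions:

```python
import math

def grid_height_spread(points, cell=0.2):
    """For each occupied grid cell of the XOY plane, return the
    difference w_h - w_l between the largest and smallest height
    value (z) of the points falling into that cell.

    points: iterable of (x, y, z) tuples; cell: grid edge length in metres.
    """
    extremes = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        lo, hi = extremes.get(key, (z, z))
        extremes[key] = (min(lo, z), max(hi, z))
    return {key: hi - lo for key, (lo, hi) in extremes.items()}
```

A single pass over the accumulated frames suffices, since only the running minimum and maximum per cell are kept.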
Step S304, determining a grid with a difference value lower than a difference value threshold and a distance between the grid and a preset target area height value smaller than a preset distance.
Assuming that w_h - w_l is below the difference threshold and (w_h - w_l) - (z_1 - 0.5) is less than the preset distance, the grid corresponding to these three-dimensional points is marked. Specific marking methods can be found in the prior art, for example marking with different colors; the invention is not specifically limited here.
Step S305, removing the three-dimensional point clouds outside the grids whose difference value is below the difference threshold and whose distance from the preset target area height value is smaller than the preset distance.
After the grids satisfying both conditions have been marked in the preceding steps, the unmarked grids within the approximate target area can be removed; the points in unmarked grids can be regarded as non-ground points or noise points. This completes the removal of the three-dimensional point clouds outside the qualifying grids and achieves a preliminary identification of the target area. The identified target area then needs to be further corrected to improve its identification accuracy.
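The two-condition test of steps S304 and S305 can be sketched as follows; the threshold values, the ground estimate, and the use of each cell's minimum height as its representative height are illustrative assumptions, not values fixed by the patent:

```python
def select_ground_cells(cell_min_z, cell_spread, ground_z,
                        diff_threshold=0.15, dist_threshold=0.3):
    """Return the set of grid cells kept as rough ground.

    cell_min_z: dict cell -> representative height of the cell
    cell_spread: dict cell -> w_h - w_l for the cell
    ground_z: preliminary ground height estimate (e.g. z_1 - 1.5)

    A cell is kept (marked) when its internal height spread is small
    AND its representative height is close to the estimated ground
    level; all other cells are treated as non-ground or noise.
    """
    return {
        key for key, spread in cell_spread.items()
        if spread < diff_threshold
        and abs(cell_min_z[key] - ground_z) < dist_threshold
    }
```

Points falling outside the returned cells would then be removed before the correction model is applied.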
The embodiment of the invention provides a processing method of point cloud. Fig. 4 is a flowchart of a processing method of a point cloud according to another embodiment of the present invention. As shown in fig. 4, on the basis of the above embodiment, projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system may include:
in step S401, the target plane is divided into a plurality of grids with equal sizes, each grid having a grid number.
And step S402, calculating a grid number corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane according to the conversion relation between the local coordinate system and the world coordinate system.
For example, the XOY plane in the local coordinate system is divided into a plurality of grids according to 0.2 m x 0.2 m squares, and the grids are numbered to obtain grid numbers. Similarly, the XOY plane in the world coordinate system is divided into a plurality of grids according to 0.2 m x 0.2 m squares, and those grids are numbered as well. The x-axis and y-axis coordinates corresponding to a grid can be obtained from its grid number and the 0.2 m x 0.2 m grid size. These x-axis and y-axis coordinates are then converted into the world coordinate system according to the conversion relationship between the local coordinate system and the world coordinate system, yielding x-axis and y-axis coordinates in the world coordinate system, from which the grid in the world coordinate system corresponding to a given grid in the local coordinate system can be obtained.
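The grid-number conversion of step S402 might look like the following sketch; the rotation R, translation t, grid width, and row-major numbering scheme are assumptions made for illustration:

```python
import numpy as np

def grid_number_in_world(ix, iy, R, t, cell=0.2, grid_w=1000):
    """Map a local-frame grid (ix, iy) to its grid number in the world frame.
    R and t are the assumed local-to-world rotation and translation."""
    # x/y coordinates of the cell centre, recovered from the grid index and cell size
    local = np.array([(ix + 0.5) * cell, (iy + 0.5) * cell, 0.0])
    world = R @ local + t
    # re-quantize the world x/y coordinates into world-frame grid indices
    wx, wy = int(world[0] // cell), int(world[1] // cell)
    return wy * grid_w + wx  # row-major grid number
```

With an identity transform the local and world grid numbers coincide; a nonzero translation shifts the point into a different world-frame grid.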
Step S403, calculating a corresponding height value of a multi-frame three-dimensional point cloud in a target plane under a local coordinate system according to a conversion relation between the local coordinate system and a world coordinate system;
Similarly, the height value corresponding to the multi-frame three-dimensional point cloud in the target plane under the local coordinate system can also be obtained according to the above description of the example of step S402.
There is no fixed execution order between step S402 and step S403: step S403 may be executed before step S402, or the two steps may be executed in parallel.
Step S404, determining a height map according to the grid number corresponding to the multi-frame three-dimensional point cloud in the target plane and the height value corresponding to the multi-frame three-dimensional point cloud in the target plane.
After the grid numbers corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane and the corresponding height values have been obtained through the calculations in steps S401-S403, the grid numbers and the height values can be associated with one another, realizing the mapping of the three-dimensional points in the local coordinate system to the world coordinate system and thereby obtaining the height map.
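Step S404 then amounts to accumulating, per grid number, the height values of the projected points. A minimal sketch follows; the dictionary representation and grid width are assumptions:

```python
def build_height_map(points_world, cell=0.2, grid_w=1000):
    """Accumulate projected points into a height map: grid number -> height values."""
    height_map = {}
    for x, y, z in points_world:
        num = int(y // cell) * grid_w + int(x // cell)  # row-major grid number
        height_map.setdefault(num, []).append(z)
    return height_map
```

Each grid then holds the list of height values needed to compute the max-min difference used in the noise-removal steps above.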
The embodiment of the invention provides a processing method of point cloud. Fig. 5 is a flowchart of a processing method of a point cloud according to another embodiment of the present invention. As shown in fig. 5, on the basis of the foregoing embodiment, if the preset correction model includes an optimization solution model, determining, according to the preprocessed multi-frame three-dimensional point cloud and the preset correction model, a correction parameter of a height value of the multi-frame three-dimensional point cloud may include:
and step S501, inputting the preprocessed three-dimensional point cloud into an optimization solving model.
Optionally, the function equation of the optimization solution model is specifically as follows:

$$\min_{A,B,C}\ \sum_{i=1}^{n}\sum_{j=1}^{m}\left(z_{ij}+d_{ij}-\bar{z}_{s}\right)^{2}\qquad(1)$$

Wherein i represents the image frame number corresponding to the three-dimensional point cloud; j represents the number of the three-dimensional point in the three-dimensional point cloud image; $(x_{ij}, y_{ij}, z_{ij})$ represents the three-dimensional coordinate values of the j-th three-dimensional point in the i-th three-dimensional point cloud image; m represents the total number of three-dimensional points in the i-th three-dimensional point cloud image; n represents the total number of three-dimensional point cloud images, namely the total number of accumulated frames; $d_{ij}$ represents the height value correction amount of the j-th three-dimensional point in the i-th three-dimensional point cloud image, $d_{ij}=a_i x_{ij}+b_i y_{ij}+c_i$, wherein $a_i$ denotes a first correction coefficient, $b_i$ denotes a second correction coefficient, and $c_i$ denotes a third correction coefficient; s represents the number of the grid in the height map; $\bar{z}_s$ represents the mean of the corrected height values of the three-dimensional points accumulated at the s-th grid of the height map, $\bar{z}_s=\frac{1}{K}\sum_{k=1}^{K}\left(z_{i_k}+d_{i_k}\right)$, wherein K represents the total number of three-dimensional points accumulated in the grid (a grid whose difference is below the difference threshold and whose distance to the ground area is smaller than the preset distance), and $i_k$ represents the image frame number of the three-dimensional point cloud to which the k-th point belongs; A represents the first correction coefficients of the multi-frame three-dimensional point cloud, B represents the second correction coefficients, and C represents the third correction coefficients, with $A=[a_1,\ldots,a_i,\ldots,a_n]^T$, $B=[b_1,\ldots,b_i,\ldots,b_n]^T$, $C=[c_1,\ldots,c_i,\ldots,c_n]^T$.
And step S502, solving an optimization solving model by adopting a linear least square method to obtain a correction coefficient.
Specifically, after the three-dimensional points $(x_{ij}, y_{ij}, z_{ij})$ are input into equation (1), equation (1) is solved by the linear least square method to obtain the correction coefficients $(a_i, b_i, c_i)$ at which equation (1) attains its minimum value; $(a_i, b_i, c_i)$ are the correction coefficients for correcting the i-th frame image.
Similarly, after inputting the other three-dimensional points into the above equation (1), the correction coefficients for correcting the other frame images can be obtained; the correction coefficients of all frame images are $A=[a_1,\ldots,a_i,\ldots,a_n]^T$, $B=[b_1,\ldots,b_i,\ldots,b_n]^T$, $C=[c_1,\ldots,c_i,\ldots,c_n]^T$.
Alternatively, all three-dimensional points of all frame images can be input into the above equation (1) to establish a linear equation system, and the correction coefficients of all frame images can be obtained simultaneously by solving the linear equation system in parallel. Parallel computation improves computing efficiency and readily meets the real-time requirements of the vehicle-mounted system.
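Step S502 can be illustrated with NumPy's linear least-squares solver. The sketch below solves a simplified per-frame version of equation (1), treating the per-grid target heights as fixed rather than coupled across frames; the function and variable names are assumptions:

```python
import numpy as np

def solve_frame_coefficients(points, target_heights):
    """Linear least squares for one frame's (a_i, b_i, c_i): find the planar
    correction a*x + b*y + c that moves each point's height toward its
    grid's target height."""
    # design matrix: one row [x, y, 1] per three-dimensional point
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    residual = target_heights - points[:, 2]  # correction each point needs
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return coeffs  # (a_i, b_i, c_i)
```

In the full model of equation (1), the target heights $\bar{z}_s$ themselves depend on the corrections of all frames, so the coefficients of all frames are solved jointly as one linear system.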
Step S503, determining the height value correction parameters of the multi-frame three-dimensional point cloud according to the correction coefficients.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient. Determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficients includes: calculating, for each frame, the height value correction parameter of that frame's three-dimensional point cloud according to the frame's first, second, and third correction coefficients and the three-dimensional coordinate values of the frame's three-dimensional point cloud. Specifically, the height value correction parameter can be calculated according to the following function equation:

$$d=a_i x_{ij}+b_i y_{ij}+c_i\qquad(2)$$

Wherein $a_i, b_i, c_i$ are respectively the first correction coefficient, the second correction coefficient, and the third correction coefficient of the i-th frame point cloud image, i.e. $(a_i, b_i, c_i)$ is the correction coefficient for correcting the i-th frame image; $(x_{ij}, y_{ij})$ are the coordinate values of a three-dimensional point in the i-th frame; and d represents the height value correction parameter for correcting the three-dimensional points in the i-th frame image.

After substituting $(a_i, b_i, c_i)$ and $(x_{ij}, y_{ij})$ into equation (2), the height value correction parameter d for correcting each three-dimensional point in the i-th frame image is obtained.
Optionally, after the height value correction parameter d for correcting all three-dimensional points in the i-th frame image is obtained by solving, the height values of all three-dimensional points in the i-th frame image can be corrected according to d. For example, assume the coordinate value of the j-th three-dimensional point in the i-th frame image before correction is $(x_{ij}, y_{ij}, z_{ij})$; the coordinate value of the j-th three-dimensional point in the corrected i-th frame image is then $(x_{ij}, y_{ij}, z_{ij}+d)$.
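Applying the correction of equation (2) to a frame is then a per-point height shift; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def correct_frame_heights(points, a_i, b_i, c_i):
    """Shift each point (x, y, z) of the i-th frame to (x, y, z + d),
    with d = a_i*x + b_i*y + c_i as in equation (2)."""
    d = a_i * points[:, 0] + b_i * points[:, 1] + c_i
    corrected = points.copy()
    corrected[:, 2] += d
    return corrected
```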
Fig. 6 is an effect diagram before the correction of the ground point cloud.
Fig. 7 is an effect diagram of the ground point cloud corrected by the method according to the embodiment of the present invention.
As shown in fig. 6 and 7, the area formed by the black dots in each drawing is the ground area. It can be seen that the ground area identified in fig. 6 exhibits large jitter and a wide distribution on the Z axis, while the ground area identified in fig. 7 has a narrower distribution on the Z axis and is distributed more smoothly and compactly, so the ground area corrected by the method of the embodiment of the invention is identified more accurately.
The embodiment of the invention provides a processing system of point cloud. Fig. 8 is a block diagram of a processing system for point cloud according to an embodiment of the present invention, and as shown in fig. 8, a processing system 80 for point cloud includes a detecting device 81, a memory 82, and a processor 83. Wherein the detection device 81 is used for detecting multi-frame three-dimensional point clouds containing a target area; the memory 82 is used for storing program codes; the processor 83 invokes the program code, which when executed, is operative to: acquiring multi-frame three-dimensional point clouds containing target areas; preprocessing multi-frame three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The detection device 81 in the present embodiment may be the detection device 22 in fig. 2.
Optionally, when preprocessing the multi-frame three-dimensional point cloud, the processor 83 is specifically configured to: remove noise points in the multi-frame three-dimensional point cloud, wherein the noise points are three-dimensional points that do not belong to the target area.
Optionally, when removing noise points in the multi-frame three-dimensional point cloud, the processor 83 is specifically configured to: determine a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the height map includes a plurality of grids; determine an approximate target area in the height map according to a preset target-area height value; calculate the difference between the maximum height value and the minimum height value within each grid in which the approximate target area is located; determine the grids in which the difference is below the difference threshold and the distance to the preset target-area height value is smaller than the preset distance; and remove the three-dimensional point cloud outside the grids in which the difference is below the difference threshold and the distance to the preset target-area height value is smaller than the preset distance.
Optionally, the processor 83 is specifically configured to, when acquiring the multi-frame three-dimensional point cloud: acquiring a multi-frame three-dimensional point cloud under a local coordinate system, wherein the local coordinate system is established by taking a carrier carrying detection equipment for detecting the multi-frame three-dimensional point cloud as an origin; the processor 83 is specifically configured to, when determining the altitude map according to the altitude values of the multi-frame three-dimensional point cloud: determining a target plane under a world coordinate system; projecting a multi-frame three-dimensional point cloud under the local coordinate system to a target plane according to the conversion relation between the local coordinate system and the world coordinate system; and determining the height map according to the height values of the multi-frame three-dimensional point cloud projected in the target plane.
Optionally, when projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system, the processor 83 is specifically configured to: divide the target plane into a plurality of grids of equal size, each grid having a grid number; calculate, according to the conversion relationship between the local coordinate system and the world coordinate system, the grid number corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane; calculate, according to the same conversion relationship, the height value corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane; and determine the height map according to the grid numbers and the height values corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane.
Optionally, the preset correction model includes an optimization solution model; the processor 83 is specifically configured to, when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and the preset correction model: inputting the preprocessed three-dimensional point cloud into the optimization solving model; solving the optimization solving model by adopting a linear least square method to obtain a correction coefficient; and determining the height value correction parameters of the multi-frame three-dimensional point cloud according to the correction coefficients.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient; the processor 83 is specifically configured to, when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient: and calculating a height value correction parameter of the three-dimensional point cloud of the frame according to the first correction coefficient, the second correction coefficient and the third correction coefficient of the multi-frame three-dimensional point cloud and the three-dimensional coordinate value of the three-dimensional point cloud of the frame.
Optionally, when acquiring the multi-frame three-dimensional point cloud in the local coordinate system, the processor 83 is specifically configured to: acquire the multi-frame three-dimensional point cloud containing the target area detected by the detection device; and convert the multi-frame three-dimensional point cloud detected by the detection device into the local coordinate system according to the conversion relationship between the detection device coordinate system and the local coordinate system.
Optionally, the detection device comprises at least one of: binocular stereo cameras, TOF cameras, and lidar.
Optionally, the target area is a ground area.
The specific principle and implementation manner of the processing system for point cloud provided in the embodiment of the present invention are similar to those of the foregoing embodiment, and are not repeated here.
In this embodiment, a multi-frame three-dimensional point cloud containing the target area is acquired; the multi-frame three-dimensional point cloud is preprocessed; the height value correction parameters of the multi-frame three-dimensional point cloud are determined according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and the height values of the multi-frame three-dimensional point cloud are corrected according to the height value correction parameters so as to correct the identification of the target area. Because the correction model can determine the height value correction parameters for correcting the height values of the multi-frame three-dimensional point cloud, the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point cloud are corrected according to the height value correction parameters.
The embodiment of the invention provides a movable platform. Fig. 9 is a block diagram of a movable platform according to an embodiment of the present invention. The embodiment of the invention provides a movable platform based on the technical scheme provided by the embodiment shown in fig. 8. As shown in fig. 9, the movable platform 90 includes: a fuselage 91, a power system 92 and a point cloud processing system 93. The processing system 93 of the point cloud in the present embodiment may be the processing system 80 of the point cloud provided in the above-described embodiment.
The specific principle and implementation manner of the processing system for point cloud provided in the embodiment of the present invention are similar to those of the embodiment shown in fig. 8, and are not repeated here.
In the embodiment, a multi-frame three-dimensional point cloud containing a target area is obtained; preprocessing multi-frame three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. Because the correction model can determine the height value correction parameters for correcting the height values of the multi-frame three-dimensional point cloud, the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point cloud are corrected according to the height value correction parameters.
In addition, the present embodiment also provides a computer-readable storage medium having stored thereon a computer program that is executed by a processor to implement the processing method of the point cloud of the above embodiment.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform part of the steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working process of the above-described device may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (20)

1. A method for processing a point cloud, comprising:
acquiring multi-frame three-dimensional point clouds containing target areas;
Preprocessing the multi-frame three-dimensional point cloud;
Determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model;
correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area;
The preset correction model comprises an optimization solving model, and determining the height value correction parameters of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and the preset correction model comprises the following steps:
inputting the preprocessed multi-frame three-dimensional point cloud into the optimization solving model;
solving the optimization solving model by adopting a linear least square method to obtain a correction coefficient;
and determining the height value correction parameters of the multi-frame three-dimensional point cloud according to the correction coefficients.
2. The method of claim 1, wherein the preprocessing the multi-frame three-dimensional point cloud comprises:
and removing noise points in the multi-frame three-dimensional point cloud, wherein the noise points are three-dimensional points which do not belong to the target area.
3. The method of claim 2, wherein the removing noise points in the multi-frame three-dimensional point cloud comprises:
Determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the height map comprises a plurality of grids;
determining a rough target area in the height map according to a preset target area height value;
calculating the difference between the maximum height value and the minimum height value in the same grid in which the approximate target area is positioned;
Determining a grid in which the difference is lower than a difference threshold and the distance between the difference and the preset target area height value is smaller than a preset distance;
and removing the three-dimensional point cloud outside the grid, wherein the difference value is lower than a difference value threshold value, and the distance between the three-dimensional point cloud and the preset target area height value is smaller than the preset distance.
4. The method of claim 3, wherein the acquiring a multi-frame three-dimensional point cloud containing the target area comprises:
acquiring the multi-frame three-dimensional point cloud under a local coordinate system, wherein the local coordinate system is a coordinate system established by taking a carrier carrying detection equipment for detecting the multi-frame three-dimensional point cloud as an origin;
the determining the height map according to the height values of the multi-frame three-dimensional point cloud comprises the following steps:
determining a target plane under a world coordinate system;
Projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relation between the local coordinate system and the world coordinate system;
And determining a height map according to the height values of the multi-frame three-dimensional point cloud projected in the target plane.
5. The method of claim 4, wherein the projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system comprises:
dividing the target plane into a plurality of grids with equal sizes, wherein each grid is provided with a grid number;
according to the conversion relation between the local coordinate system and the world coordinate system, calculating a grid number corresponding to a multi-frame three-dimensional point cloud in the local coordinate system in the target plane;
according to the conversion relation between the local coordinate system and the world coordinate system, calculating a height value corresponding to the multi-frame three-dimensional point cloud in the target plane under the local coordinate system;
And determining the height map according to the grid number corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane and the height value corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane.
6. The method of claim 1, wherein the correction coefficients comprise a first correction coefficient, a second correction coefficient, and a third correction coefficient;
the determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient comprises the following steps:
and calculating a height value correction parameter of the three-dimensional point cloud of the frame according to the first correction coefficient, the second correction coefficient, the third correction coefficient and the three-dimensional coordinate value of the three-dimensional point cloud of the frame.
7. The method of claim 4 or 5, wherein the acquiring the multi-frame three-dimensional point cloud in the local coordinate system comprises:
acquiring multi-frame three-dimensional point clouds which are detected by the detection equipment and contain a target area;
And converting the multi-frame three-dimensional point cloud detected by the detection equipment into the local coordinate system according to the conversion relation between the detection equipment coordinate system and the local coordinate system.
8. The method of claim 7, wherein the detection device comprises at least one of:
Binocular stereo cameras, TOF cameras, and lidar.
9. The method of any one of claims 1-6, wherein the target area is a ground area.
10. A system for processing a point cloud, comprising: a detection device, a memory, and a processor;
The detection equipment is used for detecting multi-frame three-dimensional point clouds containing a target area;
The memory is used for storing program codes; the processor invokes the program code, which when executed, is operable to:
acquiring multi-frame three-dimensional point clouds containing target areas;
Preprocessing the multi-frame three-dimensional point cloud;
Determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model;
correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area;
the preset correction model comprises an optimization solving model;
The processor is specifically configured to, when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model:
inputting the preprocessed three-dimensional point cloud into the optimization solving model;
solving the optimization solving model by adopting a linear least square method to obtain a correction coefficient;
and determining the height value correction parameters of the multi-frame three-dimensional point cloud according to the correction coefficients.
11. The system of claim 10, wherein the processor is configured to, when preprocessing the multi-frame three-dimensional point cloud:
and removing noise points in the multi-frame three-dimensional point cloud, wherein the noise points are three-dimensional points which do not belong to the target area.
12. The system of claim 11, wherein the processor is configured to, when removing noise from the multi-frame three-dimensional point cloud:
determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the height map comprises a plurality of grids;
determining a rough target area in the height map according to a preset target area height value;
calculating, for each grid in which the rough target area is located, the difference between the maximum height value and the minimum height value within that grid;
determining the grids for which the difference is lower than a difference threshold and whose height value is within a preset distance of the preset target area height value;
and removing the three-dimensional points that fall outside the grids so determined.
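The noise-removal steps in claim 12 can be sketched as a per-grid filter: bucket points into grid cells, keep only cells that are flat (small max-min height difference) and close to the preset target-area height, and discard points outside those cells. All parameter names and default values below are illustrative assumptions:

```python
import numpy as np

def remove_noise(points, cell=0.5, diff_thresh=0.1, ground_h=0.0, max_dist=0.3):
    """Keep only points in grid cells that look like the (flat) target area.

    A cell is kept when (max z - min z) inside it is below diff_thresh and
    its mean height is within max_dist of the preset target-area height
    ground_h. points: (N, 3) array of [x, y, z].
    """
    # Integer (col, row) grid index of each point on the x-y plane.
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys = [tuple(k) for k in ij]
    # Collect the height values falling into each cell.
    cells = {}
    for key, z in zip(keys, points[:, 2]):
        cells.setdefault(key, []).append(z)
    # Cells passing both the flatness and the height-proximity tests.
    good = {k for k, zs in cells.items()
            if max(zs) - min(zs) < diff_thresh
            and abs(np.mean(zs)) - ground_h < max_dist}
    mask = np.array([k in good for k in keys])
    return points[mask]
```

A cell containing an obstacle shows a large internal height spread and is rejected wholesale, which is what lets this filter drop three-dimensional points "which do not belong to the target area".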
13. The system of claim 12, wherein the processor is configured to, when acquiring the multi-frame three-dimensional point cloud:
acquiring the multi-frame three-dimensional point cloud in a local coordinate system, wherein the local coordinate system is a coordinate system whose origin is the carrier on which the detection device that detects the multi-frame three-dimensional point cloud is mounted;
the processor is specifically configured to, when determining the altitude map according to the altitude values of the multi-frame three-dimensional point cloud:
determining a target plane under a world coordinate system;
projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relation between the local coordinate system and the world coordinate system;
and determining a height map according to the height values of the multi-frame three-dimensional point cloud projected in the target plane.
14. The system according to claim 13, wherein the processor is configured to, when projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to a conversion relationship between the local coordinate system and the world coordinate system:
dividing the target plane into a plurality of grids with equal sizes, wherein each grid is provided with a grid number;
according to the conversion relation between the local coordinate system and the world coordinate system, calculating a grid number corresponding to a multi-frame three-dimensional point cloud in the local coordinate system in the target plane;
according to the conversion relation between the local coordinate system and the world coordinate system, calculating a height value corresponding to the multi-frame three-dimensional point cloud in the target plane under the local coordinate system;
and determining the height map according to the grid number corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane and the height value corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane.
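The projection in claim 14 amounts to transforming each local-frame point into the world frame, then reading off its grid number and height value on the target plane. A sketch, assuming a 4×4 homogeneous transform, a row-major grid numbering, and illustrative cell size and grid width:

```python
import numpy as np

def project_to_height_map(points_local, T_local_to_world, cell=0.5, grid_w=200):
    """Project local-frame points onto the world-frame target plane.

    Returns, per point, a scalar grid number (row-major over a grid of
    grid_w columns) and the world-frame height value. T_local_to_world
    is the 4x4 homogeneous local-to-world transform; cell and grid_w
    are assumed parameters, not values from the patent.
    """
    # Lift to homogeneous coordinates and apply the transform.
    pts_h = np.hstack([points_local, np.ones((len(points_local), 1))])
    pts_w = (T_local_to_world @ pts_h.T).T[:, :3]
    # Grid number on the target (x-y) plane.
    col = np.floor(pts_w[:, 0] / cell).astype(np.int64)
    row = np.floor(pts_w[:, 1] / cell).astype(np.int64)
    grid_no = row * grid_w + col
    return grid_no, pts_w[:, 2]
```

Accumulating the returned height values per grid number (e.g. keeping the per-cell minimum, maximum, or mean) then yields the height map used by the later claims.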
15. The system of claim 10, wherein the correction coefficients comprise a first correction coefficient, a second correction coefficient, and a third correction coefficient;
The processor is specifically configured to, when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient:
and calculating the height value correction parameter of each frame of the three-dimensional point cloud according to the first correction coefficient, the second correction coefficient, the third correction coefficient and the three-dimensional coordinate values of that frame of the three-dimensional point cloud.
16. The system according to claim 10 or 14, wherein when the processor obtains the multi-frame three-dimensional point cloud in the local coordinate system, the processor is specifically configured to:
acquiring multi-frame three-dimensional point clouds which are detected by the detection equipment and contain a target area;
and converting the multi-frame three-dimensional point cloud detected by the detection equipment into the local coordinate system according to the conversion relation between the detection equipment coordinate system and the local coordinate system.
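The sensor-to-local conversion in claim 16 is a rigid transform given by the extrinsic calibration between the detection device and the carrier. A sketch, assuming the calibration is available as a 4×4 homogeneous matrix (the names below are illustrative):

```python
import numpy as np

def device_to_local(points_dev, T_dev_to_local):
    """Convert detected points from the sensor frame to the carrier's local frame.

    T_dev_to_local is the 4x4 extrinsic calibration (rotation + translation)
    between the detection device and the carrier, assumed known in advance.
    points_dev: (N, 3) array of [x, y, z] in the sensor frame.
    """
    pts_h = np.hstack([points_dev, np.ones((len(points_dev), 1))])
    return (T_dev_to_local @ pts_h.T).T[:, :3]
```

Chaining this with a local-to-world transform covers the full pipeline the claims describe: sensor frame to carrier frame to world-frame target plane.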
17. The system of claim 16, wherein the detection device comprises at least one of:
a binocular stereo camera, a TOF camera, and a lidar.
18. The system of any one of claims 10-15, wherein the target area is a ground area.
19. A movable platform, comprising: a fuselage, a power system and a point cloud processing system of any of claims 10-18.
20. A computer readable storage medium, having stored thereon a computer program, the computer program being executed by a processor to implement the method of any of claims 1-9.
CN201980012171.7A 2019-05-29 2019-05-29 Processing method, equipment and computer readable storage medium of point cloud Active CN111699410B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088931 WO2020237516A1 (en) 2019-05-29 2019-05-29 Point cloud processing method, device, and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111699410A CN111699410A (en) 2020-09-22
CN111699410B true CN111699410B (en) 2024-06-07

Family

ID=72476452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980012171.7A Active CN111699410B (en) 2019-05-29 2019-05-29 Processing method, equipment and computer readable storage medium of point cloud

Country Status (2)

Country Link
CN (1) CN111699410B (en)
WO (1) WO2020237516A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435193B (en) * 2020-11-30 2024-05-24 中国科学院深圳先进技术研究院 Method and device for denoising point cloud data, storage medium and electronic equipment
WO2022126380A1 (en) * 2020-12-15 2022-06-23 深圳市大疆创新科技有限公司 Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN116659376A (en) * 2021-09-30 2023-08-29 深圳市速腾聚创科技有限公司 Method and device for determining appearance size of dynamic target
CN114782438B (en) * 2022-06-20 2022-09-16 深圳市信润富联数字科技有限公司 Object point cloud correction method and device, electronic equipment and storage medium
CN115830262B (en) * 2023-02-14 2023-05-26 济南市勘察测绘研究院 Live-action three-dimensional model building method and device based on object segmentation
CN116309124B (en) * 2023-02-15 2023-10-20 霖鼎光学(江苏)有限公司 Correction method of optical curved surface mold, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831646A (en) * 2012-08-13 2012-12-19 东南大学 Scanning laser based large-scale three-dimensional terrain modeling method
CN106441151A (en) * 2016-09-30 2017-02-22 中国科学院光电技术研究所 Three-dimensional object European space reconstruction measurement system based on vision and active optics fusion
CN108254758A (en) * 2017-12-25 2018-07-06 清华大学苏州汽车研究院(吴江) Three-dimensional road construction method based on multi-line laser radar and GPS
CN109297510A (en) * 2018-09-27 2019-02-01 百度在线网络技术(北京)有限公司 Relative pose scaling method, device, equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8884962B2 (en) * 2006-10-20 2014-11-11 Tomtom Global Content B.V. Computer arrangement for and method of matching location data of different sources
CN106530380B (en) * 2016-09-20 2019-02-26 长安大学 A kind of ground point cloud dividing method based on three-dimensional laser radar
CN109521403B (en) * 2017-09-19 2020-11-20 百度在线网络技术(北京)有限公司 Parameter calibration method, device and equipment of multi-line laser radar and readable medium
CN110274602A (en) * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司 Indoor map method for auto constructing and system

Also Published As

Publication number Publication date
CN111699410A (en) 2020-09-22
WO2020237516A1 (en) 2020-12-03

Similar Documents

Publication Publication Date Title
CN111699410B (en) Processing method, equipment and computer readable storage medium of point cloud
CN110869974B (en) Point cloud processing method, equipment and storage medium
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN110163930B (en) Lane line generation method, device, equipment, system and readable storage medium
Banerjee et al. Online camera lidar fusion and object detection on hybrid data for autonomous driving
CN112581612B (en) Vehicle-mounted grid map generation method and system based on fusion of laser radar and all-round-looking camera
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
KR20160123668A (en) Device and method for recognition of obstacles and parking slots for unmanned autonomous parking
CN111882612A (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN106934347B (en) Obstacle identification method and device, computer equipment and readable medium
CN114413881B (en) Construction method, device and storage medium of high-precision vector map
CN110119679B (en) Object three-dimensional information estimation method and device, computer equipment and storage medium
CN112166458B (en) Target detection and tracking method, system, equipment and storage medium
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN113160327A (en) Method and system for realizing point cloud completion
CN115187941A (en) Target detection positioning method, system, equipment and storage medium
CN113989766A (en) Road edge detection method and road edge detection equipment applied to vehicle
CN113985405A (en) Obstacle detection method and obstacle detection equipment applied to vehicle
CN114549542A (en) Visual semantic segmentation method, device and equipment
CN111380529B (en) Mobile device positioning method, device and system and mobile device
CN109598199B (en) Lane line generation method and device
Oniga et al. A fast ransac based approach for computing the orientation of obstacles in traffic scenes
Pfeiffer et al. Ground truth evaluation of the Stixel representation using laser scanners
CN113052846B (en) Multi-line radar point cloud densification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240522

Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Applicant after: Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after: China

Address before: 518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant