CN115082290A - Projection method, device and equipment of laser radar point cloud and storage medium - Google Patents
- Publication number: CN115082290A (application CN202210552013.4A)
- Authority: CN (China)
- Prior art keywords: point cloud, target, laser radar, camera, original
- Legal status: Pending (an assumption, not a legal conclusion)
Classifications
- G06T3/10—Selection of transformation methods according to the characteristics of the input images
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention relates to the technical field of automatic driving and discloses a projection method, apparatus, device, and storage medium for lidar point clouds, used to improve the accuracy of information fusion between a lidar and a camera. The projection method comprises the following steps: acquiring an original point cloud scanned by the lidar and removing its motion distortion to obtain a first point cloud; converting the first point cloud into the camera coordinate system based on the target exposure time of an original camera image to obtain a second point cloud; and adding camera motion distortion to the second point cloud to obtain a target point cloud, then projecting the target point cloud onto the original camera image.
Description
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a projection method, apparatus, and device for lidar point clouds, as well as a storage medium.
Background
In the field of automatic driving, information collected by different sensors is generally fused to improve the accuracy of environment perception. Fusing the information of a lidar sensor with that of an image sensor is the key part of sensor information fusion, and ensuring the accuracy of that fusion is the basis for accurate environment perception.
Existing projection schemes for a lidar and a camera generally remove distortion separately from the point cloud and the image, and then superimpose the undistorted point cloud onto the undistorted image by projection. However, because the two de-distortion principles differ, large errors remain after the superposition, which reduces the accuracy of information fusion between the lidar and the camera.
Disclosure of Invention
The invention provides a projection method, apparatus, device, and storage medium for lidar point clouds, used to improve the accuracy of information fusion between a lidar and a camera.
A first aspect of the invention provides a projection method for a lidar point cloud, comprising the following steps:
acquiring an original point cloud scanned by a lidar, and removing motion distortion from the original point cloud to obtain a first point cloud;
converting the first point cloud into a camera coordinate system based on the target exposure time of an original camera image to obtain a second point cloud;
and adding camera motion distortion to the second point cloud to obtain a target point cloud, and projecting the target point cloud onto the original camera image.
Optionally, acquiring the original point cloud scanned by the lidar and removing its motion distortion to obtain the first point cloud includes:
acquiring a first extrinsic parameter of the lidar, where the first extrinsic parameter indicates the relative position between the lidar and an ego-vehicle coordinate system;
acquiring the original point cloud scanned by the lidar, and determining the initial scanning time of the original point cloud;
and acquiring a first ego-vehicle pose change between the initial scanning time and the scanning time of each lidar point, and converting each lidar point in the original point cloud into the ego-vehicle coordinate system through the first ego-vehicle pose change and the first extrinsic parameter to obtain the first point cloud.
Optionally, converting the first point cloud into the camera coordinate system based on the target exposure time of the original camera image to obtain the second point cloud includes:
acquiring a second extrinsic parameter of the camera, where the second extrinsic parameter indicates the relative position between the camera and the ego-vehicle coordinate system;
acquiring the target exposure time of the original camera image captured by the camera, and acquiring a second ego-vehicle pose change between the initial scanning time and the target exposure time, where the target exposure time indicates the exposure time of the middle row or middle column of the original camera image;
and converting each lidar point in the first point cloud into the camera coordinate system through the second ego-vehicle pose change and the second extrinsic parameter to obtain the second point cloud.
Optionally, adding camera motion distortion to the second point cloud to obtain the target point cloud includes:
determining a target projection time difference, where the target projection time difference indicates the difference between the exposure time of each lidar point at its target projection position and the target exposure time;
and adding camera motion distortion to the second point cloud based on preset camera distortion intrinsics and the target projection time difference to obtain the target point cloud, where the target point cloud contains the coordinate information of each lidar point at its target projection position.
Optionally, determining the target projection time difference includes:
acquiring the unit exposure time difference of the original camera image and the ego-vehicle motion parameters during the scanning period of the original point cloud, where the unit exposure time difference indicates the exposure time difference between adjacent rows or adjacent columns;
and calculating the target projection time difference from the unit exposure time difference, the ego-vehicle motion parameters, the preset camera distortion intrinsics, and a preset synchronization time error.
Optionally, adding camera motion distortion to the second point cloud based on the preset camera distortion intrinsics and the target projection time difference to obtain the target point cloud includes:
calculating, from the target projection time difference, a third ego-vehicle pose change of each lidar point in the second point cloud during the target projection time difference period;
and calculating the target projection position of each lidar point through the third ego-vehicle pose change and the preset camera distortion intrinsics to obtain the target point cloud, where the target projection position indicates the coordinate information of each lidar point in the target point cloud.
Optionally, the calculation formula of the target point cloud is as follows:

P0 = R * P1 + v * ti

where P0 represents the coordinate information of a lidar point in the target point cloud, R represents the rotation matrix of the ego vehicle, P1 represents the coordinate information of the corresponding lidar point in the second point cloud, v represents the average speed of the ego vehicle during the scanning period of the original point cloud, and ti represents the unit exposure time difference.
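As a concrete illustration (not part of the patent text), the formula above can be evaluated numerically; all matrices and values below are invented for the sketch:

```python
import numpy as np

# Numeric sketch of the target-point-cloud formula P0 = R * P1 + v * ti.
# Variable names follow the patent text; the concrete values are invented.
def target_point(R, p1, v, t_i):
    """Shift a second-point-cloud point p1 by the ego motion v * t_i
    accumulated over the unit exposure time difference t_i, after the
    ego rotation R."""
    return R @ p1 + v * t_i

R = np.eye(3)                      # negligible rotation over one row gap
p1 = np.array([2.0, 0.0, 10.0])    # point in the second point cloud (m)
v = np.array([10.0, 0.0, 0.0])     # average ego speed over the scan (m/s)
t_i = 1e-4                         # unit (per-row) exposure time diff (s)

p0 = target_point(R, p1, v, t_i)   # distorted point in the target cloud
```

With these toy values the point shifts by 1 mm along the direction of travel, matching the intuition that faster ego motion or a larger row offset produces a larger distortion shift.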
A second aspect of the invention provides a projection apparatus for a lidar point cloud, comprising:
an acquisition module, configured to acquire an original point cloud scanned by a lidar and remove motion distortion from the original point cloud to obtain a first point cloud;
a conversion module, configured to convert the first point cloud into a camera coordinate system based on the target exposure time of an original camera image to obtain a second point cloud;
and a projection module, configured to add camera motion distortion to the second point cloud to obtain a target point cloud and project the target point cloud onto the original camera image.
Optionally, the acquisition module is specifically configured to:
acquire a first extrinsic parameter of the lidar, where the first extrinsic parameter indicates the relative position between the lidar and an ego-vehicle coordinate system;
acquire the original point cloud scanned by the lidar, and determine the initial scanning time of the original point cloud;
and acquire a first ego-vehicle pose change between the initial scanning time and the scanning time of each lidar point, and convert each lidar point in the original point cloud into the ego-vehicle coordinate system through the first ego-vehicle pose change and the first extrinsic parameter to obtain the first point cloud.
Optionally, the conversion module is specifically configured to:
acquire a second extrinsic parameter of the camera, where the second extrinsic parameter indicates the relative position between the camera and the ego-vehicle coordinate system;
acquire the target exposure time of the original camera image captured by the camera, and acquire a second ego-vehicle pose change between the initial scanning time and the target exposure time, where the target exposure time indicates the exposure time of the middle row or middle column of the original camera image;
and convert each lidar point in the first point cloud into the camera coordinate system through the second ego-vehicle pose change and the second extrinsic parameter to obtain the second point cloud.
Optionally, the projection module includes:
a determining unit, configured to determine a target projection time difference, where the target projection time difference indicates the difference between the exposure time of each lidar point at its target projection position and the target exposure time;
and an adding unit, configured to add camera motion distortion to the second point cloud based on preset camera distortion intrinsics and the target projection time difference to obtain a target point cloud, where the target point cloud contains the coordinate information of each lidar point at its target projection position.
Optionally, the determining unit is specifically configured to:
acquire the unit exposure time difference of the original camera image and the ego-vehicle motion parameters during the scanning period of the original point cloud, where the unit exposure time difference indicates the exposure time difference between adjacent rows or adjacent columns;
and calculate the target projection time difference from the unit exposure time difference, the ego-vehicle motion parameters, the preset camera distortion intrinsics, and a preset synchronization time error.
Optionally, the adding unit is specifically configured to:
calculate, from the target projection time difference, a third ego-vehicle pose change of each lidar point in the second point cloud during the target projection time difference period;
and calculate the target projection position of each lidar point through the third ego-vehicle pose change and the preset camera distortion intrinsics to obtain the target point cloud, where the target projection position indicates the coordinate information of each lidar point in the target point cloud.
Optionally, the calculation formula of the target point cloud is as follows:

P0 = R * P1 + v * ti

where P0 represents the coordinate information of a lidar point in the target point cloud, R represents the rotation matrix of the ego vehicle, P1 represents the coordinate information of the corresponding lidar point in the second point cloud, v represents the average speed of the ego vehicle during the scanning period of the original point cloud, and ti represents the unit exposure time difference.
A third aspect of the invention provides a projection device for a lidar point cloud, comprising a memory and at least one processor, the memory storing a computer program; the at least one processor invokes the computer program in the memory to cause the projection device to perform the projection method for a lidar point cloud described above.
A fourth aspect of the invention provides a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to execute the projection method for a lidar point cloud described above.
In the technical solution provided by the invention, an original point cloud scanned by a lidar is acquired, and its motion distortion is removed to obtain a first point cloud; the first point cloud is converted into a camera coordinate system based on the target exposure time of an original camera image to obtain a second point cloud; and camera motion distortion is added to the second point cloud to obtain a target point cloud, which is projected onto the original camera image. In the embodiment of the invention, to avoid projection errors caused by the different de-distortion principles of the lidar point cloud and the camera image, after the original point cloud is acquired only its motion distortion is removed, yielding a first point cloud based on a target reference coordinate system. The first point cloud is then converted into the camera coordinate system, yielding a second point cloud in the same coordinate system as the original camera image. Finally, based on the motion-distortion principle of the camera, the same motion distortion as in the original camera image is added to the second point cloud to obtain the target point cloud, which is projected onto the original camera image to obtain a target point cloud image that fuses lidar and camera information. The lidar point cloud thus carries the same motion distortion as the camera image, and the same image de-distortion principle can subsequently remove the motion distortion of the point cloud and the image in the target point cloud image at the same time. The invention can therefore improve the accuracy of information fusion between the lidar and the camera.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a projection method of a lidar point cloud in an embodiment of the invention;
FIG. 2 is a schematic diagram of another embodiment of a projection method of a lidar point cloud in an embodiment of the invention;
FIG. 3 is a schematic diagram of an embodiment of a projection apparatus for lidar point cloud in an embodiment of the present invention;
FIG. 4 is a schematic diagram of another embodiment of a projection apparatus for lidar point cloud in an embodiment of the invention;
FIG. 5 is a schematic diagram of an embodiment of a projection device for a lidar point cloud in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a projection method, a projection device, projection equipment and a storage medium of laser radar point cloud, which are used for improving the accuracy of information fusion of a laser radar and a camera.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It is understood that the execution subject of the present invention may be a projection apparatus for a lidar point cloud, or a terminal or a server; the terminal may be an automatic-driving terminal, which is not limited here. The embodiment of the present invention is described taking a terminal as the execution subject.
For ease of understanding, the detailed flow of the embodiment of the present invention is described below. Referring to FIG. 1, an embodiment of the projection method for a lidar point cloud in the embodiment of the invention includes:
101. Acquire an original point cloud scanned by a lidar, and remove motion distortion from the original point cloud to obtain a first point cloud.
It should be noted that the original point cloud scanned by the lidar is the point cloud produced by the lidar's internal scanning principle; it is a set of lidar point data expressed in the lidar coordinate system. The original point cloud is not captured instantaneously but is obtained when the lidar completes one full scan revolution. For example, if the lidar needs 100 ms to complete one revolution, the original point cloud is the set of points scanned during those 100 ms. If the ego vehicle is moving, the original point cloud scanned by the lidar therefore contains motion distortion. To improve the accuracy of information fusion between the lidar and the camera, this motion distortion is removed, and the resulting distortion-free first point cloud is used for subsequent calculation.
In one embodiment, removing the motion distortion of the original point cloud simultaneously converts it into a target coordinate system. Specifically, the terminal obtains the relative position parameter (i.e., the extrinsic parameter) between the lidar and the target coordinate system, and then determines a target scanning time for the conversion, which may be the scanning time of any lidar point in the original point cloud. Based on the lidar pose change between the target scanning time and the scanning time of each lidar point, together with that extrinsic parameter, the terminal converts each lidar point in the original point cloud into the target coordinate system at the target scanning time to obtain the first point cloud. The target coordinate system may be any reference coordinate system that has a mapping relationship with the camera to be projected onto, such as a world coordinate system, an ego-vehicle coordinate system (i.e., a base-link coordinate system), a virtual camera coordinate system, or the coordinate system of another camera, which is not limited here.
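A minimal de-skew sketch (not the patented implementation) illustrates the per-point pose correction described above; the constant-velocity ego model and all values are invented for the example:

```python
import numpy as np

# Minimal sketch of removing lidar motion distortion: each point is
# transformed by the ego pose interpolated at its own scan timestamp,
# so all points land in one common reference frame.
def deskew(points, timestamps, pose_at):
    """points: (N,3) lidar-frame points; timestamps: (N,) scan times
    relative to the reference time; pose_at(t) -> (R, t_vec), the ego
    pose change from time t back to the reference time."""
    out = np.empty_like(points)
    for i, (p, ts) in enumerate(zip(points, timestamps)):
        R, t_vec = pose_at(ts)          # ego pose change for this point
        out[i] = R @ p + t_vec          # move point into reference frame
    return out

# Illustrative constant-velocity pose model (assumption, not from patent):
def pose_at(ts):
    v = np.array([5.0, 0.0, 0.0])       # ego velocity, m/s
    return np.eye(3), v * ts            # pure forward translation

pts = np.array([[10.0, 1.0, 0.0], [10.0, -1.0, 0.0]])
stamps = np.array([0.0, 0.05])          # 50 ms apart within one sweep
first_cloud = deskew(pts, stamps, pose_at)
```

The second point, scanned 50 ms later, is shifted 0.25 m forward to compensate for the ego motion, while the point scanned at the reference time is unchanged.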
102. Convert the first point cloud into a camera coordinate system based on the target exposure time of the original camera image to obtain a second point cloud.
It should be noted that the original camera image is the image captured by the camera to be projected onto, that is, the image onto which the original point cloud is to be projected, and the camera coordinate system is that camera's coordinate system. To fuse the second point cloud with the original camera image, the terminal converts the first point cloud into the camera coordinate system based on the target exposure time of the original camera image. The camera to be projected onto scans with row-by-row or column-by-column exposure to obtain the original camera image; an example is a rolling-shutter camera. Taking a camera with row-by-row exposure as an example, when the ego vehicle is moving, the original camera image exhibits the jello effect: each row of pixels is exposed at a different time, so the original camera image contains motion distortion.
In an embodiment, so that the second point cloud and the original camera image share the same coordinate system, the first point cloud is converted into the camera coordinate system based on the target exposure time of the original camera image, where the target exposure time indicates the exposure time of a target row or target column of the original camera image. For example, the target exposure time may be the earliest or the latest row (or column) exposure time of the original camera image, which is not limited here. Specifically, the terminal converts the first point cloud into the camera coordinate system at the target exposure time through the camera pose change between the target exposure time and the target scanning time and the relative position parameter (i.e., the extrinsic parameter) between the camera to be projected onto and the target coordinate system, obtaining the second point cloud.
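The frame change described above can be sketched with homogeneous transforms; the matrices here are toy placeholders invented for the example, not calibrated extrinsics:

```python
import numpy as np

# Sketch of converting the first point cloud (reference frame at the scan
# reference time) into the camera frame at the target exposure time.
def to_camera_frame(points, T_exp_ref, T_cam_ego):
    """points: (N,3) coordinates in the reference frame.
    T_exp_ref: 4x4 transform taking reference-time ego coordinates into
    the ego frame at the target exposure time (the camera pose change).
    T_cam_ego: 4x4 extrinsics taking ego coordinates into the camera frame."""
    homo = np.hstack([points, np.ones((len(points), 1))])   # (N,4)
    T = T_cam_ego @ T_exp_ref                               # composed chain
    return (homo @ T.T)[:, :3]

T_exp_ref = np.eye(4)
T_exp_ref[0, 3] = -0.5            # ego advanced 0.5 m, so points slide back
T_cam_ego = np.eye(4)             # camera at the ego origin (toy value)
second_cloud = to_camera_frame(np.array([[10.0, 0.0, 0.0]]),
                               T_exp_ref, T_cam_ego)
```

Composing the pose change with the extrinsics in one 4x4 chain keeps the conversion a single matrix multiply per point, which is why homogeneous coordinates are the usual representation for this step.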
103. Add camera motion distortion to the second point cloud to obtain a target point cloud, and project the target point cloud onto the original camera image.
It can be understood that, to improve the accuracy of information fusion between the lidar and the camera, the terminal adds to the second point cloud the same motion distortion as the original camera image, obtaining the target point cloud, and then projects the target point cloud onto the original camera image to obtain a fused point cloud image. The point cloud and the image in the fused point cloud image can then be de-distorted simultaneously with the same image de-distortion algorithm, so the lidar point cloud and the camera image are fused accurately. Compared with de-distorting the lidar point cloud and the camera image separately and then fusing them, this approach reduces the data loss produced during fusion and improves the synchronization between the lidar point cloud and the camera image.
It should be noted that the exposure duration of a camera image scanned row by row or column by column is not negligible (about 50 ms). If only a single exposure time were considered, the target point cloud with camera motion distortion added would still deviate from the original camera image with its actual motion distortion. Therefore, to further improve the accuracy of the added camera motion distortion, in an embodiment the terminal first calculates the difference between the exposure time at the target projection position of each lidar point in the target point cloud and the target exposure time. The exposure time at the target projection position of each lidar point is an unknown variable, but the difference between the two can be obtained from known data. The terminal then calculates the target projection time of each lidar point from this difference, where the target projection time indicates the real projection time of the lidar point. Finally, the terminal determines the target point cloud from the camera pose change of each lidar point between the real projection time and the target exposure time, and projects the target point cloud onto the original camera image.
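A rolling-shutter sketch illustrates how a point's projected row fixes its exposure-time offset and hence its motion shift; the constant ego velocity, pinhole intrinsics, line time, and middle row are all invented values, not parameters from the patent:

```python
import numpy as np

# Sketch of adding rolling-shutter motion distortion to a second-point-cloud
# point: the projected image row determines the exposure-time offset from
# the middle-row target exposure time, and the point is shifted by the ego
# motion accumulated over that offset (constant-velocity assumption).
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0   # pinhole intrinsics (toy)
line_dt = 30e-6                                # per-row exposure gap, s
mid_row = 360.0                                # row exposed at target time
v = np.array([10.0, 0.0, 0.0])                 # ego velocity, camera frame

def add_rolling_shutter(p_cam):
    """p_cam: 3D point in the camera frame at the target exposure time."""
    row = fy * p_cam[1] / p_cam[2] + cy        # projected image row
    t_diff = (row - mid_row) * line_dt         # target projection time diff
    return p_cam + v * t_diff                  # P0 = P1 + v * t_diff (R = I)
```

A full implementation would iterate, since the shift itself moves the projected row slightly; the one-pass version above shows only the core relation between row, time offset, and motion shift.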
In the embodiment of the invention, to avoid projection errors caused by the different de-distortion principles of the lidar point cloud and the camera image, after the original point cloud scanned by the lidar is acquired, only its motion distortion is removed, yielding a first point cloud based on a target reference coordinate system. The first point cloud is converted into the camera coordinate system, yielding a second point cloud in the same coordinate system as the original camera image. Finally, based on the motion-distortion principle of the camera, the same motion distortion as in the original camera image is added to the second point cloud to obtain the target point cloud, which is projected onto the original camera image to obtain a target point cloud image fusing lidar and camera information. The lidar point cloud thus has the same motion distortion as the camera image, and the same image de-distortion principle can then remove the motion distortion of the point cloud and the image in the target point cloud image simultaneously. The invention can therefore improve the accuracy of information fusion between the lidar and the camera.
Referring to fig. 2, another embodiment of the method for projecting a laser radar point cloud according to the embodiment of the present invention includes:
201. Acquire an original point cloud scanned by a lidar, and remove motion distortion from the original point cloud to obtain a first point cloud.
Specifically, step 201 includes: acquiring a first extrinsic parameter of the lidar, where the first extrinsic parameter indicates the relative position between the lidar and an ego-vehicle coordinate system; acquiring the original point cloud scanned by the lidar and determining its initial scanning time; and acquiring a first ego-vehicle pose change between the initial scanning time and the scanning time of each lidar point, and converting each lidar point in the original point cloud into the ego-vehicle coordinate system through the first ego-vehicle pose change and the first extrinsic parameter to obtain the first point cloud.
In this embodiment, to improve the accuracy of information fusion between the lidar and the camera, the ego-vehicle coordinate system is used as a bridge for coordinate conversion between the lidar and the camera. The terminal therefore obtains the first extrinsic parameter of the lidar, which indicates the relative position between the lidar and the ego-vehicle coordinate system. The terminal determines the lidar point with the earliest scanning time among the points of the original point cloud and takes that earliest time as the initial scanning time of the original point cloud. The terminal then calculates the first ego-vehicle pose change from the initial scanning time to the scanning time of each lidar point in the original point cloud. Finally, the terminal converts each lidar point in the original point cloud into the ego-vehicle coordinate system at the initial scanning time based on the first extrinsic parameter and the first ego-vehicle pose change, obtaining the first point cloud, which contains the ego-vehicle coordinate information of each lidar point at the initial scanning time.
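The coordinate chain in step 201 can be sketched as the composition of the lidar extrinsic with the per-point ego pose change; the mounting offsets and pose values below are invented for the example:

```python
import numpy as np

# Sketch of step 201's coordinate chain: a lidar point is first mapped into
# the ego-vehicle frame via the first extrinsic parameter, then corrected by
# the ego pose change between its scan time and the initial scanning time.
T_ego_lidar = np.eye(4)
T_ego_lidar[:3, 3] = [1.5, 0.0, 1.8]   # lidar 1.5 m ahead, 1.8 m up (toy)

def to_initial_ego_frame(p_lidar, T_pose_change):
    """T_pose_change: 4x4 ego pose change from the point's scan time back
    to the initial scanning time (the first ego-vehicle pose change)."""
    p = np.append(p_lidar, 1.0)                    # homogeneous coordinates
    return (T_pose_change @ T_ego_lidar @ p)[:3]   # extrinsic, then pose

T_pose = np.eye(4)
T_pose[0, 3] = 0.25                     # ego moved 0.25 m between stamps
p_first = to_initial_ego_frame(np.array([10.0, 0.0, 0.0]), T_pose)
```

Applying the extrinsic before the pose change matters: the extrinsic moves the point into the ego frame at its own scan time, and only then does the pose change carry it back to the initial scanning time.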
202. Converting the first point cloud to a camera coordinate system based on the target exposure time of the original camera image to obtain a second point cloud;
Specifically, step 202 includes: acquiring a second external parameter of the camera, wherein the second external parameter is used for indicating a relative position parameter between the camera and the self-vehicle coordinate system; acquiring the target exposure time of the original camera image acquired by the camera, and acquiring a second self-vehicle pose change between the initial scanning time and the target exposure time, wherein the target exposure time is used for indicating the exposure time of the middle row or middle column of the original camera image; and converting each laser radar point in the first point cloud to the camera coordinate system through the second self-vehicle pose change and the second external parameter to obtain the second point cloud.
In this embodiment, similarly to step 201, after the terminal obtains the first point cloud in the self-vehicle coordinate system, the terminal obtains the second external parameter of the camera to be projected onto, where the second external parameter indicates the relative position parameter between that camera and the self-vehicle coordinate system. The terminal then calculates the second self-vehicle pose change from the initial scanning time to the target exposure time, and converts each lidar point in the first point cloud to the camera coordinate system at the target exposure time through the second self-vehicle pose change and the second external parameter, obtaining the second point cloud, which contains the camera coordinates of each lidar point at the target exposure time. Note that the target exposure time is the exposure time of the middle row or middle column of the original camera image.
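Step 202 amounts to choosing the mid-row exposure time of a rolling-shutter image and then chaining two rigid-body transforms. A minimal sketch; the function names and the 4x4-matrix convention are assumptions of this example, not the patent's code:

```python
import numpy as np

def target_exposure_time(t_first_row, n_rows, t_i):
    """Exposure time of the middle row of a rolling-shutter image, given the
    first row's exposure time and the unit (per-row) exposure time step t_i."""
    return t_first_row + (n_rows // 2) * t_i

def to_camera_frame(points_ego_t0, T_ego_t0_to_tc, T_ego_to_cam):
    """Step 202: map the first point cloud (self-vehicle frame at the initial
    scanning time t0) into the camera frame at the target exposure time t_c,
    via the second pose change and the second external parameter."""
    pts_h = np.hstack([points_ego_t0, np.ones((len(points_ego_t0), 1))])
    # ego(t0) -> ego(t_c) -> camera, all as homogeneous transforms
    return (T_ego_to_cam @ T_ego_t0_to_tc @ pts_h.T).T[:, :3]
```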
203. Determining a target projection time difference value, wherein the target projection time difference value is used for indicating the difference value between the exposure time of each laser radar point at the target projection position and the target exposure time;
It should be noted that the target projection position indicates the real projection position of the corresponding lidar point, that is, its projection position in the original camera image. Since the target projection position is an unknown parameter while the target projection time difference can be computed from known parameters, the terminal first calculates the target projection time difference and then uses it in subsequent processing to determine the target projection position of each lidar point in the target point cloud. In this embodiment, because the real projection position of a lidar point is difficult to predict directly, it is predicted indirectly by predicting the time difference between the exposure time of the real projection position and the target exposure time. This improves the accuracy of the real-projection-position calculation and, in turn, the accuracy of the information fusion between the lidar and the camera.
Specifically, step 203 includes: acquiring a unit exposure time difference value of an original camera image and a vehicle motion parameter in a scanning period of an original point cloud, wherein the unit exposure time difference value is used for indicating an exposure time difference value of adjacent rows or adjacent columns; and calculating a target projection time difference value through the unit exposure time difference value, the motion parameters of the vehicle, the preset camera distortion internal parameters and the preset synchronous time error.
In this embodiment, in order to determine the target projection time difference, the terminal calculates it from the unit exposure time difference of the original camera image, the self-vehicle motion parameters, the preset camera distortion internal parameters, the preset synchronization time error, and other parameters, where the self-vehicle motion parameters include the average rotational angular velocity and the average velocity, the camera distortion internal parameters include the camera focal length, and the preset synchronization time error is the error value of the synchronization timestamp of the middle row or middle column of the original camera image.
Specifically, for a camera scanning in a progressive (row-by-row) exposure mode, the target projection time difference is first obtained by solving a univariate quadratic equation.

Wherein Z_2, Z_1, Y_2 and Y_1 in the univariate quadratic equation are known intermediate variables, calculated as follows:

Z_2 = [ω*f*t_i]× * Z_c + v*f*t_i

Z_1 = Z_c + [ω*t_mse]× + v*t_mse

Y_2 = [ω*f*t_i]× * Y_c + v*f*t_i

Y_1 = Y_c + [ω*t_mse]× + v*t_mse

The target projection time difference Δt is then calculated from the root of this quadratic equation.

In the above, ω denotes the average rotational angular velocity of the vehicle over the scanning period of the original point cloud, f denotes the camera focal length, t_i denotes the unit exposure time difference, Z_c denotes the Z coordinate of the target laser radar point in the second point cloud, v denotes the average velocity of the vehicle over the scanning period of the original point cloud, t_mse denotes the preset synchronization time error, Y_c denotes the Y coordinate of the target laser radar point in the second point cloud, Y_p denotes the Y coordinate of the target laser radar point in the target point cloud, Z_p denotes the Z coordinate of the target laser radar point in the target point cloud, and [·]× denotes the skew-symmetric (cross-product) matrix of a vector.
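The intermediate variables above use the skew-symmetric operator [·]× and feed a univariate quadratic in Δt whose exact coefficients are given as display equations in the original patent publication and are not reproduced in this text. The sketch below therefore only shows the two reusable pieces — the [·]× operator and a small-root quadratic solver — with the coefficients a, b, c left as inputs; that parameterization is an assumption of this illustration:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x, so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def solve_projection_dt(a, b, c):
    """Return the smallest-magnitude real root of a*dt**2 + b*dt + c = 0 —
    a natural pick for a projection time difference, which should be small."""
    roots = np.roots([a, b, c])
    real = roots[np.isreal(roots)].real
    if real.size == 0:
        raise ValueError("quadratic has no real root")
    return real[np.argmin(np.abs(real))]
```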
Further, after the target projection time difference is determined, the coordinates of each laser radar point in the target point cloud can be determined; the calculation formula is:

P_0 = R*P_1 + v*t_i

wherein P_0 denotes the coordinate information of a laser radar point in the target point cloud, R denotes the rotation matrix of the vehicle, P_1 denotes the coordinate information of the corresponding laser radar point in the second point cloud, v denotes the average velocity of the vehicle over the scanning period of the original point cloud, and t_i denotes the unit exposure time difference.
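The per-point update P_0 = R*P_1 + v*t_i translates directly into code; the function and variable names here are illustrative only:

```python
import numpy as np

def to_target_point(p1, R, v, t_i):
    """Patent formula P0 = R*P1 + v*t_i: rotate a second-point-cloud point P1
    by the vehicle rotation matrix R and translate it by the vehicle motion
    v*t_i accumulated over the unit exposure time difference."""
    return R @ p1 + v * t_i
```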
204. Adding camera motion distortion to the second point cloud based on a preset camera distortion internal parameter and the target projection time difference to obtain a target point cloud, wherein the target point cloud comprises coordinate information of each laser radar point at its target projection position.
Specifically, step 204 includes: calculating, according to the target projection time difference, a third self-vehicle pose change of each laser radar point in the second point cloud within the period of the target projection time difference; and calculating the target projection position of each laser radar point through the third self-vehicle pose change and the preset camera distortion internal parameter to obtain the target point cloud, wherein the target projection position is used for indicating the coordinate information of each laser radar point in the target point cloud.
In this embodiment, after the terminal determines the target projection time difference, the real projection time of each lidar point, that is, the exposure time of each lidar point at its target projection position, can be determined from the projection time of each lidar point in the second point cloud and the target projection time difference. The terminal calculates the third self-vehicle pose change between the projection time of each lidar point in the second point cloud and its real projection time, and finally determines the target projection position of each lidar point according to the third self-vehicle pose change and the camera distortion internal parameter, obtaining the target point cloud, where the target projection position indicates the coordinate information of each lidar point in the target point cloud.
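Once the target point cloud is expressed in the camera frame with the camera's motion distortion applied, projecting it onto the original camera image (the final part of step 204) is a standard pinhole projection. A minimal sketch, assuming an intrinsic matrix K and ignoring lens distortion terms:

```python
import numpy as np

def project_to_image(points_cam, K):
    """Project camera-frame 3-D points to pixel coordinates with intrinsics K;
    points at or behind the image plane (Z <= 0) are discarded."""
    pts = points_cam[points_cam[:, 2] > 0]
    uv_h = (K @ pts.T).T               # homogeneous pixel coordinates
    return uv_h[:, :2] / uv_h[:, 2:3]  # divide by depth
```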
In the embodiment of the invention, in order to avoid projection errors caused by the different distortion-removal principles of a laser radar point cloud and a camera image, after the original point cloud scanned by the laser radar is obtained, only motion distortion removal is performed on the original point cloud to obtain a first point cloud based on the target reference object coordinate system. The first point cloud is then converted into the camera coordinate system to obtain a second point cloud in the same coordinate system as the original camera image. Finally, a target projection time difference is predicted based on the motion distortion principle of the camera, and the same motion distortion as that of the original camera image is added to the second point cloud through the target projection time difference to obtain a target point cloud. The target point cloud is projected onto the original camera image to obtain a target point cloud image that fuses laser radar and camera information, in which the laser radar point cloud has the same motion distortion as the camera image. The same image distortion-removal principle can subsequently be applied to remove motion distortion from the laser radar point cloud and the camera image in the target point cloud image simultaneously, so the accuracy of information fusion between the laser radar and the camera can be improved.
The projection method of the lidar point cloud in the embodiment of the present invention is described above; the projection apparatus of the lidar point cloud in the embodiment of the present invention is described below. Referring to fig. 3, an embodiment of the projection apparatus of the lidar point cloud in the embodiment of the present invention includes:
the acquisition module 301 is configured to acquire an original point cloud scanned by a laser radar, and remove motion distortion of the original point cloud to obtain a first point cloud;
a conversion module 302, configured to convert the first point cloud to a camera coordinate system based on a target exposure time of an original camera image to obtain a second point cloud;
and the projection module 303 is configured to perform camera motion distortion addition on the second point cloud to obtain a target point cloud, and project the target point cloud to the original camera image.
In the embodiment of the invention, in order to avoid projection errors caused by the different distortion-removal principles of a laser radar point cloud and a camera image, after the original point cloud scanned by the laser radar is obtained, only motion distortion removal is performed on the original point cloud to obtain a first point cloud based on the target reference object coordinate system. The first point cloud is then converted into the camera coordinate system to obtain a second point cloud in the same coordinate system as the original camera image. Finally, based on the motion distortion principle of the camera, the same motion distortion as that of the original camera image is added to the second point cloud to obtain a target point cloud, and the target point cloud is projected onto the original camera image to obtain a target point cloud image that fuses laser radar and camera information, so that the laser radar point cloud has the same motion distortion as the camera image. The same image distortion-removal principle can then be applied to remove motion distortion from the laser radar point cloud and the camera image in the target point cloud image simultaneously; therefore, the invention can improve the accuracy of information fusion between the laser radar and the camera.
Referring to fig. 4, another embodiment of the projection apparatus for lidar point cloud according to the embodiment of the present invention includes:
the acquisition module 301 is configured to acquire an original point cloud scanned by a laser radar, and remove motion distortion of the original point cloud to obtain a first point cloud;
a conversion module 302, configured to convert the first point cloud to a camera coordinate system based on a target exposure time of an original camera image to obtain a second point cloud;
and the projection module 303 is configured to perform camera motion distortion addition on the second point cloud to obtain a target point cloud, and project the target point cloud to the original camera image.
Optionally, the obtaining module 301 is specifically configured to:
acquiring a first external parameter of a laser radar, wherein the first external parameter is used for indicating a relative position parameter between the laser radar and a self-vehicle coordinate system;
acquiring original point cloud scanned by the laser radar, and determining the initial scanning time of the original point cloud;
and acquiring a first self-vehicle pose change between the initial scanning time and the scanning time of each laser radar point, and converting each laser radar point in the original point cloud into the self-vehicle coordinate system through the first self-vehicle pose change and the first external parameter to obtain a first point cloud.
Optionally, the conversion module 302 is specifically configured to:
acquiring a second external parameter of the camera, wherein the second external parameter is used for indicating a relative position parameter between the camera and a self-vehicle coordinate system;
acquiring a target exposure time of an original camera image acquired by the camera, and acquiring a second self-vehicle pose change between the initial scanning time and the target exposure time, wherein the target exposure time is used for indicating the exposure time of a middle row or a middle column of the original camera image;
and converting each laser radar point in the first point cloud to a camera coordinate system through the second self-vehicle pose change and the second external parameter to obtain a second point cloud.
Optionally, the projection module 303 includes:
a determining unit 3031, configured to determine a target projection time difference value, where the target projection time difference value is used to indicate a difference value between an exposure time of each lidar point at a target projection position and the target exposure time;
an adding unit 3032, configured to perform camera motion distortion addition on the second point cloud based on a preset camera distortion internal parameter and the target projection time difference value, to obtain a target point cloud, where the target point cloud includes coordinate information of each laser radar point at the target projection position.
Optionally, the determining unit 3031 is specifically configured to:
acquiring a unit exposure time difference value of the original camera image and a self-vehicle motion parameter in a scanning period of the original point cloud, wherein the unit exposure time difference value is used for indicating the exposure time difference between adjacent rows or adjacent columns;
and calculating a target projection time difference value through the unit exposure time difference value, the self-vehicle motion parameter, a preset camera distortion internal parameter and a preset synchronization time error.
Optionally, the adding unit 3032 is specifically configured to:
calculating a third self-vehicle pose change of each laser radar point in the second point cloud within the target projection time difference period according to the target projection time difference;
and calculating the target projection position of each laser radar point through the third self-vehicle pose change and preset camera distortion internal parameters to obtain the target point cloud, wherein the target projection position is used for indicating the coordinate information of each laser radar point in the target point cloud.
Optionally, the calculation formula of the target point cloud is as follows:
P_0 = R*P_1 + v*t_i

wherein P_0 denotes the coordinate information of a laser radar point in the target point cloud, R denotes the rotation matrix of the vehicle, P_1 denotes the coordinate information of the corresponding laser radar point in the second point cloud, v denotes the average velocity of the vehicle over the scanning period of the original point cloud, and t_i denotes the unit exposure time difference.
In the embodiment of the invention, in order to avoid projection errors caused by the different distortion-removal principles of a laser radar point cloud and a camera image, after the original point cloud scanned by the laser radar is obtained, only motion distortion removal is performed on the original point cloud to obtain a first point cloud based on the target reference object coordinate system. The first point cloud is then converted into the camera coordinate system to obtain a second point cloud in the same coordinate system as the original camera image. Finally, a target projection time difference is predicted based on the motion distortion principle of the camera, and the same motion distortion as that of the original camera image is added to the second point cloud through the target projection time difference to obtain a target point cloud. The target point cloud is projected onto the original camera image to obtain a target point cloud image that fuses laser radar and camera information, in which the laser radar point cloud has the same motion distortion as the camera image. The same image distortion-removal principle can subsequently be applied to remove motion distortion from the laser radar point cloud and the camera image in the target point cloud image simultaneously, so the accuracy of information fusion between the laser radar and the camera can be improved.
Fig. 3 and fig. 4 describe the projection apparatus of the lidar point cloud in the embodiment of the present invention in detail from the perspective of modular functional entities; the following describes the projection apparatus of the lidar point cloud in the embodiment of the present invention in detail from the perspective of hardware processing.
Fig. 5 is a schematic structural diagram of a projection device for a lidar point cloud. The projection device 500 of the lidar point cloud may vary considerably in configuration or performance, and may include one or more processors (CPUs) 510, a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532. The memory 520 and the storage medium 530 may be transient or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the projection device 500 of the lidar point cloud. Still further, the processor 510 may be configured to communicate with the storage medium 530 to execute the series of instruction operations in the storage medium 530 on the projection device 500 of the lidar point cloud.
The projection device 500 of the lidar point cloud may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD. Those skilled in the art will appreciate that the configuration shown in fig. 5 does not constitute a limitation of the projection device of the lidar point cloud, which may include more or fewer components than shown, a combination of some components, or a different arrangement of components.
The present invention also provides a computer device, which includes a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the method for projecting a lidar point cloud in the above embodiments.
The present invention also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and which may also be a volatile computer-readable storage medium, having stored thereon a computer program, which, when run on a computer, causes the computer to perform the steps of the method for projecting a lidar point cloud.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A projection method of laser radar point cloud is characterized by comprising the following steps:
acquiring an original point cloud scanned by a laser radar, and removing motion distortion of the original point cloud to obtain a first point cloud;
converting the first point cloud to a camera coordinate system based on the target exposure time of an original camera image to obtain a second point cloud;
and performing camera motion distortion addition on the second point cloud to obtain a target point cloud, and projecting the target point cloud to the original camera image.
2. The lidar point cloud projection method of claim 1, wherein the obtaining an original point cloud scanned by a lidar and removing motion distortion of the original point cloud to obtain a first point cloud comprises:
acquiring a first external parameter of a laser radar, wherein the first external parameter is used for indicating a relative position parameter between the laser radar and a self-vehicle coordinate system;
acquiring original point cloud scanned by the laser radar, and determining the initial scanning time of the original point cloud;
and acquiring a first self-vehicle pose change between the initial scanning time and the scanning time of each laser radar point, and converting each laser radar point in the original point cloud into the self-vehicle coordinate system through the first self-vehicle pose change and the first external parameter to obtain a first point cloud.
3. The lidar point cloud projection method of claim 1, wherein the converting the first point cloud to a camera coordinate system based on a target exposure time of an original camera image to obtain a second point cloud comprises:
acquiring a second external parameter of the camera, wherein the second external parameter is used for indicating a relative position parameter between the camera and a self-vehicle coordinate system;
acquiring a target exposure time of an original camera image acquired by the camera, and acquiring a second self-vehicle pose change between the initial scanning time and the target exposure time, wherein the target exposure time is used for indicating the exposure time of a middle row or a middle column of the original camera image;
and converting each laser radar point in the first point cloud to a camera coordinate system through the second self-vehicle pose change and the second external parameter to obtain a second point cloud.
4. The lidar point cloud projection method of claim 1, wherein the performing camera motion distortion addition on the second point cloud to obtain a target point cloud comprises:
determining a target projection time difference value, wherein the target projection time difference value is used for indicating the difference value between the exposure time of each laser radar point at a target projection position and the target exposure time;
and adding camera motion distortion to the second point cloud based on a preset camera distortion internal parameter and the target projection time difference to obtain a target point cloud, wherein the target point cloud comprises coordinate information of each laser radar point at the target projection position.
5. The method of projecting lidar point cloud of claim 4, wherein determining a target projection time difference comprises:
acquiring a unit exposure time difference value of the original camera image and a self-vehicle motion parameter in a scanning period of the original point cloud, wherein the unit exposure time difference value is used for indicating the exposure time difference between adjacent rows or adjacent columns;
and calculating a target projection time difference value through the unit exposure time difference value, the self-vehicle motion parameter, a preset camera distortion internal parameter and a preset synchronization time error.
6. The lidar point cloud projection method of claim 4, wherein the adding camera motion distortion to the second point cloud based on a preset camera distortion internal parameter and the target projection time difference to obtain a target point cloud comprises:
calculating a third self-vehicle pose change of each laser radar point in the second point cloud within the target projection time difference period according to the target projection time difference;
and calculating the target projection position of each laser radar point through the third self-vehicle pose change and preset camera distortion internal parameters to obtain the target point cloud, wherein the target projection position is used for indicating the coordinate information of each laser radar point in the target point cloud.
7. The lidar point cloud projection method of claim 5, wherein the calculation formula of the target point cloud is:
P_0 = R*P_1 + v*t_i

wherein P_0 denotes the coordinate information of a laser radar point in the target point cloud, R denotes the rotation matrix of the vehicle, P_1 denotes the coordinate information of the corresponding laser radar point in the second point cloud, v denotes the average velocity of the vehicle over the scanning period of the original point cloud, and t_i denotes the unit exposure time difference.
8. A projection device of laser radar point cloud is characterized by comprising:
the system comprises an acquisition module, a detection module and a control module, wherein the acquisition module is used for acquiring an original point cloud scanned by a laser radar and removing motion distortion of the original point cloud to obtain a first point cloud;
the conversion module is used for converting the first point cloud to a camera coordinate system based on the target exposure time of the original camera image to obtain a second point cloud;
and the projection module is used for adding camera motion distortion to the second point cloud to obtain a target point cloud and projecting the target point cloud to the original camera image.
9. A projection apparatus of lidar point cloud, comprising: a memory and at least one processor, the memory having stored therein a computer program;
the at least one processor invokes the computer program in the memory to cause the projection device of the lidar point cloud to perform the method of projecting the lidar point cloud of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of projection of a lidar point cloud according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210552013.4A CN115082290A (en) | 2022-05-18 | 2022-05-18 | Projection method, device and equipment of laser radar point cloud and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210552013.4A CN115082290A (en) | 2022-05-18 | 2022-05-18 | Projection method, device and equipment of laser radar point cloud and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115082290A true CN115082290A (en) | 2022-09-20 |
Family
ID=83250077
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210552013.4A Pending CN115082290A (en) | 2022-05-18 | 2022-05-18 | Projection method, device and equipment of laser radar point cloud and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115082290A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116577796A (en) * | 2022-11-17 | 2023-08-11 | 昆易电子科技(上海)有限公司 | Verification method and device for alignment parameters, storage medium and electronic equipment |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116577796A (en) * | 2022-11-17 | 2023-08-11 | 昆易电子科技(上海)有限公司 | Verification method and device for alignment parameters, storage medium and electronic equipment |
CN116577796B (en) * | 2022-11-17 | 2024-03-19 | 昆易电子科技(上海)有限公司 | Verification method and device for alignment parameters, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2959315B1 (en) | Generation of 3d models of an environment | |
CN110880189B (en) | Combined calibration method and combined calibration device thereof and electronic equipment | |
CN110873883B (en) | Positioning method, medium, terminal and device integrating laser radar and IMU | |
CN112219087A (en) | Pose prediction method, map construction method, movable platform and storage medium | |
CN112837352B (en) | Image-based data processing method, device and equipment, automobile and storage medium | |
CN109631911B (en) | Satellite attitude rotation information determination method based on deep learning target recognition algorithm | |
JPH1183530A (en) | Optical flow detector for image and self-position recognizing system for mobile body | |
US20180075614A1 (en) | Method of Depth Estimation Using a Camera and Inertial Sensor | |
CN111623773B (en) | Target positioning method and device based on fisheye vision and inertial measurement | |
CN113587934B (en) | Robot, indoor positioning method and device and readable storage medium | |
CN114708583A (en) | Target object detection method, device, equipment and storage medium | |
CN115082290A (en) | Projection method, device and equipment of laser radar point cloud and storage medium | |
JP7351892B2 (en) | Obstacle detection method, electronic equipment, roadside equipment, and cloud control platform | |
CN115097419A (en) | External parameter calibration method and device for laser radar IMU | |
CN114821497A (en) | Method, device and equipment for determining position of target object and storage medium | |
CN111351487B (en) | Clock synchronization method and device for multiple sensors and computing equipment | |
CN115309630A (en) | Method, device and equipment for generating automatic driving simulation data and storage medium | |
CN115082289A (en) | Projection method, device and equipment of laser radar point cloud and storage medium | |
CN114879168A (en) | Laser radar and IMU calibration method and system | |
CN110232715B (en) | Method, device and system for self calibration of multi-depth camera | |
CN112184906A (en) | Method and device for constructing three-dimensional model | |
CN116977226B (en) | Point cloud data layering processing method and device, electronic equipment and storage medium | |
CN117495900B (en) | Multi-target visual tracking method based on camera motion trend estimation | |
CN117649619B (en) | Unmanned aerial vehicle visual navigation positioning recovery method, system, device and readable storage medium | |
CN116363173A (en) | Method, device, equipment and storage medium for processing vehicle driving data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||