CN112581505B - Simple automatic registration method for laser radar point cloud and optical image - Google Patents

Simple automatic registration method for laser radar point cloud and optical image

Info

Publication number
CN112581505B
Authority
CN
China
Prior art keywords
point cloud
image
optical image
dimensional point
projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011572898.1A
Other languages
Chinese (zh)
Other versions
CN112581505A (en)
Inventor
王强
范生宏
赵美风
勾志阳
张振鑫
王果
何龙
范文杰
崔铁军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Prodetec Tianjin Intelligent Equipment Technology Co ltd
Capital Normal University
Tianjin Normal University
Original Assignee
Prodetec Tianjin Intelligent Equipment Technology Co ltd
Capital Normal University
Tianjin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Prodetec Tianjin Intelligent Equipment Technology Co ltd, Capital Normal University, Tianjin Normal University filed Critical Prodetec Tianjin Intelligent Equipment Technology Co ltd
Priority to CN202011572898.1A priority Critical patent/CN112581505B/en
Publication of CN112581505A publication Critical patent/CN112581505A/en
Application granted granted Critical
Publication of CN112581505B publication Critical patent/CN112581505B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20164 Salient point detection; Corner detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a simple automatic registration method for laser radar point clouds and optical images, comprising a preprocessing step, point cloud projection based on an affine motion model, corner feature extraction and matching, solving the mapping relation between the three-dimensional point cloud spatial coordinates and the optical image pixel coordinates through direct linear transformation, and data fusion. The invention has the following beneficial effects: 1) the method is simple and easy to implement, and the algorithm is stable; 2) the degree of automation is high while accuracy is maintained.

Description

Simple automatic registration method for laser radar point cloud and optical image
Technical Field
The invention belongs to the field of laser radar point cloud and image fusion, and particularly relates to a simple automatic registration method of laser radar point cloud and optical image.
Background
Laser radar (Light Detection and Ranging, LiDAR) is widely applied in fields such as driverless vehicles, indoor positioning and map construction, and its development prospects are broad. However, laser point cloud data can hardly capture the spectral information of targets; its color is uniform, which hinders processing and understanding. Optical image data, by contrast, contains rich spectral, texture and color information, allows ground-feature attributes to be identified quickly, and gives a better visual effect. Registering and fusing three-dimensional laser point cloud data with two-dimensional optical image data therefore yields an optical three-dimensional point cloud with rich texture and enhances the ability to distinguish ground-feature attributes visually. Most registration of a single image with a point cloud follows an image-registration approach: the point cloud is first projected to obtain a point cloud projection image, the projection image is processed with a suitable quantization method to obtain a quantized image, and the quantized image is registered with the optical image, from which the index relation between the image and the point cloud is obtained. Most current automatic registration methods, however, rely on artificially calibrated scenes to achieve high-precision automatic registration and are neither flexible nor simple.
Disclosure of Invention
To address the problems above, the invention provides a simple automatic registration method for laser radar point clouds and optical images that overcomes the defects of the prior art.
The technical scheme adopted by the invention is as follows:
a simple automatic registration method for laser radar point cloud and optical images comprises the following steps:
Step 1, the preprocessing process comprises:
selecting a scene: a scene of transparent and semitransparent regular structures such as doors and windows is selected; an optical image is obtained with the camera and quantized;
obtaining a three-dimensional point cloud with the laser radar: because the scene consists of transparent and semitransparent regular structures such as doors and windows, the laser is transmitted at the transparent and semitransparent positions and the reflection points are concentrated mainly at the edges of the windows and doors; the acquired three-dimensional point cloud is cropped, denoised and thinned to obtain the preprocessed three-dimensional point cloud;
Step 2, three-dimensional point cloud projection: the preprocessed three-dimensional point cloud is sampled according to an affine motion model and, through rotation, translation and transformation, is projected onto the plane perpendicular to the axis of the hemispherical surface in the affine motion model to generate a projection image; a depth image is then generated from the depth values of the projection image, where the depth value is the distance from a three-dimensional point to the laser radar and determines the color value given to the corresponding pixel in the projection image;
Step 3, matching homonymous points: corner features are extracted from the depth image and the optical image with the Harris operator, and homonymous points of the depth image and the optical image are matched by the correlation coefficient method to obtain the correspondence between the homonymous points of the two images;
step 4, according to the corresponding relation of the homonymous points obtained in the step 3, solving the mapping relation between the spatial coordinates of the preprocessed three-dimensional point cloud and the coordinates of the optical image through direct linear transformation;
Step 5, data fusion: the coordinates of each point in the preprocessed three-dimensional point cloud are traversed, the corresponding pixel in the optical image is found through the mapping relation obtained in step 4, and the texture information of that pixel is assigned to the corresponding point, giving an optical three-dimensional point cloud with rich texture.
Further, the affine motion model in step 2 is obtained by simulating the laser radar as a virtual sensor and setting the virtual sensor at sampling points of longitude and latitude of a hemisphere, so that the projection of the laser radar sensor under the affine motion model is mathematically expressed:
$$A=\lambda\begin{bmatrix}\cos\psi & -\sin\psi\\ \sin\psi & \cos\psi\end{bmatrix}\begin{bmatrix}t & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}\cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{bmatrix},\qquad t=\frac{1}{\cos\theta}\tag{1}$$
where λ > 0 is the zoom parameter of the virtual sensor;
θ ∈ [0°, 90°) is the tilt angle, i.e. the angle between the virtual sensor at the hemispherical sampling point and the axis of the hemispherical surface;
φ ∈ [0, 2π) is the angle between the projection of the virtual sensor onto the plane perpendicular to the axis of the hemispherical surface and the positive Y axis of that plane;
ψ ∈ [0, 2π) is the rotation angle of the virtual sensor about its optical axis;
then, the rotation transformation matrix is expressed as follows:
$$R_{3\times 3}=R(\varphi,\theta,\psi)\tag{2}$$
i.e. the 3 × 3 rotation matrix determined by the three angles φ, θ and ψ above;
then, the translation matrix is expressed as follows:
the observation point is translated along X, Y and Z axis by T3×1Represents:
$$T_{3\times 1}=\begin{bmatrix}T_x\\ T_y\\ T_z\end{bmatrix}\tag{3}$$
in the above formula, $x_0$, $y_0$ and $z_0$ are the coordinates of the initial viewpoint, which is taken as the origin of the coordinate system, so $x_0=y_0=z_0=0$; $T_x$, $T_y$ and $T_z$ are the translations along the X, Y and Z axes, respectively;
then, the transformation matrix M is a 4 × 4 matrix, which is a new matrix composed of the rotation matrix (2) and the translation matrix (3), and is obtained by multiplying the two matrices:
$$M=\begin{bmatrix}E & T_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}\begin{bmatrix}R_{3\times 3} & 0_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}=\begin{bmatrix}R_{3\times 3} & T_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}\tag{4}$$
where E is the 3 × 3 identity matrix, i.e.
$$E=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
After projection transformation, the three-dimensional point cloud coordinates preprocessed by the laser radar are transformed and a two-dimensional projection image is generated.
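For illustration only, the projection transformation described above can be sketched in Python with NumPy as follows. The rotation composition R = Rz(ψ) · Ry(θ) · Rz(φ), the orthographic drop of the sensor-axis coordinate and the scaling by λ are assumptions made for this sketch, since the patent gives the corresponding matrices only as formulas (1) to (4); the function and variable names are likewise illustrative.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project_points(points, theta, phi, psi, lam=1.0, t=(0.0, 0.0, 0.0)):
    """points: (N, 3) preprocessed LiDAR points.
    Returns (N, 2) coordinates on the plane perpendicular to the hemisphere axis
    and the depth (distance to the virtual sensor) of every point."""
    R = rot_z(psi) @ rot_y(theta) @ rot_z(phi)        # assumed composition of the rotation
    M = np.eye(4)                                     # 4 x 4 transform built from R and T
    M[:3, :3] = R
    M[:3, 3] = np.asarray(t)
    hom = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (M @ hom.T).T[:, :3]                        # points in the virtual-sensor frame
    depth = np.linalg.norm(cam, axis=1)               # distance from each point to the sensor
    xy = lam * cam[:, :2]                             # drop the axis coordinate, scale by lambda
    return xy, depth
```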
Further, sampling according to the affine motion model is to virtually arrange the laser radar on a quantization unit of longitude and latitude of a quantized affine motion model hemisphere to obtain a group of two-dimensional projection images and generate a corresponding group of depth images.
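A minimal sketch of such a hemisphere sampling grid is shown below; the numbers of latitude and longitude steps are illustrative choices and are not prescribed by the patent.

```python
import numpy as np

def hemisphere_samples(n_theta=6, n_phi=12):
    """Quantize the hemisphere into (tilt, longitude) sampling points for the virtual sensor."""
    thetas = np.linspace(0.0, np.radians(80.0), n_theta)            # tilt angle, kept below 90 degrees
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)     # longitude around the axis
    return [(theta, phi) for theta in thetas for phi in phis]
```

Each (θ, φ) pair would then drive one projection of the kind sketched above, yielding one two-dimensional projection image and one depth image per sampling point.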
By means of the affine motion model, the three-dimensional laser point cloud is virtually observed from different projection positions, coordinate projection conversion is carried out for each position, the optimal projection plane is found automatically and the optimal two-dimensional projection image is generated, so that the 2D-3D registration problem is converted into a 2D-2D automatic registration problem.
Further, step 3 is repeated to obtain the correspondence between each depth image in the group and the homonymous points of the optical image; the sum of the correlation coefficients of the corner features of each depth image and the optical image is calculated, and the two-dimensional point cloud depth image with the largest correlation-coefficient sum is selected as the optimal depth image. Registering the optimal depth image with the optical image of the camera thus finds the optimal projection plane automatically, even when the relative position relation between the camera and the laser radar is unknown, and constructs the optimal depth projection image, thereby realizing automatic registration with the optical image of the camera.
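The selection of the optimal depth image then reduces to taking the largest correlation-coefficient sum, as in the following sketch (the list layout of the input is an assumption made for illustration):

```python
import numpy as np

def best_depth_image(corr_per_image):
    """corr_per_image[i] holds the correlation coefficients of the matched corner
    pairs between depth image i and the optical image."""
    sums = [float(np.sum(c)) for c in corr_per_image]
    return int(np.argmax(sums))      # index of the optimal depth image
```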
Further, the corner feature extraction in step 3 uses a corner extraction method with gradient change and neighborhood smoothness as parameters; as this method belongs to the prior art, it is not described here.
Further, the homonymous-point matching in step 3 takes the maximum correlation coefficient as the matching criterion; as this also belongs to the prior art, it is not described here.
Further, in the depth image generation process in step 2, corresponding pixels in the projection image are given different color values according to the distance from the three-dimensional point cloud to the laser radar, thereby generating the depth image.
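As an illustration of this depth-image generation, the sketch below rasterizes the projected points onto a pixel grid, keeps the nearest point per pixel and maps its distance to the laser radar to a grey value; the image resolution and the linear grey mapping are assumptions made for the sketch.

```python
import numpy as np

def depth_image(xy, depth, size=512):
    """xy: (N, 2) projected coordinates; depth: (N,) distances to the laser radar."""
    u = np.interp(xy[:, 0], (xy[:, 0].min(), xy[:, 0].max()), (0, size - 1)).astype(int)
    v = np.interp(xy[:, 1], (xy[:, 1].min(), xy[:, 1].max()), (0, size - 1)).astype(int)
    buf = np.full((size, size), np.inf)
    for uu, vv, d in zip(u, v, depth):               # z-buffer: keep the closest point per pixel
        if d < buf[vv, uu]:
            buf[vv, uu] = d
    img = np.zeros((size, size), dtype=np.uint8)
    valid = np.isfinite(buf)
    if valid.any():
        dmin, dmax = buf[valid].min(), buf[valid].max()
        img[valid] = (255.0 * (1.0 - (buf[valid] - dmin) / max(dmax - dmin, 1e-9))).astype(np.uint8)
    return img                                       # nearer points appear brighter
```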
Further, the specific solving process of the mapping relationship in step 4 is as follows:
the relation between the object space coordinate system and the coordinates of the optical image is established by the formulas (1), (2), (3) and (4):
$$u_A=-\frac{l_1X_A+l_2Y_A+l_3Z_A+l_4}{l_9X_A+l_{10}Y_A+l_{11}Z_A+l_{12}},\qquad v_A=-\frac{l_5X_A+l_6Y_A+l_7Z_A+l_8}{l_9X_A+l_{10}Y_A+l_{11}Z_A+l_{12}}\tag{5}$$
wherein $(u_A, v_A)$ are the coordinates of the three-dimensional point-cloud space point A in the optical-image coordinate system, and $(X_A, Y_A, Z_A)$ are the coordinates of the space point A in the radar three-dimensional point-cloud coordinate system;
setting $l_{12}=1$ in formula (5), the direct linear transformation equation (6) can be derived:
$$u_A=-\frac{l_1X_A+l_2Y_A+l_3Z_A+l_4}{l_9X_A+l_{10}Y_A+l_{11}Z_A+1},\qquad v_A=-\frac{l_5X_A+l_6Y_A+l_7Z_A+l_8}{l_9X_A+l_{10}Y_A+l_{11}Z_A+1}\tag{6}$$
taking $l_1$ to $l_{11}$ in formula (6) as unknowns, equation (7) is arranged:
$$\begin{cases}X_Al_1+Y_Al_2+Z_Al_3+l_4+u_AX_Al_9+u_AY_Al_{10}+u_AZ_Al_{11}=-u_A\\[2pt]X_Al_5+Y_Al_6+Z_Al_7+l_8+v_AX_Al_9+v_AY_Al_{10}+v_AZ_Al_{11}=-v_A\end{cases}\tag{7}$$
the least-squares solution is then obtained from equation (7):
$$L=(B^{\mathrm T}B)^{-1}B^{\mathrm T}W\tag{8}$$
wherein: $L=(l_1,l_2,l_3,l_4,l_5,l_6,l_7,l_8,l_9,l_{10},l_{11})^{\mathrm T}$, $W=(-u_1,-v_1,-u_2,-v_2,\ldots,-u_n,-v_n)^{\mathrm T}$,
$$B=\begin{bmatrix}X_1&Y_1&Z_1&1&0&0&0&0&u_1X_1&u_1Y_1&u_1Z_1\\0&0&0&0&X_1&Y_1&Z_1&1&v_1X_1&v_1Y_1&v_1Z_1\\\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\X_n&Y_n&Z_n&1&0&0&0&0&u_nX_n&u_nY_n&u_nZ_n\\0&0&0&0&X_n&Y_n&Z_n&1&v_nX_n&v_nY_n&v_nZ_n\end{bmatrix}$$
The direct linear transformation parameters $l_1$ to $l_{11}$ can then be solved from the matching points known from step 3 and formula (8), yielding the mapping relation between the spatial coordinates of the preprocessed three-dimensional point cloud and the coordinates of the optical image.
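For illustration, the least-squares solution of formula (8) can be sketched as follows; the construction of B and W follows equations (6) and (7), and the function names are placeholders rather than part of the patented method.

```python
import numpy as np

def solve_dlt(uv, xyz):
    """Estimate l1..l11 from n >= 6 matched homonymous points.
    uv: (n, 2) optical-image coordinates; xyz: (n, 3) point-cloud coordinates."""
    n = uv.shape[0]
    B = np.zeros((2 * n, 11))
    W = np.zeros(2 * n)
    for i, ((u, v), (X, Y, Z)) in enumerate(zip(uv, xyz)):
        B[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, u * X, u * Y, u * Z]
        B[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, v * X, v * Y, v * Z]
        W[2 * i], W[2 * i + 1] = -u, -v
    # Least-squares solution, numerically equivalent to L = (B^T B)^-1 B^T W
    L, *_ = np.linalg.lstsq(B, W, rcond=None)
    return L
```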
The invention has the following beneficial effects:
1) Simple, easy to implement, and a stable algorithm: the method makes full use of the fact that the laser radar beam passes through glass and of the simple regular scene formed by doors and windows, so that distinct feature corners readily appear at the boundary between the transparent glass and the frame; the regular corners produced by doors and windows are relatively clear, which improves the stability and robustness of the feature extraction.
2) High degree of automation while accuracy is maintained: by means of the affine motion model, the three-dimensional laser point cloud is virtually observed from different projection positions, coordinate projection conversion is carried out, the optimal projection plane is found automatically, the optimal two-dimensional projection image is generated, and the 2D-3D registration problem is converted into a 2D-2D automatic registration problem.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a flow chart of a simple automatic registration method for laser radar point cloud and optical image
FIG. 2 is a diagram of obtaining an optical image of a camera via a camera
FIG. 3 is a diagram of obtaining three-dimensional point cloud by laser radar, and clipping, denoising and thinning
FIG. 4 projection image after projection of preprocessed three-dimensional point cloud
FIG. 5 depth image Point selection
FIG. 6 depth map registration results
FIG. 7 Manual pointing
FIG. 8 Manual Point selection registration results
FIG. 9 affine motion model schematic
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "first", "second", "third", etc. are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance, and furthermore, the terms "horizontal", "vertical", etc. do not mean that the components are absolutely horizontal or overhanging, but may be slightly inclined. For example, "horizontal" merely means that the direction is more horizontal than "vertical" and does not mean that the structure must be perfectly horizontal, but may be slightly inclined.
In the description of the present invention, it should also be noted that, unless otherwise explicitly stated or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in a specific case to those of ordinary skill in the art.
Examples
Step 1: a corridor of the Bo-Limb building of Tianjin University was selected. The scene is open, has few occlusions, and clearly belongs to the class of transparent and semitransparent regular-structure scenes such as doors and windows. The camera optical image obtained is shown in FIG. 2. A terrestrial laser radar was placed at the center of the scene to acquire the three-dimensional point cloud data of the test area; the three-dimensional point cloud after cropping, denoising and thinning is shown in FIG. 3.
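An illustrative sketch of the cropping, denoising and thinning applied in this step is given below; the bounding box, the neighbour count, the outlier threshold and the voxel size are example values rather than parameters taken from the patent, and the brute-force distance matrix is only suitable for small clouds.

```python
import numpy as np

def preprocess(points, bbox_min, bbox_max, voxel=0.05, k=8, std_ratio=2.0):
    """Crop, denoise and thin an (N, 3) point cloud (brute-force, small clouds only)."""
    # 1) crop: keep points inside an axis-aligned bounding box around the scene
    inside = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    pts = points[inside]

    # 2) denoise: drop points whose mean distance to their k nearest neighbours is
    #    far above the global average (simple statistical outlier removal)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    pts = pts[keep]

    # 3) thin: keep one point per voxel of the given size
    keys = np.floor(pts / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(first)]
```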
Step 2, three-dimensional point cloud projection: the preprocessed three-dimensional point cloud is sampled according to the affine motion model and, through rotation, translation and transformation, is projected according to the three-dimensional coordinate relation onto the plane perpendicular to the axis of the hemispherical surface in the affine motion model, generating the projection image shown in FIG. 4 and thus converting the 2D-3D registration problem into a 2D-2D one; a depth image is then generated from the depth values of the projection image, as shown in FIG. 5;
The projection method simulates the laser radar as a virtual sensor placed at the longitude and latitude sampling points of a hemisphere; the projection of the laser radar sensor under the affine motion model is then expressed mathematically as:
$$A=\lambda\begin{bmatrix}\cos\psi & -\sin\psi\\ \sin\psi & \cos\psi\end{bmatrix}\begin{bmatrix}t & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}\cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{bmatrix},\qquad t=\frac{1}{\cos\theta}\tag{1}$$
where λ > 0 is the zoom parameter of the virtual sensor;
θ ∈ [0°, 90°) is the tilt angle, i.e. the angle between the virtual sensor at the hemispherical sampling point and the axis of the hemispherical surface;
φ ∈ [0, 2π) is the angle between the projection of the virtual sensor onto the plane perpendicular to the axis of the hemispherical surface and the positive Y axis of that plane;
ψ ∈ [0, 2π) is the rotation angle of the virtual sensor about its optical axis;
then, the rotation transformation matrix is expressed as follows:
$$R_{3\times 3}=R(\varphi,\theta,\psi)\tag{2}$$
i.e. the 3 × 3 rotation matrix determined by the three angles φ, θ and ψ above;
then, the translation matrix is expressed as follows:
the translation of the observation viewpoint along the X, Y and Z axes is represented by $T_{3\times 1}$:
$$T_{3\times 1}=\begin{bmatrix}T_x\\ T_y\\ T_z\end{bmatrix}\tag{3}$$
in the above formula, $x_0$, $y_0$ and $z_0$ are the coordinates of the initial viewpoint, which is taken as the origin of the coordinate system, so $x_0=y_0=z_0=0$; $T_x$, $T_y$ and $T_z$ are the translations along the X, Y and Z axes, respectively;
then, the transformation matrix M is a 4 × 4 matrix, which is a new matrix composed of the rotation matrix (2) and the translation matrix (3), and is obtained by multiplying the two matrices:
$$M=\begin{bmatrix}E & T_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}\begin{bmatrix}R_{3\times 3} & 0_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}=\begin{bmatrix}R_{3\times 3} & T_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}\tag{4}$$
where E is the 3 × 3 identity matrix, i.e.
$$E=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
After projection transformation, the three-dimensional point cloud coordinates preprocessed by the laser radar are transformed and a two-dimensional projection image is generated.
Under the condition that the relative position relation between the camera and the laser radar is unknown, the optimal projection plane is automatically searched, and the optimal depth projection image is constructed, so that the automatic registration with the image of the optical camera is realized.
Finding the optimal projection plane means virtually placing the laser radar on the quantization units of longitude and latitude of the quantized affine-motion-model hemisphere, as shown in FIG. 9: the three-dimensional point cloud of the laser radar is transformed according to the affine motion model through regular sampling motion, the laser radar sensor is simulated as a virtual sensor, and sampling is carried out along the hemisphere. C denotes the sensor position at the moment of observation, and C' denotes the sensor moved above the front view. θ denotes latitude and φ denotes longitude; the hemisphere is divided by longitude and latitude, the sensor can move over the entire hemisphere, and the black dots denote the sampling points (intersections of the longitude and latitude lines), i.e. the positions where the sensor can be located. From the viewpoint of sensor motion, φ represents rotation and θ represents the tilt angle, which corresponds one-to-one with t through the relation t = 1/cos θ; ψ represents rotation of the sensor about its own optical axis, and λ represents the zoom caused by the distance of the sensor. The longitude φ produces a rotation of the sensor, and the latitude θ causes a down-sampling of the sensor in the longitudinal direction. Combining this geometric model gives the mathematical expression (1) of the projection of the laser radar sensor under the affine motion model. The laser radar is then virtually placed, through rotation, translation and transformation, at the different sampling points on the hemisphere (the black dots in FIG. 9) to obtain a group of two-dimensional projection images, and each projection image is given different color values at the corresponding pixels according to the distance from the three-dimensional point cloud to the laser radar, generating a group of depth images.
Step 3, matching homonymous points: corner features are extracted with the Harris operator from the depth image generated in step 2 (FIG. 5) and from the optical image (FIG. 2), and the group of depth images is matched against the optical image by the correlation coefficient method. Each corner feature in a depth image is traversed as a point to be matched, and the 7 × 7 image window centered on it is taken out; at the same time the feature points in the optical image are traversed and a 7 × 7 window is extracted for each of them. The correlation coefficient between the window of each corner feature in the depth image and each window in the optical image is calculated, the point with the largest correlation coefficient is taken as the homonymous point of the point to be matched, and an index is established. Traversing the group of depth images yields a group of index relations; the sum of the correlation coefficients of the corner features of each depth image and the optical image is calculated, and the two-dimensional depth image with the largest correlation-coefficient sum is selected as the optimal depth image, for which the relative position relation between the camera and the laser radar is considered best matched; the correspondence between the homonymous points of the optimal depth image and the optical image is then obtained;
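A sketch of this matching procedure, using Harris corners from OpenCV and normalized correlation over 7 × 7 windows, is given below; the corner count, the quality threshold and the brute-force search over all corner pairs are illustrative choices, not values prescribed by the patent.

```python
import numpy as np
import cv2

def harris_corners(gray, max_corners=200):
    """Harris corners of an 8-bit single-channel image as (x, y) integer pairs."""
    pts = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 5,
                                  useHarrisDetector=True, k=0.04)
    return pts.reshape(-1, 2).astype(int) if pts is not None else np.empty((0, 2), int)

def corr(a, b):
    """Normalized correlation coefficient of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else -1.0

def match_corners(depth_gray, optical_gray, win=3):
    """Match each depth-image corner to the optical-image corner with the largest
    correlation coefficient over (2*win+1) x (2*win+1) windows (7 x 7 for win=3)."""
    h_d, w_d = depth_gray.shape
    h_o, w_o = optical_gray.shape
    corners_d = [(x, y) for x, y in harris_corners(depth_gray)
                 if win <= x < w_d - win and win <= y < h_d - win]
    corners_o = [(x, y) for x, y in harris_corners(optical_gray)
                 if win <= x < w_o - win and win <= y < h_o - win]
    matches, coeffs = [], []
    for xd, yd in corners_d:
        win_d = depth_gray[yd - win:yd + win + 1, xd - win:xd + win + 1].astype(float)
        best, best_c = None, -1.0
        for xo, yo in corners_o:
            win_o = optical_gray[yo - win:yo + win + 1, xo - win:xo + win + 1].astype(float)
            c = corr(win_d, win_o)
            if c > best_c:
                best, best_c = (xo, yo), c
        if best is not None:
            matches.append(((xd, yd), best))     # ((u, v) in depth image, (u, v) in optical image)
            coeffs.append(best_c)
    return matches, coeffs
```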
52 pairs of homonymous points were obtained using the above method.
Step 4, solving the mapping relation between the spatial coordinates of the preprocessed three-dimensional point cloud and the coordinates of the optical image through direct linear transformation;
the coordinates of the homonymous points are substituted into equation (8) to estimate the direct linear transformation parameters, which is shown in table 1,
TABLE 1 direct linear transformation parameters based on depth image
For comparison, manual point selection was also adopted: homonymous points were selected manually by visual inspection for manual registration, as shown in FIG. 7, and the direct linear transformation parameters estimated from these homonymous points are shown in Table 2.
TABLE 2 direct linear transformation parameters for manual registration
Step 5, data fusion: using the mapping relation between the three-dimensional point cloud space coordinates of the laser radar and the coordinates of the optical image obtained in step 4, the coordinates of every point in the laser radar point cloud are traversed, the corresponding pixel in the optical image is found through the mapping relation, and the texture information of that pixel is assigned to the point. The depth-map matching result is shown in FIG. 6 and the manual point-selection matching result in FIG. 8; it can be seen that depth-map matching can be used instead of manual point-selection matching.
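The fusion step can be sketched as follows; the DLT projection is repeated here with the assumed sign convention so that the example is self-contained, and the N × 6 output layout (XYZ followed by RGB) is an illustrative choice.

```python
import numpy as np

def project_with_dlt(L, xyz):
    """Map (N, 3) point-cloud coordinates to (N, 2) image coordinates with l1..l11."""
    l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11 = L
    X, Y, Z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    den = l9 * X + l10 * Y + l11 * Z + 1.0
    u = -(l1 * X + l2 * Y + l3 * Z + l4) / den
    v = -(l5 * X + l6 * Y + l7 * Z + l8) / den
    return np.stack([u, v], axis=1)

def colorize(points, image_rgb, L):
    """Assign to every point the RGB value of the optical-image pixel it maps to."""
    uv = np.rint(project_with_dlt(L, points)).astype(int)
    h, w = image_rgb.shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((points.shape[0], 3), dtype=float)
    colors[inside] = image_rgb[uv[inside, 1], uv[inside, 0]]   # row index = v, column index = u
    return np.hstack([points, colors])                         # N x 6 colored point cloud
```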
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the present invention in any way. Any simple modification, change and equivalent changes of the above embodiments according to the technical essence of the invention are still within the protection scope of the technical solution of the invention.

Claims (8)

1. A simple automatic registration method for laser radar point cloud and optical images is characterized by comprising the following steps:
step 1, pretreatment process:
obtaining an optical image of the camera through the camera and quantizing the optical image;
obtaining three-dimensional point cloud through a laser radar, and cutting, denoising and thinning to obtain preprocessed three-dimensional point cloud;
step 2, three-dimensional point cloud projection: sampling the preprocessed three-dimensional point cloud according to an affine motion model, projecting the preprocessed three-dimensional point cloud onto a plane vertical to an axis of a hemispherical curved surface in the affine motion model through rotation, translation and transformation to generate a projection image, and generating a depth image according to the projection image;
step 3, matching points with the same name: performing angular point feature extraction on the depth image and the optical image by using a Harris operator, and matching homonymy points of the depth image and the optical image by using a correlation coefficient method to obtain a corresponding relation of the homonymy points of the depth image and the optical image;
step 4, according to the corresponding relation of the homonymous points obtained in the step 3, solving the mapping relation between the spatial coordinates of the preprocessed three-dimensional point cloud and the coordinates of the optical image through direct linear transformation;
and 5, data fusion: and traversing the coordinates of each point in the preprocessed three-dimensional point cloud through the mapping relation obtained in the step 4, finding a corresponding pixel in the optical image by using the mapping relation, and assigning the texture information of the pixel to the corresponding point cloud.
2. The method as claimed in claim 1, wherein the affine motion model in step 2 is obtained by simulating a lidar as a virtual sensor and setting the virtual sensor at the longitude and latitude sampling points of a hemisphere, so that the projection of the lidar sensor under the affine motion model is expressed mathematically:
$$A=\lambda\begin{bmatrix}\cos\psi & -\sin\psi\\ \sin\psi & \cos\psi\end{bmatrix}\begin{bmatrix}t & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}\cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{bmatrix},\qquad t=\frac{1}{\cos\theta}\tag{1}$$
where λ > 0 is the zoom parameter of the virtual sensor;
θ ∈ [0°, 90°) is the tilt angle, i.e. the angle between the virtual sensor at the hemispherical sampling point and the axis of the hemispherical surface;
φ ∈ [0, 2π) is the angle between the projection of the virtual sensor onto the plane perpendicular to the axis of the hemispherical surface and the positive Y axis of that plane;
ψ ∈ [0, 2π) is the rotation angle of the virtual sensor about its optical axis;
then, the rotation transformation matrix is expressed as follows:
$$R_{3\times 3}=R(\varphi,\theta,\psi)\tag{2}$$
i.e. the 3 × 3 rotation matrix determined by the three angles φ, θ and ψ above;
then, the translation matrix is expressed as follows:
the translation of the observation viewpoint along the X, Y and Z axes is represented by $T_{3\times 1}$:
$$T_{3\times 1}=\begin{bmatrix}T_x\\ T_y\\ T_z\end{bmatrix}\tag{3}$$
in the above formula, $x_0$, $y_0$ and $z_0$ are the coordinates of the initial viewpoint, which is taken as the origin of the coordinate system, so $x_0=y_0=z_0=0$; $T_x$, $T_y$ and $T_z$ are the translations along the X, Y and Z axes, respectively;
then, the transformation matrix M is a 4 × 4 matrix, which is a new matrix composed of the rotation matrix (2) and the translation matrix (3), and is obtained by multiplying the two matrices:
$$M=\begin{bmatrix}E & T_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}\begin{bmatrix}R_{3\times 3} & 0_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}=\begin{bmatrix}R_{3\times 3} & T_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}\tag{4}$$
where E is the 3 × 3 identity matrix, i.e.
$$E=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
After projection transformation, the three-dimensional point cloud coordinates preprocessed by the laser radar are transformed and a two-dimensional projection image is generated.
3. The method as claimed in claim 2, wherein the sampling according to the affine motion model is performed by virtually arranging the lidar on a quantization unit of longitude and latitude of a quantized affine motion model hemisphere to obtain a set of two-dimensional projection images and generate a corresponding set of depth images.
4. The method as claimed in claim 3, wherein the step 3 is repeated for a set of depth images, a corresponding relationship between the set of depth images and the corresponding points of the optical image is obtained, a correlation coefficient sum of the corner point features of each depth image and the optical image is calculated, and the two-dimensional point cloud depth image with the largest correlation coefficient sum is selected as the depth image.
5. The method as claimed in claim 1, wherein the corner feature extraction in step 3 is performed by using a corner extraction method with gradient change and neighborhood smoothness as parameters.
6. The method as claimed in claim 1, wherein the principle of matching the same-name points in step 3 uses the maximum correlation coefficient as the matching criterion.
7. The method for automatically registering the point cloud of the lidar and the optical image according to claim 1, wherein, in the depth image generation process in step 2, corresponding pixels in the projection image are given different color values according to the distance from the three-dimensional point cloud to the lidar, so that the depth image is generated.
8. The method for automatically registering the point cloud and the optical image of the lidar as claimed in claim 1, wherein the specific solving process of the mapping relationship in the step 4 is as follows:
establishing a relation between a space coordinate system of the three-dimensional point cloud and the coordinates of the optical image by the formulas (1), (2), (3) and (4):
$$u_A=-\frac{l_1X_A+l_2Y_A+l_3Z_A+l_4}{l_9X_A+l_{10}Y_A+l_{11}Z_A+l_{12}},\qquad v_A=-\frac{l_5X_A+l_6Y_A+l_7Z_A+l_8}{l_9X_A+l_{10}Y_A+l_{11}Z_A+l_{12}}\tag{5}$$
wherein $(u_A, v_A)$ are the coordinates of the three-dimensional point-cloud space point A in the optical-image coordinate system, and $(X_A, Y_A, Z_A)$ are the coordinates of the space point A in the radar three-dimensional point-cloud space coordinate system;
setting $l_{12}=1$ in formula (5), the direct linear transformation equation (6) can be derived:
$$u_A=-\frac{l_1X_A+l_2Y_A+l_3Z_A+l_4}{l_9X_A+l_{10}Y_A+l_{11}Z_A+1},\qquad v_A=-\frac{l_5X_A+l_6Y_A+l_7Z_A+l_8}{l_9X_A+l_{10}Y_A+l_{11}Z_A+1}\tag{6}$$
taking $l_1$ to $l_{11}$ in formula (6) as unknowns, equation (7) is arranged:
$$\begin{cases}X_Al_1+Y_Al_2+Z_Al_3+l_4+u_AX_Al_9+u_AY_Al_{10}+u_AZ_Al_{11}=-u_A\\[2pt]X_Al_5+Y_Al_6+Z_Al_7+l_8+v_AX_Al_9+v_AY_Al_{10}+v_AZ_Al_{11}=-v_A\end{cases}\tag{7}$$
the least-squares solution is then obtained from equation (7):
$$L=(B^{\mathrm T}B)^{-1}B^{\mathrm T}W\tag{8}$$
wherein: $L=(l_1,l_2,l_3,l_4,l_5,l_6,l_7,l_8,l_9,l_{10},l_{11})^{\mathrm T}$, $W=(-u_1,-v_1,-u_2,-v_2,\ldots,-u_n,-v_n)^{\mathrm T}$,
$$B=\begin{bmatrix}X_1&Y_1&Z_1&1&0&0&0&0&u_1X_1&u_1Y_1&u_1Z_1\\0&0&0&0&X_1&Y_1&Z_1&1&v_1X_1&v_1Y_1&v_1Z_1\\\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\X_n&Y_n&Z_n&1&0&0&0&0&u_nX_n&u_nY_n&u_nZ_n\\0&0&0&0&X_n&Y_n&Z_n&1&v_nX_n&v_nY_n&v_nZ_n\end{bmatrix}$$
The direct linear transformation parameters $l_1$ to $l_{11}$ can then be solved from the matching points known from step 3 and formula (8), yielding the mapping relation between the spatial coordinates of the preprocessed three-dimensional point cloud and the coordinates of the optical image.
CN202011572898.1A 2020-12-24 2020-12-24 Simple automatic registration method for laser radar point cloud and optical image Active CN112581505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011572898.1A CN112581505B (en) 2020-12-24 2020-12-24 Simple automatic registration method for laser radar point cloud and optical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011572898.1A CN112581505B (en) 2020-12-24 2020-12-24 Simple automatic registration method for laser radar point cloud and optical image

Publications (2)

Publication Number Publication Date
CN112581505A CN112581505A (en) 2021-03-30
CN112581505B true CN112581505B (en) 2022-06-03

Family

ID=75139960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011572898.1A Active CN112581505B (en) 2020-12-24 2020-12-24 Simple automatic registration method for laser radar point cloud and optical image

Country Status (1)

Country Link
CN (1) CN112581505B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113176557B (en) * 2021-04-29 2023-03-24 中国科学院自动化研究所 Virtual laser radar online simulation method based on projection
CN113643208B (en) * 2021-08-24 2024-05-31 凌云光技术股份有限公司 Affine sampling method and affine sampling device for depth image

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102997871A (en) * 2012-11-23 2013-03-27 南京大学 Method for inverting effective leaf area index by utilizing geometric projection and laser radar
CN104794743A (en) * 2015-04-27 2015-07-22 武汉海达数云技术有限公司 Color point cloud producing method of vehicle-mounted laser mobile measurement system
CN106384106A (en) * 2016-10-24 2017-02-08 杭州非白三维科技有限公司 Anti-fraud face recognition system based on 3D scanning
CN107316325B (en) * 2017-06-07 2020-09-22 华南理工大学 Airborne laser point cloud and image registration fusion method based on image registration
US10528851B2 (en) * 2017-11-27 2020-01-07 TuSimple System and method for drivable road surface representation generation using multimodal sensor data
CN108195736B (en) * 2017-12-19 2020-06-16 电子科技大学 Method for extracting vegetation canopy clearance rate through three-dimensional laser point cloud
CN109410256B (en) * 2018-10-29 2021-10-15 北京建筑大学 Automatic high-precision point cloud and image registration method based on mutual information
CN109751965B (en) * 2019-01-04 2020-08-14 北京航天控制仪器研究所 Precise spherical coupling part matching and gap measuring method based on three-dimensional point cloud
CN111028340B (en) * 2019-12-10 2024-04-05 苏州大学 Three-dimensional reconstruction method, device, equipment and system in precise assembly

Also Published As

Publication number Publication date
CN112581505A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN107167788B (en) Method and system for obtaining laser radar calibration parameters and laser radar calibration
CN110842940A (en) Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN109993793B (en) Visual positioning method and device
CN110223379A (en) Three-dimensional point cloud method for reconstructing based on laser radar
CN108389233B (en) Laser scanner and camera calibration method based on boundary constraint and mean value approximation
CN112581505B (en) Simple automatic registration method for laser radar point cloud and optical image
CN110532865B (en) Spacecraft structure identification method based on fusion of visible light and laser
Liang et al. Automatic registration of terrestrial laser scanning data using precisely located artificial planar targets
CN114998545A (en) Three-dimensional modeling shadow recognition system based on deep learning
Crombez et al. 3D point cloud model colorization by dense registration of digital images
CN114140539A (en) Method and device for acquiring position of indoor object
CN113345084B (en) Three-dimensional modeling system and three-dimensional modeling method
CN114137564A (en) Automatic indoor object identification and positioning method and device
CN114298151A (en) 3D target detection method based on point cloud data and image data fusion
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN109978982B (en) Point cloud rapid coloring method based on oblique image
CN113313741B (en) Point cloud self-registration method based on calibration sphere
Pénard et al. 3D building facade reconstruction under mesh form from multiple wide angle views
Wu et al. Derivation of Geometrically and Semantically Annotated UAV Datasets at Large Scales from 3D City Models
Hirzinger et al. Photo-realistic 3D modelling-From robotics perception to-wards cultural heritage
Ahmad Yusri et al. Preservation of cultural heritage: a comparison study of 3D modelling between laser scanning, depth image, and photogrammetry methods
Guo et al. Research on Floating Object Ranging and Positioning Based on UAV Binocular System
CN111010558B (en) Stumpage depth map generation method based on short video image
Wen et al. Mobile laser scanning systems for GPS/GNSS-denied environment mapping
Kovynev et al. Review of photogrammetry techniques for 3D scanning tasks of buildings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant