CN111457930B - High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle

High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle

Info

Publication number
CN111457930B
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
points
point
image
Prior art date
Legal status
Active
Application number
CN202010252893.4A
Other languages
Chinese (zh)
Other versions
CN111457930A (en)
Inventor
罗跃军
季铮
廖逸凡
周洁
方永火
Current Assignee
Wuhan University WHU
Heading Data Intelligence Co Ltd
Original Assignee
Wuhan University WHU
Heading Data Intelligence Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU, Heading Data Intelligence Co Ltd filed Critical Wuhan University WHU
Priority to CN202010252893.4A
Publication of CN111457930A
Application granted
Publication of CN111457930B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of spatial data processing and positioning, and discloses a high-precision mapping positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle. Control surface elements are obtained; feature points are extracted from unmanned aerial vehicle images that have undergone initial orientation, feature matching is performed between the images to obtain homonymous feature points, and a robust estimation method is then used to reject gross errors among the mismatched points. A one-to-many or one-to-one mapping relation is established between the feature surface elements and the feature points on the unmanned aerial vehicle images. Through adjustment iterations of the beam method, the exterior parameters of the unmanned aerial vehicle images and the camera are further refined to obtain high-precision orientation elements and parameters. The method is fast and accurate; checked against field control points, it improves absolute positioning accuracy to within 5 cm in plane and 10 cm in elevation, providing a low-cost technical solution for jointly acquiring high-precision maps with an unmanned aerial vehicle and Lidar.

Description

High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of spatial data processing and positioning, and particularly relates to a high-precision mapping positioning method by combining a vehicle-mounted Lidar and an unmanned aerial vehicle.
Background
At present, with the arrival of the 5G era, the technologies related to unmanned driving are maturing day by day, and automatic driving is gradually taking its place in national economic and social development. The high-precision map, as a scarce and essential resource in the field of unmanned driving, plays a core role in the whole field: it helps the automobile perceive complex road information such as gradient, curvature and heading in advance and, combined with intelligent path planning, make correct decisions. Therefore, the production of high-precision maps, especially high-precision road maps, is the most difficult and important link of unmanned driving technology.
A necessary condition for producing a high-precision map is collecting and integrating the relevant geographic data of the ground area it covers. The data collection mode commonly used in China at present is mainly to scan the area to be measured with a vehicle-mounted lidar and finally generate a point cloud, on the basis of which the high-precision map is made. In the following description, the point cloud data obtained by a data collection vehicle scanning mainly roads, after subsequent processing, is called vehicle-mounted lidar point cloud data.
In the surveying and mapping field, especially photogrammetry, the unmanned aerial vehicle is an emerging carrier; its flexibility, wide operating range and low cost make unmanned aerial vehicle surveying and mapping increasingly popular with practitioners, and its applications are increasingly broad.
Meanwhile, the characteristics of unmanned aerial vehicle mapping data give it good complementarity with vehicle-mounted lidar point cloud data. Supplementing vehicle-mounted lidar point cloud data with unmanned aerial vehicle mapping data can therefore effectively improve data acquisition efficiency and accuracy, and in turn simplify the whole high-precision map production process.
Because the unmanned aerial vehicle and the measuring vehicle are two different measuring carriers and unavoidable measurement errors exist, their data cannot correspond strictly, which fails the data requirements of high-precision map production. The unmanned aerial vehicle data and the vehicle-mounted lidar point cloud data therefore need to be processed and transformed so that they correspond well within the required accuracy range; this process is called registration.
The main objects of vehicle-mounted lidar point cloud data are roads and pavements, which gives the data the following main characteristics: first, the data volume is large and distributed in strips, with many holes in non-road areas; second, a large number of points are coplanar, and the fitted planes have strong directionality; third, because of other vehicles on the road and the long acquisition time, a set of discrete outlier points may appear that must be removed as gross errors.
Because the unmanned aerial vehicle mainly acquires image data with position information while the acquisition vehicle acquires laser point cloud data, registration between images and point clouds has long been an open problem in the industry.
Existing approaches to the registration problem fall into two main categories: first, registration of images and point clouds based on semantic information; second, registration of point clouds and images based on registration between point clouds.
The main idea of semantic-information-based registration is to extract semantic information from the point cloud and the image separately, in most cases a series of feature points on each, to associate the three-dimensional feature points on the point cloud with the two-dimensional feature points on the image, and, using the three-dimensional feature points as control, to determine the transformation parameters of the image so that the two are brought into the same coordinate frame.
Based on this idea there are two main technical methods. The first registers using lidar intensity information: feature points are extracted from the intensity image generated by the lidar using image processing and matched with the feature points of the unmanned aerial vehicle image to obtain correspondences. The second extracts semantic information directly from the lidar three-dimensional point cloud data and matches it with the feature points on the image to complete registration. The advantages of this approach are high accuracy, strong reliability and a small amount of computation during registration; the disadvantages are that semantic information must be obtained from the point cloud and the image simultaneously, which usually requires semantic segmentation of both, raising the difficulty of data processing by a level, and at present it works well only on data with typical ground objects. For the high-precision maps required by unmanned driving, the main object is the road surface, whose features are often not obvious and hard to segment effectively, so this approach performs unsatisfactorily.
The main idea of registration between point clouds is to first run a beam-method adjustment on the unmanned aerial vehicle images to generate a sparse point cloud, find the matching points of that sparse point cloud in the lidar point cloud to form correspondences, determine the transformation parameters between the point clouds so that the sum of distances between the two clouds is minimized, and recalculate the exterior orientation elements of each image.
This approach has high accuracy and good results, but it can handle only rigid transformations, not non-rigid ones, and it needs a good initial value, otherwise convergence is difficult. Non-rigid registration techniques mainly take the lidar point cloud as a control condition after rigid registration and perform a second aerial triangulation adjustment on the image data, adjusting the exterior orientation elements of each image to complete the non-rigid registration.
An important step of this idea is determining correspondences between the point clouds, which requires the two point cloud data sets to cover approximately the same objects with densities of roughly the same order of magnitude. However, the density of a sparse point cloud obtained by the image beam method is often far lower than that of the vehicle-mounted lidar point cloud, and because of the different viewing angles of the two acquisition modes, unmanned aerial vehicle image data usually captures only top surfaces with little side data and an evenly distributed cloud overall, while vehicle-mounted lidar point cloud data captures more side data, fewer top surfaces, and a large number of holes in non-road areas. Corresponding points found directly between the two point clouds therefore often contain many mismatches, which seriously affects registration accuracy.
Through the above analysis, the problems and defects of the prior art are as follows: in the scheme that registers images and point clouds based on semantic information, the geometric accuracy of the lidar intensity image is often poor, so the final registration accuracy is often not high. Semantic information of the point cloud and the image must be obtained simultaneously, usually requiring semantic segmentation of both, which increases the difficulty of data processing. For the required high-precision maps the features are often not obvious and hard to segment effectively, so the effect of semantic segmentation on images and point clouds is not ideal.
In the existing scheme that registers point clouds and images based on registration between point clouds, only rigid transformations can be processed, non-rigid transformations cannot, and a good initial value is needed, otherwise convergence may be difficult. In determining correspondences between the point clouds, the density of the sparse point cloud obtained by the image beam method is often far lower than that of the vehicle-mounted lidar point cloud; because of the viewing-angle difference of the two acquisition modes, the unmanned aerial vehicle image data captures mainly top surfaces with little side data and an evenly distributed cloud, while the vehicle-mounted lidar point cloud captures more side data, fewer top surfaces, and many holes in non-road areas. Corresponding points found directly between the two clouds therefore contain many mismatches, seriously affecting registration accuracy.
In the prior art, the point cloud data and image data being registered come from similar carriers with the same viewing angle, and the processing effect on data with larger viewing-angle differences is not ideal.
The difficulty in solving the above problems and defects is that the method must have good universality, simplicity, economy and practicability. For the semantic-information-based registration of images and point clouds, the biggest defects are complex computation, long processing time, and poor intensity-image accuracy in the semantic segmentation step. For the registration of point clouds and images based on registration between point clouds, the defects to be solved are severe mismatch of point cloud densities, large viewing-angle differences, repeated texture features, and the like. In terms of difficulty, the defects of the latter scheme are relatively easier to solve.
The significance of solving the above problems and defects is as follows: the resulting registration scheme has better universality, simplicity, economy and practicability.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a high-precision mapping positioning method combining a vehicle-mounted Lidar and an unmanned aerial vehicle, solving the registration of vehicle-mounted lidar point cloud data with unmanned aerial vehicle data.
The invention is realized in such a way that a high-precision mapping positioning method combining a vehicle-mounted Lidar and an unmanned aerial vehicle comprises the following steps:
step one, extracting a representative feature point set from vehicle-mounted lidar point cloud data, and fitting a small surface element on a road surface by using each representative feature point set meeting the requirements;
step two, extracting feature points of each unmanned aerial vehicle image, performing feature matching between the images to obtain homonymous feature points between the unmanned aerial vehicle images, and performing gross error rejection on the mismatched feature points using the RANSAC method;
step three, after the vehicle-mounted lidar point cloud data completes small surface element extraction work and unmanned aerial vehicle image feature point extraction and matching, a one-to-many or one-to-one mapping relation is established between the small surface elements and feature points on the unmanned aerial vehicle image;
step four, eliminating the rigid deformation between the vehicle-mounted laser lidar point cloud and the whole survey area of the unmanned aerial vehicle imagery;
and step five, after rigid registration is completed, adjusting the deformation and dislocation of the interior of the unmanned aerial vehicle image, so that the deformation and dislocation of the interior of the unmanned aerial vehicle image are matched with the vehicle-mounted laser lidar point cloud.
Further, in the step one, the representative feature point set includes:
(1) a certain point P exists in the point set, so that the Euclidean distance from any point Q in the point set to P is smaller than a specific threshold value;
(2) after the whole point cloud is subjected to semantic segmentation, all points in the point set belong to the same semantic set;
(3) there is a plane x such that the distances from all points in the point set to the plane have a mean of 0 and a variance σ², where σ is less than a certain threshold.
Further, in the first step, the method for extracting the specific point sets adopts a seed-feature-point method combined with region growing, and specifically comprises:
1) comprehensively utilizing the geometric features of the point cloud and the optical features of the intensity image to extract a number of seed feature points;
2) grouping all points within distance d of a seed feature point into a point set, completing point set extraction;
3) after extracting the point sets, performing plane fitting with the RANSAC method to obtain a series of initial small surface elements; if the seed points of several small surface elements are closer than a threshold and the included angle of their normal vectors is below a threshold, merging those small surface elements, and iterating several times until a stable state is finally reached; a minimal sketch of the fitting and merge test is given below.
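For illustration, a minimal sketch of the RANSAC plane fitting and the merge test follows; Python with numpy is assumed, and the function names and threshold values are illustrative rather than prescribed by the invention.

```python
import numpy as np

def fit_plane_ransac(points, dist_thresh=0.05, iters=200, rng=None):
    """Fit a plane n.x + d = 0 to a point set with RANSAC; return (n, d, inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (nearly collinear) sample
            continue
        n = n / norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

def mergeable(seed_a, seed_b, n_a, n_b, seed_dist=2.0, max_angle_deg=5.0):
    """Merge condition from step 3): close seed points and nearly collinear normals."""
    close = np.linalg.norm(seed_a - seed_b) < seed_dist
    angle = np.degrees(np.arccos(np.clip(abs(float(n_a @ n_b)), 0.0, 1.0)))
    return close and angle < max_angle_deg
```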
Further, in the third step, the method for constructing the mapping relationship includes:
i) according to the initial position information of each image and its frame size, back-projecting the edge key points of the small surface elements extracted from the vehicle-mounted lidar point cloud data onto each image by the collinearity equations;
ii) connecting the key points back-projected on the unmanned aerial vehicle image in their connection order on the laser point cloud, recovering the projection of the small surface element on the unmanned aerial vehicle image;
iii) judging whether each feature point on each image falls into the projection of some small surface element, and if so, establishing a temporary correspondence between the small surface element and those feature points;
for all images, if all homonymous points of a feature point fall into the projections of the same small surface element on different images, establishing the correspondence between that homonymous feature point pair and the small surface element;
establishing temporary feature point pairs for the projections of the small surface elements on all images, retaining the homonymous feature point pairs that can establish correspondences with a small surface element, rejecting the other feature points, and establishing a one-to-many mapping relation between the small surface elements and the homonymous feature point pairs on the images;
the collinearity equation is:
$$x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)},\qquad y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}$$
the equation expresses a mathematical relation that three points of an object point, an image point and a projection center (generally, a lens center for an image) are positioned on a straight line, is one of the most basic formulas in photogrammetry, and has the characteristics of simple model, high accuracy, small calculation amount, suitability for large-scale calculation and the like.
Further, in the fourth step, the method for rigid registration and gross error rejection includes:
performing forward intersection, gross error rejection, Euler angle recovery and exterior parameter updating, and iterating until the change in the Euler angles is smaller than a threshold;
the method of front meeting comprises: solving the three-dimensional space coordinates of each object point corresponding to each characteristic point pair establishing a mapping relation with the small surface element in the same coordinate system as the point cloud data of the vehicle-mounted lidar by using the position and the posture of each image and using a collinear equation;
the gross error rejection method comprises the following steps: a) and (3) threshold filtering, namely removing points with the distance between the small elements greater than theta, wherein theta satisfies the following conditions: theta 3muavWherein m isuavCalculating by using the GPS precision of the unmanned aerial vehicle image;
b) further performing gross error rejection by using RANSAC;
the Euler angle recovery method comprises the following steps: the geometric center of utilizing whole unmanned aerial vehicle to survey the district replaces the focus of empty three points, and the convergence condition that satisfies is that the sum of the European distance of empty three points to little surface element is minimum, and the mathematical expression is:
Figure GDA0003282085230000071
Figure GDA0003282085230000072
wherein, distance is to find the empty three points after rotation transformationiDistance to the corresponding small bin; when setting an initial value, setting the initial value of the Euler angle to 0 when starting iteration; the Euclidean distance is used as a cost function to guarantee the balance of the calculation speed and accuracy, the RANSAC method is adopted to guarantee that rigid registration has strong robustness, high-precision parameters can be estimated from a large number of coarse difference points, and the convergence is good.
The iterative method comprises: updating the exterior parameters once for each solved set of Euler angles, re-establishing the mapping relation after the update, and performing rigid registration again, until the change in the Euler angles between two successive iterations is smaller than a threshold.
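A compact sketch of how the Euler angle recovery inside this iteration might be set up with scipy follows; it minimizes squared point-to-plane distances as a least-squares stand-in for the summed Euclidean distances above, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(euler, pts, normals, ds, center):
    """Point-to-plane distances of the empty three points after rotation about center."""
    R = Rotation.from_euler('xyz', euler).as_matrix()
    rotated = (pts - center) @ R.T + center
    return np.einsum('ij,ij->i', rotated, normals) + ds   # signed plane distances

# Euler angles start at 0, as the text suggests for the first iteration:
# sol = least_squares(residuals, np.zeros(3), args=(pts, normals, ds, center))
```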
Further, in the fifth step, the position and attitude of each unmanned aerial vehicle image are adjusted using the vehicle-mounted laser lidar data, and the camera parameters of the unmanned aerial vehicle camera are calibrated at the same time.
Further, the fifth step specifically includes:
(I) adjustment of the area network by the beam method: each light beam is taken as the basic adjustment unit, an error equation is listed from the collinearity condition equation, adjustment is performed uniformly over the whole area, and the exterior orientation elements of each image are solved. The feature point coordinates $(x_i, y_i)$ are the adjustment observations, and homonymous rays are required to intersect at a common ground point, i.e. the ground point coordinates $(X_t, Y_t, Z_t)$ are equal across the rays. For each feature point of each pair of homonymous feature points:

$$\hat x_i = -f\,\frac{a_1(X_t - X_{S_i}) + b_1(Y_t - Y_{S_i}) + c_1(Z_t - Z_{S_i})}{a_3(X_t - X_{S_i}) + b_3(Y_t - Y_{S_i}) + c_3(Z_t - Z_{S_i})},\qquad \hat y_i = -f\,\frac{a_2(X_t - X_{S_i}) + b_2(Y_t - Y_{S_i}) + c_2(Z_t - Z_{S_i})}{a_3(X_t - X_{S_i}) + b_3(Y_t - Y_{S_i}) + c_3(Z_t - Z_{S_i})}$$

$$\Delta x_i = x_i - \hat x_i,\qquad \Delta y_i = y_i - \hat y_i$$

$$\mathrm{cost}_{i,1} = \Delta x_i^2 + \Delta y_i^2$$

$$\mathrm{cost}_1 = \sum_i \mathrm{cost}_{i,1}$$

where the whole survey area has n + 1 images;
(II) camera parameter calibration: when the beam-method area network adjustment is performed, the focal length and imaging center coordinates of the camera are brought into the overall adjustment system; lens distortion of the image is described by the parameters $k_0, k_1, k_3, p_0, p_1$:

$$r^2 = x^2 + y^2$$

$$\Delta x_d = x\,(k_0 r^2 + k_1 r^4 + k_3 r^6) + p_0\,(r^2 + 2x^2) + 2 p_1 x y$$

$$\Delta y_d = y\,(k_0 r^2 + k_1 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2 p_0 x y$$

$$x' = x + \Delta x_d,\qquad y' = y + \Delta y_d$$

Combining the camera calibration parameters, the predicted image coordinates become

$$\hat x_i = x_0 + \hat x_i^{\mathrm{col}} + \Delta x_d,\qquad \hat y_i = y_0 + \hat y_i^{\mathrm{col}} + \Delta y_d$$

where $(\hat x_i^{\mathrm{col}}, \hat y_i^{\mathrm{col}})$ are the collinearity terms above and $(x_0, y_0)$ the principal point; finally, the optimization solution objective becomes

$$\min_{\text{exterior elements},\ f,\ x_0,\ y_0,\ k_0,\ k_1,\ k_3,\ p_0,\ p_1}\ \mathrm{cost}_1$$
(III) control with lidar: in the error-free case, all homonymous feature point pairs lie on the finite plane of their corresponding small surface element; the interior and exterior parameters of the unmanned aerial vehicle images are adjusted in the direction that best fits the homonymous feature point pairs to the small surface elements overall. A direction-weighted distance is used as the cost function; for each feature point pair:

$$\mathrm{cost}_{j,2} = I_0\,(\Delta d_{x_j}^2 + \Delta d_{y_j}^2 + 3\,\Delta d_{z_j}^2) + 2000\,I_1\,(\Delta d_{x_j}^2 + \Delta d_{y_j}^2 + 3\,\Delta d_{z_j}^2)$$

where $\Delta d_{x_j}, \Delta d_{y_j}, \Delta d_{z_j}$ are the distances from the empty three point to the finite plane of its corresponding small surface element along the X, Y and Z directions, and $I_0, I_1$ are complementary indicator functions: $I_0$ selects points within the error band determined by $\mu$ and $I_1 = 1 - I_0$ selects the rest, $\mu$ being the median error of the empty three points after rigid registration. For all images of the survey area, the cost function is

$$\mathrm{cost}_2 = \sum_j \mathrm{cost}_{j,2}$$
(IV) overall adjustment: regularization and weighting are performed. In regularization, $\mathrm{cost}_1$ is a distance in image space while $\mathrm{cost}_2$ is a distance in object space, so $\mathrm{cost}_2$ is converted into image space as

$$\mathrm{cost}_2' = \mathrm{cost}_2 / gsd$$

where $gsd$ is the ground sampling distance, estimated from the flying height $H$, the focal length $f$ and the pixel size $a$ as

$$gsd = \frac{H \cdot a}{f}$$

Different weights are given to $\mathrm{cost}_1$ and $\mathrm{cost}_2$, and the overall cost function is

$$\mathrm{cost} = P_1\,\mathrm{cost}_1 + (P_2\,\mathrm{cost}_2)/gsd$$

where $P_1, P_2$ are, respectively, the reciprocal of the internal accuracy of the unmanned aerial vehicle imagery after the last registration and the reciprocal of the lidar point cloud accuracy; the whole non-rigid registration optimization problem is

$$\min_{\text{exterior elements, camera parameters}}\ \mathrm{cost}$$

(a code sketch of assembling this combined cost is given below).
The exterior parameters are updated once per solution; after updating, the mapping relation is re-established and non-rigid registration performed again, iterating until the change in the image exterior parameters between two successive iterations is smaller than a threshold.
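For illustration, the combined, regularized cost of step (IV) might be assembled as in the following sketch; numpy is assumed, the residual arrays are presumed precomputed, and the names are illustrative.

```python
import numpy as np

def total_cost(reproj_residuals, facet_costs, P1, P2, gsd):
    """cost = P1*cost1 + (P2*cost2)/gsd, with cost2 divided by gsd to enter image space."""
    cost1 = np.sum(reproj_residuals ** 2)   # image-space reprojection term
    cost2 = np.sum(facet_costs)             # object-space facet-fitting term
    return P1 * cost1 + (P2 * cost2) / gsd
```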
Another object of the present invention is to provide an unmanned aerial vehicle implementing the high-precision mapping positioning method combining the vehicle-mounted Lidar and the unmanned aerial vehicle.
Another object of the present invention is to provide a program storage medium for receiving user input, the stored computer program causing an electronic device to execute the high-precision mapping positioning method combining the vehicle-mounted Lidar and the unmanned aerial vehicle.
Another object of the present invention is to provide a computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface to implement the high-precision mapping positioning method combining the vehicle-mounted Lidar and the unmanned aerial vehicle.
In combination with all the above technical schemes, the advantages and positive effects of the invention are as follows:
The method is applied mainly to roads and pavements, from whose point cloud data a large number of planes can easily be extracted. Aimed at this characteristic of road data and at real data quality, the invention takes the vehicle-mounted lidar point cloud data and the unmanned aerial vehicle image feature point data as the main processing objects, assisted by the position and attitude information of the unmanned aerial vehicle images, and adopts a weakly constrained registration method based on small surface element semantic features to achieve the expected purpose.
The invention designs a registration mode suited to the characteristics of unmanned aerial vehicle images and vehicle-mounted lidar point cloud data; in registering the two, the method is fast, accurate, and capable of non-rigid registration.
Compared with the prior art, the advantages of the invention are: a feature set is extracted from the vehicle-mounted point cloud data to obtain control surface elements; feature points are extracted from the initially oriented unmanned aerial vehicle images and matched between images to obtain homonymous feature points, and mismatches are rejected as gross errors by a robust estimation method; a one-to-many or one-to-one mapping relation is established between the control surface elements and the feature points on the unmanned aerial vehicle images; through beam-method adjustment iterations, the exterior parameters of the unmanned aerial vehicle images and the camera are further refined to obtain high-precision orientation elements and parameters, further reducing the rigid deformation between the vehicle-mounted laser point cloud and the whole survey area of the unmanned aerial vehicle imagery. In the registration of unmanned aerial vehicle images and vehicle-mounted point clouds the method is fast and accurate; checked against field control points, absolute positioning accuracy is improved to within 5 cm in plane and 10 cm in elevation, providing a low-cost technical solution for jointly acquiring high-precision maps with an unmanned aerial vehicle and Lidar.
The effects and advantages of the invention, obtained by comparing experimental data with the prior art, further include:
Compared with the registration scheme for point clouds and images based on registration between point clouds (scheme 1, see fig. 10) and a pure photogrammetry scheme (scheme 2, see fig. 11), the invention was tested on two groups of data; the experimental areas are shown in figs. 10-11.
The external accuracy of the three-dimensional model after registration is shown in the following table:
(accuracy comparison table provided as an image in the original publication)
the final three-dimensional model of the invention is shown in fig. 12-13.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below; the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a high-precision mapping and positioning method using a vehicle-mounted Lidar and an unmanned aerial vehicle in combination according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a high-precision mapping and positioning method using a vehicle-mounted Lidar and an unmanned aerial vehicle in combination according to an embodiment of the invention.
Fig. 3 is a diagram of an exemplary small surface element provided by an embodiment of the present invention.
Fig. 4 is a flowchart of small surface element extraction according to an embodiment of the present invention.
Fig. 5 is a flowchart of image feature point extraction and matching according to an embodiment of the present invention.
Fig. 6 is a diagram of feature point pairs on different images corresponding to a small surface element according to an embodiment of the present invention.
Fig. 7 is a flowchart of mapping relationship establishment according to an embodiment of the present invention.
Fig. 8 is a flow chart of rigid registration provided by an embodiment of the present invention.
Fig. 9 is a schematic diagram of a normal equation coefficient array structure provided in the embodiment of the present invention.
Fig. 10 is an effect diagram of a registration scheme 1 for performing point cloud and image registration based on point cloud registration in the prior art according to an embodiment of the present invention.
Fig. 11 is an effect diagram of a pure photogrammetry scheme 2 in a registration scheme for point clouds and images based on registration between point clouds in the prior art provided by an embodiment of the present invention.
FIG. 12 is a three-dimensional model map of FIG. 10 after registration according to an embodiment of the present invention.
FIG. 13 is a three-dimensional model map of FIG. 11 after registration according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the prior art, the vehicle-mounted lidar data and the unmanned aerial vehicle image data cannot be brought into the same geographic coordinate frame, and the registration between the image and the point cloud is poor.
Aiming at the problems in the prior art, the invention provides a high-precision mapping positioning method by combining a vehicle-mounted Lidar and an unmanned aerial vehicle, and the invention is described in detail below by combining the attached drawings.
As shown in fig. 1, the embodiment of the invention provides a high-precision mapping and positioning method by combining a vehicle-mounted Lidar and an unmanned aerial vehicle. The method comprises the following steps:
s101, small voxel extraction: and extracting a representative characteristic point set from the vehicle-mounted lidar point cloud data, and fitting a small surface element on the road surface by using each representative characteristic point set meeting the requirements.
S102, unmanned aerial vehicle image feature point extraction and matching: extracting feature points of each unmanned aerial vehicle image, performing feature matching between the images to obtain homonymous feature points between the unmanned aerial vehicle images, and performing gross error elimination on the feature points which are mismatched by using a RANSAC method.
S103, constructing a mapping relation: after the vehicle-mounted lidar point cloud data completes small surface element extraction work and unmanned aerial vehicle image feature point extraction and matching, a one-to-many or one-to-one mapping relation is established between the small surface elements and the feature points on the unmanned aerial vehicle image.
S104, rigid registration and gross error elimination: and rigid deformation of the vehicle-mounted laser lidar point cloud and the whole measurement area of the unmanned aerial vehicle image is eliminated.
S105, non-rigid registration and camera parameter calibration: after rigid registration is completed, deformation and dislocation inside the unmanned aerial vehicle image are adjusted, and the deformation and dislocation inside the unmanned aerial vehicle image are matched with the vehicle-mounted laser lidar point cloud.
Fig. 2 is a schematic diagram of a high-precision mapping and positioning method using a vehicle-mounted Lidar and an unmanned aerial vehicle in combination according to an embodiment of the invention.
In step S101, the main purpose is to extract a batch of representative feature point sets from the vehicle-mounted lidar point cloud data, where each point set has the following characteristics: a point P exists in the set such that the Euclidean distance from any point Q in the set to P is smaller than a specific threshold; after semantic segmentation of the whole point cloud, all points in the set belong to the same semantic class; and a plane x exists such that the distances from all points in the set to the plane have a mean of 0 and a variance σ², with σ smaller than a given threshold. In summary, a plane can be fitted to each point set that satisfies these requirements. If a point set happens to be a subset of the set of all points falling on a road surface, the bounded plane fitted to it can be regarded as a local patch of the actual road; in the following, such a bounded plane is called a small surface element on the road surface (fig. 3), and any point in the set of all points falling on the road surface is called a road surface point.
The specific point sets are extracted by a seed-feature-point method combined with region growing. The main idea is that if a point is a road surface point on some road, the probability $p(A\mid B)$ of a non-road point appearing within its neighborhood of distance d is a function of d and the road surface width D (the expression is given as an image in the original); when D > d, $p(A\mid B) = 0$. Hence, when d is smaller than a certain threshold, if a point is a road surface point, all points within distance d of it can be regarded as road surface points. When extracting a point set, therefore, the geometric features of the point cloud and the optical features of the intensity image are first used together to extract a number of seed feature points, and all points within distance d of a seed feature point are then grouped into one point set, completing the point set extraction. After the point sets are extracted, plane fitting is performed with the RANSAC method, yielding a series of initial small surface elements. If several initial small surface elements have seed points close to each other (distance below a threshold) and approximately collinear normal vectors (included angle below a threshold), the small surface elements can be merged; iterating this process several times finally reaches a stable state. The main flowchart is shown in fig. 4; a minimal region-growing sketch follows.
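A minimal region-growing sketch under these assumptions is given below; scipy's KD-tree is assumed, and seed selection from the geometric and intensity cues is left as an input.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_point_sets(cloud, seed_indices, d=0.5):
    """For every seed point, collect all cloud points within distance d into one point set."""
    tree = cKDTree(cloud)
    return [np.asarray(tree.query_ball_point(cloud[s], r=d)) for s in seed_indices]
```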
In step S102, feature points are extracted from each unmanned aerial vehicle image, feature matching is performed between the images to find the homonymous feature points, and the RANSAC method is then used to reject gross errors among the mismatched feature points; the whole procedure is a very mature part of aerial triangulation in photogrammetry, and its flow is shown in fig. 5.
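For illustration, this extraction, matching and rejection pipeline might look like the following OpenCV sketch; SIFT and a fundamental-matrix RANSAC are common stand-ins, not detectors prescribed by the invention.

```python
import cv2
import numpy as np

def match_images(img1, img2):
    """Extract SIFT features, match them, and reject mismatches with RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]   # Lowe ratio test
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    _, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 3.0, 0.99)
    keep = mask.ravel().astype(bool)
    return p1[keep], p2[keep]    # homonymous points surviving gross error rejection
```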
In step S103, after small surface element extraction on the vehicle-mounted lidar point cloud data and feature point extraction and matching on the unmanned aerial vehicle images are complete, a one-to-many or one-to-one mapping must be established between the small surface elements and the feature points on the images. First, according to the initial position information of each image and its frame size, the edge key points of suitable small surface elements extracted from the vehicle-mounted lidar point cloud data are back-projected onto each image by the collinearity equations (formula (2)). Second, the key points back-projected onto an unmanned aerial vehicle image are connected in the same order as on the laser point cloud, recovering the projection of the small surface element on the image; in the following this is simply called the projection of the small surface element. Finally, each feature point on each image is tested for falling inside the projection of some small surface element; if it does, a temporary correspondence between that small surface element and the feature point is established (fig. 6). For all images, if all homonymous points of a feature point fall inside the projections of the same small surface element on their different images, a correspondence between that homonymous feature point pair and the small surface element is established. Among all feature points with temporary correspondences to small surface element projections, the homonymous feature point pairs that can establish such correspondences are retained and the rest rejected, yielding the one-to-many mapping between small surface elements and homonymous feature point pairs on the images. The flowchart is shown in fig. 7; a sketch of the point-in-projection test is given below formula (2).
$$x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)},\qquad y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)} \quad (2)$$
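A minimal sketch of the point-in-projection test follows; matplotlib's Path is assumed for the polygon test, the projected polygon coming from the back-projection step above, and the names are illustrative.

```python
import numpy as np
from matplotlib.path import Path

def points_in_facet_projection(facet_polygon_2d, feature_points_2d):
    """Boolean mask of the feature points that fall inside the projected small surface element."""
    return Path(facet_polygon_2d).contains_points(np.asarray(feature_points_2d))
```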
In step S104, the main purpose is to eliminate the rigid deformation between the vehicle-mounted laser lidar point cloud and the whole survey area of the unmanned aerial vehicle imagery. The main procedure is based on the classical rigid registration algorithm, iterative closest point (ICP); the main processing steps are forward intersection, gross error rejection, Euler angle recovery and exterior parameter updating, and the whole process iterates until the change in the Euler angles is smaller than a threshold. The flowchart is shown in fig. 8.
(4.1) Forward intersection: using the position and attitude of each image and the collinearity equations (formula (2)), the three-dimensional space coordinates of the object point corresponding to each feature point pair that has a mapping to a small surface element are solved in the same coordinate system as the vehicle-mounted lidar point cloud data; this is a very mature procedure in photogrammetric aerial triangulation. The spatial points so obtained are referred to as empty three points in the following description. In the absence of error, the empty three points must lie strictly on the finite planes of their corresponding small surface elements.
(4.2) Gross error rejection: first, threshold filtering. In actual operation, because of projection differences or mismatches, some empty three points may not lie on the finite plane of their small surface element, so the empty three points are threshold-filtered before registration: points whose distance to the small surface element exceeds θ are treated as gross error points. Since the vehicle-mounted lidar point cloud data can be taken as ground truth in this step, θ satisfies θ = 3·m_uav, where m_uav is computed from the GPS accuracy of the unmanned aerial vehicle images. Second, RANSAC is used for further gross error rejection. After these two steps, the remaining point pairs can be considered to lie strictly on their small surface elements in the error-free case.
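A one-function sketch of the threshold-filtering stage follows; numpy is assumed, and the distances are the precomputed point-to-plane distances of the empty three points.

```python
import numpy as np

def threshold_filter(distances, m_uav):
    """Keep empty three points within theta = 3 * m_uav of their small surface element."""
    theta = 3.0 * m_uav
    return np.abs(distances) <= theta    # RANSAC would further filter the survivors
```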
(4.3) Euler angle recovery: this step is based on the iterative closest point method. In the original ICP algorithm the point clouds must be centered, but the empty three point cloud is incomplete and its density uneven, so the centroid of the empty three points is replaced by the geometric center of the whole unmanned aerial vehicle survey area. The convergence condition finally satisfied is that the sum of Euclidean distances from the empty three points to the small surface elements is minimal; the mathematical expression is:

$$\min_{\varphi,\,\omega,\,\kappa}\ \sum_i \mathrm{distance}_i$$

$$\mathrm{distance}_i = d\big(R(\varphi,\omega,\kappa)\,(P_i - C) + C,\ \pi_i\big)$$

where $\mathrm{distance}_i$ is the distance from the empty three point $P_i$, after the rotation transformation $R(\varphi,\omega,\kappa)$ about the geometric center $C$, to the plane $\pi_i$ of its corresponding small surface element. In actual operation, current hardware gives the unmanned aerial vehicle position and attitude data comparable accuracy, so when setting the initial value the Euler angles can be set to 0 at the start of the iteration and convergence is still ensured.
(4.4) Iteration: the process iterates; the exterior parameters are updated once for each solved set of Euler angles, the mapping relation is re-established after the update, and rigid registration is performed again, iterating until the change in the Euler angles between two successive solutions is smaller than a threshold. In practice, three to five iterations typically ensure convergence.
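A sketch of this outer iteration follows; the state object and its methods are hypothetical names standing in for the steps (4.1)-(4.3) above, not an interface defined by the invention.

```python
import numpy as np

def rigid_registration_loop(state, tol=1e-6, max_iter=5):
    """Iterate mapping rebuild -> Euler solve -> exterior update until the angles settle."""
    prev = np.zeros(3)
    for _ in range(max_iter):                  # three to five iterations typically converge
        state.rebuild_mapping()                # facet <-> feature point correspondences
        euler = state.solve_euler_angles()     # e.g. a least-squares solve as sketched earlier
        state.update_exterior(euler)
        if np.max(np.abs(euler - prev)) < tol:
            break
        prev = euler
    return state
```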
In step S105, after rigid registration is complete, the deformation and dislocation inside the unmanned aerial vehicle imagery are adjusted so that it matches the vehicle-mounted laser lidar point cloud. The main purpose is therefore to adjust the position and attitude of each unmanned aerial vehicle image using the vehicle-mounted laser lidar data while calibrating the camera parameters of the unmanned aerial vehicle camera. The process is iterative; the main idea is a self-calibrating beam-method adjustment with control conditions, whose normal-equation coefficient matrix is a bordered band matrix; a structural diagram is shown in fig. 9.
(5.1) Beam method area network adjustment
The beam-method area network adjustment takes each light beam as the basic adjustment unit, lists an error equation from the collinearity condition equation, performs adjustment uniformly over the whole area, and solves the exterior orientation elements of each image. The feature point coordinates $(x_i, y_i)$ are the adjustment observations, and homonymous rays are required to intersect at a common ground point, i.e. the ground point coordinates $(X_t, Y_t, Z_t)$ are equal across the rays. For each feature point of each pair of homonymous feature points:

$$\hat x_i = -f\,\frac{a_1(X_t - X_{S_i}) + b_1(Y_t - Y_{S_i}) + c_1(Z_t - Z_{S_i})}{a_3(X_t - X_{S_i}) + b_3(Y_t - Y_{S_i}) + c_3(Z_t - Z_{S_i})},\qquad \hat y_i = -f\,\frac{a_2(X_t - X_{S_i}) + b_2(Y_t - Y_{S_i}) + c_2(Z_t - Z_{S_i})}{a_3(X_t - X_{S_i}) + b_3(Y_t - Y_{S_i}) + c_3(Z_t - Z_{S_i})} \quad (6)$$

$$\Delta x_i = x_i - \hat x_i,\qquad \Delta y_i = y_i - \hat y_i \quad (7)$$

$$\mathrm{cost}_{i,1} = \Delta x_i^2 + \Delta y_i^2 \quad (8)$$

$$\mathrm{cost}_1 = \sum_i \mathrm{cost}_{i,1} \quad (9)$$

where the whole survey area has n + 1 images in total.
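For illustration, the reprojection residuals of equations (6)-(7) might be stacked for a least-squares solver as in the following sketch; numpy is assumed, the per-image rotation matrices Rs and projection centers Ss are presumed known, and the names are illustrative.

```python
import numpy as np

def reprojection_residuals(ground_pts, obs_xy, cam_idx, pt_idx, Rs, Ss, f):
    """Delta x_i, Delta y_i for every observation, per the collinearity prediction (6)."""
    res = []
    for i, j, xy in zip(cam_idx, pt_idx, obs_xy):
        v = Rs[i].T @ (ground_pts[j] - Ss[i])   # ground point in the image-space frame
        pred = -f * v[:2] / v[2]                # predicted image coordinates
        res.append(xy - pred)                   # residuals (7)
    return np.concatenate(res)
```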
(5.2) Camera parameter calibration
In actual operation, as the number of uses increases, the focal length and principal point position of the camera drift, introducing a systematic error into the aerial triangulation result and affecting the final registration accuracy. Therefore, when the beam-method area network adjustment is performed, the focal length and imaging center coordinates of the camera need to be incorporated into the whole adjustment system. When the unmanned aerial vehicle images are taken, the lens distorts the images; mainly radial and tangential distortion are considered, described by the five parameters $k_0, k_1, k_3, p_0, p_1$ as in formulas (10)-(13):

$$r^2 = x^2 + y^2 \quad (10)$$

$$\Delta x_d = x\,(k_0 r^2 + k_1 r^4 + k_3 r^6) + p_0\,(r^2 + 2x^2) + 2 p_1 x y \quad (11)$$

$$\Delta y_d = y\,(k_0 r^2 + k_1 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2 p_0 x y \quad (12)$$

$$x' = x + \Delta x_d,\qquad y' = y + \Delta y_d \quad (13)$$

In summary, combining the camera calibration parameters, equation (6) becomes:

$$\hat x_i = x_0 - f\,\frac{a_1(X_t - X_{S_i}) + b_1(Y_t - Y_{S_i}) + c_1(Z_t - Z_{S_i})}{a_3(X_t - X_{S_i}) + b_3(Y_t - Y_{S_i}) + c_3(Z_t - Z_{S_i})} + \Delta x_d,\qquad \hat y_i = y_0 - f\,\frac{a_2(X_t - X_{S_i}) + b_2(Y_t - Y_{S_i}) + c_2(Z_t - Z_{S_i})}{a_3(X_t - X_{S_i}) + b_3(Y_t - Y_{S_i}) + c_3(Z_t - Z_{S_i})} + \Delta y_d \quad (14)$$

Finally, the optimization solution objective becomes equation (15):

$$\min_{\text{exterior elements},\ f,\ x_0,\ y_0,\ k_0,\ k_1,\ k_3,\ p_0,\ p_1}\ \mathrm{cost}_1 \quad (15)$$
(5.3) Control with lidar
In this step, the unmanned aerial vehicle image data are controlled using the mapping between the small surface elements and the homonymous feature point pairs established and optimized during rigid registration. The main idea is that, in the error-free case, all homonymous feature point pairs lie strictly on the finite plane of their corresponding small surface element; the interior and exterior parameters of the unmanned aerial vehicle images should therefore be adjusted in the direction that best fits the homonymous feature points to the small surface elements overall. When setting the cost function, note that the rigid registration step has already greatly improved the accuracy of the unmanned aerial vehicle image data, so the vehicle-mounted laser lidar data can no longer be treated as ground truth. Owing to the characteristics of lidar data, its accuracy in the depth direction is far higher than in the plane direction, so for the unmanned aerial vehicle imagery the vehicle-mounted lidar data can be regarded as accurate mainly in the depth (Z) direction; the Z coordinates of the empty three points are less accurate than their plane coordinates, generally by a factor of about 3-5. A direction-weighted distance is therefore used as the cost function; for each feature point pair, the cost function is as in formula (16):

$$\mathrm{cost}_{j,2} = I_0\,(\Delta d_{x_j}^2 + \Delta d_{y_j}^2 + 3\,\Delta d_{z_j}^2) + 2000\,I_1\,(\Delta d_{x_j}^2 + \Delta d_{y_j}^2 + 3\,\Delta d_{z_j}^2) \quad (16)$$

where $\Delta d_{x_j}, \Delta d_{y_j}, \Delta d_{z_j}$ are the distances from the empty three point to the finite plane of its corresponding small surface element along the X, Y and Z directions, and $I_0, I_1$ are the complementary indicator functions of formulas (17)-(18): $I_0$ selects points within the error band determined by $\mu$ and $I_1 = 1 - I_0$ selects the rest, where $\mu$ is the median error of the empty three points after rigid registration. For all images of the survey area, the cost function is given by formula (19):

$$\mathrm{cost}_2 = \sum_j \mathrm{cost}_{j,2} \quad (19)$$
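For illustration, the direction-weighted cost of formula (16) might be computed as follows; numpy is assumed, and the 3μ inlier band is an assumption on our part, since the indicator definitions (17)-(18) appear only as images in the original.

```python
import numpy as np

def facet_cost(ddx, ddy, ddz, mu):
    """cost_{j,2} summed over points: inliers weighted 1, outliers weighted 2000."""
    d2 = ddx**2 + ddy**2 + 3.0 * ddz**2                       # Z weighted 3x
    inlier = np.sqrt(ddx**2 + ddy**2 + ddz**2) <= 3.0 * mu    # assumed inlier band
    return np.where(inlier, d2, 2000.0 * d2).sum()
```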
(5.4) Overall adjustment
For the whole non-rigid adjustment step, two cost functions must be considered together, and the main work is twofold: regularization and weighting. Regularization is needed because $\mathrm{cost}_1$ and $\mathrm{cost}_2$ have different units: $\mathrm{cost}_1$ describes a distance in image space, while $\mathrm{cost}_2$ describes a distance in object space, so $\mathrm{cost}_2$ must be converted into image space as in formula (20):

$$\mathrm{cost}_2' = \mathrm{cost}_2 / gsd \quad (20)$$

where $gsd$ is the ground sampling distance, estimated from the flying height $H$, the focal length $f$ and the pixel size $a$ (formula (21)):

$$gsd = \frac{H \cdot a}{f} \quad (21)$$

And because the accuracy of the vehicle-mounted laser lidar point cloud is similar to, but not the same as, that of the unmanned aerial vehicle imagery, different weights must be given to the two terms, so the whole cost function is as in formula (22):

$$\mathrm{cost} = P_1\,\mathrm{cost}_1 + (P_2\,\mathrm{cost}_2)/gsd \quad (22)$$

where $P_1, P_2$ are, respectively, the reciprocal of the internal accuracy of the unmanned aerial vehicle imagery after the last registration and the reciprocal of the lidar point cloud accuracy, so the whole non-rigid registration optimization problem is as in equation (23):

$$\min_{\text{exterior elements, camera parameters}}\ \mathrm{cost} \quad (23)$$
the above process is performed iteratively, the external parameter is updated once for each solution, the mapping relation needs to be re-established after the updating, the non-rigid registration is performed for the second time, and the iteration is performed until the change of the external parameter of the image obtained by two times of iteration solution is smaller than the threshold value. In practice, iteration of one to two times generally ensures convergence. Thereby completing the entire registration process.
The invention is further described below in connection with specific experimental or simulation results.
Compared with the registration scheme for point clouds and images based on registration between point clouds (scheme 1, see fig. 10) and a pure photogrammetry scheme (scheme 2, see fig. 11), the invention was tested on two groups of data; the experimental areas are shown in figs. 10-11.
The external accuracy of the three-dimensional model after registration is shown in the following table:
(accuracy comparison table provided as an image in the original publication)
the final three-dimensional model of the invention is shown in fig. 12-13.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary hardware platform, or entirely by hardware. Based on this understanding, all or the part of the technical solution of the present invention that contributes over the background art can be embodied in the form of a software product, which can be stored in a storage medium such as ROM/RAM, a magnetic disk or an optical disk, and includes instructions for causing a computer device (a personal computer, a server, a network device or the like) to execute the methods of the embodiments or parts of the embodiments of the present invention.
The above description is only a specific embodiment of the present invention and is not intended to limit the scope of the appended claims; any modification, equivalent replacement or improvement made by those skilled in the art within the spirit and principle of the present invention shall be covered by the protection scope of the present invention.

Claims (9)

1. A high-precision mapping and positioning method combining a vehicle-mounted Lidar and an unmanned aerial vehicle is characterized by comprising the following steps:
step one, extracting representative feature point sets from the vehicle-mounted point cloud data, and fitting a small surface element on the road surface with each qualifying representative feature point set;
the mode of extracting the specific point set adopts a seed characteristic point method and a mode of region growing, and if any point is a road point on a certain road, the probability of the occurrence of a non-road point in the neighborhood range with the distance d is as follows:
Figure FDA0003282085220000011
when D > D, p (A | B) ═ 0, and when D is smaller than a certain threshold, when a certain point is a road point, all points in the neighborhood range with the distance of D are road points; when extracting a point set, firstly, comprehensively utilizing the geometric characteristics of point cloud and the optical characteristics of an intensity image to extract a plurality of seed characteristic points, and then, utilizing all points in a neighborhood with the distance d of the seed characteristic points to be classified into a point set, thereby finishing the point set extraction work; after the point set is extracted, performing plane fitting by adopting a RANSAC method to obtain a series of initial small surface elements; after the initial small surface elements are obtained, if a plurality of groups of small surface element seed points are close to each other, the distances are smaller than a threshold value, the normal vectors are approximately collinear, and the collinear included angle is smaller than the threshold value, the small surface elements are combined, and the process is iterated for multiple times, so that a stable state is finally achieved;
step two, extracting characteristic points from each unmanned aerial vehicle image, performing characteristic matching between the images to obtain homonymous characteristic points between the unmanned aerial vehicle images, and using the RANSAC method to eliminate the gross errors of mismatched characteristic points;
step three, after small-surface-element extraction from the vehicle-mounted point cloud data and characteristic point extraction and matching on the unmanned aerial vehicle images are complete, establishing a one-to-many or one-to-one mapping relation between the small surface elements and the characteristic points on the unmanned aerial vehicle images;
step four, eliminating the rigid deformation between the vehicle-mounted point cloud and the whole survey area of the unmanned aerial vehicle images;
and step five, after the rigid registration is completed, adjusting the internal deformation and dislocation of the unmanned aerial vehicle images, thereby non-rigidly registering the unmanned aerial vehicle images with the vehicle-mounted point cloud.
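As an illustration of step one, the following is a minimal Python sketch of the seed-based region growing and RANSAC facet fitting. The function names, the KD-tree neighbor search, and the numeric thresholds are illustrative assumptions, not the patent's reference implementation.

```python
# Hedged sketch of claim 1, step one: grow a point set around a seed
# characteristic point within radius d, then fit an initial planar facet
# with RANSAC. Thresholds and names are assumptions for illustration.
import numpy as np
from scipy.spatial import cKDTree

def grow_point_set(points, seed_idx, d):
    """Region growing: all points within distance d of the seed point."""
    tree = cKDTree(points)
    neighbor_idx = tree.query_ball_point(points[seed_idx], r=d)
    return points[neighbor_idx]

def fit_facet_ransac(pts, n_iter=200, inlier_tol=0.05, rng=None):
    """RANSAC plane fit; returns (unit normal n, offset c) with n.x + c = 0."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # skip degenerate (near-collinear) samples
            continue
        n = n / norm
        c = -n @ sample[0]
        inliers = int((np.abs(pts @ n + c) < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, c)
    return best_model
```

Seeding keeps each grown set compact, matching condition (1) of claim 2 below, while the RANSAC fit tolerates the stray off-plane points left by the growing step.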
2. The method for high-precision mapping positioning by combining the vehicle-mounted Lidar and the unmanned aerial vehicle according to claim 1, wherein, in the first step, a representative characteristic point set satisfies:
(1) there exists a point P such that the Euclidean distance from any point Q in the point set to P is smaller than a specific threshold;
(2) after the whole point cloud is subjected to semantic segmentation, all points in the point set belong to the same semantic set;
(3) there exists a plane such that the distances from all points in the point set to the plane have a mean of 0 and a variance of σ², with σ smaller than a certain threshold (a minimal checker for these three conditions is sketched below).
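These three conditions translate directly into checks on a candidate point set. A minimal sketch follows, assuming pts is an N x 3 array, labels holds per-point semantic ids, and d_max / sigma_max are user-chosen thresholds; taking the centroid as the common point P and using an SVD plane fit are convenient assumptions, not requirements of the claim.

```python
import numpy as np

def is_representative(pts, labels, d_max, sigma_max):
    centroid = pts.mean(axis=0)
    # (1) compactness: every point lies within d_max of a common point
    # (the centroid is used here as the choice of P).
    if np.linalg.norm(pts - centroid, axis=1).max() >= d_max:
        return False
    # (2) semantic purity: all points carry the same semantic label.
    if len(set(labels)) != 1:
        return False
    # (3) planarity: least-squares plane via SVD; the signed residuals have
    # zero mean by construction, so only their spread sigma is tested.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]                        # direction of least variance
    residuals = (pts - centroid) @ normal  # signed point-to-plane distances
    return residuals.std() < sigma_max
```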
3. The high-precision mapping positioning method by combining the vehicle-mounted Lidar and the unmanned aerial vehicle as claimed in claim 1, wherein, in the first step, the specific point sets are extracted by the seed-characteristic-point method plus region growing, specifically comprising:
1) extracting several seed characteristic points by jointly using the geometric characteristics of the point cloud and the optical characteristics of the intensity image;
2) assigning all points within distance d of each seed characteristic point to its point set, completing the point set extraction;
3) after the point sets are extracted, performing plane fitting with the RANSAC method to obtain a series of initial small surface elements; if several groups of small surface elements have seed-point distances smaller than a threshold and normal-vector included angles smaller than a threshold, merging those surface elements, and iterating until a stable state is reached (the merging test is sketched below).
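The merging rule of sub-step 3) reduces to a pairwise test; the sketch below assumes each facet is summarized by its seed point and normal vector, with illustrative threshold values.

```python
import numpy as np

def should_merge(seed_a, seed_b, normal_a, normal_b,
                 dist_thresh=1.0, angle_thresh_deg=5.0):
    """True when two facets are close and their normals nearly collinear."""
    if np.linalg.norm(seed_a - seed_b) >= dist_thresh:
        return False
    # abs() makes the test insensitive to opposite normal orientations
    cos_angle = abs(normal_a @ normal_b)
    cos_angle /= np.linalg.norm(normal_a) * np.linalg.norm(normal_b)
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle_deg < angle_thresh_deg
```

Repeating the test over all facet pairs and merging until no pair qualifies reproduces the iterated, stable-state behavior the claim describes.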
4. The high-precision mapping positioning method by combining the vehicle-mounted Lidar and the unmanned aerial vehicle according to claim 1, wherein, in the third step, the mapping relation is constructed as follows:
i) according to the initial position information of each image and its frame size, back-projecting the edge key points of the small surface elements extracted from the vehicle-mounted lidar point cloud data onto each image by the collinearity equation;
ii) connecting the key points back-projected onto the unmanned aerial vehicle image in their connection order in the lidar point cloud, recovering the projection of each small surface element on the unmanned aerial vehicle image;
iii) judging whether each characteristic point on each image falls inside the projection of some small surface element, and if so, establishing a temporary correspondence between that surface element and the characteristic point;
for all images, if all homonymous points of a characteristic point fall inside the projections of the same small surface element on their respective images, establishing the correspondence between that homonymous characteristic point pair and the surface element;
establishing temporary characteristic point pairs over the projections on all images, retaining the homonymous characteristic point pairs for which a correspondence with a small surface element can be established, removing the other characteristic points, and thereby building the one-to-many mapping between the small surface elements and the homonymous characteristic point pairs on the images (a back-projection sketch follows this claim);
the collinearity equation is:
$$x = -f\,\frac{a_1(X - X_s) + b_1(Y - Y_s) + c_1(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)},\qquad y = -f\,\frac{a_2(X - X_s) + b_2(Y - Y_s) + c_2(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}$$

where (X_s, Y_s, Z_s) is the projection center, (a_i, b_i, c_i) are the elements of the rotation matrix, and f is the principal distance.
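In code, the back-projection and containment test of claim 4 might look as follows; the per-image rotation matrix R, projection center S, and principal distance f are assumed known from the initial orientation, and matplotlib's Path serves only as a convenient point-in-polygon test.

```python
import numpy as np
from matplotlib.path import Path

def project_collinearity(X, R, S, f):
    """Project object points X (N x 3) by the collinearity equations."""
    cam = (X - S) @ R.T            # object -> camera coordinates
    x = -f * cam[:, 0] / cam[:, 2]
    y = -f * cam[:, 1] / cam[:, 2]
    return np.column_stack([x, y])

def features_in_facet(facet_edge_pts, feature_xy, R, S, f):
    """Boolean mask: which feature points fall inside the projected facet."""
    polygon = project_collinearity(facet_edge_pts, R, S, f)
    return Path(polygon).contains_points(feature_xy)
```

A homonymous point pair is kept only when this mask is true for the same facet in every image where the pair appears.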
5. The method for high-precision mapping positioning by combining the vehicle-mounted Lidar and the unmanned aerial vehicle according to claim 1, wherein, in the fourth step, the rigid registration and gross error rejection comprise:
performing forward intersection, gross error elimination, Euler angle recovery, and exterior parameter updating, iterating until the change of the Euler angles is smaller than a threshold;
the forward intersection comprises: using the position and attitude of each image and the collinearity equation to solve, for each characteristic point pair having a mapping relation with a small surface element, the three-dimensional coordinates of the corresponding object point in the same coordinate system as the vehicle-mounted lidar point cloud data;
the gross error rejection comprises: a) threshold filtering, removing points whose distance to the small surface element is greater than θ, where θ = 3m_uav and m_uav is calculated from the GPS accuracy of the unmanned aerial vehicle images;
b) further performing gross error rejection by using RANSAC;
the Euler angle recovery comprises: using the geometric center of the whole unmanned aerial vehicle survey area in place of the centroid of the aerial triangulation points, with the convergence condition that the sum of the Euclidean distances from the aerial triangulation points to the small surface elements is minimized; the mathematical expression is:
$$\min_{\varphi,\omega,\kappa}\ \sum_i \mathrm{distance}_i$$

$$\mathrm{distance}_i = d\big(R(\varphi,\omega,\kappa)\,(P_i - C) + C,\ \Pi_i\big)$$
wherein distance_i is the distance from aerial triangulation point P_i, after the rotation transformation R(φ, ω, κ) about the survey-area geometric center C, to its corresponding small surface element Π_i; when setting initial values, the Euler angles are initialized to 0 at the start of the iteration;
the iteration comprises: after each set of Euler angles is solved, updating the exterior parameters once, re-establishing the mapping relation, and performing the rigid registration again, until the change of the Euler angles between two successive iterations is smaller than a threshold (a least-squares sketch of the Euler angle recovery follows this claim).
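A compact realization of the Euler angle recovery is a three-parameter least-squares solve, with each matched facet reduced to a (normal, point) pair. This is a hedged sketch: scipy's least_squares minimizes the sum of squared point-to-plane distances, a standard surrogate for the summed-distance criterion stated in the claim, and the zero initial angles follow the claim text.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_plane_residuals(angles, at_pts, center, normals, plane_pts):
    """Signed point-to-plane distances after rotating about the survey center.
    normals are assumed to be unit length."""
    R = Rotation.from_euler("xyz", angles).as_matrix()
    rotated = (at_pts - center) @ R.T + center
    return np.einsum("ij,ij->i", rotated - plane_pts, normals)

def recover_euler_angles(at_pts, center, normals, plane_pts):
    """Solve for the Euler angles, initialized to zero as in claim 5."""
    result = least_squares(point_plane_residuals, x0=np.zeros(3),
                           args=(at_pts, center, normals, plane_pts))
    return result.x
```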
6. The method for high-precision mapping positioning by combining the vehicle-mounted Lidar and the unmanned aerial vehicle according to claim 1, wherein, in the fifth step, the position and attitude of each unmanned aerial vehicle image are adjusted using the vehicle-mounted lidar data while the camera parameters of the unmanned aerial vehicle camera are calibrated, with the two iterated together.
7. The high-precision mapping positioning method combining the vehicle-mounted Lidar and the unmanned aerial vehicle as claimed in claim 6, wherein the fifth step specifically comprises:
(I) adjustment of the area network by the beam method (bundle block adjustment): taking each light beam as the basic adjustment unit, listing an error equation from the collinearity condition equation, performing the adjustment uniformly over the whole survey area, and solving the exterior orientation elements of each image; the characteristic point coordinates (x_i, y_i) serve as the error-bearing observations, and the ground point coordinates (X_t, Y_t, Z_t) at which homonymous rays intersect must be equal; for each characteristic point of each homonymous characteristic point pair, the following holds:
$$\hat{x}_i = -f\,\frac{a_1(X_t - X_s) + b_1(Y_t - Y_s) + c_1(Z_t - Z_s)}{a_3(X_t - X_s) + b_3(Y_t - Y_s) + c_3(Z_t - Z_s)},\qquad \hat{y}_i = -f\,\frac{a_2(X_t - X_s) + b_2(Y_t - Y_s) + c_2(Z_t - Z_s)}{a_3(X_t - X_s) + b_3(Y_t - Y_s) + c_3(Z_t - Z_s)}$$

$$\Delta x_i = x_i - \hat{x}_i,\qquad \Delta y_i = y_i - \hat{y}_i$$
$$\mathrm{cost}_{i,1} = \Delta x_i^2 + \Delta y_i^2$$
$$\mathrm{cost}_1 = \sum_i \mathrm{cost}_{i,1}$$
wherein the whole survey area contains n + 1 images;
(II) camera parameter calibration: during the bundle block adjustment, the focal length and the principal point coordinates of the camera are brought into the overall adjustment system; the image distortion is described by the parameters k_0, k_1, k_3, p_0, p_1 as follows:
$$r^2 = x^2 + y^2$$

$$\Delta x = x\,(k_0 r^2 + k_1 r^4 + k_3 r^6) + p_0\,(r^2 + 2x^2) + 2\,p_1\,x\,y$$

$$\Delta y = y\,(k_0 r^2 + k_1 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2\,p_0\,x\,y$$

$$x_u = x + \Delta x,\qquad y_u = y + \Delta y$$
in summary, the calibration parameters of the camera are extended from

$$(f,\ x_0,\ y_0)$$

to

$$(f,\ x_0,\ y_0,\ k_0,\ k_1,\ k_3,\ p_0,\ p_1)$$
finally, the optimization objective becomes:

$$\min_{\{X_s, Y_s, Z_s, \varphi, \omega, \kappa\}_j,\ \{X_t, Y_t, Z_t\}_t,\ f, x_0, y_0, k_0, k_1, k_3, p_0, p_1}\ \mathrm{cost}_1$$
(III) control with lidar: using a piecewise, direction-weighted distance as the cost function; for each characteristic point pair the cost function is:
$$\mathrm{cost}_{j,2} = I_0\,(\Delta dx_j^2 + \Delta dy_j^2 + 3\,\Delta dz_j^2) + 2000\,I_1\,(\Delta dx_j^2 + \Delta dy_j^2 + 3\,\Delta dz_j^2)$$
wherein Δdx_j, Δdy_j, Δdz_j are the distances in the X, Y, and Z directions from the aerial triangulation point to the bounded plane of the corresponding small surface element, and I_0, I_1 are indicator functions defined as:
$$I_0 = \begin{cases}1, & d_j \le 3\mu\\ 0, & d_j > 3\mu\end{cases}\qquad I_1 = \begin{cases}0, & d_j \le 3\mu\\ 1, & d_j > 3\mu\end{cases}$$

with d_j the distance from aerial triangulation point j to its corresponding small surface element.
where μ is the root mean square error of the aerial triangulation points after the rigid registration. Over all images of the survey area, the cost function is:
$$\mathrm{cost}_2 = \sum_j \mathrm{cost}_{j,2}$$
(IV) overall adjustment: regularization and weighting are performed; in the regularization, cost_1 is a distance in image space while cost_2 is a distance in object space, so cost_2 is converted into image space as follows:
$$\mathrm{cost}_2' = \mathrm{cost}_2 / gsd$$
where gsd is the ground sampling distance, estimated from the flying height and the focal length by the following formula:
$$gsd = \frac{H}{f}\cdot s_{\mathrm{pixel}}$$

where H is the flying height and s_pixel the physical pixel size;
different weights are given to cost_1 and cost_2; the overall cost function is:
$$\mathrm{cost} = P_1\,\mathrm{cost}_1 + \frac{P_2\,\mathrm{cost}_2}{gsd}$$
where P_1 and P_2 are, respectively, the reciprocal of the internal accuracy of the unmanned aerial vehicle images after the previous registration and the reciprocal of the lidar point cloud accuracy; the whole non-rigid registration optimization problem is:
$$\min_{\text{exterior orientation elements},\ \text{camera parameters}}\ \mathrm{cost}$$
after each solution, the exterior parameters are updated once, the mapping relation is re-established, and the non-rigid registration is performed again, iterating until the change of the image exterior parameters between two successive iterations is smaller than a threshold (a cost-assembly sketch follows this claim).
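The pieces of claim 7 combine into one scalar objective. The sketch below assembles cost = P1·cost1 + (P2·cost2)/gsd from precomputed residual arrays; the indicator cutoff at 3μ is an assumption inferred from the claim's 2000x outlier penalty, and all array names are illustrative.

```python
import numpy as np

def combined_cost(dx, dy, ddx, ddy, ddz, mu, P1, P2, gsd):
    # cost1: squared image-space reprojection residuals over all points
    cost1 = np.sum(dx**2 + dy**2)

    # cost2: direction-weighted distances to the facet planes; the Z
    # component is weighted 3x, and points farther than 3*mu from their
    # facet receive the 2000x penalty (assumed form of I0 / I1)
    weighted = ddx**2 + ddy**2 + 3.0 * ddz**2
    dist = np.sqrt(ddx**2 + ddy**2 + ddz**2)
    cost2 = np.sum(np.where(dist <= 3.0 * mu, weighted, 2000.0 * weighted))

    # overall objective: cost2 is converted to image space via the gsd
    return P1 * cost1 + (P2 * cost2) / gsd
```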
8. An unmanned aerial vehicle for implementing the high-precision mapping positioning method combining the vehicle-mounted Lidar and the unmanned aerial vehicle according to any one of claims 1 to 7.
9. A program storage medium receiving user input, the stored computer program causing an electronic device to execute the high-precision mapping positioning method combining the vehicle-mounted Lidar and the unmanned aerial vehicle according to any one of claims 1 to 7.
CN202010252893.4A 2020-04-02 2020-04-02 High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle Active CN111457930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010252893.4A CN111457930B (en) 2020-04-02 2020-04-02 High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN111457930A CN111457930A (en) 2020-07-28
CN111457930B (en) 2021-11-23

Family

ID=71684425


Country Status (1)

Country Link
CN (1) CN111457930B (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant