CN114463521A - Building target point cloud rapid generation method for air-ground image data fusion - Google Patents

Building target point cloud rapid generation method for air-ground image data fusion

Info

Publication number
CN114463521A
CN114463521A (application CN202210012783.XA); granted publication CN114463521B
Authority
CN
China
Prior art keywords
seed, point, plane, points, patch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210012783.XA
Other languages
Chinese (zh)
Other versions
CN114463521B (en)
Inventor
闫利
谢洪
娄紫璇
韦朋成
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202210012783.XA priority Critical patent/CN114463521B/en
Publication of CN114463521A publication Critical patent/CN114463521A/en
Application granted granted Critical
Publication of CN114463521B publication Critical patent/CN114463521B/en


Classifications

    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (G Physics > G06 Computing; calculating or counting > G06T Image data processing or generation, in general > G06T19/00 Manipulating 3D models or images for computer graphics)
    • G06T7/13 Edge detection (G06T7/00 Image analysis > G06T7/10 Segmentation; edge detection)
    • G06T2207/10028 Range image; depth image; 3D point clouds (G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/10 Image acquisition modality)
    • G06T2207/20164 Salient point detection; corner detection (G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20112 Image segmentation details)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a rapid building target point cloud generation method for air-ground image data fusion, belonging to the field of point cloud data processing. Taking the abundant planar features of buildings in urban scenes as constraints, the invention provides an efficient and reliable dense matching method for air-ground image fusion: an edge detection operator is introduced to increase the number of initial seed points and improve their distribution; a plane filtering step removes non-planar seed points; the initial normal directions of the seed points are corrected by plane fitting; and patches are generated rapidly under a normalized cross-correlation coefficient constraint. This improves the efficiency of dense point cloud generation in planar building areas and accelerates the dense matching stage of air-ground image data fusion.

Description

Building target point cloud rapid generation method for air-ground image data fusion
Technical Field
The invention belongs to the field of point cloud data processing, and particularly relates to a rapid building target point cloud generation method for air-ground image data fusion.
Background
As carriers of geographic information, urban digital geographic-information products play a very important role in the construction of smart and digital cities, and three-dimensional reconstruction technology has therefore developed rapidly. However, when only aerial data are used for three-dimensional reconstruction, occlusion, overly large shooting tilt angles, and similar problems readily cause geometric defects such as distortion and holes, as well as texture blurring, in the parts near the ground; these problems are especially prominent when reconstructing the facades of building targets. Conversely, although three-dimensional reconstruction from ground data alone yields near-ground data with clear texture and little geometric deformation, the limited measurement distance means that complete measurement data of the target, such as the roof of a building, cannot be obtained. Because aerial and ground data complement each other, three-dimensional reconstruction combining the two has become a feasible way to resolve the incomplete targets and missing detail that currently result from reconstruction with a single data source.
Existing multi-view dense matching methods fall into four categories. Voxel-based methods require the bounding box of the scene to be known, and their reconstruction accuracy is limited by the resolution of the voxel grid. Depth-map-based methods must estimate a depth map for every image and then fuse the depth maps into a unified three-dimensional scene representation, with slightly lower accuracy. Deep-learning-based methods train the parameters of a deep neural network on large amounts of data and then predict depth directly from images; they can perform well in weakly textured regions, but they depend heavily on their training data, their reconstruction quality degrades rapidly when training data are insufficient, and their hardware requirements currently restrict them to data of relatively low resolution. Patch-based methods represent the scene surface as a collection of patches, and the completeness of the reconstructed model relies on key-point matching. The first three categories all require prior knowledge, which is impractical for large-scale urban scenes, whereas the patch-based approach depends on no prior information, has some ability to filter outliers, and is robust. According to the qualitative evaluation on the Middlebury benchmark dataset, the patch-based multi-view stereo algorithm (PMVS) achieves good accuracy and completeness on all six groups of data, demonstrating excellent performance. Unfortunately, reconstruction with the PMVS algorithm takes a relatively long time, which to some extent hinders practical application.
To address this shortcoming of the PMVS algorithm, some researchers have proposed improvements. Although these raise time efficiency to varying degrees, the gains are limited, at most only about 10%, and the algorithms' performance has not been evaluated quantitatively.
The key to air-ground image data fusion is obtaining the pose parameters of the aerial and ground images in a unified coordinate system. These can be obtained by image feature matching; however, existing feature matching algorithms tolerate only a limited difference in viewing angle and are not adequate for direct wide-angle feature matching between aerial and ground images. Using the dense point clouds generated from the images to recover the pose transformation between the aerial and ground images, and thereby unify their pose parameters, is an effective and feasible route to automatic air-ground image data fusion; however, existing dense matching point cloud generation methods are inefficient, which hinders rapid automatic fusion.
The PMVS algorithm comprises three stages: initial feature matching, patch expansion, and patch filtering. First, feature points are extracted with the Harris and DoG (Difference-of-Gaussian) operators and matched to obtain sparse seed points, also called initial patches. Next, dense patches are obtained by expanding around the initial patches; at this stage the position and normal direction of each newly generated patch must be optimized. Finally, erroneous patches are filtered out of the dense patch set. This method, however, has the following disadvantages:
1) The DoG operator must construct an image pyramid, which takes a long time, and it mainly extracts blob features to generate seed points; the resulting seed points fall in atypical target regions such as foliage and the ground, so their saliency is low.
2) Existing dense matching methods generate the point cloud by point-by-point optimization, obtaining the optimal position and normal direction of each seed point by minimizing the photometric difference between images; this per-point optimization dominates the running time.
Disclosure of Invention
To address the shortcomings of the prior art, the method improves on PMVS. Introducing an edge detection operator raises the feature saliency of building areas, and extracting corner features with a corner operator improves time efficiency, so that the resulting seed points lie essentially within building areas. Before patch expansion, a normal direction correction step is introduced to provide a reasonably correct initial value for the subsequent normal direction optimization; at the same time, a plane constraint reduces the optimization computations for seed points lying on planes, reducing the algorithm's running time.
Taking the abundant planar features of buildings in urban scenes as constraints, the invention provides an efficient and reliable dense matching method for air-ground image fusion: an edge detection operator is introduced to increase the number of initial seed points and improve their distribution; a plane filtering step removes non-planar seed points; the initial normal directions of the seed points are corrected by plane fitting; and patches are generated rapidly under a normalized cross-correlation coefficient constraint. This improves the efficiency of dense point cloud generation in planar building areas and accelerates the dense matching stage of air-ground image data fusion.
To this end, the invention provides a rapid building target point cloud generation method for air-ground image data fusion, comprising the following steps:
Step 1: binarize the image, extract corner features, and thereby obtain initial seed points.
Step 2: optimize the position and normal direction of each initial seed point by maximizing the average normalized cross-correlation coefficient NCC of the image windows corresponding to its seed patch, and correct the initial seed point's normal direction to be perpendicular to its plane by plane fitting, obtaining the set of seed points lying on planes; a seed patch is the plane through a seed point perpendicular to the seed point's normal direction.
Step 3: for seed points in the set that satisfy the plane constraint condition, compute the positions of the newly expanded neighbouring seed points from the seed patch containing the seed point when expanding on the plane, and assign the seed point's normal direction to the newly expanded neighbours; for seed points that do not satisfy the plane constraint condition, expand new seed points with the PMVS method; add successfully expanded seed points to the seed point set, thereby rapidly generating the planar point cloud.
Further, in step 1, a Canny edge detection operator is introduced to binarize the original image, and corner features are extracted with the Harris operator. After the corner features are obtained, the several Harris corners with the largest response values are selected within each fixed-size pixel region of each image, a pixel region being called a grid.
Each image in turn is taken as the reference image; the remaining images are ranked by viewing angle and distance, and the several highest-priority images are selected as the images to be matched.
For each feature point in a grid of the reference image, the corresponding feature is found on the images to be matched through epipolar geometric constraint, and the spatial three-dimensional coordinates of the feature point are computed by triangulation. As soon as the three-dimensional coordinates of one feature point in a grid are successfully computed, or all feature points in the grid have been processed, computation moves on to the next grid on the image, until all grids on the image have been processed.
All images are processed in this way to obtain the initial seed points of the three-dimensional point cloud model.
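The seed-point extraction of step 1 can be sketched in Python. This is a minimal illustration under stated assumptions, not the patent's implementation: the Harris response is computed with a plain box window over numpy gradients, the grid size and per-grid corner count follow the embodiment (32 x 32 pixels, 8 corners), and the Canny binarization and epipolar matching steps are omitted.

```python
import numpy as np

def harris_response(img, k=0.04, r=1):
    """Harris corner response R = det(M) - k * trace(M)^2, with the gradient
    products summed over a (2r+1)x(2r+1) box window (np.roll wraps at the
    image border, which is acceptable for a sketch)."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    sxx, syy, sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

def top_corners_per_grid(response, grid=32, k=8):
    """Keep the k strongest Harris responses inside every grid x grid cell,
    mirroring the patent's per-grid selection of the largest response values."""
    corners = []
    h, w = response.shape
    for gy in range(0, h, grid):
        for gx in range(0, w, grid):
            block = response[gy:gy + grid, gx:gx + grid]
            order = np.argsort(block.ravel())[::-1][:k]
            ys, xs = np.unravel_index(order, block.shape)
            corners.extend((gy + int(y), gx + int(x)) for y, x in zip(ys, xs))
    return corners
```

Capping the count per grid is what spreads the seed points evenly over the image instead of clustering them on a few strong textures.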
Further, in step 2, NCC is computed as follows.
The position of an initial seed point is obtained by triangulation under epipolar geometric constraint, and the initial normal direction of the seed patch points from the seed point toward the optical centre of the reference image. The position and normal direction of the initial seed point are then optimized by maximizing the average normalized cross-correlation coefficient NCC of the image windows corresponding to the seed patch; the optimization function is formula (1):

NCC(p) = (1 / (|V(p)| − 1)) · Σ_{I_j ∈ V(p), I_j ≠ R(p)} ncc(R(p), I_j, p)    (1)

where NCC is the average normalized cross-correlation coefficient of the correlation windows obtained by projecting the seed point into its visible images; V(p) is the set of images in which seed point p is visible; |V(p)| is the number of those images; R(p) is the reference image of the seed point; I_j is a visible image of the seed point; and ncc(·) is the normalized cross-correlation function of the seed point, formula (2):

ncc(f, g) = Σ_i (f_i − f̄)(g_i − ḡ) / (n · δ_f · δ_g)    (2)

where f and g are the pixel grey values of the image windows corresponding to the patch; f̄ and ḡ are the mean grey values of the windows; n is the number of pixels in a window; and δ_f and δ_g are the standard deviations of the windows' grey values.
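Formulas (1) and (2) can be written directly in Python. A small caveat: the summation limits of formula (1) are reconstructed from the surrounding definitions, so averaging over the non-reference visible images is this sketch's reading, not a quotation of the patent.

```python
import numpy as np

def ncc(f, g):
    """Normalized cross-correlation of two equal-size grey-value windows
    (formula 2): sum((f_i - mean_f)(g_i - mean_g)) / (n * std_f * std_g).
    The result lies in [-1, 1]."""
    f = np.asarray(f, float).ravel()
    g = np.asarray(g, float).ravel()
    denom = f.size * f.std() * g.std()
    if denom == 0.0:        # flat window: correlation undefined, report 0
        return 0.0
    return float(np.sum((f - f.mean()) * (g - g.mean())) / denom)

def mean_ncc(ref_window, visible_windows):
    """Formula (1): average ncc between the reference-image window of a seed
    patch and its projections into the other visible images."""
    return sum(ncc(ref_window, w) for w in visible_windows) / len(visible_windows)
```

A seed patch whose `mean_ncc` clears the threshold (0.8 in the embodiment) is treated as lying on a plane.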
Further, in step 2, the normal direction of an initial seed point is corrected to the direction perpendicular to its plane as follows.
A kd-tree is constructed over all seed points. First, for each seed point q_i, neighbouring points are searched within a relatively large neighbourhood, and a strict plane-fitting condition threshold1 is set: |planar points| ≥ max(0.7 · |neighbors|, 5), where neighbors denotes the neighbourhood points, planar points denotes the subset of the neighbourhood points lying on the fitted plane, and |·| denotes the count. Fitting a plane to all neighbourhood points yields the plane parameters at the seed point and a new normal direction.
A patch is constructed along the new normal direction and its NCC is computed; if the NCC exceeds a given threshold, the new normal direction is considered correct and the seed point's normal direction is corrected to it.
Next, for each seed point q_j whose normal vector has been corrected, its neighbourhood is searched for neighbouring seed points whose distance to the normal plane of q_j is less than the spatial distance corresponding to n pixels. For each such neighbour neighbor_j whose normal direction has not yet been corrected, its own neighbourhood is searched and a plane is fitted to obtain the neighbour's new normal direction normal_neighbor_j.
If the angle between normal_neighbor_j and the corrected normal normal_patch_j of q_j is less than 20°, neighbor_j is considered to lie on the same plane, and its normal direction is updated to normal_neighbor_j.
Finally, seed points whose normal direction was never corrected are removed from the initial seed point set.
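A minimal sketch of the first correction round follows. Assumptions are flagged plainly: the patent does not name a fitting method, so a total-least-squares plane via SVD is used here, and the inlier tolerance `tol` is a free parameter introduced for illustration; only the strict condition |planar points| ≥ max(0.7 · |neighbors|, 5) is taken from the text.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through 3-D points: returns (centroid, unit
    normal). The normal is the right singular vector of the smallest
    singular value of the centred point matrix."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def corrected_normal(neighbors, tol):
    """Fit a plane to a seed point's neighbourhood and return the new normal,
    or None if the strict condition |planar| >= max(0.7*|neighbors|, 5)
    fails (tol is an assumed point-to-plane inlier tolerance)."""
    pts = np.asarray(neighbors, float)
    centroid, normal = fit_plane(pts)
    dist = np.abs((pts - centroid) @ normal)   # point-to-plane distances
    planar = int(np.sum(dist < tol))
    if planar >= max(0.7 * len(pts), 5):
        return normal
    return None
```

Seed points whose neighbourhood fails this test keep an uncorrected normal and are later removed, which is the plane-filtering effect described above.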
Further, the seed points satisfying the plane constraint condition in step 3 are those whose NCC exceeds a given threshold.
Further, step 3 is implemented as follows.
Seed points whose normal direction has been corrected are taken from the seed point set in priority order. A patch with a side length of M pixels is built on the normal plane of each seed point and its NCC is computed; if the NCC exceeds a given threshold, the seed point is considered to lie on a plane, new seed points are generated directly around it, and their normal direction attribute is set to corrected-by-fitting.
The three-dimensional coordinates of the new seed points are computed as follows.
A local spatial coordinate system is established with the patch centre as the origin. The x-axis starts at the patch centre and follows the x-axis direction of the patch in the reference image, denoted a_x; the z-axis is the patch normal vector n(p); the y-axis starts at the patch centre and is perpendicular to the x- and z-axes, denoted a_y. The x- and y-axis vectors are normalized to unit vectors in object space, denoted a_x′ and a_y′, where a_x′ = (x_1, y_1, z_1), a_y′ = (x_2, y_2, z_2), and x_i, y_i, z_i (i = 1, 2) are the three components of each vector in object space.
The spatial resolution of the image, i.e. the object-space distance corresponding to one pixel, is computed by formula (3):

d = Z / f    (3)

where d is the spatial distance corresponding to one pixel on the patch plane; Z is the distance from the patch centre to the camera optical centre; and f is the camera focal length.
The coordinates P_new of a new three-dimensional point generated by expansion are computed by formula (4):

P_new = (x_0, y_0, z_0) + (Δx · a_x′ + Δy · a_y′) · M · d    (4)

where P_new = (x_new, y_new, z_new) are the coordinates of the centre of the newly expanded patch; (x_0, y_0, z_0) are the coordinates of the patch centre; M is the patch side length in pixels; and Δx, Δy ∈ {−1, 0, 1} are the step lengths of the planar displacement of the expanded patch centre on the patch.
Patches are thus generated by rapid expansion on the plane. Points that do not lie on a plane continue to be optimized point by point with the PMVS method during expansion, and the normal direction of such new patches is set to not-corrected-by-fitting.
New patches are expanded in the same way until the seed point set is empty, at which point expansion terminates.
Further, the threshold value of NCC is 0.8.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1) By considering the geometric characteristics of buildings in the images and introducing an edge detection operator, the invention increases the number and improves the distribution of dense point cloud seed points and raises the saliency of the corner points.
2) The invention introduces a point cloud normal direction correction step that corrects the normal directions of the generated seed points, and reduces optimization computation through the plane constraint, speeding up patch expansion.
3) The proposed improved algorithm markedly increases point cloud generation efficiency while achieving the same results as mainstream dense matching algorithms.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 compares features before and after adding the Canny operator in the embodiment: (a)(b) the original image and the edge map extracted by the Canny operator, respectively; (c)(d) the number and distribution of features in region 1 before and after adding the Canny operator; (e)(f) the same comparison for region 2.
FIG. 3 illustrates the variables of the normal direction correction algorithm in the embodiment; the open circles represent seed points of the first round of normal correction, and the open and solid square points represent seed points of the second round of normal correction.
FIG. 4 shows seed point filtering and normal vector correction: (a)(b)(c) the positions and normal directions of the seed points before correction; (d)(e)(f) the positions and normal directions after correction.
FIG. 5 shows the patch model in this embodiment: (a) a schematic diagram of the patch model; (b) the patch size of the PMVS algorithm; (c) the patch size of the present method.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
To obtain the transformation between the aerial and ground point clouds, the whole urban area need not be reconstructed. Since buildings are salient geographic elements of urban areas, building point clouds can drive the computation of the transformation: registering the aerial and ground building point clouds yields the transformation between the aerial and ground images, so fast and reliable reconstruction of buildings is key to the efficiency of the whole joint modelling process. Exploiting the pronounced planar features of urban buildings, the patch-based multi-view dense matching algorithm is improved: an edge detection operator is introduced to increase the number of initial seed points and improve their distribution; a plane filtering step removes non-planar seed points; the initial normal directions of the seed points are corrected by plane fitting; and patches are generated rapidly under a normalized cross-correlation coefficient constraint. This improves the efficiency of dense point cloud generation in planar building areas, accelerates the dense matching process, and provides a reliable data basis for subsequent air-ground image fusion using the point cloud.
The method takes images as input. First, an edge detection operator is introduced to binarize the images and raise the saliency of the corner features. After the seed points are generated, their normal directions are corrected to be perpendicular to the plane by plane fitting, preparing for the subsequent rapid generation of dense point cloud. Finally, the planar point cloud is generated rapidly under the plane constraint, reducing optimization computation during point cloud generation and improving the time efficiency of dense point cloud generation; the overall process is shown in FIG. 1. The method comprises the following steps:
step (1), firstly, introducing an edge detection operator to binarize an image, and extracting angular point features by using a Harris operator to further obtain initial seed points
And introducing a Canny edge detection operator to carry out binarization processing on the original image, and extracting corner features by using a Harris operator. As shown in fig. 2, (a) is an original image, and (b) is an edge image extracted by using the Canny operator, it can be seen from the comparison of the two regions (c) (d) and (e) (f) that the number of feature corners extracted by adding the Canny operator is significantly increased.
After the corner features are obtained, the 8 Harris corners with the largest response values are selected within each 32 × 32-pixel region (hereinafter called an image grid) of each image.
Each image in turn is taken as the reference image; the remaining images are ranked by viewing angle and distance, and the 6 highest-priority images are selected as the images to be matched.
For each feature point in a grid of the reference image, the corresponding feature is found on the images to be matched through epipolar geometric constraint, and the spatial three-dimensional coordinates of the feature point are computed by triangulation. As soon as the three-dimensional coordinates of one feature point in a grid are successfully computed, or all feature points in the grid have been processed, computation moves on to the next grid on the image, until all grids on the image have been processed.
All images are processed in this way to obtain the initial seed points of the three-dimensional point cloud model.
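The triangulation mentioned in the step above can be illustrated with standard linear (DLT) two-view triangulation. The patent does not specify a solver, so this is a generic stand-in, not the claimed implementation:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence from two 3x4
    projection matrices P1, P2 and normalized image points x1, x2 = (x, y).
    Each view contributes two rows x*P[2] - P[0] and y*P[2] - P[1]; the
    homogeneous 3-D point spans the null space of the stacked system."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector = homogeneous point
    return X[:3] / X[3]        # dehomogenize
```

In the pipeline above, a correspondence found under epipolar constraint would be fed through such a routine to yield the seed point's object-space coordinates.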
Step (2): optimize the position and normal direction of each initial seed point by maximizing the average normalized cross-correlation coefficient NCC of the image windows corresponding to its seed patch, and correct the initial seed point's normal direction to be perpendicular to its plane by plane fitting, obtaining the set of seed points lying on planes; a seed patch is the plane through a seed point perpendicular to the seed point's normal direction. The specific method is as follows.
The position of an initial seed point is obtained by triangulation under epipolar geometric constraint, and the initial normal direction of the seed patch points from the seed point toward the optical centre of the reference image. The position and normal direction of the initial seed point are then optimized by maximizing the average normalized cross-correlation coefficient NCC of the image windows corresponding to the seed patch; the optimization function is formula (1):

NCC(p) = (1 / (|V(p)| − 1)) · Σ_{I_j ∈ V(p), I_j ≠ R(p)} ncc(R(p), I_j, p)    (1)

where NCC is the average normalized cross-correlation coefficient of the correlation windows obtained by projecting the seed point into its visible images; V(p) is the set of images in which seed point p is visible; |V(p)| is the number of those images; R(p) is the reference image of the seed point; I_j is a visible image of the seed point; and ncc(·) is the normalized cross-correlation function of the seed point, formula (2):

ncc(f, g) = Σ_i (f_i − f̄)(g_i − ḡ) / (n · δ_f · δ_g)    (2)

where f and g are the pixel grey values of the image windows corresponding to the patch; f̄ and ḡ are the mean grey values of the windows; n is the number of pixels in a window; and δ_f and δ_g are the standard deviations of the windows' grey values.
The implementation process is schematically shown in FIG. 3.
A kd-tree (k-dimensional tree) is constructed over all seed points. First, for each seed point q_i, neighbouring points are searched within a relatively large neighbourhood (search radius 64 pixels), and a strict plane-fitting condition threshold1 is set: |planar points| ≥ max(0.7 · |neighbors|, 5), where neighbors denotes the neighbourhood points, planar points denotes the subset of the neighbourhood points lying on the fitted plane, and |·| denotes the count. Fitting a plane to all neighbourhood points yields the plane parameters at the seed point and a new normal direction.
A patch is constructed along the new normal direction and its NCC is computed. If NCC ≥ 0.8 (0.8 is an empirical value; other values may be used), the new normal direction is considered correct, and the seed point's normal direction is corrected to it.
Through these steps, part of the seed points obtain corrected normal directions; the strict screening condition at this stage better guarantees that the normal vectors of this first batch of corrected seed points are correct.
Next, for each seed point q_j whose normal vector has been corrected, its neighbourhood (32 pixels) is searched for neighbouring seed points whose distance to the normal plane of q_j is less than the spatial distance corresponding to 4 pixels. For each such neighbour neighbor_j whose normal direction has not yet been corrected, its own neighbourhood is searched and a plane is fitted to obtain the neighbour's new normal direction normal_neighbor_j.
If the angle between normal_neighbor_j and the corrected normal normal_patch_j of q_j is less than 20°, neighbor_j is considered to lie on the same plane, and its normal direction is updated to normal_neighbor_j.
Finally, seed points whose normal direction was never corrected are removed from the initial seed point set. The number and normal directions of the seed points before and after filtering are shown in FIG. 4: (a)(d) the seed point distributions before and after correction; (b)(e) front views of the seed point normal directions before and after correction; (c)(f) top views of the seed point normal directions before and after correction.
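The second-round propagation test described above, a neighbour closer to the corrected seed's normal plane than 4 pixels' worth of space, with fitted normals agreeing to within 20°, can be sketched as follows. The parameter `pixel_size` stands for the object-space size of one pixel (the quantity d of formula (3)); treating the "4 pixels" threshold this way is an assumption of this sketch.

```python
import numpy as np

def on_same_plane(q, n_q, neighbor, n_neighbor, pixel_size,
                  max_px=4, max_deg=20.0):
    """Second-round propagation test: neighbor must lie near the normal plane
    of the corrected seed q, and the two fitted normals must agree to within
    max_deg degrees."""
    n_q = np.asarray(n_q, float) / np.linalg.norm(n_q)
    n_nb = np.asarray(n_neighbor, float) / np.linalg.norm(n_neighbor)
    # Distance from the neighbour to the plane through q with normal n_q.
    plane_dist = abs(np.dot(np.asarray(neighbor, float) - np.asarray(q, float), n_q))
    if plane_dist >= max_px * pixel_size:
        return False
    # Angle between normals; the sign of a fitted normal is ambiguous,
    # so compare with |cos|.
    cos_angle = min(abs(np.dot(n_q, n_nb)), 1.0)
    return bool(np.degrees(np.arccos(cos_angle)) < max_deg)
```

Neighbours passing both checks inherit corrected status, letting corrected normals propagate across a building plane without refitting every point from scratch.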
Step (3): for seed points in the set that satisfy the plane constraint condition, compute the positions of the newly expanded neighbouring seed points from the seed patch containing the seed point when expanding on the plane, and assign the seed point's normal direction to the newly expanded neighbours; for seed points that do not satisfy the plane constraint condition, expand new seed points with the PMVS (Patch-based Multi-View Stereo) method; add successfully expanded seed points to the seed point set. This reduces optimization computation during planar point cloud expansion and improves its efficiency, rapidly generating the planar point cloud.
Seed points whose normal direction has been corrected are taken from the seed point set in priority order; a patch with a side length of 32 pixels is built on the normal plane of each seed point and its NCC is computed. If NCC ≥ 0.8, the seed point is considered to lie on a plane, so new seed points can be generated directly around it; their normal direction attribute is set to corrected-by-fitting, eliminating the optimization of their positions and normal directions. As shown in FIG. 5: (a) the patch model formed by a centre point and normal vector; (b) the patch size model of PMVS; (c) the patch model of the present method, which takes the seed point as the centre and the 32 × 32-pixel plane perpendicular to the seed point's normal direction as the seed patch, and expands into the 8 neighbourhoods around the seed patch to generate new seed points; the open points represent existing seed points, and the solid points represent new three-dimensional points generated by expansion.
The new three-dimensional point coordinate calculation method is as follows:
A local space coordinate system is established with the patch center as the origin: the x axis starts from the patch center and follows the x-axis direction of the patch in the reference image, denoted a_x; the z axis is the normal vector n(p) of the patch; the y axis starts from the patch center and is perpendicular to both the x axis and the z axis, denoted a_y. The x-axis and y-axis vectors are normalized to unit vectors in object space, denoted a_x′ and a_y′, where a_x′ = (x_1, y_1, z_1), a_y′ = (x_2, y_2, z_2), and x_i, y_i, z_i (i = 1, 2) represent the three components of each vector in object three-dimensional space.
The spatial resolution of the image, i.e. the spatial distance corresponding to a unit pixel, is calculated as shown in formula (3).
d = Z / f        (3)
In the formula: d represents the spatial distance corresponding to a unit pixel on the plane of the patch; Z represents the distance from the patch center to the camera optical center; f represents the camera focal length.
The coordinates P_new of the new three-dimensional point generated by expansion are calculated as shown in formula (4):

P_new = P_0 + 32 · d · (Δx · a_x′ + Δy · a_y′)        (4)

in the formula: P_new = (x_new, y_new, z_new) represents the coordinates of the center point of the new patch generated by expansion; P_0 = (x_0, y_0, z_0) represents the coordinates of the center point of the seed patch; Δx, Δy ∈ {-1, 0, 1} represent the steps of the plane-coordinate displacement of the expanded patch center on the seed patch.
Through the above steps, patches can be generated by rapid expansion on a plane; points that do not belong to a plane are still optimized point by point during expansion by the PMVS method, and the normal direction of such new patches is set to "not corrected by fitting".
And continuously expanding the new surface patches according to the method until the seed point set is empty, and finishing the expansion.
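The expansion loop described above (cheap planar expansion for on-plane seeds, PMVS-style optimization otherwise, repeated until the seed set is empty) might be organized as a priority-queue driver. This is only a sketch: `plane_expand`, `pmvs_expand` and `ncc_of` are hypothetical callbacks standing in for the operations in the text, and seeds are (priority, point) pairs where a lower value means higher priority:

```python
import heapq

def expand_seed_points(seeds, plane_expand, pmvs_expand, ncc_of, ncc_thresh=0.8):
    """Drive the expansion until the seed queue is empty.

    seeds        : iterable of (priority, point) pairs (lower value = higher priority)
    plane_expand : point -> new (priority, point) pairs via the fast planar scheme
    pmvs_expand  : point -> new (priority, point) pairs via PMVS-style optimization
    ncc_of       : point -> NCC score of the point's seed patch
    """
    queue = list(seeds)
    heapq.heapify(queue)
    cloud = []
    while queue:
        _, p = heapq.heappop(queue)      # take the highest-priority seed
        cloud.append(p)
        if ncc_of(p) >= ncc_thresh:      # on-plane: direct 8-neighborhood expansion
            new_seeds = plane_expand(p)
        else:                            # off-plane: fall back to PMVS optimization
            new_seeds = pmvs_expand(p)
        for s in new_seeds:              # successfully expanded seeds rejoin the set
            heapq.heappush(queue, s)
    return cloud
```

The loop terminates once the expansion callbacks stop producing new seeds, matching "until the seed point set is empty".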
According to the invention, by inputting the images and their exterior orientation elements, the dense point cloud of the scene captured by the images can be obtained rapidly, with good point cloud quality and high computational efficiency.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications, additions or substitutions to the described embodiments without departing from the spirit of the invention or the scope of the appended claims.

Claims (7)

1. A building target point cloud rapid generation method for air-to-ground image data fusion is characterized by comprising the following steps:
step 1, firstly binarizing the image and extracting corner point features to obtain initial seed points;
step 2, optimizing and adjusting the position and normal direction of each initial seed point by maximizing the average normalized cross-correlation coefficient NCC of the image windows corresponding to its seed patch, and correcting the normal direction of the initial seed point to the direction perpendicular to the plane by plane fitting, to obtain a set of seed points located on planes, wherein the seed patch is the plane passing through the seed point and perpendicular to the normal direction of the seed point;
step 3, calculating the positions of adjacent expanded seed points through a seed surface patch where the seed points are located when the seed points are expanded on a plane for the seed points meeting the plane constraint condition in the seed point set, and assigning the normal direction of the seed points to the newly expanded adjacent seed points; and for the seed points which do not meet the plane constraint condition, expanding and generating new seed points according to a PMVS method, and adding the seed points which are successfully expanded into the seed point set, thereby quickly generating the plane point cloud.
2. The method for rapidly generating the building target point cloud for the air-to-ground image data fusion as claimed in claim 1, wherein: introducing a Canny edge detection operator to carry out binarization processing on an original image, extracting corner features by using a Harris operator, and selecting a plurality of Harris corners with the largest response values in a fixed-size pixel region of each image after obtaining the corner features, wherein the pixel region is a grid;
sequentially taking each image as a reference image, carrying out priority ranking on the rest images according to the visual angle and the distance, and selecting a plurality of images with the top priorities as images to be matched;
finding out the homonymous feature of each feature point in a reference image grid on an image to be matched through epipolar geometric constraint, calculating a spatial three-dimensional point coordinate corresponding to the feature point through triangulation, entering the next grid on the image once the three-dimensional point coordinate of one feature point in the grid is successfully calculated or all the feature points in the grid are completely calculated, and starting new three-dimensional point coordinate calculation until all the grids on the image are completely calculated;
and all the images are operated according to the steps to obtain initial seed points of the three-dimensional point cloud model.
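The per-grid corner selection of claim 2 ("selecting a plurality of Harris corners with the largest response values in a fixed-size pixel region") could be sketched as below, assuming a precomputed Harris response map; the function name and grid/k defaults are illustrative, not from the patent:

```python
import numpy as np

def top_corners_per_grid(responses, grid=32, k=4):
    """Keep the k strongest Harris responses in each grid cell (hypothetical helper).

    responses : 2-D array of Harris corner response values
    grid      : side length in pixels of the fixed-size grid cells
    k         : number of corners to keep per cell
    """
    h, w = responses.shape
    kept = []
    for gy in range(0, h, grid):
        for gx in range(0, w, grid):
            cell = responses[gy:gy + grid, gx:gx + grid]
            if cell.size == 0:
                continue
            # indices of the k largest responses inside this cell
            for idx in np.argsort(cell, axis=None)[::-1][:k]:
                cy, cx = np.unravel_index(idx, cell.shape)
                if cell[cy, cx] > 0:          # discard non-corner (zero) responses
                    kept.append((gy + cy, gx + cx))
    return kept
```

Each kept corner would then be matched along the epipolar line and triangulated, with at most one successful three-dimensional point retained per grid.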
3. The method for rapidly generating the building target point cloud for the air-to-ground image data fusion as claimed in claim 1, wherein: in step 2, the specific calculation mode of NCC is as follows;
the position of the initial seed point is obtained by triangularization under epipolar geometric constraint, the initial normal direction of the seed surface patch points to the optical center of the reference image from the seed point, the position and the normal direction of the initial seed point are optimized and adjusted by maximizing the normalized cross correlation coefficient average value NCC of the image window corresponding to the seed surface patch, and the optimization function is shown in a formula (1);
NCC = (1 / |V(p)|) · Σ_{I_j ∈ V(p)} ncc(p, R(p), I_j)        (1)
in the formula: NCC represents the average value of the normalized cross-correlation coefficients of the correlation windows obtained by projecting the seed point into its visible images; V(p) represents the set of visible images of the seed point; |V(p)| represents the number of visible images of the seed point; R(p) represents the reference image corresponding to the seed point; I_j represents a visible image of the seed point; ncc(·) represents the normalized cross-correlation coefficient function of the seed point, see formula (2);
ncc(f, g) = ( (1/N) · Σ_{i=1}^{N} (f_i − f̄)(g_i − ḡ) ) / (δ_f · δ_g)        (2)
in the formula: f and g respectively represent the pixel gray values of the image windows corresponding to the patch; f̄ and ḡ represent the average pixel gray value of each window; δ_f and δ_g represent the standard deviations of the pixel gray values of the windows.
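Formulas (1) and (2) amount to the standard normalized cross-correlation of image windows; a minimal NumPy sketch (function names are ours, not the patent's):

```python
import numpy as np

def ncc(f, g):
    """Normalized cross-correlation of two equal-size image windows (formula (2))."""
    f = np.asarray(f, dtype=float).ravel()
    g = np.asarray(g, dtype=float).ravel()
    f_mean, g_mean = f.mean(), g.mean()        # the window means
    d_f, d_g = f.std(), g.std()                # the standard deviations δ_f, δ_g
    return ((f - f_mean) * (g - g_mean)).mean() / (d_f * d_g)

def mean_ncc(ref_win, windows):
    """Average NCC of a reference window over all visible-image windows (formula (1))."""
    return float(np.mean([ncc(ref_win, w) for w in windows]))
```

The result lies in [-1, 1]: identical windows score 1, a contrast-inverted window scores -1, which is why a threshold such as 0.8 indicates strong photometric agreement.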
4. The method for rapidly generating the building target point cloud for the air-to-ground image data fusion as claimed in claim 1, wherein: in step 2, the specific implementation manner of correcting the normal direction of the initial seed point to the direction perpendicular to the plane is as follows,
constructing a kd-tree for all seed points; first, for each seed point q_i, neighbor points are searched within a relatively large neighborhood range, a strict plane fitting condition threshold1 is set, namely |planar_points| ≥ max(0.7 · |neighbors|, 5), and plane fitting is performed on all neighborhood points to obtain the parameters of the plane on which the seed point lies and a new normal direction, where planar_points denotes the subset of neighborhood points lying on the plane, neighbors denotes the neighborhood points, and |·| denotes the number of elements;
a patch is constructed according to the new normal direction and its NCC is calculated; if the NCC is greater than a certain threshold value, the new normal direction is considered correct, and the normal direction of the seed point is corrected to this direction;
next, for each seed point q_j whose normal vector has been corrected, its neighborhood is searched to find the neighbor seed points whose distance to the normal plane of q_j is less than the spatial distance corresponding to n pixels; for each such neighbor point neighbor_j whose normal direction has not been corrected, its neighborhood is searched and a plane is fitted to obtain a new normal direction of the neighbor point;
if the included angle between the new normal direction n_new of neighbor_j and the normal direction normal_patch_j of q_j is less than 20 degrees, the neighbor point neighbor_j is also considered to lie on the plane, and the normal direction of neighbor_j is updated to n_new;
and finally, seed points whose normal direction has never been corrected are removed from the initial seed point set.
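The plane-fitting normal correction of claim 4 can be illustrated with a least-squares plane fit via SVD; the 20-degree agreement test follows the claim, while the function names and the sign flip that orients the fitted normal consistently with the old one are our assumptions:

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal of a set of 3-D neighborhood points (SVD fit)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right singular vector of the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def correct_seed_normal(seed_normal, neighbours, angle_deg=20.0):
    """Replace a seed's normal by the fitted one when they agree within angle_deg."""
    n_new = fit_plane_normal(neighbours)
    if np.dot(n_new, seed_normal) < 0:       # orient consistently with the old normal
        n_new = -n_new
    cos_a = np.clip(np.dot(n_new, seed_normal), -1.0, 1.0)
    if np.degrees(np.arccos(cos_a)) < angle_deg:
        return n_new, True                   # corrected: the seed lies on the plane
    return seed_normal, False                # left uncorrected
```

For neighborhood points lying in the z = 0 plane and a seed normal near (0, 0, 1), the fitted normal agrees within 20 degrees and the correction is accepted.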
5. The method for rapidly generating the building target point cloud for the air-to-ground image data fusion as claimed in claim 1, wherein: the seed point satisfying the plane constraint condition in step 3 is a point at which NCC is greater than a certain threshold.
6. The method for rapidly generating the building target point cloud for the air-to-ground image data fusion as claimed in claim 1, wherein: the specific implementation manner of the step 3 is as follows;
taking out, in priority order, the seed points in the seed point set whose normal direction has been corrected; establishing a patch with a side length of M pixels on the normal plane of each seed point and calculating its NCC; if the NCC is greater than a certain threshold value, the seed point is considered to lie on the plane, new seed points are generated directly around it, and their normal direction attribute is set to "corrected by fitting";
the new method for calculating the three-dimensional point coordinates of the seed points comprises the following steps:
a local space coordinate system is established with the patch center as the origin: the x axis starts from the patch center and follows the x-axis direction of the patch in the reference image, denoted a_x; the z axis is the normal vector n(p) of the patch; the y axis starts from the patch center and is perpendicular to both the x axis and the z axis, denoted a_y; the x-axis and y-axis vectors are normalized to unit vectors in object space, denoted a_x′ and a_y′, where a_x′ = (x_1, y_1, z_1), a_y′ = (x_2, y_2, z_2), and x_i, y_i, z_i (i = 1, 2) represent the three components of each vector in object three-dimensional space;
calculating the spatial resolution of the image, namely the distance of the unit pixel corresponding to the space, wherein the calculation formula is shown in formula (3);
d = Z / f        (3)
in the formula: d represents the spatial distance corresponding to a unit pixel on the plane of the patch; Z represents the distance from the patch center to the camera optical center; f represents the camera focal length;
the coordinates P_new of the new three-dimensional point generated by expansion are calculated as shown in formula (4):

P_new = P_0 + M · d · (Δx · a_x′ + Δy · a_y′)        (4)

in the formula: P_new = (x_new, y_new, z_new) represents the coordinates of the center point of the new patch generated by expansion; P_0 = (x_0, y_0, z_0) represents the coordinates of the center point of the seed patch; Δx, Δy ∈ {-1, 0, 1} represent the steps of the plane-coordinate displacement of the expanded patch center on the seed patch;
therefore, patches are generated by rapid expansion on the plane; points that do not belong to the plane are still optimized point by point during expansion by the PMVS method, and the normal direction of such new patches is set to "not corrected by fitting";
and continuously expanding the new surface patches according to the method until the seed point set is empty, and finishing the expansion.
7. The method for rapidly generating the building target point cloud for the air-to-ground image data fusion as claimed in claim 4, 5 or 6, wherein: the threshold value of NCC is 0.8, and the seed point is considered to lie on the plane when NCC ≥ 0.8.
CN202210012783.XA 2022-01-07 2022-01-07 Building target point cloud rapid generation method for air-ground image data fusion Active CN114463521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210012783.XA CN114463521B (en) 2022-01-07 2022-01-07 Building target point cloud rapid generation method for air-ground image data fusion

Publications (2)

Publication Number Publication Date
CN114463521A true CN114463521A (en) 2022-05-10
CN114463521B CN114463521B (en) 2024-01-30

Family

ID=81410114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210012783.XA Active CN114463521B (en) 2022-01-07 2022-01-07 Building target point cloud rapid generation method for air-ground image data fusion

Country Status (1)

Country Link
CN (1) CN114463521B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103017739A (en) * 2012-11-20 2013-04-03 武汉大学 Manufacturing method of true digital ortho map (TDOM) based on light detection and ranging (LiDAR) point cloud and aerial image
US20130208997A1 (en) * 2010-11-02 2013-08-15 Zte Corporation Method and Apparatus for Combining Panoramic Image
CN105069843A (en) * 2015-08-22 2015-11-18 浙江中测新图地理信息技术有限公司 Rapid extraction method for dense point cloud oriented toward city three-dimensional modeling
CN105825543A (en) * 2016-03-31 2016-08-03 武汉大学 Multi-view dense point cloud generation method and system based on low-altitude remote sensing images
CN105957076A (en) * 2016-04-27 2016-09-21 武汉大学 Clustering based point cloud segmentation method and system
CN111079611A (en) * 2019-12-09 2020-04-28 成都奥伦达科技有限公司 Automatic extraction method for road surface and marking line thereof
CN112489212A (en) * 2020-12-07 2021-03-12 武汉大学 Intelligent three-dimensional mapping method for building based on multi-source remote sensing data
CN112927360A (en) * 2021-03-24 2021-06-08 广州蓝图地理信息技术有限公司 Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data
US20210358206A1 (en) * 2020-05-14 2021-11-18 Star Institute Of Intelligent Systems Unmanned aerial vehicle navigation map construction system and method based on three-dimensional image reconstruction technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YINGLONG HU ET AL.: ""Unmanned Aerial Vehicle and Ground Remote Sensing Applied in 3D Reconstruction of Historical Building Groups in Ancient Villages"", 《2018 FIFTH INTERNATIONAL WORKSHOP ON EARTH OBSERVATION AND REMOTE SENSING APPLICATIONS》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114937123A (en) * 2022-07-19 2022-08-23 南京邮电大学 Building modeling method and device based on multi-source image fusion
CN114937123B (en) * 2022-07-19 2022-11-04 南京邮电大学 Building modeling method and device based on multi-source image fusion
CN115187843A (en) * 2022-07-28 2022-10-14 中国测绘科学研究院 Depth map fusion method based on object space voxel and geometric feature constraint
CN115187843B (en) * 2022-07-28 2023-03-14 中国测绘科学研究院 Depth map fusion method based on object space voxel and geometric feature constraint

Also Published As

Publication number Publication date
CN114463521B (en) 2024-01-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant