CN113345072A - Multi-view remote sensing topographic image point cloud reconstruction method and system - Google Patents


Info

Publication number
CN113345072A
Authority
CN
China
Prior art keywords
image
point
points
surface elements
remote sensing
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202110608819.6A
Other languages
Chinese (zh)
Inventor
杨景玉
王阳萍
刘喜兵
党建武
雍玖
岳彪
王文润
王松
杨旭
张晶
穆聪
Current Assignee
Guohua Satellite Data Technology Co., Ltd
Lanzhou Jiaotong University
Original Assignee
Lanzhou Jiaotong University
Priority date
Filing date
Publication date
Application filed by Lanzhou Jiaotong University
Priority to CN202110608819.6A
Publication of CN113345072A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/00: Image enhancement or restoration
    • G06T5/40: Image enhancement or restoration using histogram techniques
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a point cloud reconstruction method and system for multi-view remote sensing terrain images. The method comprises the following steps: acquiring a plurality of multi-view remote sensing terrain images; performing feature extraction on the images to obtain image features; matching feature points according to the image features under set constraint conditions; performing forward intersection on the two feature points of each feature point pair to obtain model points; generating surface elements from the model points, each surface element serving as a seed point, the seed points forming a seed point set; diffusing the surface elements corresponding to the seed points to obtain dense surface elements; filtering the dense surface elements according to set filtering conditions to obtain filtered surface elements; updating the seed point set with the filtered surface elements; and returning to the diffusion step until the number of iterations reaches a preset number, yielding the three-dimensional point cloud of the multi-view remote sensing terrain images. The invention improves reconstruction accuracy and efficiency.

Description

Multi-view remote sensing topographic image point cloud reconstruction method and system
Technical Field
The invention relates to the technical field of point cloud reconstruction, in particular to a point cloud reconstruction method and system for a multi-view remote sensing terrain image.
Background
Point cloud reconstruction from multi-view remote sensing terrain images refers to recovering the spatial three-dimensional point cloud structure of a target scene from a series of multi-view terrain images collected by a camera, using the camera parameters and the internal constraint relationships between the images. Multi-view reconstruction requires at least two images, with a certain degree of overlap between them.
Existing point cloud reconstruction techniques based on multi-view images fall mainly into four categories: depth-map fusion, voxel methods, deformable polygon meshes, and surface-element (patch) expansion. Depth-map fusion, represented by the semi-global matching (SGM) algorithm, first requires a dense and accurate depth map, and then reconstructs the three-dimensional point cloud scene by fusing depth maps under certain constraints; the depth maps obtained this way are noisy and highly redundant, which degrades reconstruction accuracy and efficiency. The voxel method, represented by the graph-cut algorithm, requires initializing a bounding box containing the reconstructed scene, and its reconstruction accuracy is limited by the resolution of the voxel grid. The deformable polygon mesh method, represented by the visual hull algorithm, needs a good starting point to initialize the corresponding optimization process; it is usually limited to particular scene datasets and has low flexibility, which restricts its applicability. Existing surface-element expansion methods involve many parameters, so their reconstruction efficiency is low.
Disclosure of Invention
The invention aims to provide a multi-view remote sensing terrain image point cloud reconstruction method and system, which improve reconstruction accuracy and efficiency.
In order to achieve the purpose, the invention provides the following scheme:
a point cloud reconstruction method for a multi-view remote sensing topographic image comprises the following steps:
acquiring a plurality of multi-view remote sensing terrain images;
carrying out feature extraction on the multi-view remote sensing topographic image to obtain image features;
under the set constraint condition, carrying out feature point matching according to the image features to obtain a plurality of feature point pairs;
carrying out forward intersection on the two feature points of each feature point pair to obtain model points;
generating surface elements according to the model points, wherein the surface elements are used as seed points, and the seed points form a seed point set;
diffusing according to the surface elements corresponding to the seed points to obtain dense surface elements;
filtering the dense surface elements according to set filtering conditions to obtain filtered surface elements;
updating the seed point set by the filtered surface element;
and returning to the step of diffusing according to the surface elements corresponding to the seed points to obtain dense surface elements until the iteration times reach the preset times, and obtaining the three-dimensional point cloud of the multi-view remote sensing terrain image.
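The iterative diffuse-and-filter loop described in the steps above can be sketched as follows. This is a minimal Python illustration: the patch representation and the `expand`/`filter_patches` callables are placeholders, not the patent's implementation.

```python
def reconstruct(seed_patches, expand, filter_patches, n_iters=3):
    """Iterate the diffusion and filtering steps on a seed set.

    seed_patches   -- initial surface elements (seed points)
    expand         -- diffusion step: seeds -> dense surface elements
    filter_patches -- filtering step: dense elements -> filtered elements
    n_iters        -- the preset number of iterations
    """
    patches = list(seed_patches)
    for _ in range(n_iters):
        dense = expand(patches)          # diffuse around each seed
        patches = filter_patches(dense)  # drop inconsistent elements; new seed set
    return patches  # centres of the surviving elements form the point cloud
```

With trivial stand-ins for the two steps, e.g. `expand = lambda ps: ps + [p + 1 for p in ps]` and `filter_patches = lambda ps: sorted(set(ps))`, the loop grows and deduplicates the seed set once per iteration.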
Optionally, the performing feature extraction on the multi-view remote sensing terrain image to obtain image features specifically includes:
cutting the multi-view remote sensing terrain image to obtain an image block;
extracting features from the image blocks using a concurrent SIFT operator to obtain image features; the image features are recorded together with the position information of their image blocks.
Optionally, under the set constraint condition, performing feature point matching according to the image features to obtain a plurality of feature point pairs, specifically including:
determining reference images and non-reference images from the multi-view remote sensing terrain images, wherein the number of the reference images is 1;
under a set constraint condition, finding out matched feature points corresponding to each feature point in the reference image from the non-reference image, wherein each feature point and the corresponding matched feature point form a feature point pair; the feature points are points in the image feature.
Optionally, the set constraints include epipolar line constraints and ground elevation range constraints.
Optionally, the generating a surface element according to each model point, each surface element serving as a seed point, and each seed point constituting a seed point set specifically includes:
calculating the distance from each model point to the camera center of the reference image;
sorting the corresponding model points according to the distance from small to large;
and initializing surface elements from the model points in that order; whenever initialization succeeds, the resulting surface element is taken as a seed point, and the seed points form a seed point set.
Optionally, the diffusing the surface element corresponding to each seed point to obtain a dense surface element specifically includes:
if a surface element with an average correlation coefficient larger than a set coefficient threshold value exists in a neighborhood of the grid where the seed point is located, or a surface element with a distance from the seed point to the surface element within a first set distance range exists in the neighborhood, surface element diffusion is not performed on the neighborhood, otherwise, surface element diffusion is performed on the neighborhood; the normal vectors of the diffused surface elements are the same as those of the surface elements corresponding to the seed points, and the centers of the diffused surface elements are the intersection points of the light rays passing through the grid centers of the neighborhoods and the planes where the surface elements corresponding to the seed points are located.
Optionally, the filtering the dense surface elements according to a set filtering condition to obtain filtered surface elements specifically includes:
deleting the surface element with the minimum depth in the dense surface elements, deleting the surface elements with the normal vector included angles larger than a set included angle range in the dense surface elements, deleting the surface elements with the distances larger than a second set distance range in the dense surface elements, and obtaining the filtered surface elements.
The invention also discloses a multi-view remote sensing topographic image point cloud reconstruction system, which comprises:
the multi-view remote sensing terrain image acquisition module is used for acquiring a plurality of multi-view remote sensing terrain images;
the image feature extraction module is used for extracting features of the multi-view remote sensing topographic image to obtain image features;
the characteristic point pair obtaining module is used for matching characteristic points according to the image characteristics under a set constraint condition to obtain a plurality of characteristic point pairs;
the model point acquisition module is used for carrying out forward intersection on two feature points in the feature point pairs to obtain model points;
the seed point acquisition module is used for generating surface elements according to the model points, wherein each surface element is used as a seed point, and each seed point forms a seed point set;
the surface element diffusion module is used for diffusing the surface elements corresponding to the seed points to obtain dense surface elements;
the surface element filtering module is used for filtering the dense surface elements according to set filtering conditions to obtain filtered surface elements;
a seed point updating module for updating the seed point set by the filtered surface element;
a returning module, configured to return "performing diffusion according to the surface element corresponding to each seed point to obtain a dense surface element" when the iteration number does not reach the preset number;
and the three-dimensional point cloud determining module is used for obtaining the three-dimensional point cloud of the multi-view remote sensing terrain image when the iteration times reach the preset times.
Optionally, the image feature extraction module specifically includes:
the image cutting unit is used for cutting the multi-view remote sensing terrain image to obtain an image block;
the image feature acquisition unit is used for extracting features from the image blocks using a concurrent SIFT operator to obtain image features; the image features are recorded together with the position information of their image blocks.
Optionally, the characteristic point pair obtaining module specifically includes:
the reference image determining unit is used for determining reference images and non-reference images from the multi-view remote sensing terrain images, and the number of the reference images is 1;
a feature point pair obtaining unit, configured to find, under a set constraint condition, a matching feature point corresponding to each feature point in the reference image from the non-reference image, where each feature point and the corresponding matching feature point form a feature point pair; the feature points are points in the image feature.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a multi-view remote sensing topographic image point cloud reconstruction method and a multi-view remote sensing topographic image point cloud reconstruction system, wherein image characteristics of a plurality of multi-view remote sensing topographic images are obtained, characteristic point pairs are obtained according to the image characteristics, two characteristic points in the characteristic point pairs are subjected to forward intersection to obtain model points, surface elements are generated according to the model points, each surface element is used as a seed point, the diffusion and filtering processes of the surface elements corresponding to the seed points are iterated to obtain three-dimensional point clouds of the multi-view remote sensing topographic images, the used parameters in the reconstruction process are few, the reconstruction efficiency is improved, and the reconstruction accuracy is improved by iterating the diffusion and filtering processes of the surface elements corresponding to the seed points.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a multi-view remote sensing topographic image point cloud reconstruction method of the present invention;
FIG. 2 is a schematic structural diagram of a multi-view remote sensing topographic image point cloud reconstruction system according to the present invention;
FIG. 3 is a block strategy diagram of a multi-view remote sensing topographic image of the present invention;
FIG. 4 is a schematic diagram of image characteristics of a multi-view remote sensing topographic image in accordance with the present invention;
fig. 5 is a schematic diagram of a three-dimensional point cloud obtained by the multi-view remote sensing topographic image point cloud reconstruction method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. The described embodiments are evidently only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The invention aims to provide a multi-view remote sensing terrain image point cloud reconstruction method and system, which improve reconstruction accuracy and efficiency.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a schematic flow chart of a multi-view remote sensing topographic image point cloud reconstruction method of the present invention, and as shown in fig. 1, the multi-view remote sensing topographic image point cloud reconstruction method includes:
step 101: and acquiring a plurality of multi-view remote sensing terrain images.
Step 102: and carrying out feature extraction on the multi-view remote sensing terrain image to obtain image features.
The method comprises the following steps of carrying out feature extraction on the multi-view remote sensing terrain image to obtain image features, and specifically comprises the following steps:
and cutting the multi-view remote sensing terrain image to obtain an image block. The plurality of image blocks are image blocks with overlapping areas.
Extracting features from the image blocks using a concurrent SIFT operator to obtain image features; the image features are recorded together with the position information of their image blocks.
The blocking strategy for the multi-view remote sensing terrain image is shown in fig. 3: a large-format multi-view remote sensing terrain image is first cut into strips, each strip is then cut into image blocks, the position index information of the blocks is stored in a tree structure, and the features are written into a feature file according to the position index information recorded during cutting. The image features are shown in fig. 4.
Step 103: under the set constraint condition, carrying out feature point matching according to the image features to obtain a plurality of feature point pairs;
under the set constraint condition, performing feature point matching according to the image features to obtain a plurality of feature point pairs, specifically comprising:
determining reference images and non-reference images from the multi-view remote sensing terrain images, wherein the number of the reference images is 1;
under a set constraint condition, finding out matched feature points corresponding to each feature point in the reference image from the non-reference image, wherein each feature point and the corresponding matched feature point form a feature point pair; the feature points are points in the image feature.
The set constraint conditions comprise epipolar line constraint and ground elevation range constraint.
Step 104: carrying out forward intersection on the two feature points of each feature point pair to obtain model points;
step 105: generating surface elements according to the model points, wherein the surface elements are used as seed points, and the seed points form a seed point set;
generating a surface element according to each model point, wherein each surface element is used as a seed point, and each seed point forms a seed point set, and the method specifically comprises the following steps:
and calculating the distance from each model point to the camera center of the reference image.
Sorting the corresponding model points according to the distance from small to large;
and initializing the surface element according to the model points in sequence according to the sequence of the model points, and if the surface element is obtained by the initialized surface element, taking the surface element as a seed point, wherein the seed points form a seed point set.
Step 106: diffusing according to the surface elements corresponding to the seed points to obtain dense surface elements;
the diffusing is performed according to the surface element corresponding to each seed point to obtain a dense surface element, and the method specifically comprises the following steps:
if a surface element with an average correlation coefficient larger than a set coefficient threshold value exists in a neighborhood of the grid where the seed point is located, or a surface element with a distance from the seed point to the surface element within a first set distance range exists in the neighborhood, surface element diffusion is not performed on the neighborhood, otherwise, surface element diffusion is performed on the neighborhood; the normal vectors of the diffused surface elements are the same as those of the surface elements corresponding to the seed points, and the centers of the diffused surface elements are the intersection points of the light rays passing through the grid centers of the neighborhoods and the planes where the surface elements corresponding to the seed points are located. The light passing through the grid center of the neighborhood refers to the light passing through the camera center and the grid center of the neighborhood.
A grid is laid over the reference image and the non-reference images defined in the set of multi-view remote sensing terrain images, with cells generally of β × β pixels (β = 2). Each grid cell corresponds to one image patch.
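The diffusion rule above can be sketched with two small helpers: one deciding whether a neighbouring grid cell should receive a new surface element, and one placing the new element's centre on the seed's plane. The threshold values and the (centre, mean-correlation) cell representation are illustrative assumptions, not values from the patent:

```python
import numpy as np

def should_expand(cell_patches, seed_center, ncc_thresh=0.7, dist_thresh=0.1):
    """Skip a cell if it already holds an element whose average correlation
    coefficient exceeds the threshold, or one within the first set distance
    of the seed; otherwise diffuse into it."""
    seed = np.asarray(seed_center, dtype=float)
    for center, mean_ncc in cell_patches:
        if mean_ncc > ncc_thresh:
            return False
        if np.linalg.norm(np.asarray(center, dtype=float) - seed) < dist_thresh:
            return False
    return True

def diffuse_center(cam_center, ray_dir, seed_center, seed_normal):
    """Centre of the diffused element: intersection of the ray through the
    grid-cell centre with the plane of the seed's surface element."""
    o = np.asarray(cam_center, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    c = np.asarray(seed_center, dtype=float)
    n = np.asarray(seed_normal, dtype=float)
    t = np.dot(c - o, n) / np.dot(d, n)  # ray parameter at the plane
    return o + t * d
```

The diffused element then inherits the seed's normal vector, as the text describes.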
Step 107: filtering the dense surface elements according to set filtering conditions to obtain filtered surface elements;
step 108: updating the seed point set by the filtered surface element;
the filtering of the dense surface elements according to the set filtering condition to obtain filtered surface elements specifically includes:
deleting the surface element with the minimum depth in the dense surface elements, deleting the surface elements with the normal vector included angles larger than a set included angle range in the dense surface elements, deleting the surface elements with the distances larger than a second set distance range in the dense surface elements, and obtaining the filtered surface elements.
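A vectorized sketch of the three filtering conditions (minimum-depth removal, normal-angle check against the seed normal, and a distance cap); the threshold values here are placeholders for the patent's set ranges:

```python
import numpy as np

def filter_dense(centers, normals, seed_normal, cam_center,
                 max_angle_deg=60.0, max_dist=5.0):
    """Drop the element nearest the camera (minimum depth), elements whose
    normal deviates from the seed normal by more than the angle threshold,
    and elements beyond the distance threshold."""
    centers = np.asarray(centers, dtype=float)
    normals = np.asarray(normals, dtype=float)
    cam = np.asarray(cam_center, dtype=float)
    depth = np.linalg.norm(centers - cam, axis=1)
    cos_ang = normals @ np.asarray(seed_normal, dtype=float)
    keep = (cos_ang >= np.cos(np.radians(max_angle_deg))) & (depth <= max_dist)
    keep[np.argmin(depth)] = False  # delete the minimum-depth element
    return centers[keep], normals[keep]
```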
Step 109: judging whether the iteration times reach preset times or not;
if the iteration times do not reach the preset times, returning to the step 106;
if the iteration number has reached the preset number, go to step 110;
step 110: and obtaining the three-dimensional point cloud of the multi-view remote sensing terrain image.
The point cloud reconstruction method for the multi-view remote sensing terrain image improves reconstruction accuracy and efficiency.
The point cloud reconstruction method of the multi-view remote sensing topographic image of the invention is explained in detail below.
The invention provides a PMVS algorithm-based multi-view remote sensing terrain image point cloud reconstruction method which specifically comprises the following steps:
and Step1, acquiring the multi-view remote sensing terrain image.
Traditional real-scene three-dimensional models are reconstructed from unmanned aerial vehicle and low-altitude aerial images; in areas under aviation control or difficult to fly over, such two-dimensional image data can be hard to obtain. With the wide use of three-dimensional surveying and mapping remote sensing satellites, the acquisition of wide-coverage, high-precision, high-timeliness multi-view remote sensing image data is no longer so restricted, providing an image data source for reconstructing three-dimensional models of ground objects in unmanned areas, overseas areas and other terrain monitoring areas.
Step2, carrying out feature detection on the multi-view remote sensing terrain image to obtain image features; the method specifically comprises the following steps:
and cutting the multi-view remote sensing terrain image to be reconstructed to obtain an image block with an overlapping area, as shown in fig. 3.
SIFT (Scale-Invariant Feature Transform) feature detection is performed concurrently on the image blocks obtained by cutting, and the detected features are written into a feature file according to the position index information recorded during cutting, as shown in fig. 4.
The specific steps of the concurrent SIFT operator based on the image block are as follows:
1) and partitioning the multi-view remote sensing terrain image.
A large-format remote sensing terrain image from the ZY-3 ("Resource No. 3") satellite can exceed 24000 × 24000 pixels, about 4.8 GB, and computer hardware resources cannot be used effectively when detecting feature points on the whole image at once. A blocking strategy of first cutting the image into strips and then cutting each strip into blocks is therefore adopted for the multi-view remote sensing terrain image, as shown in fig. 3. The remote sensing image is cut into image blocks of the same size in the main thread, and SIFT feature detection is then run concurrently on the blocks. A three-layer tree structure stores the position index information of strips and blocks, where the strip index is the row number of the strip and the block index is the number of the image block within it; this avoids the hard-disk performance degradation caused by out-of-order block read/write operations.
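The strip-then-block cutting with a (strip, block) index can be sketched as follows; the block and overlap sizes are illustrative, not the patent's values:

```python
import numpy as np

def cut_into_blocks(image, block=512, overlap=64):
    """Cut an image into overlapping blocks, strip by strip, and index each
    block by (strip_row, block_number) so features can later be written
    back in cutting order."""
    h, w = image.shape[:2]
    step = block - overlap  # stride between block origins
    blocks = {}
    for r, y in enumerate(range(0, h, step)):
        for c, x in enumerate(range(0, w, step)):
            blocks[(r, c)] = image[y:y + block, x:x + block]
            if x + block >= w:  # last block of this strip
                break
        if y + block >= h:      # last strip
            break
    return blocks
```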
2) And performing feature detection by using an SIFT operator, and writing the feature file according to the position index information recorded during cutting.
The SIFT operator is specifically realized by the following steps:
a. detecting an extreme value of the scale space;
and performing convolution operation on the image I by using Gaussian functions G (x, y, sigma) of space factors sigma of different scales to obtain image Gaussian pyramids under different scales. Then, two adjacent Gaussian images in the same scale are subtracted in the image pyramid to obtain a Gaussian Difference pyramid image DoG (Difference of Gaussian function), that is, Difference of Gaussian function
Figure BDA0003094704890000081
where k is a constant factor. Each sample point (pixel) is compared with its eight neighbours in the current image and the nine neighbours in each of the scales above and below; if it is greater than (or less than) all of these neighbours, it is a local extremum of D(x, y, σ) and is selected as a candidate key point (feature point).
Here L(x, y, kσ) = G(x, y, kσ) * I(x, y) denotes the convolution of the Gaussian of scale kσ with the image I(x, y).
The Gaussian kernel G(x, y, σ) is given below, where σ is the scale-space factor that determines the smoothness of the image and * denotes the convolution operation:

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
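The Gaussian and difference-of-Gaussians kernels from the formulas above can be sampled directly in numpy; a sketch (kernel size and σ are arbitrary example values):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sample G(x, y, sigma) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2)
    on a size x size grid centred at the origin."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def dog_kernel(size, sigma, k=2**0.5):
    """D(x, y, sigma) kernel = G(x, y, k*sigma) - G(x, y, sigma); convolving
    the image with this kernel gives one level of the DoG pyramid."""
    return gaussian_kernel(size, k * sigma) - gaussian_kernel(size, sigma)
```

The Gaussian kernel sums to roughly 1 while the DoG kernel sums to roughly 0 (a band-pass filter), which is why the DoG response peaks at blob-like structures rather than in flat regions.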
b. key point localization
After the candidate key points are obtained, data near the candidate key points are subjected to detailed fitting through a method of fitting a three-dimensional quadratic function, so that the positions and the scales of the key points are accurately determined. And the DoG operator can generate strong edge response, and candidate key points with low contrast and unstable edge response need to be removed in order to improve the stability and the anti-noise capability of feature point matching.
c. Feature point principal direction assignment and feature description
In order to make the feature point descriptors rotation-invariant, one or more reference directions need to be assigned to each key point. In the Gaussian-smoothed image L, the gradient at point (x, y) has modulus m(x, y) and direction θ(x, y), given by:

m(x, y) = √((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)

θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
In the scale space where the key point lies, a 16 × 16-pixel region centred on the key point is taken and divided evenly into 16 sub-regions of 4 × 4 pixels, and an eight-direction gradient histogram is computed for each sub-region. The eight-direction histograms of the 4 × 4 sub-regions are then concatenated into a 128-dimensional (4 × 4 × 8) feature vector. After the feature vector is generated, it is normalized to remove the influence of illumination and similar effects.
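The gradient formulas and the 4 × 4 × 8 histogram layout can be sketched as follows; this omits the orientation rotation, trilinear interpolation and Gaussian weighting of the full SIFT descriptor, so it illustrates the layout only:

```python
import numpy as np

def gradients(L):
    """Gradient modulus m and direction theta of a smoothed image L,
    using the central differences from the formulas above."""
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[1:-1, :] = L[2:, :] - L[:-2, :]  # L(x+1, y) - L(x-1, y)
    dy[:, 1:-1] = L[:, 2:] - L[:, :-2]  # L(x, y+1) - L(x, y-1)
    return np.sqrt(dx**2 + dy**2), np.arctan2(dy, dx)

def descriptor(m, theta):
    """128-D vector from a 16x16 window: an 8-bin orientation histogram for
    each 4x4 sub-region, concatenated (4 x 4 x 8 = 128) and L2-normalized."""
    hist = np.zeros((4, 4, 8))
    bins = ((theta + np.pi) / (2.0 * np.pi) * 8).astype(int) % 8
    for i in range(16):
        for j in range(16):
            hist[i // 4, j // 4, bins[i, j]] += m[i, j]  # magnitude-weighted vote
    v = hist.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```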
d. Writing the characteristics of each image block into a characteristic file according to the position index information recorded during cutting;
step3, matching the characteristics of the image under constraint conditions to obtain matched characteristic point pairs;
the method comprises the following concrete steps:
a. the reference picture and other pictures (non-reference pictures) are selected.
Each image is taken in turn as the reference image R(p), and from the other images those images I(p) whose main optical axis makes an angle smaller than a threshold α with that of R(p) are selected. The reference image is then matched against these images (α = 60°).
b. And selecting candidate matching feature points.
For each feature point f detected on the reference image, candidate matching points f′ are searched on the other images under epipolar constraint (epipolar geometric constraint) and ground elevation range constraint; the candidate matching points f′ form a set F.
The plane formed by the shooting baseline and any object space point is called the epipolar plane passing through the object space point, and the intersection line of the epipolar plane and the image plane is the epipolar line. According to the theory of photogrammetry and computer vision, the homonymous points are necessarily positioned on homonymous epipolar lines of stereo pairs, and the constraint relation is called epipolar line geometric constraint.
The epipolar line has the following basic properties:
1) All epipolar lines on an oblique image are not parallel to each other, but intersect at the epipole (epipolar point).
2) On an ideal image plane, all epipolar lines are parallel to each other, and not only the epipolar lines on the same image plane are parallel, but also the corresponding epipolar lines on the image pair are parallel, and the vertical parallax is zero, which is very useful for stereo observation.
3) The characteristic that a certain point on the left (right) image is necessarily on the homonym epipolar line on the right (left) image is the basic basis for realizing epipolar line correlation.
The epipolar constraint reduces the original two-dimensional search space to a one-dimensional epipolar space, greatly reduces the search range, and is very effective constraint for improving the matching efficiency.
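With a known fundamental matrix F, the epipolar constraint reduces the candidate search to a narrow band around the line l′ = F x; a sketch (the pixel tolerance is an illustrative parameter):

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l' = F x in the second image for image point x = (u, v)
    of the first image (homogeneous line coefficients a, b, c)."""
    return F @ np.array([x[0], x[1], 1.0])

def near_epipolar(F, x, x_prime, tol=2.0):
    """Keep candidate x' only if its point-to-line distance from the
    epipolar line of x is within tol pixels."""
    a, b, c = epipolar_line(F, x)
    u, v = float(x_prime[0]), float(x_prime[1])
    return abs(a * u + b * v + c) / np.hypot(a, b) <= tol
```

For a rectified pair, F = [[0, 0, 0], [0, 0, −1], [0, 1, 0]] makes every epipolar line the horizontal row of the query point, matching property 2) above.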
c. Calculating coordinates of model points
Model points are computed by forward intersection of each feature point pair (f, f′); the distance from each model point to the reference image's camera centre is computed, and the model points are sorted by this distance from near to far. The object point coordinates can be found only if the coordinates of at least one pair of homonymous image points (corresponding points) are known: each image point yields two equations, so a pair yields four equations for the three unknowns, and the model point coordinates (X, Y, Z) are determined by the least squares method. A pair of homonymous image points is a feature point pair (f, f′).
If the P matrix is known, the coordinate relationship (projection formula) between object space (model point) and image space (feature point) can be expressed as [xz, yz, z]^T = P[X, Y, Z, 1]^T, where (x, y) are the coordinates of the feature point and z is a scale (projective depth) factor.
P is the projection matrix; it is solved from the exterior orientation elements of the image and the camera parameters, and is mainly used for solving the coordinates of object-space points.
P = K R [I | -C],   where   K = [f_x  0   C_x]
                                [0   f_y C_y]
                                [0   0   1  ]
The matrix K is commonly referred to as the intrinsic matrix and is determined by the properties of the camera itself: (f_x, f_y) denote the focal lengths (in pixels) along the two image axes, and (C_x, C_y) the principal point coordinates. R is the rotation matrix of the camera, and C = [X_s, Y_s, Z_s]^T is the projection centre in the object coordinate system.
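Putting the projection formula and the forward intersection of step c together, a hedged sketch follows, assuming only the standard pinhole decomposition above; the function names are illustrative, not from this document:

```python
import numpy as np

def projection_matrix(K, R, C):
    """P = K R [I | -C]; C = [Xs, Ys, Zs] is the projection centre."""
    return K @ R @ np.hstack([np.eye(3), -np.reshape(C, (3, 1))])

def forward_intersect(P1, P2, x1, x2):
    """Linear least-squares forward intersection: each image point
    contributes two equations, so a pair of homonymous points gives
    four equations for the three unknowns (X, Y, Z)."""
    A = []
    for P, (u, v) in ((P1, x1), (P2, x2)):
        A.append(u * P[2] - P[0])   # u * (row 3) - (row 1)
        A.append(v * P[2] - P[1])   # v * (row 3) - (row 2)
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                      # null vector of the 4x4 system
    return X[:3] / X[3]             # dehomogenise to (X, Y, Z)
```

The SVD solves the homogeneous least-squares system, which is the usual way of realizing the four-equations-for-three-unknowns intersection described above.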
Step 4: model points obtained by forward intersection of the feature points are used to generate surface elements p as seed points. Surface elements are initialized from the model points in turn; if initialization from one model point fails, the next model point is considered. A surface element p is a small spatial rectangle defined by a centre point c(p) and a normal vector n(p), approximately tangent to the reconstructed object surface; its size is generally defined as n × n pixels when projected onto its reference image R(p). The attributes of p further include a quasi-visible image set V*(p), i.e. the set of all images that contain p, and a visible image set V(p), i.e. the set of images in which p can actually be seen.
a. The centre c(p) of the surface element is initialized to the model point, and its unit normal vector n(p) is initialized to the unit vector pointing from the centre toward the reference-image camera centre.
b. The visible image set V(p) and the quasi-visible image set V*(p) are initialized; if this initialization fails, return to step a and initialize a surface element from the next model point.
c. The centre coordinates and normal vector of the surface element are optimized.
d. V(p) and V*(p) are updated.
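A minimal sketch of the surface-element record used in Step 4 (the field and function names are our own; the document does not prescribe a data structure):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SurfaceElement:
    """An oriented rectangular patch approximately tangent to the surface."""
    center: np.ndarray                          # c(p), 3-vector in object space
    normal: np.ndarray                          # n(p), unit vector
    ref_image: int                              # index of the reference image R(p)
    V: list = field(default_factory=list)       # visible image set V(p)
    V_star: list = field(default_factory=list)  # quasi-visible image set V*(p)

def init_normal(center, camera_center):
    """Step 4a: n(p) is initialized to the unit vector pointing from
    c(p) toward the reference-image camera centre."""
    d = np.asarray(camera_center, float) - np.asarray(center, float)
    return d / np.linalg.norm(d)
```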
Step 5: the seed points are diffused to obtain dense surface elements. Diffusion proceeds into the neighbourhood of the image grid cell where the seed element lies; if that neighbourhood already contains a surface element close to the seed element, or a surface element with a larger average correlation coefficient, no diffusion is performed into it.
a. The new surface element takes the same normal vector as the seed element, and its centre is the intersection of the ray through the centre of the neighbouring grid cell with the plane in which the seed element lies.
b. The subsequent steps are similar to seed generation: compute V(p) and V*(p), optimize the surface element, and update V(p) and V*(p). If the number of images in V(p) exceeds a threshold, the element is considered successfully diffused; otherwise the diffusion fails. Diffusion then continues with the next new element until no further diffusion is possible.
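The geometric core of step a, intersecting the viewing ray through the neighbouring grid cell with the seed element's plane, can be sketched as follows (names are illustrative):

```python
import numpy as np

def diffused_center(cam_center, ray_dir, seed_center, seed_normal):
    """Centre of a diffused surface element: intersect the viewing ray
    through the neighbouring image cell with the plane of the seed
    element. The new element inherits the seed's normal."""
    n = np.asarray(seed_normal, float)
    d = np.asarray(ray_dir, float)
    o = np.asarray(cam_center, float)
    denom = n @ d
    if abs(denom) < 1e-12:
        return None  # ray parallel to the seed plane: no intersection
    t = n @ (np.asarray(seed_center, float) - o) / denom
    return o + t * d
```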
Step 6: the dense surface elements obtained by diffusion are filtered according to the filtering conditions.
The new surface elements continue to be expanded as seed elements, and erroneous elements are removed with two filters: a visual consistency constraint and a weak regularization condition. After n iterations, an even and dense set of terrain surface elements is finally obtained.
The visual consistency constraint is shown in equation (6). For a surface element p0, let U denote the set of surface elements occluded by p0. When p0 and the elements of U satisfy the relation of equation (6), p0 is removed as a falsely reconstructed element lying outside the terrain surface.
[Equation (6) appears only as an image in the original document and is not reproduced here.]
The weak regularization condition is used as a clustering constraint: for each surface element p0, collect the set of surface elements that project into the same image cell as p0 and into all adjacent cells over the images of V(p0). If the proportion of that set consisting of neighbours of p0 is less than ε (ε = 0.25), p0 is culled as an outlier.
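A sketch of this clustering cull follows; the neighbourhood predicate `is_neighbour` and its threshold `tau` are our own simplification, since the document does not define the neighbour test explicitly:

```python
import numpy as np

def is_neighbour(p, q, tau=0.1):
    """Hypothetical neighbour test: two elements are neighbours when
    each centre lies close to the other's tangent plane (symmetric
    point-to-plane distance below tau)."""
    d1 = abs(np.dot(q["n"], q["c"] - p["c"]))
    d2 = abs(np.dot(p["n"], p["c"] - q["c"]))
    return d1 + d2 < tau

def weak_regularization_filter(elements, cells, eps=0.25):
    """Keep p0 only if at least eps of the elements collected from its
    image cell and the adjacent cells are neighbours of p0; otherwise
    cull p0 as an outlier (eps = 0.25 in the text)."""
    kept = []
    for p0, cell_set in zip(elements, cells):
        n_nb = sum(is_neighbour(p0, q) for q in cell_set)
        if cell_set and n_nb / len(cell_set) >= eps:
            kept.append(p0)
    return kept
```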
The reconstruction results are shown in fig. 5.
Fig. 2 is a schematic structural diagram of a multi-view remote sensing topographic image point cloud reconstruction system of the present invention, as shown in fig. 2, the multi-view remote sensing topographic image point cloud reconstruction system includes:
a multi-view remote sensing terrain image obtaining module 201, configured to obtain a plurality of multi-view remote sensing terrain images;
the image feature extraction module 202 is configured to perform feature extraction on the multi-view remote sensing terrain image to obtain image features;
a feature point pair obtaining module 203, configured to perform feature point matching according to the image features under a set constraint condition, to obtain a plurality of feature point pairs;
a model point obtaining module 204, configured to perform a forward intersection on two feature points in the feature point pairs to obtain a model point;
a seed point obtaining module 205, configured to generate surface elements according to each model point, where each surface element is used as a seed point, and each seed point forms a seed point set;
a surface element diffusion module 206, configured to perform diffusion according to the surface elements corresponding to the seed points to obtain dense surface elements;
a surface element filtering module 207, configured to filter the dense surface elements according to a set filtering condition, so as to obtain filtered surface elements;
a seed point updating module 208, configured to update the seed point set with the filtered bin;
a returning module 209, configured to return "performing diffusion according to the surface element corresponding to each seed point to obtain a dense surface element" when the iteration number does not reach the preset number;
and the three-dimensional point cloud determining module 210 is used for obtaining the three-dimensional point cloud of the multi-view remote sensing terrain image when the iteration times reach preset times.
The image feature extraction module 202 specifically includes:
the image cutting unit is used for cutting the multi-view remote sensing terrain image to obtain an image block;
the image feature acquisition unit is used for extracting features of the image blocks by adopting a concurrent SIFT operator to obtain image features; and recording the image characteristics corresponding to the position information of the image block.
The characteristic point pair obtaining module 203 specifically includes:
the reference image determining unit is used for determining reference images and non-reference images from the multi-view remote sensing terrain images, and the number of the reference images is 1;
a feature point pair obtaining unit, configured to find, under a set constraint condition, a matching feature point corresponding to each feature point in the reference image from the non-reference image, where each feature point and the corresponding matching feature point form a feature point pair; the feature points are points in the image feature.
The set constraint conditions comprise epipolar line constraint and ground elevation range constraint.
The seed point obtaining module 205 specifically includes:
the distance calculation unit is used for calculating the distance from each model point to the camera center of the reference image;
the sorting unit is used for sorting the corresponding model points according to the distance from small to large;
and the seed point determining unit is used for initializing surface elements from the model points in sequence according to the sorted order; if the initialization of a surface element succeeds, the surface element is used as a seed point, and the seed points form a seed point set.
The surface element diffusion module 206 specifically includes:
a surface element diffusion unit, configured to, if a surface element whose average correlation coefficient is greater than a set coefficient threshold already exists in a neighborhood of a grid in which the seed point is located, or a surface element whose distance from the surface element corresponding to the seed point is within a first set distance range already exists in the neighborhood, not perform surface element diffusion on the neighborhood, or otherwise perform surface element diffusion on the neighborhood; the normal vectors of the diffused surface elements are the same as those of the surface elements corresponding to the seed points, and the centers of the diffused surface elements are the intersection points of the light rays passing through the grid centers of the neighborhoods and the planes where the surface elements corresponding to the seed points are located.
The surface element filtering module 207 specifically includes:
and the surface element filtering unit is used for deleting a plurality of surface elements with the minimum depth in the dense surface elements, deleting a plurality of surface elements with normal vector included angles larger than a set included angle range in the dense surface elements, deleting a plurality of surface elements with distances larger than a second set distance range in the dense surface elements, and obtaining filtered surface elements.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A point cloud reconstruction method for a multi-view remote sensing topographic image is characterized by comprising the following steps:
acquiring a plurality of multi-view remote sensing terrain images;
carrying out feature extraction on the multi-view remote sensing topographic image to obtain image features;
under the set constraint condition, carrying out feature point matching according to the image features to obtain a plurality of feature point pairs;
carrying out forward intersection on two characteristic points in the characteristic point pairs to obtain model points;
generating surface elements according to the model points, wherein the surface elements are used as seed points, and the seed points form a seed point set;
diffusing according to the surface elements corresponding to the seed points to obtain dense surface elements;
filtering the dense surface elements according to set filtering conditions to obtain filtered surface elements;
updating the seed point set by the filtered surface element;
and returning to the step of diffusing according to the surface elements corresponding to the seed points to obtain dense surface elements until the iteration times reach the preset times, and obtaining the three-dimensional point cloud of the multi-view remote sensing terrain image.
2. The point cloud reconstruction method for the multi-view remote sensing topographic image according to claim 1, wherein the extracting the features of the multi-view remote sensing topographic image to obtain image features specifically comprises:
cutting the multi-view remote sensing terrain image to obtain an image block;
extracting the features of the image blocks by adopting a concurrent SIFT operator to obtain image features; and recording the image characteristics corresponding to the position information of the image block.
3. The point cloud reconstruction method for the multi-view remote sensing topographic image according to claim 1, wherein the step of performing feature point matching according to the image features under a set constraint condition to obtain a plurality of feature point pairs specifically comprises:
determining reference images and non-reference images from the multi-view remote sensing terrain images, wherein the number of the reference images is 1;
under a set constraint condition, finding out matched feature points corresponding to each feature point in the reference image from the non-reference image, wherein each feature point and the corresponding matched feature point form a feature point pair; the feature points are points in the image feature.
4. The point cloud reconstruction method for the multi-view remote sensing topographic image of claim 3, wherein the set constraint conditions comprise epipolar line constraint and ground elevation range constraint.
5. The point cloud reconstruction method for the multi-view remote sensing topographic image according to claim 3, wherein a bin is generated according to each model point, each bin is used as a seed point, and each seed point constitutes a seed point set, which specifically comprises:
calculating the distance from each model point to the camera center of the reference image;
sorting the corresponding model points according to the distance from small to large;
and initializing surface elements from the model points in sequence according to the sorted order; if the initialization of a surface element succeeds, the surface element is taken as a seed point, and the seed points form a seed point set.
6. The point cloud reconstruction method for the multi-view remote sensing topographic image according to claim 1, wherein the diffusing is performed according to the surface elements corresponding to the seed points to obtain dense surface elements, and specifically comprises:
if a surface element with an average correlation coefficient larger than a set coefficient threshold value exists in a neighborhood of the grid where the seed point is located, or a surface element with a distance from the seed point to the surface element within a first set distance range exists in the neighborhood, surface element diffusion is not performed on the neighborhood, otherwise, surface element diffusion is performed on the neighborhood; the normal vectors of the diffused surface elements are the same as those of the surface elements corresponding to the seed points, and the centers of the diffused surface elements are the intersection points of the light rays passing through the grid centers of the neighborhoods and the planes where the surface elements corresponding to the seed points are located.
7. The point cloud reconstruction method for the multi-view remote sensing topographic image according to claim 1, wherein the filtering of the plurality of dense surface elements according to the set filtering condition to obtain filtered surface elements specifically comprises:
deleting the surface element with the minimum depth in the dense surface elements, deleting the surface elements with the normal vector included angles larger than a set included angle range in the dense surface elements, deleting the surface elements with the distances larger than a second set distance range in the dense surface elements, and obtaining the filtered surface elements.
8. A multi-view remote sensing topographic image point cloud reconstruction system is characterized by comprising:
the multi-view remote sensing terrain image acquisition module is used for acquiring a plurality of multi-view remote sensing terrain images;
the image feature extraction module is used for extracting features of the multi-view remote sensing topographic image to obtain image features;
the characteristic point pair obtaining module is used for matching characteristic points according to the image characteristics under a set constraint condition to obtain a plurality of characteristic point pairs;
the model point acquisition module is used for carrying out forward intersection on two feature points in the feature point pairs to obtain model points;
the seed point acquisition module is used for generating surface elements according to the model points, wherein each surface element is used as a seed point, and each seed point forms a seed point set;
the surface element diffusion module is used for diffusing the surface elements corresponding to the seed points to obtain dense surface elements;
the surface element filtering module is used for filtering the dense surface elements according to set filtering conditions to obtain filtered surface elements;
a seed point updating module for updating the seed point set by the filtered surface element;
a returning module, configured to return "performing diffusion according to the surface element corresponding to each seed point to obtain a dense surface element" when the iteration number does not reach the preset number;
and the three-dimensional point cloud determining module is used for obtaining the three-dimensional point cloud of the multi-view remote sensing terrain image when the iteration times reach the preset times.
9. The system for point cloud reconstruction of multi-view remote sensing topographic images according to claim 8, wherein the image feature extraction module specifically comprises:
the image cutting unit is used for cutting the multi-view remote sensing terrain image to obtain an image block;
the image feature acquisition unit is used for extracting features of the image blocks by adopting a concurrent SIFT operator to obtain image features; and recording the image characteristics corresponding to the position information of the image block.
10. The system for point cloud reconstruction of multi-view remote sensing topographic images according to claim 8, wherein the characteristic point pair obtaining module specifically comprises:
the reference image determining unit is used for determining reference images and non-reference images from the multi-view remote sensing terrain images, and the number of the reference images is 1;
a feature point pair obtaining unit, configured to find, under a set constraint condition, a matching feature point corresponding to each feature point in the reference image from the non-reference image, where each feature point and the corresponding matching feature point form a feature point pair; the feature points are points in the image feature.
CN202110608819.6A 2021-06-01 2021-06-01 Multi-view remote sensing topographic image point cloud reconstruction method and system Pending CN113345072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110608819.6A CN113345072A (en) 2021-06-01 2021-06-01 Multi-view remote sensing topographic image point cloud reconstruction method and system


Publications (1)

Publication Number Publication Date
CN113345072A true CN113345072A (en) 2021-09-03

Family

ID=77474098


Country Status (1)

Country Link
CN (1) CN113345072A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119995A (en) * 2021-11-08 2022-03-01 山东科技大学 Air-ground image matching method based on object space surface element
CN114998397A (en) * 2022-05-20 2022-09-02 中国人民解放军61540部队 Multi-view satellite image stereopair optimization selection method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825543A (en) * 2016-03-31 2016-08-03 武汉大学 Multi-view dense point cloud generation method and system based on low-altitude remote sensing images
CN106600686A (en) * 2016-12-06 2017-04-26 西安电子科技大学 Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
CN108198230A (en) * 2018-02-05 2018-06-22 西北农林科技大学 A kind of crop and fruit three-dimensional point cloud extraction system based on image at random
CN108921939A (en) * 2018-07-04 2018-11-30 王斌 A kind of method for reconstructing three-dimensional scene based on picture


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG Xiufeng: "Research on Fast 3D Terrain Reconstruction Based on UAV Remote Sensing Data", China Master's Theses Full-text Database, Information Science and Technology Series *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220221

Address after: 730070 No. 88 Anning West Road, Anning District, Gansu, Lanzhou

Applicant after: Lanzhou Jiaotong University

Applicant after: Guohua Satellite Data Technology Co., Ltd

Address before: 730070 No. 88 Anning West Road, Anning District, Gansu, Lanzhou

Applicant before: Lanzhou Jiaotong University